What in the World is AI Veganism? And Why is it a Thing?

Let’s clear this up straight away: AI Veganism is not about tofu, oat milk, or whether you should try that new plant-based burger. It’s about people—real people—who are choosing, often quietly but deliberately, to say “no” to artificial intelligence. Not out of fear. Not out of ignorance. But out of principle, purpose, and, in many cases, fatigue.
You might not have noticed them at first. They’re not always loud. They’re not on tech panels or in protest lines. But they’re unsubscribing from AI tools, opting out of algorithmic recommendations, and waiting longer on the phone just to talk to an actual human being.
Does that sound familiar? Perhaps you’ve done exactly that.
This curious, emerging trend will be our pathway into a whole new way of thinking about an AI world.
So… People Are Refusing AI Now?
Yes. And it’s not as fringe as you might think.
From skipping ChatGPT and Midjourney to disabling voice assistants and resisting AI-enhanced search results, more individuals are quietly choosing to opt out of the AI-powered conveniences that have become standard in digital life.
They aren’t refusing AI because they don’t understand it. On the contrary, many of them understand it all too well—and that’s exactly why they’re walking away.
AI Veganism is about making a conscious decision, a kind of philosophical tech diet. Not everything has to be automated. Not every interaction needs an algorithm in the middle.
What Even Is AI Veganism?
Think of it as a lifestyle shift—similar to food veganism, but focused on data ethics, digital well-being, and environmental responsibility. Instead of avoiding meat and dairy, AI vegans avoid tools and systems built with artificial intelligence.
Their concerns usually fall into three buckets:
- Ethical sourcing: Many generative AI models are trained on large sets of online content—writing, images, code, and music—without asking for or obtaining permission from the creators. This use of scraped data is widely documented and continues to raise legal and moral questions.
- Environmental cost: Training large-scale AI models consumes substantial energy. According to one estimate, training OpenAI’s GPT-3 used approximately 1,287 megawatt-hours of electricity, roughly the annual electricity use of 120 average U.S. households (a quick back-of-envelope check follows this list). Though the UK equivalent may vary slightly, the takeaway is clear: building AI comes with a significant carbon footprint.
- Cognitive health: There is growing public concern that leaning on AI for everyday tasks, from composing emails to decision-making, dulls human judgement and creativity. The science here is far from settled, but researchers and tech ethicists alike consider it a question worth studying.
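Curious whether that household comparison holds up? Here is a minimal back-of-envelope check in Python. It assumes the widely cited ~1,287 MWh training estimate above and an average U.S. household consumption of roughly 10,700 kWh per year (an approximate EIA figure, not taken from this article), so treat it as a sanity check rather than precise accounting.

```python
# Back-of-envelope check of the GPT-3 training-energy comparison above.
# Assumptions: ~1,287 MWh for training (the estimate cited in the text)
# and ~10,700 kWh/year for an average U.S. household (approximate EIA figure).

training_energy_kwh = 1_287 * 1_000        # 1,287 MWh expressed in kWh
household_kwh_per_year = 10_700            # approximate U.S. household average

households_for_one_year = training_energy_kwh / household_kwh_per_year
print(round(households_for_one_year))      # prints 120, matching the comparison
```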
A Look at the Numbers
Let’s examine what the data says.
In a 2023 Pew Research Center survey, 52% of Americans said they were more concerned than excited about the growing use of AI in daily life. That is far from outright refusal, but it points to a deep discomfort with how AI is unfolding. In another Pew study, from 2024, 25% of K-12 teachers in the U.S. said they believe AI tools do more harm than good in education.
Similar trends are visible across the Atlantic. Although the exact percentages vary by poll, UK surveys repeatedly find growing discomfort with the influence of AI in public services, the creative industries, and education.
Creators share the unease. Research and commentary from Mozilla and others suggest that many artists, musicians, and writers are apprehensive about their work being used to train AI systems. We could not verify the oft-cited 50% figure directly against Mozilla’s 2023 State of the Internet report, but the general sentiment is echoed by many observer groups and community statements.
Meet the AI Abstainers
They’re not anti-technology. Far from it. Most are everyday users of smartphones, laptops, and the internet. But they’re starting to question how much of their digital life should be governed by machine learning.
Take, for example, an illustrator from Brighton who abandoned AI image tools after discovering that her portfolio had been used to train an image generation model without her consent. “It felt wrong,” she shared. “Like stealing, but with extra steps.”
She’s not alone. Across forums, artist communities, and social networks, thousands have echoed similar concerns.
What Do They Use Instead?
Life without AI may sound daunting, but it’s not as dramatic as you might think.
These users value privacy, so they gravitate toward search engines like DuckDuckGo. They skip autocomplete suggestions and AI-assisted writing tools, drafting in simple text editors instead. Email isn’t sorted by an algorithm either; predetermined folders and rules do the job.
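To make the “rules instead of algorithms” idea concrete, here is a minimal sketch of rule-based email filing in Python. The folder names, addresses, and keywords are purely illustrative assumptions, not drawn from any real setup; the point is simply that fixed, human-readable rules can do the sorting with no machine learning involved.

```python
# A minimal sketch of "predetermined folders and rules" email sorting.
# No machine learning: just fixed, human-readable conditions.
# All names and rules below are illustrative, not from a real configuration.

RULES = [
    # (condition on the message, destination folder)
    (lambda msg: msg["from"].endswith("@mybank.example"), "Finance"),
    (lambda msg: "invoice" in msg["subject"].lower(), "Finance"),
    (lambda msg: "newsletter" in msg["subject"].lower(), "Reading"),
]

def sort_message(msg: dict) -> str:
    """Return the destination folder for a message; default to the inbox."""
    for condition, folder in RULES:
        if condition(msg):
            return folder
    return "Inbox"

if __name__ == "__main__":
    example = {"from": "alerts@mybank.example", "subject": "Your monthly statement"}
    print(sort_message(example))  # prints "Finance"
```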
Human interaction is valued, too. They will wait longer for a human agent rather than go through a chatbot, and some steer clear of platforms heavily invested in AI content creation.
Isn’t This Just a Phase?
Possibly not.
Many tech trends follow the standard adoption curve: resistance at first, then gradual acceptance. AI refusal doesn’t seem to fit that pattern. A claim attributed to a 2024 study in AI & Society suggests that some users’ discomfort with AI actually grows over time. While we couldn’t confirm the exact study, other commentary from ethicists and tech researchers supports the idea that AI scepticism is sticky rather than fleeting.
This could make AI Veganism more than a passing trend. It could be a durable philosophical position.
Why This Is a Big Deal
AI is no longer a niche technology. It’s woven into email apps, shopping recommendations, banking interfaces, education platforms, and beyond.
If even a small segment of the population rejects AI art and chatbot-generated solutions and demands transparency, companies, platforms, and developers will have to respond.
That means rethinking everything from interface design to product strategy. It is not enough to build something smart; it also has to earn users’ trust and offer them real value.
The Shift Has Already Started
Signs of this shift are already visible.
Artists and musicians are labelling their work “No AI Used”. Small publishers are promoting human-only content. Tech review platforms are starting to differentiate between AI-driven and AI-free tools.
There’s growing interest in certifications that could verify human authorship in journalism, art, and academic writing. The “AI-Free” label might become as common as “Organic” or “Fair Trade”.
You Might Be One, Too
Have you ever ignored a chatbot and pressed zero repeatedly to get to a person? Have you avoided AI writing tools because you wanted the challenge of crafting your message?
Then maybe you’ve got a little AI Vegan in you, too.
This isn’t about living off the grid. It’s about making deliberate choices.
Where This Could Go
In time, we may see entire markets for AI-free alternatives: search engines, news feeds, educational resources, and even creative software and tools.
Brands may find a market in promoting a human-first approach. Consumers will begin asking for transparency not just about what tools do, but about how they’re built.
The conversation will keep moving. If it brings greater transparency and more options for creators and users, this won’t have been a passing trend. It will have been a shift in values.
TL;DR?
AI Veganism isn’t anti-technology. It’s about deciding when and how technology belongs in your life.
It’s about refusing the default, questioning the system, and preserving the things that make us uniquely human.
And in a world that often prioritises automation, that choice might be more disruptive—and more powerful—than we realise.