What if AI treats humans the way we treat animals?
By now, you may have heard – possibly from the same people creating the technology – that artificial intelligence might one day kill us all. The specifics are hazy, but then, they don't really matter. Humans are very good at fantasizing about being exterminated by an alien species, because we've always been good at devising creative ways of doing it to our fellow creatures. AI could destroy humanity for something as stupid as, in philosopher Nick Bostrom's famous thought experiment, turning the world's matter into paper clips – much like humans are now wiping out our great ape cousins, orangutans, to cultivate palm oil to make junk foods like Oreos.
You might even say that the human nightmare of subjugation by machines expresses a sublimated fear of our treatment of non-human animals being turned back on us. "We know what we've done," as journalist Ezra Klein put it on a May episode of his podcast. "And we wouldn't want to be on the other side of it."
AI threatens the quality that many of us believe has made humans unique on this planet: intelligence. So, as author Meghan O'Gieblyn wrote in her book God, Human, Animal, Machine, "We quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals." We reassure ourselves that even if AI may one day be smarter than us, we, unlike the machines, have subjective experience, which makes us morally special.
The obvious problem with this, though, is that humans aren't special in this way. Non-human animals share many of our capacities for intelligence and perception, yet we've refused to extend to them the generosity we might expect from AI. We rationalize unmitigated cruelty toward animals – caging, commodifying, mutilating, and killing them to suit our whims – on the basis of our purportedly superior intellect. "If there were gods, they would surely be laughing their heads off at the inconsistency of our logic," O'Gieblyn continues. "We spent centuries denying consciousness in animals precisely because [we thought] they lacked reason or higher thought."
Why should we hope that AI, particularly if it's built on our own values, would treat us any differently? We might struggle to justify to a future artificial "superintelligence," if such a thing could ever exist, why we're deserving of mercy when we've failed so spectacularly at offering our fellow animals the same. Worse still, the dehumanizing philosophy of AI's prophets is among the worst possible starting points for defending the value of our fleshy, living selves.
Transhumanism is built on a hatred of animality
Although modern humans defend the exploitation of non-human animals in terms of their assumed lack of intelligence, this has never been the real reason for it. If we took that argument at face value and treated animals according to their smarts, we would immediately stop factory-farming octopuses, which can use tools, recognize human faces, and figure out how to escape enclosures. We wouldn't keep elephants in solitary confinement in zoos, recognizing it as a violation of their rights and needs as smart, caring, deeply social creatures. We wouldn't psychologically torture pigs by immobilizing them in cages so small they can't turn around, condemning them to a short lifetime essentially spent in a coffin, all to turn them into cheap cuts of bacon. We would realize that it's wholly unnecessary to subject intelligent cows to the trauma of repeated, human-induced pregnancies and separation from their newborns, just so we can drink the milk meant for their calves.
In reality, we aren't cruel to animals because they're stupid; we say they're stupid because we're cruel to them, inventing fact-free mythologies about their minds to justify our dominance, as political theorist Dinesh Wadiwel argues in his brilliant 2015 book The War Against Animals. In a chapter called "The Violence of Stupidity," Wadiwel contends that human power over animals enables us to be willfully and unaccountably stupid about what they are really like. "How else might we describe a claimed superiority by humans over animals (whether based on intelligence, reason, communication, vocalisation, or politics) that has no consistent or verifiable 'scientific' or 'philosophical' basis?" he writes. Humans, like animals, are vulnerable, breakable creatures who can only thrive within a specific set of physical and social constraints. We can only hope that future AI, however intelligent, doesn't evince the same stupidity with respect to us.
While we can only guess whether some powerful future AI will categorize us as unintelligent, what's clear is that there is an explicit and concerning contempt for the human animal among prominent AI boosters. AI research itself has strong ties to transhumanism, a movement that aims to radically alter and augment human bodies with technology. Its most extreme aspirants hope to merge humanity with computers, excising suffering from life like a tumor from a cancer patient and living in a state of everlasting bliss, as Bostrom, one of the main proponents of transhumanism, has suggested. Elon Musk, for instance, has said that he launched Neuralink, his brain-computer interface startup, in part so that humans can remain competitive in an intelligence arms race with AI. "Even under a benign AI, we will be left behind," Musk said at a Neuralink event in 2019. "With a high bandwidth brain-machine interface, we will have the option to go along for the ride."
This aspiration can be interpreted as an implicit loathing of our animality, or at least a desire to liberate ourselves from it. "We will be the first species ever to design our own descendants," technologist Sam Altman, now the CEO of OpenAI, wrote in a 2017 blog post. "My guess is that we can either be the biological bootloader for digital intelligence" – meaning just a stepping stone for advanced AI – "and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like."
Computer scientist Danny Hillis, co-founder of the now-defunct AI company Thinking Machines, declared in the early '90s that humans are composed of two fundamentally different things: "We're the metabolic thing, which is the monkey that walks around, and we're the intelligent thing, which is a set of ideas and culture," as historian David Noble quotes in his 1997 book The Religion of Technology. "What's valuable about us," Hillis continued, "what's good about humans, is the idea thing. It's not the animal thing." Merging with computers signifies our extrication from animal biology.
This human/animal dualism posits a clean cognitive break between us and the rest of the animal evolutionary tree, when in fact no such division exists. It relies on an implausible model of human intelligence as having nothing to do with our physical, animal selves: a notion that "the mind is computation, that it does not involve the affective dimensions of the human experience, and it doesn't involve the body," Michael Sacasas, a technology critic who writes The Convivial Society, a popular Substack, told me.
The societal reckoning now taking place over where humans fit in a world of AI might, as Sacasas hopes, prompt us to start to rethink this dualism: to recognize the body "not just as the firmware for the rational software, but actually an integral part of what we call 'mind.'" Breaking down that dualism ought also to mean giving up the separate status we assign ourselves as human beings. It could help us broaden the definition of intelligence itself to encompass the animal qualities described by O'Gieblyn: "emotions, perception, the ability to experience and feel." There is, after all, no single thing in our brains called "intelligence" or "thought"; it's not a body part but an emergent property continuous with our other mental processes. Animals share these capacities, and in some cases exceed ours.
Migratory birds, for example, can famously navigate by perceiving the Earth's magnetic field. Raccoons can "see" and learn about the world with their hyper-sensitive hands (this is why they can sometimes be seen enthusiastically patting objects and other animals). Pigs are undoubtedly smart, but the widely cited idea that they're "as smart as" 3-year-old children reflects the depressing way that we've come to measure intelligence against a single-variable, anthropocentric yardstick, rather than recognizing different beings as having different minds. That yardstick is dehumanizing to us, too, because it judges our cognition as though it were a computer's CPU. If we can properly value animals' capacities, then we might also see how claiming human exceptionalism through a disembodied view of our minds has done spiritual harm to ourselves.
AI criticism ought to include non-human animals
You don't have to believe that AI could become autonomous and orchestrate our extinction to see how, for example, chatbots are already blurring the line between humans and machines, creating the illusion of sentience where it doesn't exist, a critique made by linguist Emily Bender. Others, like Sacasas, point to how AI replacing humans represents the culmination of modernity's drive to eliminate inefficiency from life. "By the logic of the market and of techno-capitalism, if you like, the inefficiencies of the human being were always ultimately meant to be disposed of," he said. "AI, in a sense, just kind of furthers that logic … and brings it to its logical conclusion, which is, you're just getting rid of people."
These kinds of critiques ring true to me – yet they also have a way of fixating on the ethical and spiritual uniqueness of human beings, to the exclusion of the other sentient, intelligent creatures with whom we've always shared the planet. "One of the anxieties generated by AI is built upon how we have sought to distinguish the human, or to elevate the human, or to find the unique thing about the human," Sacasas points out. Humans are, in important ways, obviously unique among animals. But the critical discourse about AI has shown little interest in thinking beyond ourselves, or reckoning with what implications this moment has for our undervaluing of animals.
One of the best-known critiques of large language models, or LLMs, for example, compares AI's lack of language understanding to that of an animal: the concept of the "stochastic parrot," which refers to how chatbots, not having minds, spit out language based on probabilistic models with no regard for meaning. "You are not a parrot," proclaimed the headline of a widely read March profile of Emily Bender in New York magazine.
I'm sure Bender has nothing against parrots – exceptionally smart animals that are thought to reproduce sounds with astonishing fidelity as part of their communication with one another and with humans. But parrots aren't machines, and imagining them as such only reinforces the human/animal dualism that gave us the disembodied view of our own minds. It's as if we have no language for affirming our worth as humans without repudiating animality.
The ascendance of AI should be a pivotal moment from which to start to come to grips with our relationship to other sentient, biological life. If AI were ever in a position to make judgments about us, we should hope that it's far more charitable than we have been, that it doesn't nitpick, mock, or nullify our capacities and needs as we've done to other animals. If we wouldn't want to be tyrannized by a more powerful intelligence, we have no credible defense for continuing to do the same.
We don't know if sentient AI is possible, but if it is, we shouldn't build it
None of this necessarily tells us whether the machines themselves could ever become sentient, or how we should proceed if they can. I used to find the idea of sentient AI risible, but now I'm not so sure. The scientific method has not figured out how to explain consciousness, as O'Gieblyn points out. Modern science, she writes, "was predicated in the first place on the exclusion of the mind."
If we don't know where consciousness comes from, we may want to be careful about assuming it can only arise from biological life, especially given our poor track record of appreciating it in animals. "Evolution was just selecting repeatedly on ability to have babies, and here we are. We have goals," as Vox's Kelsey Piper said on The Ezra Klein Show in March. "Why does that process get you things that have goals? I don't know."
We have no reason to believe any current AIs are sentient, but we also have no way of knowing whether or how that could change. "We're kind of at the point where we can make fire but do not even have the rudiments of what we'd need to understand it," my friend Luke Gessler, a computational linguist, told me.
If sentience in AI could ever emerge (a big if), I'm doubtful we'd be willing to recognize it, for the same reason that we've denied its existence in animals. Humans are very good at dismissing or lying about the interests of beings that we want to exploit (including not just animals but also, of course, enslaved humans, women, and any other class of people who have been excluded from moral consideration). Creating sentient AI would be unethical because we'd be bringing it into the world as chattel. Consigning sentient beings to property status, as we know from the experience of non-human animals, is inherently unjust because their welfare will always be subordinated to economic efficiency and the desires of their owners. "We will inevitably inflict suffering on them," science fiction author Ted Chiang said of building sentient AI in 2021. "That seems to me clearly a bad idea."
In a May essay, Columbia philosopher Dhananjay Jagannathan offered a different perspective on the question of AI minds. Drawing from Aristotle, he suggests that the nature of thought isn't something that can be scientifically deduced or implanted into a computer, because it's an irreducible part of our lives as biological animals. "Thinking is life," as the Aristotelian idea has it. A raccoon who pats things to learn about her environment, for example, or a baby bird who pecks around at objects to do the same, or a human whose sense of smell vividly triggers a distant memory are all having experiences of thinking that are inextricable from the biological organs through which they're engaging with the world.
One upshot of this, Jagannathan writes, is that the transhumanist dream of digitally uploading our consciousness and splitting from our bodies, far from being any sort of liberation, amounts to "self-annihilation." The idea of thinking as inseparable from animality can be hard for modern people to comprehend because, as O'Gieblyn writes, our concept of the mind pulls so heavily from computational metaphors. Because we imagine our cognition as a computer, we start to imagine, erroneously, that computers can think.
AI evokes our anxieties about the fragility and mistreatment of animality
Jagannathan's view, that we can understand thought through our kinship with non-human animals, helps clarify what is disconcerting about the dualist, computational view of experience, taken to its logical endpoint by AI and transhumanist philosophy. The assumption that we can apprehend, measure, and perfect subjective experience, rendering life as though it were bits of information encoded on a computer, can lead to conclusions that are obviously repugnant. It has made the annihilation of biological life, both human and non-human, imaginable.
Prominent philosopher Will MacAskill, for example, proposed in his 2022 book What We Owe the Future that declining populations of wild animals (we are, if you haven't heard, in the middle of a mass extinction) may actually be desirable. Their lives might be "worse than nothing on average, which I think is plausible (though uncertain)," he writes, because they may consist more of suffering, from things like predation and disease, than of pleasure. Perhaps, then, they'd be better off if they'd never been born – an argument that springs from the same well as the transhumanist impulse to remove suffering from life and colonize the universe with beings merged with machines.
The idea of wild animal eradication represents one of the more extreme manifestations of the drive to denude life of physical content. In a similar vein, transhumanist philosopher David Pearce, who sits on the board of the organization Herbivorize Predators (it aims to do what the name implies), hopes to technologically "eliminate all forms of unpleasant experience from human and non-human life, replacing suffering with 'information-sensitive gradients of bliss.'"
In the actual world, where wild animals are often exterminated wholesale when their presence is inconvenient for us, the notion that it could actually be morally righteous to get rid of them might provide a justification for the ecocide that humans are engaged in anyway. Who's to say that an AI won't one day say the same thing about us, deciding that it's best to put us out of our misery based on its cold calculation of our pains and pleasures? That would be consistent with the transhumanist ethos of transcending the hardship of physical existence.
Yet this dim estimation of our biological selves, as well as those of animals, forecloses the possibility of valuing or interpreting life in other ways. We can hardly access an animal's interiority, much less say whether they believe their lives are worth living. If a utilitarian bean counter told me that the rest of my life would be 70 percent suffering, I wouldn't choose to die, even if I truly believed them; I would want to live out my life.
A very different, more integrated interpretation of animal life, one that I return to again and again, can be found in the work of the poet Alan Shapiro. His 2002 poem "Joy" gives expression to the strange entanglement of joy, fear, and tragedy that defines our lives and, he imagines, perhaps those of wild animals as well. "Joy," he writes, is the thing that is "Savagely beautiful," likening it to antelope evading a lion.
This vision doesn't, to me, suggest that the suffering of wild animals doesn't matter, but rather that the vulnerable, mysterious fullness of their lives is worth living. AI evokes our anxieties about the fragility and mistreatment of animality – our own, as well as that of non-human animals. It reminds us of our own vulnerability, the parts of us that are unfathomable or expendable in mechanistic terms. In a world where the ability to manipulate language is no longer a uniquely human capacity, the rationalizing impulse might ask us to co-sign our own obsolescence. We might, instead, decide that our creaturely selves are worth holding on to, and, in doing so, invite our fellow animals into our moral circle.