3 Body Problem’s most mind-bending question isn’t about aliens
Based on Cixin Liu’s sci-fi novel, the show asks: Would human extinction be such a terrible thing? Ask Jin, Auggie, or Will.
Stars that wink at you. Protons with 11 dimensions. Computers made of rows of human soldiers. Aliens that give virtual reality a whole new meaning.
All of these visual pyrotechnics are very cool. But none of them are at the core of what makes 3 Body Problem, the new Netflix hit based on Cixin Liu's sci-fi novel of the same name, so compelling. The real beating heart of the show is a philosophical question: Would you swear a loyalty oath to humanity, or cheer on its extinction?
There's more division over this question than you might think. The show, which is about a face-off between humans and aliens, captures two opposing intellectual trends that have been swirling around in the zeitgeist in recent years.
One goes like this: "Humans may be the only intelligent life in the universe; we are incredibly precious. We must protect our species from existential threats at all costs!"
The other goes like this: "Humans are destroying the planet, causing climate change and making species go extinct. The world will be better off if we go extinct!"
The first, pro-human perspective is more familiar. It's natural to want your own species to survive. And there's lots in the media these days about perceived existential threats, from climate change to rogue AI that one day could wipe out humanity.
But anti-humanism has been gaining steam, too, especially among a vocal minority of environmental activists who seem to welcome the end of destructive Homo sapiens. There's even a Voluntary Human Extinction Movement, which advocates for us to stop having kids so that humanity will fade out and nature will triumph.
And then there's transhumanism, the Frankensteinish love child of pro-humanism and anti-humanism. This is the idea that we should use tech to evolve our species into Homo sapiens 2.0. Transhumanists, who run the gamut from Silicon Valley tech bros to academic philosophers, do want to keep some version of humanity going, but definitely not the current hardware. They imagine us with chips in our brains, or with AI telling us how to make moral decisions more objectively, or with digitally uploaded minds that live forever in the cloud.
Analyzing these trends in his book Revolt Against Humanity, the literary critic Adam Kirsch writes, "The anti-humanist future and the transhumanist future are opposites in most ways, except the most fundamental: They are worlds from which we have disappeared, and rightfully so."
If you've watched 3 Body Problem, this is probably already ringing some bells for you. The Netflix hit actually tackles the question of human extinction with admirable nuance, so let's get into the nuance a bit, with some mild spoilers ahead.
What does 3 Body Problem have to say about human extinction?
It would give too much away to say who in the show ends up repping anti-humanism. So suffice it to say that there's an anti-humanist group in play: people who are actually trying to help the aliens invade Earth.
It's not a monolithic group, though. One faction, led by a hardcore environmentalist named Mike Evans, believes that humans are too selfish to solve problems like biodiversity loss or climate change, so we basically deserve to be destroyed. Another, milder perspective says that humans are indeed selfish but may be redeemable; the hope is that the aliens are wiser beings who will save us from ourselves. They refer to the extraterrestrials as, literally, "Our Lord."
Meanwhile, one of the main characters, a brilliant physicist named Jin, is a walking embodiment of the pro-human position. When it becomes clear that aliens are planning to take over Earth, she develops a bold reconnaissance mission that involves sending her brainy friend, Will, into space to spy on the extraterrestrials.
Jin is willing to do whatever it takes to save humanity from the aliens, even though they're traveling from a distant planet and their spaceships won't reach Earth for another 400 years. She's willing to sacrifice Will (who, by the way, is madly in love with her) for later generations of humans who don't even exist yet.
Jin's best friend is Auggie, a nanotechnology pioneer. When she's asked to join the fight against the aliens, Auggie hesitates, because it would require killing hundreds of humans who are trying to help the aliens invade. Yet she eventually gives in to Jin's appeals, and lots of people predictably wind up dead, thanks to a lethal weapon created from her nanotechnology.
As Auggie walks around surveying the carnage from the attack, she sees a child's severed foot. It's a classic "do the ends justify the means?" moment. For Auggie, the answer is no. She abandons the mission and starts using her nanotech to help people: not hypothetical people 400 years in the future, but disadvantaged people living in the here and now.
So, like Jin, Auggie is also a perfect emblem of the pro-human position, and yet she lives out that position in a totally different way. She is not content to sacrifice people today for the mere chance at helping people tomorrow.
But the most interesting character is Will, a humble science teacher who is given the chance to go into space and do humanity a major solid by gathering intel on the aliens. When the man in charge of the mission vets Will for the gig, he asks Will to sign a loyalty oath to humanity: to swear that he'll never renege and side with the aliens.
Will refuses. "They might end up being better than us," he says. "Why would I swear loyalty to us if they could end up being better?"
It's a radical open-mindedness to the possibility that we humans might really suck, and that maybe we don't deserve to be the protagonists of the universe's story. If another species is better, kinder, more moral, should our allegiance be to furthering those values, or to the species we happen to be part of?
The pro-humanist vision
As we've seen, there are different ways to live out pro-humanism. In philosophy circles, there are names for these different approaches. While Auggie is a "neartermist," focused on solving problems that affect people today, Jin is a classic "longtermist."
At its core, longtermism is the idea that we should care more about positively influencing the long-term future of humanity â hundreds, thousands, or even millions of years from now. The idea emerged out of effective altruism (EA), a broader social movement dedicated to wielding reason and evidence to do the most good possible for the most people.
Longtermists often talk about existential risks. They care a lot about making sure, for example, that runaway AI doesn't render Homo sapiens extinct. For the most part, Western society doesn't assign much value to future generations, something we see in our struggles to deal with long-term threats like climate change. But longtermists assign future people as much moral value as present people, and since there are going to be way more people alive in the future than there are now, they're especially focused on staving off risks that could erase the chance for those future people to exist.
The poster boy for longtermism, Oxford philosopher and founding EA figure Will MacAskill, published a book on the worldview called What We Owe the Future. To him, avoiding extinction is almost a sacrosanct duty. He writes:
With great rarity comes great responsibility. For thirteen billion years, the known universe was devoid of consciousness ... Now and in the coming centuries, we face threats that could kill us all. And if we mess this up, we mess it up forever. The universe's self-understanding might be permanently lost ... the brief and slender flame of consciousness that flickered for a while would be extinguished forever.
There are a few eyebrow-raising anthropocentric ideas here. How confident are we that the universe was or would be barren of highly intelligent life without humanity? "Highly intelligent" by whose lights? Humanity's? And are we so sure that the universe would be meaningless without human minds to experience it?
But this way of thinking is popular among tech billionaires like Elon Musk, who talks about the need to colonize Mars as "life insurance" for the human species because we have "a duty to maintain the light of consciousness" rather than going extinct.
Musk describes MacAskill's book as "a close match for my philosophy."
The transhumanist vision
A close match, but not a perfect one.
Musk has a lot in common with the pro-human camp, including his view that we should make lots of babies in order to stave off civilizational collapse. But he's arguably a bit closer to that strange combo of pro-humanism and anti-humanism that we know as "transhumanism."
Hence Musk's company Neuralink, which recently implanted a brain chip in its first human subject. The ultimate goal, in Musk's own words, is "to achieve a symbiosis with artificial intelligence." He wants to develop a technology that helps humans "merg[e] with AI" so that we won't be "left behind" as AI becomes more sophisticated.
In 3 Body Problem, the closest parallel for this approach is the anti-humanist faction that wants to help the aliens, not out of a belief that humans are so terrible they should be totally destroyed, but out of a hope that humans just might be redeemable with an infusion of the right knowledge or technology.
On the show, that technology comes via aliens; in our world, it's perceived to be coming via AI. But regardless of the specifics, this is an approach that says: Let the overlords come. Don't try to beat 'em; join 'em.
It should come as no surprise that the anti-humanists in 3 Body Problem refer to the aliens as "Our Lord." That makes total sense, given that they're viewing the aliens as a supremely powerful force that exists outside themselves and can propel them to a higher form of consciousness. If that's not God, what is?
In fact, transhumanist thinking has a very long religious pedigree. In the early 1900s, French Jesuit priest and paleontologist Pierre Teilhard de Chardin argued that we could use tech to nudge along human evolution and thereby bring about the kingdom of God; melding humans and machines would lead to "a state of super-consciousness" where we become a new enlightened species.
Teilhard influenced his pal Julian Huxley, the evolutionary biologist (and brother of Brave New World author Aldous Huxley) who popularized the term "transhumanism." Huxley in turn influenced the futurist Ray Kurzweil, who shaped the thinking of Musk and many other Silicon Valley tech heavyweights.
Some people today have even formed explicitly religious movements around worshiping AI or using AI to move humanity toward godliness, from Martine Rothblatt's Terasem movement to the Mormon Transhumanist Association to Anthony Levandowski's short-lived Way of the Future church. "Our Lord," indeed.
The anti-humanist vision
Hardcore anti-humanists go much further than the transhumanists. In their view, there's no reason to keep humanity alive.
The philosopher Eric Dietrich, for example, argues that we should build "the better robots of our nature," machines that can outperform us morally, and then hand over the world to what he calls "Homo sapiens 2.0." Here is his modest proposal:
Let's build a race of robots that implement only what is beautiful about humanity, that do not feel any evolutionary tug to commit certain evils, and then let us, the humans, exit stage left, leaving behind a planet populated with robots that, while not perfect angels, will nevertheless be a vast improvement over us.
Another philosopher, David Benatar, argued in his 2006 book Better Never to Have Been that the universe would not be any less meaningful or valuable if humanity were to vanish. "The concern that humans will not exist at some future time is either a symptom of human arrogance ... or is some misplaced sentimentalism," he wrote.
Whether or not you think we're the only intelligent life in the universe is key here. If there are lots of civilizations out there, the stakes of humanity going extinct are much lower from a cosmic perspective.
In 3 Body Problem, the characters know for a fact that there's other intelligent life out there. This makes it harder for the pro-humanists to justify their position: on what grounds, other than basic survival instinct, can they really argue that it's important for humanity to continue existing?
Will might be the character with the most compelling response to this central question. When he refuses to sign the loyalty oath to humanity, he shows that he is neither dogmatically pro-humanist nor dogmatically anti-humanist. His loyalty is to certain values, like kindness.
In the absence of certainty about who enacts those values best (humans or aliens), he remains species-agnostic.