Duck of Vaucanson
Published as ‘After Humanity’ in Right Now, Technology and Human Rights Issue
Little has been keen to show me his wireless neuroheadset for a while. It’s his latest gadget, and this is one of many nights we will stay up late talking about new technologies.
It is perched on my head: two of its plastic tentacles press against the cartilage behind my ear lobes, another six curl around each hemisphere of my skull. On the screen is a schematic of a brain with 16 dots at various nodes; some are black and grey, others traffic light green and red. Little adjusts the headset to reposition the contact buds against my scalp, and the dots change colour. More parts of the brain light up green now, others remain a dead grey.
The headset detects brain signals and passes them on to the computer via Bluetooth, to be interpreted by software presumably designed by a gaming company or some shadowy paramilitary organisation. On screen, the cursor follows my gaze.
Little leans over my shoulder with a grin.
“The contact’s not perfect, but I reckon you’re good to go.”
“Do you think my hair’s in the way?”
He laughs, “Scott, why do you think I shaved my head?”
Little opens the game that came with the headset, called Spirit Mountain. It’s got polygon graphics, New Age music, and dubious Orientalist overtones.
The game’s spirit guide tells me to move boulders, pull down reeds, raise a temple and gather the mountain’s spirits in an urn with my mind by focusing on pushing, pulling, lifting, and trying to think about nothing – but not before I have to growl at some mischievous spirits to scare them away. Little loses his shit laughing. I struggle badly. Soon my neck and brain are too sore to continue, and we leave the headset alone to go outside, have another drink, and talk cyborg politics.
Little is tech-crazy: he assembles his own computers and works in IT, and though he used to study psychology, he quit on the cusp of completion and took up computer science. Computers, he tells me, are easier to fix than people. Though in Little’s mind, the two tasks aren’t so far removed.
He is a proponent of transhumanism, a movement united by the belief that humans can, and should, transcend biology through technology. As humanity merges with ever more advanced machines, they say, we will evolve into a new species that blends human and technological traits – the posthuman. In Little’s view, this is just swell, and he wants to become a cyborg as soon as possible.
Transhumanists believe that the coming posthuman species will be smarter, live longer, and overcome many of our present, all-too-human frailties. Their dreams of silicon ascent lead critics to suggest that transhumanists hold the human body in contempt. Indeed, many transhumanists disparage the body and its failings, and Little is no exception, often joking that he deems his body a poorly functioning appendage to his brain.
Little’s desire to transcend natural limitations can be partly explained by his fascination with science and computers. Surely, though, nobody turns themselves into a machine just because the idea is sort of interesting and cool. True, our relationship with technology is more intimate than it has ever been, and it may be that technologies for enhancement will soon exist. But who would use them? What are these cyborg fantasies if not products of corporate brainstorms in Silicon Valley, of overstimulated minds warped by too much screen time, caffeine, and an excessive love of gimmicks and novelty?
Yet the promise of overcoming weakness holds more than mere novelty value for my friend.
In 1995, when Little was five years old, he ate some mettwurst from the supermarket. Days afterwards, he became ill and was soon admitted to hospital as, one by one, his internal organs shut down. The mettwurst was tainted with an antibiotic-resistant strain of E. coli bacteria. Little’s kidneys, pancreas, and digestive system stopped working, and he was put on a drip for nutrition. His brain, heart and lungs kicked along, though they did not escape unaffected, and he suffered a number of seizures. After four weeks of organ failure and constant dialysis, Little began recovering and was released from hospital two weeks later.
Life returned more or less to normal until he hit puberty. During the summer of 2002, just before he entered Year Eight, Little noticed strange things happening to him. He was drinking more water, becoming sweaty and clammy; he needed to piss more often, lacked energy, felt unusually tired. He presumed it was due to the heat wave, but after he lost ten kilograms, he and his mum realised something was wrong. He went to the doctor and took a blood test; the doctor came to his house that night and told him he had to go to the emergency room immediately.
The test had revealed that Little’s blood sugar level had risen to 82.1. Given that people can fall into comas with levels between 20 and 30, doctors were astonished he was alive, let alone conscious. They put him on an insulin drip straight away. Flippantly, Little tells me, “They thought I was the golden child or something.”
Though fortunate to be alive, Little was by no means lucky. He was admitted to hospital again, and told that the additional stress on his body caused by puberty had finished off his already-weakened pancreas; though the organ still had partial function, it could not produce sufficient insulin to keep him alive.
Little was one of many who suffered food poisoning from the tainted batch of Garibaldi mettwurst. A four-year-old girl died, and at least 23 others suffer severe health problems to this day. Little is one of the 23 victims who joined a class action against Garibaldi Smallgoods, which spanned 16 years and eventually won a settlement. The class action confirmed that Little’s condition was caused by his food poisoning at age five, making him the first person in Australia legally determined to have developed diabetes as a result of criminal negligence.
Little was forced to quickly overcome his phobia of needles. Like many people who suffer from diabetes, he has to manually compensate for the organ’s malfunction; to stay alive, he must constantly monitor his blood sugar levels, keeping them stable with carefully timed meals and injections of insulin. Little talks about his experience openly and without a shred of self-pity. “People kind of struggle to understand it,” he says. “If they ask me what it’s like, I tell them to imagine their lungs didn’t work automatically, and they had to always remember to breathe. That might overstate things, but it gets the idea across.”
It is not so surprising, then, that Little should be unimpressed with the natural body and its failings, or that he should look to posthuman technologies for hope. For Little, they offer the possibility of a cure. The most significant of these technologies can be loosely divided into four areas – nanotech, biotech, information technology (IT), and neuroscience.
Nanotech is the manipulation of matter on an atomic and molecular level, and gives rise to nanorobotics. Basic nanomachines have already been created, and this technology is advancing fast as researchers, corporations and medical professionals rush to get involved.
Biotech encompasses innovations that could modify us on a biological level, like cloning, genetic engineering, life extension, and organs grown from stem cells. Biotech includes technologies already in common use, like pharmaceuticals and genetically modified foods.
Like biotech, IT has been around for a while, and includes anything used to store, transmit or manipulate data. Computers and the internet are the poster children of IT, but it arguably includes much older tools like printing, writing, and language itself. IT already plays a part in insulin delivery implants, controlled via Wi-Fi; and as time goes on, it may help in the creation of fully functioning cybernetic organs. Further, IT may lead to the creation of artificial intelligence that rivals or surpasses humanity’s. This may seem hard to believe, as at present, computers, however powerful, have a limited range of thought processes, and are distinctly lacking in areas like emotion and creativity.
This is where neuroscience comes in. For the time being, it allows for such IT gadgets as the neuroheadset. Soon, however, neuroscience and related disciplines may allow humans to map the brain well enough for it to be used as a model for computer designs. As it happens, scientists are already creating simulations of human brains using networks of computers, akin to the networks of neurones that make up our grey matter. These networks have demonstrated the ability to learn independently. And, like humans, when one was given the chance to access the internet, it spent most of its time learning about cats. It formed its own image of a cat based on what it had seen, demonstrating a nascent form of imagination. Another network was used to simulate schizophrenia to test a psychological theory of how the illness works; the AI became confused about its identity, began referring to itself in the third person, and (falsely) claimed credit for a terrorist attack.
As they advance, these technologies tend to converge; scientists are already experimenting with biological computers, and DNA could become just another medium for information technology. And nanotech may one day be used to inject tiny machines into the bloodstream which help prevent ageing, fight cancer or monitor insulin levels. In coming years, these technologies will keep feeding into one another, mutually accelerating their progress.
It is tempting to dismiss such developments as sensationalist exaggeration; they sound too strange, too unsettling, too much like science fiction. Yet there is danger and naivety in unconditional scepticism. Many works of fiction are created not as pure escapism, but as thought experiments, warnings of the future, warped mirrors of the present, echo chambers of our very real anxieties. We risk missing the fact that we are already living in what, several years ago, was considered the stuff of fantasy.
Battered with novelty and commercials, strangeness and fiction, we become numb to the realities unfolding. A sense of detachment can creep over us as we hear of new treatments for Parkinson’s disease, involving electrodes placed in the brain; or of claims of cyborg hate crime, as when a man with augmented reality glasses surgically attached to his head was allegedly assaulted by the staff of a Parisian McDonald’s. We know that drone warfare is real, yet those of us lucky enough never to have been bombed by flying, remote-controlled robots struggle to believe in something so perfectly absurd, let alone debate its merits, stage a protest, or seek means of resistance.
Some theorists worry that these new technologies may dehumanise us, endangering the foundations on which we build meaning. After all, if the human individual is universal, and the basic building block of values, then rapidly changing it would lead to chaos: if you’re uploaded to a computer, are you still part of the moral community? Is a genetically enhanced person still human? Are other humans still equal with them? Confronted with these problems, maybe we would prefer to simply reject such technology and go back to being plain old humans. The trouble is, it has never been that simple.
Since our ancestors started using tools and fire hundreds of thousands of years ago, technology has been part of us and has defined us. Tools like language have shaped us throughout history, and it’s difficult to make any clean distinction between technologies and the humans who use them – the lines have always been blurred.
Still, there is a difference between technologies that exist outside of us and ones that are inside us, plugged into our brains and reshaping our bodies. But this difference may not be as definitive as it seems. Belief in it rests, in part, on the idea that we are separate from the world around us. If this idea is false to begin with, if we and all we experience are just pieces of the surrounding world, then we are already creations of technology and social relations. Posthuman technologies would not be a rupture in our identities, but the next phase of perpetual change.
While such ideas may be common among scientists, sociologists and stoned teenagers, people living in late capitalist economies rarely carry their implications into daily life. In Western culture especially, we think of ourselves as individuals who inhabit bodies and environments, rather than bodies which are part of these environments. Many of us implicitly believe, on some level, in a soul – even if we are secular, and don’t call it that or think it exists. Coupled with the faith that such human essence is universal, this belief forms, in many people’s minds, the foundation of human rights. In this sense, posthuman technologies are unsettling not because of what they might do to a previously universal and shared humanity, but because of the questions they raise about its existence.
These questions aren’t altogether new: they just become harder to avoid. After all, what often passes for human nature is far from universal. In a society like ours, denying universality awakens the fear of difference, never truly overcome in liberal cultures, but lulled to sleep by the poets of assimilation. Worse still, we risk revealing the structures of power lurking just below the surface of formal equality.
Better they come to light. Although its disappearance will create new challenges, belief in a homogeneous humanity should not be mourned; such a denial of complexity should not be necessary for us to treat other feeling, thinking beings with kindness and respect.
For those attached to traditional ideas about personhood, such assurances aren’t entirely comforting. And the truth is, even for those who don’t believe in a sacred, unchanging humanity, there are legitimate concerns about posthuman technologies. The turmoil of the past century and the use of nuclear weapons on Hiroshima and Nagasaki have made it pretty clear that new technologies can be abused; arms races, our struggle with fossil fuels and the reshaping of our minds by the internet highlight that technology can take on a life of its own.
Of course, the question isn’t merely whether humans retain control of technology; it is also – which humans? Even if business, government and paramilitary groups prove less dangerous and opportunistic than we’ve learned to expect, a global market economy ensures posthuman tech will be unevenly distributed. The rich will have disproportionate access to life-extension, brain enhancements and genetic engineering, leading to biological entrenchment of financial inequality. If this sounds far-fetched, it’s worth considering that even the medical treatment available to the rich at present is a form of radical life extension, giving them more time to accumulate vast power and wealth.
Many transhumanists aren’t exactly at pains to calm their critics’ fears. Having assumed a name fit for a super villain, the prominent transhumanist Max More also speaks like one: “No more gods, no more faith, no more timid holding back,” he declares. “Let us blast out of our old forms, our ignorance, our weakness, and our mortality. The future belongs to posthumanity.”
For all this, it is difficult to imagine my friend Little as a megalomaniacal overlord, cyborg or otherwise. Laid-back and blessed with incisive, self-deprecating wit, he seems closer to a cartoon sloth-man than a scheming HAL 9000. And really, can any of this paranoia justify denying him, or those in similar situations, the best medical treatment available? It seems heartless to dash someone’s hope of a cure, even if it involves technologies that challenge our understandings of what it means to be human.
“Perhaps”, one might say, “we should just leave it to individual choice, allowing those who don’t want to change to take a conscientious stand and refuse to be enhanced.” And in doing so they would consign themselves to servitude or the dustbin of obsolescence, as others without such scruples race ahead. In a competitive market society, leaving it to individual choice leaves no choice at all. Though we should not simply reject these technologies, leaving it to the market to decide would be madness.
In the present global order, it will be almost impossible to prevent these technologies from spreading. This leaves us with two options: change the world economy altogether, or settle for trying to regulate the technologies and provide equal access. When it comes to public healthcare, we could draw the line at artificial organs and other treatments that restore people to normal functioning, rather than enhancing them. Though as the public sector grows smaller, the second option looks almost as improbable as the first. At any rate, our concept of normal functioning is relative, shaped by the abilities and expectations of those around us. So why stop there? And how could we? These questions demand urgent answers, for we are poised on a precipice, if indeed we are not already falling. Any adequate response will require an excavation and reassessment of the values, rights and responsibilities underpinning our cultures and economies, and a new way of making decisions as a collective. Because the question of how we distribute posthuman technologies returns us to older questions: what it means to be human, and how we are to distribute wealth and capital – only the stakes have been raised, and the answers we choose may be irrevocable.
Back at Little’s house, the night has grown old, and we’re no closer to agreeing on solutions. Before I go curl up on the couch, our conversation turns back to Little’s hopes. A while ago, after he won the court case, it looked like there was a stem cell cure available in Germany. The news proved premature: further scrutiny revealed that for the time being, the cure was more dangerous than the disease. With a grin and a shade of irony in his voice, Little tells me, “All I want is for my functions to be automated again. I mean, is that so much to ask?”