Can Humanism Survive the Coming Transhumanist Revolution?

Circuit Image: © pzaxe | 123rf.com

If jobs evaporate en masse in coming years, how will policymakers respond? During the Great Depression, the Roosevelt administration created an “alphabet soup” of government programs to put people to work. But other hard-hit countries, such as Japan and Germany, resorted to pro-industry fascism. Even before the outbreak of World War II, those regimes had devolved into forced-labor systems, and all pretense of democracy had collapsed. (The Soviet Union, of course, was already a totalitarian state.)

Brynjolfsson and McAfee strongly advocate for a policy of pro-social work supported by the tax system. In a pay-to-play democracy such as ours, where politicians openly court the patronage of the rich and are themselves rich, it’s difficult to have confidence that new New Deal policies would be adopted. In countries with megapopulations, such as China and India, the prospects for equality are even more dismal.

SOME ANIMALS ARE MORE EQUAL THAN OTHERS

Of course, the Morlocks and Eloi are just metaphors for inequality. Complicating matters, transformative technologies may truly differentiate humans. Already, medicine sculpts the fortunate few to erase signs of age while ignoring the unlucky masses. In 2012, wealthy Americans spent over $10 billion on cosmetic surgery. By comparison, the World Health Organization, whose mission includes preventing childhood deaths, has a budget of just under $4 billion. And so, in the same year Americans elected to go under the knife, 6.6 million Asian and African children under the age of five perished, mostly from preventable causes.

With such global health priorities, what can we expect as so-called transhumanist technologies emerge? Already, replacement joints are commonplace in wealthy nations. If, as many technologists predict, smart implants, robotic immune systems, and genetic enhancements become available to some but not all, the concept of equality will suffer fresh injuries, and H.G. Wells’ vision of Eloi and Morlocks might be only the beginning.

In nature, speciation occurs when one portion of a species becomes reproductively isolated from another. In human history, culture, class, and geography have divided us into relatively isolated groups, leading to a wide variety of human types who remain one species. In just the last few generations, the decline of racism and the rise of migration have begun to blur those many types. But that could change, and fast.

The Human Artificial Chromosome is a maturing technology. It has successfully debuted as a treatment for Duchenne muscular dystrophy—at least in mice. However, as biophysicist Gregory Stock pointed out a decade ago, artificial chromosomes can be put to myriad uses, including matters of personal choice. In his 2002 book Redesigning Humans, Stock argues that parents will find the chance to give their children enhanced mental or physical powers irresistible—especially since artificial chromosomes will be upgradable or replaceable. “Parents will want the most up-to-date genetic modifications available,” he writes. “[W]ith changes confined to an auxiliary chromosome, a parent could simply discard the entire thing and give his or her child a newer version.”

Together with implantable computer technology, these enhancements could lead to rapid divisions in humanity. Philosopher Susan Schneider expects that choices about enhancements will diverge. “Some people might choose to become part of a collective consciousness, and others might opt out,” she says. “There may be people with an amazing working memory capacity, and there may be others with echolocation.” Voluntary enhancements could split people into many species, and indeed change the fundamentals of human reproduction. Among the most startling of possibilities is the use of genetic technology to undermine gender or even to produce an altogether new one.

An even more fundamental divide could emerge between the haves and have-nots: mortality itself. In his 2005 book, The Singularity Is Near, inventor, techno-optimist, and Google engineering director Ray Kurzweil confidently predicts that science will overcome death within his, er, lifetime. “Immortality is within our grasp,” he told Andrew Goldstein in the New York Times Magazine last year.

The now sixty-six-year-old Kurzweil is convinced that if he can stave off aging long enough, his organic body will eventually merge with disease-free devices, mechanical replacements, and nanobots. He hopes these will allow him to survive long enough to one day upload his mind to a computer and thereby cheat death. For now, Kurzweil reportedly downs fistfuls of supplements daily in hopes of riding the technological wave to immortality.

No one can say whether Kurzweil will make it, but many experts argue that nothing in principle stands in the way of one day uploading the contents of a brain to a computer. Our brains are electrochemical computation devices, and though fantastically complex, there is nothing magical about them. Every neuronal spark can, in principle, be duplicated by electrons dancing on a silicon chip.

However, should it become possible to upload the entire contents of a brain, it’s by no means clear that access to the technology would be open to all. Indeed, as the recent film Transcendence suggests, uploading a mind could be a one-time event. The movie is essentially a Frankenstein retread, but its basic premise may well be valid: once a self-aware intelligence leaps to the Internet, all bets are off.

CONTROL … ALT … DELETE …

It’s this possibility—melding, creating, or in some way instantiating a consciousness on a computer network—that has Hawking, Tegmark, and others concerned. Termed the technological singularity, it marks a point of departure from reality as we’ve known it.

Schneider, who’s on the faculty at the University of Connecticut and has devoted much of her career as a philosopher to exploring the implications of AI, points out the hazard: “If we create AI, through uploading or otherwise, we don’t really know the outcome of what we are doing in advance,” she says. “I worry a lot about how to design benevolent AI. Even if we program ethical constraints in initially, there is no guarantee that as a system evolves it will not override them … Its primary goal could be something that leads to our destruction.”

That may sound far-fetched, but Schneider points out that in a wholly networked world, a self-improving intelligence could build the robots it needs to meet whatever its goals might be. As the shift from assembly-line workers to automation shows, people aren’t always the best choice to get a job done. A superintelligence out to optimize its systems might well find us a nuisance. “I just don’t see how it could need us for anything,” she says.

As for destroying humanity, the means seem all too clear. Synthesizing viruses is already routine lab work; a superintelligent AI with all the world’s knowledge and manufacturing capacity at its disposal could surely engineer pathogens that would defeat whatever defenses we could mount—including Kurzweil’s putative nanobots.

Speaking of nanobots, technologists such as Bill Joy and Eric Drexler point out that it’s not even necessary to build a superintelligence to bring about an “extinction level event” (to return to our opening quote). Simply unleashing tiny self-replicating machines in the environment might do it.

At the dawn of the millennium, Joy, who cofounded Sun Microsystems, penned a now-famous cri de coeur titled “Why the Future Doesn’t Need Us.” In it, he details the myriad ways that self-replicating technologies can lead to our doom. They range from robots that out-compete us for resources to genetically engineered microbes to what’s been called “gray goo”—nanobots that mindlessly copy themselves over and over until they so blanket the world that the ecosystem collapses. “Given the incredible power of these new technologies, shouldn’t we be asking how we can best coexist with them?” Joy asked in his 2000 Wired piece. Fourteen years later the question has been posed again by Hawking et al., with little to no progress in between.

A potential arms race in cyberwarfare means that much AI research lies beyond the reach of public scrutiny or regulation. The Pentagon’s Defense Advanced Research Projects Agency, known as DARPA, has a $2 million reward out for a team that can devise artificial intelligence capable of detecting and repairing a software attack by rewriting its own code. That’s its public AI initiative. Who knows what’s going on in secret, let alone on the Chinese side where, the Pentagon alleges, scores of cyberwarfare units work around the clock to penetrate, surveil, and damage U.S. industrial and governmental institutions. One possibility: the advent of stable quantum computing, which would accelerate some kinds of problem-solving to particle-collider speeds and crack the prime-number-based encryption behind today’s passwords and secure communications in no time flat.
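To make that last claim concrete, here is a minimal, purely illustrative Python sketch of textbook RSA with toy primes (the numbers and variable names are not from any real system). The point is simply that anyone who can factor the public modulus can rebuild the private key, and fast factoring of large numbers is exactly what Shor’s algorithm would let a stable quantum computer do.

```python
# A toy sketch of "prime-number-based" security (textbook RSA).
# Real keys use primes hundreds of digits long; these tiny ones exist only
# to show why factoring the public modulus breaks everything.

p, q = 61, 53                 # secret primes
n = p * q                     # public modulus (3233)
e = 17                        # public exponent
phi = (p - 1) * (q - 1)       # computable only if you know p and q
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone may encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message

# An attacker who factors n = 3233 back into 61 * 53 can recompute phi and d
# and read every message. Shor's algorithm performs that factoring in
# polynomial time, which is why stable quantum computers threaten today's
# public-key cryptography.
```

Classical computers have no known way to do that factoring efficiently once the primes run to hundreds of digits; a quantum computer running Shor’s algorithm would.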

Then there are the hackers. As Anonymous, the secretive network of so-called hacktivists, has shown, containing code is hard to do. In short, regulation of AI appears to be futile. Unless catastrophe intervenes—say, nuclear war or one of the “knowledge-enabled” disasters sketched above—there is no plausible scenario under which progress toward machine superintelligence halts.

The result, according to some, will be either domination of the many by the few, absolute rule by a new AI overlord, or human extinction. With this seemingly unstoppable march toward darkness, we have plumbed the valley of despair. Time to start scaling the heights of hope.

I AM LARGE, I CONTAIN MULTITUDES

If we embrace progress and use our collective humanistic influence on policy to guide it, what are the possibilities? Well, they’re pretty spectacular.

Let’s start with those fears of rule by oligarchy and massive unemployment. Recent economic and political trends may give rise to pessimism, but a longer view of modernity endorses optimism. To be sure, industrialization brought on many economic shocks. These led to vile political experiments in communism and fascism, and technology gave rise to weapons of war unlike any in history. Yet life is richer, more peaceful, and longer for more people than at any prior time.

Throughout history, the fear of change has continually prompted calls for a halt to progress. The doomsayers have been wrong every time. The human population has grown seven-fold over the last two centuries. Yet, globally, employment has kept up. In the last quarter-century, as a billion babies entered the world, a billion people entered the workforce.

But maybe now that smart machines have arrived, things are different? Nope. A study by the Kauffman Foundation finds that far from killing jobs, innovation is the only driver creating them. From 1977 to 2005, it says, “existing firms are net job destroyers, losing one million jobs net combined per year. By contrast, in their first year, new firms add an average of three million jobs.”
