The Human Future: Upgrade or Replacement?

A computer can be upgraded by adding memory or installing a whizzier operating system—but eventually it’s time to just get a new computer. Is humanity’s fate similar?

Ray Kurzweil (futurist author and a director of engineering at Google) thinks so, sort of. He sees a “singularity” coming in a few decades, by which he means technological advancement producing a change in human life so profound that it’s a discontinuity from what came before.

Two centuries ago, Thomas Malthus famously foresaw exponential population growth outrunning the food supply, whereas what really happened was that agricultural productivity improvements outpaced population growth. Kurzweil sees technology in general growing exponentially. To demonstrate what this means, my favorite illustration is the fable of a king who invites an underling to name his own reward for some great service. The guy unfolds a checkerboard and asks only for one grain of rice on the first square. Then two on the next square, then four, and so on. The king readily agrees, thinking he’s getting off cheap. And for a good many squares, the repeated doublings still don’t amount to much. But you know how this ends: there’s not enough rice in the world. That’s exponential growth.
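
To put rough numbers on the fable, here’s a back-of-the-envelope sketch (the 64-square board is the standard version of the tale; the 25-milligram weight per grain is my own assumption):

```python
# Chessboard fable: one grain on the first square, doubling on each
# of the 64 squares. The total is 2**64 - 1 grains.
total_grains = sum(2**square for square in range(64))

GRAIN_MASS_KG = 0.000025  # assumed ~25 mg per grain of rice
total_metric_tons = total_grains * GRAIN_MASS_KG / 1000

print(f"{total_grains:,} grains")                    # 18,446,744,073,709,551,615
print(f"about {total_metric_tons:.2e} metric tons")  # roughly 4.6e11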

Kurzweil’s book, The Singularity Is Near: When Humans Transcend Biology, was written in 2005, and it’s easy right now to mock his assurances of a continuing upward spiral of economic expansion. He was similarly sanguine about gains in productivity, which many observers now worry are stagnating. And Kurzweil’s main theme of exponential technological advancement contrasts with a recent feature in The Economist reporting a widespread belief that it, too, has stalled, with the innovation well running dry.

The Economist wasn’t quite buying that notion, though neither would it buy Kurzweil’s uber-optimism. It’s hard to make predictions, especially about the future (as Yogi Berra, or Sam Goldwyn, or many other wags supposedly said). And it’s a good bet that any sentence beginning with “if present trends continue” will end foolishly.

But regarding the future of science, technology, and innovation, we may not be fishing an inexhaustible sea. While we laugh about the U.S. patent chief who supposedly proposed closing his office because every invention had been invented—in 1899 (a story that’s actually apocryphal)—our vast advancement since then may really represent the picking of low-hanging fruit. In physics, for example, we’ve done the “easy” stuff, and further steps seem more and more like squeezing blood from a stone. We have indeed already invented the obvious big things, with further innovation being mainly tweaking and improvement.

The computer was a comprehensive game changer—but it’s hard to imagine what an analogous future game changer might be. Teleportation? Of course teleportation is an extremely hard problem that may be impossible (though only maybe—I am an optimist). So, again, we’ve grabbed the low-hanging fruit, and perhaps technology has reached a point of diminishing returns.

At least that’s the argument some make. But my phrase, “it’s hard to imagine,” is telling. I’ll never forget the look on Naoh’s face in the movie Quest for Fire when he watched the girl, who knew how, make fire. He hadn’t imagined it. Maybe things we can’t imagine are possible.

But meanwhile there is something big out there we can imagine. Steven Spielberg even made a film about it, though hardly scratching the surface of its true implications. It’s artificial intelligence (AI), and it’s a bomb waiting to go off.

There’s a widespread misconception that AI research has been a bust, a dead end. What’s true is that some early over-enthusiasm has proven misplaced, and replicating human intelligence, in all its manifold aspects, is another very hard problem—but only because of its breadth and complexity, not any insurmountable physical limitations (as with teleportation). And while our brain architecture is admittedly very complex, its blueprint comes from a quite limited set of genetic instructions that merely provide general guidelines by which the developing brain wires itself. AI is moving in a similar direction, creating systems that can learn and increase their own complexity. Such systems are not only being developed; in fact, they’re already ubiquitous. You probably have one in your pocket.
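
To give a concrete (if toy-sized) sense of what “a system that can learn” means, here is a minimal sketch; the single-weight model, the made-up data, and the learning rate are all my assumptions, and real systems scale the same idea to billions of weights:

```python
# A minimal "machine that learns": a one-weight model repeatedly nudged
# to shrink its error on examples (plain gradient descent).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs with targets (y = 2x)

weight = 0.0
LEARNING_RATE = 0.05

for epoch in range(200):
    for x, target in data:
        error = weight * x - target
        weight -= LEARNING_RATE * error * x  # adjust in the error-reducing direction

print(round(weight, 3))  # ~2.0: the rule y = 2x, learned from the data alone
```

Nobody told the program that the answer was “multiply by two”; it extracted that rule from examples, which is the kernel of what the systems in your pocket do at vastly greater scale.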

In the aforementioned Spielberg film, AI, the character David seems altogether human except that he isn’t made of flesh and blood. Adding a few like him to our already sizable population won’t make a huge difference. However, what will bring shattering change is when they become smarter than us. So far we haven’t created a machine that even comes close, but we’re headed inexorably in that direction, and even if Kurzweil’s talk about exponentiality is overblown, we will, in due course, make machines as smart as people. And it won’t stop there. The machines will become smarter than us. And then it’s off to the races. That’s the bomb. That’s the singularity.

Because, you see, once machines are smarter, and soon a lot smarter than us, technological advancement goes into overdrive, into warp speed. Scientific and technological problems will be attacked with brainpower far beyond our own. And, importantly, that will include the metastasizing of that brainpower itself. The smart machines will take over their own further improvement. And remember that artificial systems can share the contents of their “minds” more directly than humans can. Thus we can envision the intelligence not just of self-contained machines but of a worldwide network, again unleashing synergized brainpower that totally dwarfs what humans can currently deploy. This is not speculative sci-fi. On the contrary, it’s inevitable.
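
A toy model makes the “overdrive” intuition concrete. In the sketch below (illustrative only; the 5% rate and the feedback rule are arbitrary assumptions, not predictions), a fixed rate of improvement gives ordinary exponential growth, while letting the rate of improvement scale with current intelligence makes growth run away:

```python
# Illustrative-only contrast: improvement at a fixed rate vs. improvement
# that feeds back into the improver. All numbers are arbitrary.
RATE = 0.05  # assumed 5% gain per year

fixed = 1.0      # humans improving machines at a constant rate
recursive = 1.0  # machines improving themselves: rate scales with intelligence

for year in range(1, 31):
    fixed *= 1 + RATE
    recursive *= 1 + RATE * recursive
    print(f"year {year:2d}: fixed {fixed:7.2f}   recursive {recursive:.3g}")
```

Under these made-up numbers, the fixed-rate line merely quadruples in thirty years, while the self-improving line passes a billionfold before year thirty. The point is not the specific figures but the shape of the curve.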

Teleportation is a very hard problem, but maybe not for intelligences ten times our own. Or a million times. Furthermore, they will have vastly superior tools as well—nanotechnology, or atomically precise manufacturing. It’s like what nature does at the molecular level, with DNA instructing RNA to make particular proteins. If nature can do it, so can we, and here too we’re already moving in that direction. One small example of the potential: dramatically reducing the cost of solar energy. Nanotech can also be deployed inside our own bodies.

All this is what the “limits to growth” alarmists, who believe we’re destroying our future, overlook. They fail to realize how different the future will actually be. Our environmental and resource challenges, too, will be tackled by intelligences and capabilities vastly greater than ours today. (Solar power, for example, was mentioned above. The Sun showers us with energy at roughly ten thousand times our current usage, so capturing more of it holds vast promise for revolutionizing our energy picture.)
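
The “ten thousand times” figure is easy to sanity-check with a rough calculation (the solar constant and the ~18-terawatt figure for average global energy consumption are approximate public estimates):

```python
import math

SOLAR_CONSTANT = 1361    # W per square meter at the top of the atmosphere (approx.)
EARTH_RADIUS = 6.371e6   # meters
HUMAN_POWER_USE = 18e12  # watts; ~18 TW average global consumption (approx.)

# Sunlight intercepted by Earth's cross-sectional disc.
intercepted_watts = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2

print(f"intercepted: {intercepted_watts:.2e} W")               # ~1.7e17 W
print(f"ratio: {intercepted_watts / HUMAN_POWER_USE:,.0f}x")   # ~9,600x
```

Even before accounting for atmospheric losses, the claim comes out at the right order of magnitude.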

Some may answer all this with the cynical mantra, “But people never learn.” It’s untrue. In fact, we’ve learned a great deal. For example, the notion of some inveterate human war lust is refuted by modern experience, wherein people and nations grow to see their interests better served by making deals than making war (as Steven Pinker elucidated in The Better Angels of Our Nature: Why Violence Has Declined). We’re acting smarter across a broad range of concerns. But even if it were true that people never learn, it certainly isn’t true of intelligent machines.

Will they, however, always remain just glorified machines—or will they become something more? Apple’s personal assistant Siri is pretty smart, and in some ways mimics personhood, but clearly lacks a sense of self like ours. Yet such consciousness is not some ineffable, mystical property; while we don’t yet deeply understand it, we can say with high assurance that it’s an emergent property arising out of the complexity of the signaling among the brain’s neurons. (Such emergence is also illustrated by an ant colony, whose complex architecture emerges from ants following basically simple rules, with no master architect.) In principle, consciousness does not require flesh and blood, and if that kind of signaling complexity can be mirrored in an artificial system, there’s no reason it cannot be self-aware like you and me.

The analogy between a human brain and a computer is imperfect, but in broad concept a brain does work along similar lines (with massively parallel processing). Thus, again, it’s no stretch to think that when artificial systems reach sophistication comparable to our brains, they will generate the same sort of subjective experience we call the self. Indeed, dare I say this: if the machines can (by far) outstrip our intelligence, could they not also attain some even higher form of consciousness than ours?

So what then becomes of us, the primitive 1.0 version, the Model T of intelligence? Upgrade or replacement? Dystopian visions have the machines taking over, brushing humans aside, or enslaving us, or keeping us as pets. But rather than a divergence between the world of fleshly humanity and mechanical super-intelligence, we should actually expect more of a merging. We’re already seeing the beginnings of our de-biologization: quadriplegics can manipulate physical objects with their minds, and we debate whether a runner should be allowed to compete because his artificial legs are better than real ones. Everything our biological selves can do can be enhanced technologically—including what our brains do.

We should remember that when you do junk an old computer, it’s not the death of your computing life, which you simply migrate to a new machine. For humans of version 1.0, the ascent to 2.0 will probably be like that. So those future super-intelligences, whether carbon-based or silicon-based (or something else), will be our own progeny; they will be us: humanity 2.0, or 10.0, or 1022.0.

Much has been written lately about how our evolutionary biological past, embedded in our genes, shapes who we are, and not entirely in a good way. We carry a lot of such baggage. We’ve overcome many of its limitations through knowledge and technology, thereby performing a kind of evolutionary hat trick. Our next such trick will be to simply leave all that biological baggage behind.

Will there be problems and downsides? Hoo boy. There will likely be a transition period marked by wicked inequality between 2.0s and 1.0s; indeed, between people who die and those who, effectively, do not. And the naysayers and negativists, like those who today rail against genetic modification and nanotech and “playing God,” will have a field day. Bill McKibben has actually argued that we’ve had enough progress and that it should be brought to an end. Others contend you can’t even distinguish “good” from “bad” technology because it’s all interconnected, thus calling for a full stop. But, as Kurzweil says, “the only viable and responsible path is to set a careful course that can realize the benefits while managing the dangers.” McKibben and his ilk would throw out the baby with the bath water. As always, progress will ultimately blast past such Luddites, and, notwithstanding the inevitable problems, the bigger picture will be human improvement so vast that future anti-evolutionists will disbelieve their descent from lesser creatures made of (yuck) flesh and blood.

Kurzweil posits six epochs of evolution. In the final stage, intelligence pervades all matter or, as he puts it, “the universe wakes up.”

There is no god—yet.