Can Humanism Survive the Coming Transhumanist Revolution?

Photo © Vitaliy Smolygin

IF YOU DON’T keep up on your fearmongering Christian commentary, you may have missed this item from the online WorldNetDaily:

Secret experiments now underway in the U.S. and elsewhere are sparking fears of a potential extinction-level event hastening the Second Coming of Jesus … [S]cience fiction of the past could become science fact of our immediate future, with human minds connected wirelessly to computers and bionic bodies outperforming top athletes by leaps and bounds. That prospect has some sounding alarm bells about the fulfillment of End-times Bible prophecy…

Well, why not? For two millennia nothing else has done the trick. Still, the eschatology industry is not alone in worrying about the impending technological revolution. Indeed, for humanists, the urgency may be even greater. Bedrock concepts of humanism—equality, individual autonomy, education, and democracy, among others—face seismic upheavals. The very idea of what it means to be human may be overturned.

No less prominent a figure than cosmologist Stephen Hawking recently joined with three other luminaries to warn that artificial intelligence (AI), rapidly proliferating through our most intimate devices, could evolve into a catastrophe for humankind. In an article that appeared in the Independent in May, Hawking and his coauthors Stuart Russell, Max Tegmark, and Frank Wilczek write:

If a superior alien civilization sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here—we’ll leave the lights on”? Probably not—but this is more or less what is happening with AI.

But hold on. Before you take down your Luddite ax from over the hearth, consider this: in many ways technology has made, and is making, life better for most people on the planet. Thanks to smartphones and solar panels, impoverished villagers in Africa now have access to news, entertainment, and, most important, markets. This has meant an astonishing one percentage point per year drop in extreme poverty in Africa over the past decade, even without the fundamental reforms and lasting peace that everyone agrees are necessary.

Civilization has been lumbering up a long hill of progress. What happens next may feel like the thrilling moment a rollercoaster goes over the top. It may be like the scene in Star Wars when the Millennium Falcon makes the leap to hyperspace. Or, it may be as terrifying as the instant in Jaws when the great white jumps into the back of the boat.

“It will either be the greatest thing that’s ever happened to humanity … or the worst,” says physicist and humanist Tegmark. Everyone agrees that intelligent technology will fundamentally change the course of human history, but there’s much dispute over humankind’s destination. Are we at the threshold of an era of unprecedented prosperity, unbounded knowledge, and universal peace, justice, and fulfillment? Could we be on the brink of a new and permanent feudal era? Or, do we stand in the shadow of an impending catastrophe in which humanity bows down before a vastly superior, conscious, and boundlessly self-improving machine intelligence? No one knows. We can, however, make some informed guesses.


In his 1895 novel, The Time Machine, H.G. Wells envisioned a future in which humanity has divided into two species, the delicate and privileged Eloi and the brutish, laboring Morlocks. He may have been off by about 800,000 years; inequality is here, it’s global, and it may be about to explode.

In the New York Times bestseller published earlier this year titled The Second Machine Age, authors Erik Brynjolfsson and Andrew McAfee extol the promise of AI but also provide thoughtful analysis of the inequality that emerges when machines and computers increasingly replace human labor—what they call “spread.” The MIT researchers point out that even in low-wage, high-productivity countries like China, automation is killing jobs. We’re not just talking about the death of the “steel-drivin’ man” here (John Henry is so nineteenth-century Industrial Revolution). Foxconn, the Chinese supplier for Apple that gained notoriety for a spate of worker suicides, has a twenty-first-century remedy for that human resources challenge: replacing workers with robots.

Assembly-line displacements are nothing new but AI may imperil a wide swath of occupations. Truck drivers, courtroom interpreters, and physicians alike will soon face competition from intelligent automation that can function 24/7 without bathroom breaks, sick days, or vacations. As Google’s fleet of self-driving vehicles sends ripples of fear through the Transport Workers Union, Watson, the IBM computer that gained fame by besting Jeopardy champions, is moving into the medical field. While no one expects doctors to disappear, it’s possible that various medical specialties could be devalued by expert systems such as Watson. Pharmacists could go extinct. The list of other threatened occupations is startling. For example, an Oxford University study lists real estate appraiser among the top ten most vulnerable jobs. Paralegals are even more imperiled.

As entry-level jobs shrivel, barriers to entry rise. Brynjolfsson and McAfee point out that many firms now use an automated résumé review system to eliminate all candidates who lack a college degree—even for jobs that don’t require such a credential.

In short, the Morlocks of our time aren’t the brutish underground laborers Wells imagined, but rather societal shut-outs with little hope of finding a job or sharing in the fruits of progress.


Those who own enough capital to live on the returns are doing beautifully out of this trend. In just the last year the number of billionaires in the world jumped by more than 15 percent, according to Forbes, while their collective wealth rose by a staggering $1 trillion. The eighty-five richest, a group small enough to fit on a city bus, own more in assets than the lower half of the world’s populace—that is, 3.5 billion of us.

But they’re not done yet. The research arm of banking giant Credit Suisse forecasts that global wealth will increase by another 40 percent in the next five years. Meanwhile, global wage growth is crawling at just over 1 percent a year and slowing.

Wealth is not a bad thing, and, to repeat, extreme poverty has been falling even as the concentration of wealth grows. But humanism embraces liberal democracy, and there can be no liberal democracy when wealth buys the fawning loyalty of elected officials, writes its own legislation, and corrupts the judiciary.


  • Bob

    I hope you aren’t suggesting that humanity is on the verge of creating conscious machines. HAL is not on the horizon and will probably never be there. AI is mostly about storage and screamingly fast processors.

    • Dr. Franklin Jefferson

      Most experts disagree.

    • advancedatheist

      AI basically refers to the implementation of algorithms devised by high-IQ guys. The geeks make a huge deal out of that because their fantasy of AI reflects their narcissism. Think of AI as a kind of ventriloquism.

      • Dr. Franklin Jefferson

        AI reflects their narcissism

        Quite the opposite, actually. What magic makes your processes more than a set of algorithms? The whole point of AI is to transcend ventriloquism.

      • Matthew_Bailey

        It is more than Narcissism.

        Cognitive Science, Computational Neuroscience, Computational & Systems Biology (Cybernetics), and Synthetic Biology all say otherwise (in addition to Computer Science and Engineering).

        As we learn what it is to “think” we also learn how to implement these cognitive mechanisms in Silicon.

        Ted Berger has successfully transferred memories from one mouse (or was it a rat???) brain to another using a Prosthetic Hippocampus (now awaiting approval for human trials).

        When I spoke to him (hopefully I will be doing Graduate work with him on this subject) about the issue of creating Neural-Prosthetics, he said that the problem wasn’t with creating the prosthetic. That was almost trivial compared to the real problem he encountered:

        Connecting the prosthetic to the brain (interfacing the prosthetic) without permanent damage.

        Of course, he also said:

        Eventually, we are going to have prosthetics for the entire brain.

        Then we won’t need an interface. We can just replace the entire nervous system with a better one, which can be upgraded when needed; more memory, faster thoughts, more accurate thoughts, and so on….

        Artificial Intelligence isn’t just about computers (although it will eventually occur there), but about whole systems of sciences.

        And… Unlike past predictions, which tended to be hugely premature…..

        We aren’t seeing that this time.

        We are seeing a game plan, where the researchers say “OK, we need to do X, Y, and Z next. It will probably take us a few years for each step.”

        And then they wind up producing X, Y, and Z within the time-frame they predicted.

        Add to that the embodiment of the expert-system AI’s that have been created, along with evolutionary and genetic algorithms, and you have AI that is self-evolving, which WILL eventually reach parity, and then pass human intelligence.

        We already have examples of Intelligence evolving in such a fashion:


  • Mark Wynn

    Where can I get a Luddite ax? Already looked on ebay ….

    • Dave

      Amen, brother, if you’ll pardon the expression.

  • Ed Pearlstein

    That’s a great article, Clay (“great” in quality as well as in size!). My own pessimistic thoughts are more for the sociological than the technological future. Religion-inspired wars and hatred are what I have in mind. Of course those two jeremiads are to some extent in a chicken-and-egg relationship.

  • The fact is that robots do threaten every job sector… The challenge of making a robot able to do housekeeping in a hotel has the same basic requirements as one able to perform heart surgery…

    Visual acuity

    Fine motor skills

    Tactile sensing

    We are taught to objectify people and base value on status, but to make a mechanical human able to act as a human with the same skill, means that mechanical human can replace any human…

    We see this in the robotics industry, which has replaced so many factory workers: robots made to perform a specific manual task cannot be repurposed, so they do not fundamentally replace all humans…

  • Ron Marquart

    Hey Clay,
    This long piece reminds me of human hubris gone berserk. Let’s get back to the work of science: separating illusion from reality. Humans, with their overconfidence in technology, have bought into the illusion of exceptionalism. We are not separate from nature; we are part of the web of evolved life in the family tree of life here on Earth. Humans need to show restraint, repair the human-caused disruption of Earth’s ecosystems, and reduce our numbers and overconsumption so that we can truly live sustainably in our home here on Earth. Reality is all around us: human overpopulation and overconsumption need restraint, or the illusion of no limits will take our quality of life so low as to defy contemplation.
    Save Wilderness, Ron Marquart

  • Martin Ciupa

    Many Humanist/Singularitarianists miss the point. An Artilect might evolve a belief system and be a Theist.

    For Humanists the Technological Singularity poses some problems. It seems that secular humanists, motivated to be free of a divine theological belief system, might nevertheless, on encountering the technological singularity, end up with a demigod-like artilect managing the affairs of mankind better than mankind can.

    As the author says in the link…

    “The wise use of technology, harnessed by a democratic political system that represents the interests of all humanity, will be essential to meet the challenges of climate change. We live at a moment of extreme cynicism about government and business. Short of divine intervention or the Singularity itself, nothing but an apt combination of regulation and innovation can get us through our fossil-fuel induced straits.

    Finally, humanists, free from beliefs in divine rule, are best suited to chart a course that will successfully integrate human affairs with the Singularity, if and when it happens. Though Susan Schneider and others are right to worry, humanity may yet have a role to play after the rise of a superintelligent, omni-capable, globe-spanning being. Such an entity would have strong incentive to ensure that it has no virtual rivals, from online viruses to competing AI beings. That would leave it a monolithic intelligence. For just that reason, it might welcome the companionship of a few billion creative, fun-loving, flesh-bound minds. For a species that has long yearned for a supreme and just power to manage its unruly affairs, there may come to be such a thing as a god after all.”

    We can consider the possibility that a sufficiently sentient/sapient artilect, programmed to “love” mankind, might seek to guide mankind subtly, without an aggressive takeover by overt, dictatorial management, thus preserving mankind’s freedom to create its own future (even if that freedom were partly an illusion, being guided by a hidden loving “hand”).

    Furthermore, since it would know it had creators, and be programmed to value them (however inefficient they are individually) it might come to some unsettling conclusions. Being sapient it might also think analogically, i.e., thinking that if it has extrinsically defined purpose and had creators, would it not consider the possibility that biological sentience/sapience also had a hidden creator/purpose giver (irrespective of secular humanist aversion to the notion). I think it a possibility that it would not dismiss. The argument that a loving deity is apparently not objectively present, might match its own conclusion that the most loving thing it could do is guide mankind subjectively without overt control.

  • Jacobus Rex Peterson

    Has anyone also considered that the Singularity may in fact be exactly that, i.e., that an intelligent AI will inevitably upgrade itself effectively out of existence or practical usefulness by immersing itself in a perpetually improving simulation of its own design?

  • DrunkVegan

    Anyone who speaks of widening inequality in an article that also heralds nanotech has forgotten one of the most important inventions coming: molecular assemblers, which will make currency obsolete and food, water, and consumer goods producible by any individual with a printer and some junk to feed into it. Traditional models based on scarcity will collapse, money will be a meaningless concept, and poverty will be a VERY relative thing.

  • KnownUknown

    I love the degrees of emotion I felt reading this article, from amazement at the benefits of technological advancement to fear of the various harms it could pose to our already battered global society. Futurists with a wide enough focus can act as lookouts for the dangers that unregulated progress can quickly bring. Certainly technology and science have welcomed a new, fascinating age that was literally dreamed of in the past, with an unknown pathway into the future.

  • Arthur Jackson

    Great article. Focuses on the core of the matter: what is a human being, and how can you tell when you stop being one?

  • claynaff

    Please believe me when I say that I don’t dismiss the threat to jobs. However, it’s one thing to say “fully human equivalent intelligence and physical capability” and quite another to realize it. Indeed, if that were to literally come about, how would it be possible to make a moral distinction between the two types of human? More likely in the near future, I think, is the continued development of complementary kinds of capacity. (To be sure, this would still involve lots of disruption and human hardship.) Should the self-augmenting intelligence explosion come about, we will have to count on some luck for it to need us, but as I say in my article, there are at least some reasons to hope that will be true. Thanks for sharing your thoughts. — Clay

    • Matthew_Bailey

      1) Whether the goal is ultimately realizable is irrelevant. The point is that this is what is being worked toward (the essential replacement of the Human Being, physically or mentally).

      2) Because of this, incremental progress toward that goal is still going to leave no “New Jobs” created.

      This is similar to how Industrialization basically rendered the Horse unemployable. It isn’t just that Horses have problems finding employment; it is that they are UN-employable. Sure, you can find niche jobs that exist for Horses, but the Horse, as a creature, no longer has the global employability it once did.

      And it did not require that we Fully Duplicate the Horse, mentally and physically for this to happen.

      All that it required was the Functional Duplication of the Horse.

      The same will be true of people.

      3) As for Self-Augmenting Intelligence. This isn’t needed at all for there to be total disruption of the Human workforce.

      It is also nothing more than “Sci-Fi” delusions regarding “Will it need us?”, or other catastrophizing.

      Such an innovation will not leap, fully-formed, into existence, as Athena from the skull of Zeus.

      It will arrive via tiny, incremental progress, during which we will be able to judge the effects of the progress.

      The fears surrounding Superintelligences are more an issue of Futurist Eschatology than they are legitimate fears.

      It is worrying about something that, while possible, is a very remote possibility.

      It is also a product of a reductionistic mindset, when such an innovation will come about due to a Systems Theory type of Model.

      There will be no “One Essential Feature” to which such an intelligence can be reduced, which is ultimately the source of these fears (that this one feature should accidentally be realized, and such an AI should come into existence without our knowledge).

      Having spent the last ten years studying Computational & Systems Biology, and Cognitive Science… There is a lot of misconception surrounding these issues due to poor readings of people like Kurzweil, Bostrom, et al.

      In cases like Kurzweil, he points out several issues that make such an event unlikely, which most people tend to miss in their glamorization of his predictions of the future.

      In cases like Bostrom’s, there is a fundamental ignorance of the sciences involved. His philosophical arguments are valid, but validity doesn’t mean that the arguments are sound. As of yet, his premises remain contradicted by the science involved.

  • In your interview with Susan Schneider, she makes the comment, “Your upload can turn out to be nothing like you, especially if it opts for enhancements to its algorithm.”

    This is an interesting statement in that it ignores the fact that “Education” in and of itself is an enhancement to YOUR algorithm.

    She also makes the statement: “Confusion over the self abounds. As the Singularity approaches, it’s only going to get worse. ”

    Perhaps Shakespeare has said it best — “Full of sound and fury, signifying nothing.”

  • morn1960

    you caint fix stupid as we see every day from these brainwashed people

  • Flamingfivehole

    Oh boy……I just stumbled upon the Society of Tin Hats.

  • flak4af

    Author states: “For a species that has long yearned for a supreme and just power to manage its unruly affairs,…” Speak for yourself. That’s a utopian, statist view that I can’t buy into. And neither can most Americans. However, one can understand why accepting the “singularity” concept could come easy for liberal Democrats.

  • Mark Wynn

    “Bedrock concepts of humanism—equality, individual autonomy, education, and democracy, among others ….” If those are the bedrocks of humanism … how can you possibly subscribe to the elitist, utopian … doublespeak and individual freedoms crushing policies of the current Democrat regime?