Growing up, I always considered the interaction of humanity and technology presented in the original Star Trek series to be a fairly accurate representation of the way things would someday go. Technology would continue to cocoon humanity while maintaining a strict boundary between the two. Then came the Next Generation’s Borg, an antagonistic cybernetic race that subsumed other species by implanting technology into unwilling victims. Not until I encountered transhumanism did I realize that my earlier understanding of the relationship between humanity and technology might be off. Indeed, with the way things are going, our future may have more in common with the Borg.
The word “transhumanism” has been attributed to Julian Huxley who, among other things, was named Humanist of the Year in 1962. In his 1957 book, New Bottles for New Wine, he wrote:
The human species can, if it wishes, transcend itself—not just sporadically, an individual here in one way, an individual there in another way—but in its entirety, as humanity. We need a name for this new belief. Perhaps transhumanism will serve: man remaining man, but transcending himself, by realizing new possibilities of and for his human nature.
In the present, transhumanism is better understood as a movement of techno-enthusiasts who not only anticipate that emerging technologies will have the power to alter and enhance “the human condition,” but who very much embrace the idea. I’ve always been sympathetic to transhumanists—not simply because of their vision, but because it’s so easy to see how their techno-enthusiasm could, in the minds of others, overshadow the serious thought they were putting into the ways emerging technologies could fundamentally (or subtly) alter human existence. This misunderstanding of transhumanism is exemplified in a new book, The Techno-Human Condition, by authors Braden R. Allenby and Daniel Sarewitz, both of Arizona State University.
As they define it, there are two separate “dialogues” concerning transhumanism, the first being “the ways in which living humans use technologies to change themselves, for example, through replacement of worn-out knees and hips, or enhancement of cognitive function through pharmaceuticals.” The second such dialogue is where the authors’ main interest lies. Transhumanism, they contend, is a “cultural construct that considers the relations between humanness and social and technological change.”
These two dialogues give rise to a “definitional ambiguity,” which leads Allenby and Sarewitz to conclude that coming up with a more precise definition for transhumanism is less important than dealing with the implications of this ambiguity. They find that
transhumanism turns out to be a conflicted vision offering a remarkable opportunity to question the grand frameworks of our time, most especially the Enlightenment focus on the individual, applied reason, and the democratic, rational modernity for which it forms the cultural and intellectual foundation.
So the problem is not merely transhumanism but instead its root, the Enlightenment itself.
As with most critiques of the Enlightenment, the core of humanity’s problems is our hubris in striving to apply some small amount of rationality to our world. As Allenby and Sarewitz argue, human enhancement is troublesome because most people don’t understand the technologies we’ve already created, so how could we understand the effects of even greater complexities? They break technology down into three levels. Level I technology is like a train, which gets a person from point A to point B. Level II technology is the entire system that supports the train getting to its destination: railroad companies, government regulations for safety and security, and the fact that we’re more or less culturally “locked in” to the way we think of the railroad’s place in society. Level III technology speaks to the transformative role trains and the railroad have played in “Earth Systems,” defined by the authors as “complex, constantly changing and adapting systems in which human, built, and natural elements interact in ways that produce emergent behaviors which may be difficult to perceive, much less understand and manage.”

For example, railroads drove the standardization of time in the late nineteenth century: as trains moved faster, it became untenable for railroad companies to keep track of all of the different local times they had to deal with. Railroads also changed military power relationships, because it was now possible for a small country like Prussia to mass large numbers of troops to defeat its foes.
Essentially, all technologies, no matter how small or seemingly benign, can lead to unforeseen consequences. Therefore, human enhancement will also lead to unforeseen consequences. Admittedly, this may be a shocking conclusion for Allenby and Sarewitz—who seem to have drawn much of their understanding of transhumanism from the 1997 sci-fi/biopunk film Gattaca—but rest assured, transhumanists themselves have already reached a similar conclusion. Though transhumanists have taken sides in this debate, they’re still engaged enough to have a serious dialogue about what emerging technologies will mean for humanity. The very conversations the authors are calling for are already happening in transhumanist circles. But then, maybe these conversations aren’t legitimate because of transhumanism’s slavish adherence to the “applied reason” of the Enlightenment, instead of whatever system of thought Allenby and Sarewitz favor.
Though it would almost be petty to harp on a single citation as an example of the authors’ thought processes, I was surprised to see them reference The Religion of Technology by David Noebel. Noebel is the founder of Summit Ministries, which, among other things, offers two-week summer sessions for Christian college students to teach them about five pernicious worldviews: Islam, secular humanism, Marxism, New Age, and postmodernism. As Noebel teaches it, the Christian worldview fueled every great scientific advance and leap of knowledge in history until Charles Darwin published On the Origin of Species in 1859, whereupon scientists turned away from God toward evolution and a morality-free atheism, a shift that has since allowed society to run amok. It would be an understatement to say that most of what he writes cannot be taken as objective or grounded in historical fact. Referencing Noebel as a reliable source on the thoughts and beliefs of seventeenth-century scientists is problematic, to say the least (though I suppose if we removed “applied reason” from the conversation, then we wouldn’t need to worry about letting facts get in the way of things).
As an example of unforeseen consequences, perhaps Allenby and Sarewitz never realized that when they sat down to write The Techno-Human Condition, they would become part of the very dialogue they say doesn’t exist among transhumanists. Whether or not future generations become enhanced and perhaps unrecognizable as our descendants, we in the present are better off for discussing these complex issues. Fortunately, Allenby and Sarewitz aren’t the only ones looking to the future.