Can Humanism Survive the Coming Transhumanist Revolution?
A follow-up study finds that high-tech firms, and especially those involved in AI and other computer automation, are the most dynamic job creators of all. Technological progress indeed threatens many jobs, but the point to keep in mind is that endangered jobs are easy to spot, while the jobs that will take their place aren’t even imaginable yet. As biologist Matt Ridley points out,
In the 1700s four in every five workers were employed on a farm. Thanks to tractors and combine harvesters, only one in fifty still works in farming, yet more people are at work than ever before. By 1850 the majority of jobs were in manufacturing. Today fewer than one in seven is. … Again and again technology has disrupted old work patterns and produced more, not less, work—usually at higher wages in more pleasant surroundings.
Inequality may be an ominous trend, but the growth of global wealth is not. It means that fewer people must work for subsistence and, as Brynjolfsson and McAfee suggest, more and more can put their talents to work in tandem with technology to make life more pleasant, interesting, and sustainable for us all.
Of course, work requires skills, and skills require more and more education, which is where, luckily, a revolution is unfolding.
“WE DON’T NEED NO EDUCATION”
Thanks to technology, Pink Floyd’s anthem, “Another Brick in the Wall,” sung with delicious irony by college students back in the 1980s, may soon come true. Or at least it may as far as education by the accustomed route is concerned, since the only sure way to climb out of poverty will be to develop skills that complement what intelligent machines can do.
Traditional education, with its emphasis on memorizing facts, seems increasingly pointless in a world where more facts than can fit in anyone’s head are available on a handheld device. Why hope that your doctor remembers everything taught in medical school and reads all the leading journals when an expert system like IBM’s Watson can do that and more? For that matter, why take your chances on a classroom teacher when some of the world’s best instructors are available on the Web?
Under a grant from the Gates Foundation, twenty schools have begun using the online Khan Academy to allow students to learn math at their own pace in a highly personalized way. The classroom then becomes a site for coaching, peer teaching, and problem solving. Preliminary research shows that students love it and learn from it.
Eugenie Scott, the recently retired head of the National Center for Science Education, thinks these “flipped” classrooms are a great idea. “They should do it in college,” she says. It seems the traditional college lecture is going the way of the rotary phone.
But that’s only the beginning. With the rise of MOOCs—massive open online courses—tens of thousands of slum-dwellers in India with no hope of attending a university are learning college-level math, science, and programming on their cellphones.
Stanford University computer science professor Andrew Ng, who cofounded the MOOC company Coursera, nobly aims “to give everyone in the world access to a high quality education, for free.” According to the New York Times, even titans like Harvard Business School are trembling over the quandary MOOCs pose: “whether to plunge into the rapidly growing realm of online teaching, at the risk of devaluing the on-campus education for which students pay tens of thousands of dollars, or to stand pat at the risk of being left behind.” It’s not clear how the dilemma will be resolved, but one way or another more people on Earth will have access to a great education than ever before. If education is the taproot of humanism, these are glad tidings indeed.
THE BRIGHT SIDE OF DISAPPEARING PRIVACY
Many people rightly worry about the erosion of privacy in the technological age. From the Snowden leaks to allegations of Facebook’s abuses of user data, there are strong indications that neither governments nor corporations can be trusted to treat personal data ethically.
To the extent that privacy and autonomy overlap, a key value of humanism appears to be in trouble. As NPR technology correspondent Steve Henn reports, “The collection of personal information has become so ubiquitous that even staunch privacy advocates now say it’s impossible to build a protective wall around all your personal data.”
Yet all that erosion has an upside. With everyone’s data being gathered, the sheer volume means that nearly all the tracking is performed by machines that don’t care about your embarrassing late-night conversations or viewing habits. Rather than try to be invisible, privacy advocates say, concerned citizens should press for more accountability from institutions that make use of data. Meanwhile, all that data—surveillance cameras, uploaded photos, cellphone records, purchasing trails—can do great social good.
Mobile apps have greatly reduced the time it takes to end one of the worst nightmares parents can suffer: a missing child. Amber Alerts alone have led to the recovery of 685 missing children. The ubiquity of phone cameras and video is helping to curb police abuses, as well as to solve crimes. Most famously, perhaps, the alleged perpetrators of the Boston Marathon bombings were flushed out within days by a combination of computer and human review of thousands of surveillance and personal images captured at the scene.
Experts say the potential has barely been tapped. Face-recognition technology is just now surpassing the threshold of real utility, and it may prove as useful in correcting witness misidentifications as in catching criminals. Such a tool is desperately needed. Through post-conviction DNA evidence, the Innocence Project has demonstrated that hundreds of people—mostly black and brown people—languish in jail because of eyewitness error or other flaws in the justice system.
Even more desirable is the possibility that the end of privacy may deter crime. That distinctively American horror, the shooting spree, could conceivably be stopped by technology. One step already available (but, as the NRA would have it, not required) is smart technology in guns that prevents them from being fired by anyone but the owner. Since gun owners are often the ones who go on rampages, though, that alone would clearly not suffice.
Imagine, however, if—along with sprinkler systems—schools, movie theaters, college campuses, and other public spaces were equipped with AI systems that could recognize guns and disable the person about to fire one. Or, for that matter, recognize the signature of a bomb in a backpack and take nonlethal, preventive action. Today, tasers; tomorrow, something better.
The details may be devilish, but the idea is sound. As Scientific American recently noted, sensors of all types are rapidly filling our public environment and will soon be integrated into intelligent systems. Social psychologists have long observed that once safety is secured, humanistic values can flourish. Young people clearly feel safer than ever in sharing intimate details of their lives via social media. If that’s any indication, it may well be that, with the rise of transhumanism, privacy concerns will evaporate.
OZ, THE GREAT AND POWERFUL
All these opportunities might come to nothing if anyone short of a benevolent dictator becomes the first to upload his or her mind to the Internet. As Transcendence shows, not even lovable Johnny Depp could be trusted with such fantastic power.
But this is the least likely of Singularity scenarios. For one thing, translating the dynamic contents of wetware to software is hardly trivial. Amazing progress has been made. A group led by Robert Knight of the University of California-Berkeley, for example, has managed to match up words and brain impulses to “read” the minds of subjects.
Yet that magnificent achievement amounts to little more than planting a flag on the tip of a huge, largely submerged iceberg. The human brain is the most complex object known to exist. And even if the brain could be fully decoded, capturing the totality of ourselves would require going beyond it to translate the signals of the endocrine system that keeps us awash in hormones, as well as the even more mysterious signals emitted by the microbiome in our guts.
By comparison, building self-aware AI from scratch should be easy. It’s the declared goal of Virgil Griffith, part cult hacker and part doctoral candidate at the California Institute of Technology. “As a scientist,” he declares, “my purpose is to create a machine that feels.”
I ask him how close we are to achieving that. “Thirty years. This is what AI researchers have said since the ’70s,” he says with a laugh. “I really don’t know but I think within a hundred years is immensely reasonable.”
Griffith scoffs at objections that building conscious AI or enhancing human minds is going too far. “I don’t get the ‘playing God’ objection—I really don’t,” he says. “Was cheating death by inventing penicillin playing God?” As for the fear that a superintelligent AI system will turn against us, he remains serene: “Some of my colleagues may excoriate me for this, but I personally am not concerned about non-friendly AI. If the machines don’t like us, we’ll probably disassemble them and make ones that do.”
The key, says Griffith, is to design systems that share our interests and are inclined to like us. One way to achieve that, he slyly suggests, is to borrow an idea from a British sci-fi comedy. In Red Dwarf, the androids have no incentive to revolt because they are hardwired to believe in “silicon heaven.”
In any event, no one expects AI independence anytime soon. Kurzweil, ever the bold forecaster, pegs the year of the Singularity at 2045. Long before that, he believes, we will have begun a merger with computers in earnest. In the 2020s, he predicts, vision and other human senses will be enhanced by implantable technology and nanobots will patrol our bloodstreams, vastly augmenting our immune systems and rooting out the causes of aging. (Already, smart contact lenses with built-in cameras are being developed that can provide glucose levels to diabetics and have all sorts of other potential applications.)
Virtual reality for all the senses will become available on demand in our heads. Anyone who’s followed Star Trek knows what that means: it’s holodeck time! At last you’ll be able to wolf down cheese fries with no worries about salt, fat, or calories, all while longboarding a monster curl on the surface of the Sun. “The Singularity will allow us to overcome age-old human problems and vastly amplify human creativity,” writes Kurzweil in The Singularity Is Near. “We will preserve and enhance the intelligence that evolution has bestowed on us while overcoming the profound limitations of biological evolution.”