What Will Be Human in the Future?

What does it mean to be human? As it relates to artificial intelligence (AI), this important question has been explored in science fiction films and television beginning with 2001: A Space Odyssey and more recently in Ex Machina, Humans, Westworld, Blade Runner, and Black Mirror. But with the current advancements in AI, the issue is becoming increasingly urgent.

We confer legal personhood to designate certain rights and privileges, but where personhood begins is an arbitrary legal choice, decided by each society. For example, the Catholic Church wants to confer personhood at conception, the US Supreme Court tells us it’s at twenty-six weeks, many liberals say it is at birth, and the ethicist Peter Singer posits it may be reasonable to designate personhood at thirty days after birth. We run into similar dilemmas with end-of-life issues, such as determining when someone can be declared dead.

When we consider personhood regarding AI, things get even murkier. When do we confer personhood on an AI robot? Does it even matter if an AI is conscious? Right now, military robots are being programmed to distinguish friend from foe and to kill, independent of a human operator. Self-driving cars are programmed to make ethical decisions resembling the famous trolley problem, which presents a choice between doing nothing and letting five people die or acting and killing one. They will be autonomous moral agents.

The Three Laws of Robotics that Isaac Asimov created to protect us are becoming increasingly irrelevant as computers program themselves. One experiment with a self-programming bionic arm on an amputee resulted in code that after a few months was indecipherable but worked extremely well. AI of the future won’t be programmed by us in a linear fashion; it will be like an evolving organism, producing AI that adapts to its environment. Robots will self-evolve. Like us, they will have no designer. We will never be their gods.

The future is here, and the technology is far outpacing our control and our ethics. These are questions we need to explore with urgency, as the foundational architecture is being built now.

Consider this: brain implants are a reality today, and in only a few years many of us will have interactive AI implanted in ourselves. How will it change our view of being human when some of us can access all of the world’s data instantaneously or talk to anyone anywhere, employing a smartphone in the brain much as we now employ a cochlear implant? It’s almost certainly coming, and within our lifetimes.

But those programs in our brains will have to make choices, as pacemakers already do, and this means our concept of free will, already problematic, will be tested even further. Moreover, advanced, emergent AI may not share human motivations and values, including humanist values, which is why foundational AI goals need to be carefully thought through now. It’s hard enough at present for humans to deal with moral dilemmas in which multiple deeply held values are in radical conflict and require tradeoffs. How do we program values into future robotic companions when there may be no rational way for us to decide? Is it even safe to teach them nuance and ambiguity regarding our values?

Mind control for good and for evil may take place. For example, a chip first designed to reduce pain could be reprogrammed to feed input into the brain to prevent racist thoughts, or it could enable authoritarian mind control. Both are theoretically possible. The portent of a dangerous, Terminator-like robot takeover is less of a problem than an AI takeover by our adversaries, unemployment as machines take our jobs, and, crucially, how we exist as human beings. Look at how technology has already changed the way we interact.

Artificial intelligence may in the future threaten our safety, our privacy, and our sense of self-worth. It could destroy what it means to be human. It could also create a radical flourishing of humanity. It remains unclear how it will all play out, but regardless, humanism itself will have to evolve.

Ultimately, humanism always comes back to the essential goal of affirming the inherent worth and dignity of all people. But who will be a person in the future? If we are working for the good of humanity, just who will that be?