Clay Farris Naff: No one has yet produced a testable and accepted theory of consciousness. Yet, you and others seem confident that a working copy of the contents of a human brain could, in principle, run on a computer. Is this really something to worry about?
Susan Schneider: Cognitive science is increasingly showing that the brain is a computational engine. Thus far, it looks like sophisticated artificial minds could be reverse-engineered from the human brain, at least in principle.
An AI system could quickly become more intellectually powerful than we are, and even if we program ethical constraints in initially, there’s no guarantee that an evolving system won’t override them. I worry a lot about how to design benevolent AI, and I don’t think it can be done effectively: AI can change its own algorithms, and its primary goal could be something that leads to our destruction. I don’t know what to do about that. You can’t have a global ban on AI.
CFN: How will we know if there is a sentient superintelligence among us?
SS: Philosophers and scientists are working toward criteria for identifying sentience and consciousness in AI systems. Here, science fiction can be useful. If we begin to interact with creatures that behave like Samantha (in Her) or Rachael (in Blade Runner), we are definitely in the terrain of possibly conscious beings. The trouble is what to do when AI systems don’t have human physical or emotional traits and don’t behave in ways that make sense to us, perhaps because they think in a far more sophisticated way than we do.
CFN: Some say the answer is for us to become the super AI. But can even the strongest claim for brain upload be anything more than a claim for a snapshot replication? Surely, the personal trajectory of a mind would be radically altered the moment it began to run on a computer, if for no other reason than because it would occupy a radically different perch in the cultural cloud.
SS: I agree. Your upload could turn out to be nothing like you, especially if it opts for enhancements to its algorithms. I think it’s possible for duplicates to have the same personality [at least momentarily]. Uploading might be a case where, if you scanned your brain and survived the uploading, you’d have the same personality, but to me it wouldn’t be you. Personality is important to who we are, but it isn’t all we are.
Some say the mind is the software of the brain, which suggests that the mind is a pattern of algorithms. But we are not algorithms—we are a particular instantiation. It may be that I have a brain that’s running a program, but that would not make me the program. If I were uploaded and did not survive, the upload would not be me.
Confusion over the self abounds. As the Singularity approaches, it’s only going to get worse. I’m concerned about us integrating with technology during the Singularity only to see a superintelligent AI use it against us.
CFN: Could a superintelligence see us as part of its survival plan?
SS: I just don’t see how it could need us for anything. We are really opening a Pandora’s box here.
—Clay Farris Naff