If we could reach deep into the mind of another person, what would we find? Perhaps to know the mind of another is to know ourselves a little better. Technology is moving ever closer to the day when reading human minds becomes possible, and the sooner we start thinking about it, the better prepared we will be when that day comes. How will we use such technology? If history is anything to go by, questions of ethics will be afforded very low priority. Yet the recent revelation that government organizations have been reading citizens’ private emails has made it a little easier to raise ethical questions of privacy in relation to technology. So let’s ask the question. Would it ever be ethical to read someone’s mind?
To begin with, it’s tempting to conclude that it would be ethical if the person being “read” is a consenting adult with full mental capacity, free from any affliction that might impair their judgement. But such criteria are open to interpretation, and to test the robustness of any definitive answer to the question posed by this piece, I think we need to be more rigorous in our probing. Would it ever be ethical to read someone’s mind against their will?
I reached out to Professor James Ladyman of the University of Bristol to ask him his thoughts. (Ladyman is a former co-editor of The British Journal for The Philosophy of Science.)
That is a fascinating question. The best discussion of it I know is implicit in the science fiction novel Excession by Iain M Banks. In that, an AI [artificial intelligence] reads people’s minds and is ostracized and called a rude name by all the other AIs.
Excession is one of a series of books commonly referred to as “The Culture.” The series features various spaceships that also function as individual artificial intelligences. The curious aspect of the ostracization of one ship, named Grey Area, is that it wasn’t given the cold shoulder for reading the minds of the other AIs. Grey Area was shunned because it chose to read the minds of humans, and the other AIs did not approve. Perhaps the most eccentric way in which they ostracized Grey Area was to drop the use of its name altogether. Instead, they began to refer to it as Meatfucker.
Intelligence agencies usually justify the reading of emails and private correspondence in the name of security and keeping people safe, and there are clear parallels to be drawn between reading emails and reading actual minds. On a personal level, I had my doubts as to whether it could ever be ethical to read someone’s mind against their will. However, I wanted to challenge that belief by contacting someone who might present the case in favor of it. I had seen Tim Marshall, the former foreign affairs editor of Sky News, appear on TV many times talking about defense issues, among other things. I asked him whether he felt it would ever be ethical.
We are now approaching a time when the question will need to be answered for philosophical, moral, and legal reasons. It would not be ethical to read another person’s mind as a matter of course without their knowledge and consent. If a person of sound mind willingly acceded, without pressure, to their thoughts being monitored, that would be ethical.
We do not have a “thought police” capable of reading our minds, but in the event that such a thing were possible it would not be ethical for them to routinely do so. However, I can see how in the future laws may be enacted allowing the security forces to read minds in certain circumstances—anti-terror operations, for example.
I wondered, though, how Marshall felt specifically about reading people’s minds against their will. Could that ever be ethical, or would it be a step too far?
In the event of criminal activity, or the “ticking time bomb” scenario, then starting from the position that torture is wrong is not necessarily helpful here, as reading someone’s mind, even against their will, may not fall into the torture category. Assuming that reading someone’s brain against their will does not cause physical and/or mental harm, it might be ethical to do so under certain circumstances, with legal and medical oversight.
On balance, I do not believe that reading someone’s mind against their will would, in all circumstances, be unethical, however, before being sure of this I would need sight, and understanding of, scientific and legal evidence of the potential harm done, and the same conditions would apply to legal oversight of the activity.
Having thought about Marshall’s opinions on the subject, I feel we stand on some common ground. I remain skeptical of the ethics of reading someone’s mind, but he and I seem to agree that it really depends on the circumstances and how such technology would be used. Marshall also suggested an intriguing safeguard: “In the event of mindreading becoming the norm, a ‘block caller’ function in the brain/mind would be useful at this point and I suspect most people would mostly have it in the ‘on’ position.”
Ladyman declined the opportunity to answer conclusively on whether or not it would ever be ethical, but he did make an interesting point: “I think utilitarianism might well imply that it is readily justified in the case of a given that it would be much less intrusive or damaging than even legal forms of interrogation.”
Aside from safety and security, I wonder: Would mindreading be enough to tell us who someone is? If we are not our thoughts, then I have to ask, who or what are we?