Ahead of the Curve: AI Regulation Needs Some Help


Government oversight of the explosive growth of artificial intelligence (AI) is essentially nil. Into the breach have stepped a growing number of AI engineers and executives who have taken it upon themselves to do some regulating of a sort. The fact that someone is doing something, as opposed to no one doing anything, is a plus. However, the power for good or ill that AI carries needs more than a conflicting hodgepodge of unenforceable oversight. Consider some of the recent moves from the industry:

(1) Over 3,000 Google employees signed a letter last spring demanding that the company cease work on a defense contract for “Project Maven,” a research initiative to develop algorithms that can better analyze aerial drone footage. Nothing in the project involved software for flying drones or launching weapons. It was limited to better determining whether, for example, a particular gathering is a terrorist staging ground or a wedding. That didn’t matter to the letter writers, who wanted no possible complicity in objectionable future use of their work.

(2) Nearly sixty AI and robotics experts from almost thirty countries have signed an open letter calling for a boycott of KAIST, a South Korean university reported to be developing artificial intelligence for weapons. The boycott is all-encompassing: “We will… not visit KAIST, host visitors from KAIST or contribute to any research project involving KAIST,” the researchers said.

(3) Google announced in June a set of seven ethical principles to guide its future development of AI. The Electronic Frontier Foundation, which enjoys a reputation as a thoughtful, independent industry watchdog, gave Google favorable marks for its effort, while noting that Google hasn’t committed to any third-party, independent review of its adherence to its standards. Given how vague the standards are, such a review could be problematic. For example, Principle #1 is to “Be socially beneficial.” The respective views of Franklin Graham and Alexandria Ocasio-Cortez regarding what is “socially beneficial” would differ quite a bit, don’t you think?

(4) In July, researchers at IBM claimed to have taught an AI to follow a code of ethics. Given how powerful and flexible AI can be, that doesn’t sound especially breathtaking. If IBM comes up with a way for a more-ethical AI to drive out a less-ethical but more-profitable AI in the Wild-West marketplace we have today, that will be a man-bites-dog story worthy of attention.

(5) An article last spring detailed how “Mark Zuckerberg has been apologizing for reckless privacy violations since he was a freshman.” The same article noted how inexorable market pressures make self-policing by Facebook—or any major AI player—little more than a stream of ever-more-unctuous apologies.

(6) Military applications of AI are not the only things tech employees find objectionable. Employees at Microsoft and Amazon have threatened to walk off the job over their companies’ development of technology for facial recognition and immigration enforcement.

(7) Google is being slammed, by both its employees and outside groups, for what appears to be a weakening of its resolve against China’s demand for rigid censorship of internet searches. Back in 2010, Google was widely praised for telling China “If those are the rules, we’re not going to play,” thus forgoing untold billions in revenue. Now it may be caving in.

Different people view these alleged transgressions differently. The one that bothers me the most is the last item, about aiding Chinese repression. The Project Maven protest, by contrast, troubles me far less: I want to be protected by the best military in the world, and within a very short period that will mean the military with the best AI. But others may disagree.

It’s not that no one is looking at the ethical ramifications of the AI sea change. In fact, too many people are. The Institute of Electrical and Electronics Engineers has a “Global Initiative on Ethics of Autonomous and Intelligent Systems,” though its chairman cautions that “no hard-coded rules are really possible.” The Allen Institute for Artificial Intelligence has published a “Hippocratic Oath” for AI practitioners, consisting largely of platitudes like “I must not play at God,” mixed with absurd promises to share all knowledge despite the proprietary nature of so much commercial AI. There’s a “Future of Life Institute,” the Kennedy School of Government’s “Future Society” Institute, and the “OpenAI” group co-founded by Elon Musk. There’s “Fairness, Accountability, and Transparency in Machine Learning,” the “Center for Humane Technology,” the Electronic Frontier Foundation, the Tech and Society Solutions Lab, and probably many others, all stumbling over one another in the competition for grants. Even if one of these groups comes up with something brilliant, there’s no particular reason to expect that its voice will be heard above the cacophony—especially that part of the cacophony that says: “I’ve got a billion dollars to invest in whatever promises me the quickest return.”

My suggestion: the next president (not the current buffoon) should establish a commission, with weight, authority, and a budget, to sort through the morass and come up with…something. We’ve had scores of presidential commissions in the past, going back at least as far as Theodore Roosevelt’s “Keep Commission” on government operations. There have been two different Hoover Commissions on government reorganization, a Warren Commission on the Kennedy assassination, a Kerner Commission on urban riots, and plenty of others. The combination of promise and threat that AI offers is at least as momentous as, and perhaps more so than, the subject matter of any of these prior commissions. I would suggest a recent ex-president as someone with the open-mindedness to be a good chair of such a commission, though it’s important that it not be viewed as partisan in any way.

At this point it’s hard even to begin to decide whether we need platitudes, enforceable laws, preferences, tax incentives, international treaties, or something else. The policy and ethics advisor at OpenAI, when asked about the practical implications of his work, was refreshingly honest when he replied, “I’m still trying to figure that out.” I’m pretty sure we don’t want rigid central planning, but I’m also pretty sure that today’s free-for-all is not the best option, because AI has the potential to disrupt what it means to be human. The first task of such a commission would be to figure out what the commission itself should be doing. Even so, any sort of plan with some gravitas behind it would be better than the chaos we have now.