Ahead of the Curve: Regulating Artificial Intelligence
Can We Expect AI to Explain Itself?

I ran across a new report recently that made me do a double-take. “When Software Rules: Rule of Law in the Age of Artificial Intelligence” was prepared by the Environmental Law Institute (ELI), and as the title implies, it discusses the regulation of Artificial Intelligence (AI).

Environmental law and AI are two interesting fields, but what have they got to do with one another? Have the environmental lawyers run out of other ideas to work on, so they have to invade computer geeks’ turf?

It turns out that AI has quite a bit to do with environmental concerns, for good and for ill. That’s because AI has quite a bit to do with everything, and the environment is a subset of that. Or, turning it around, you could say the environment has to do with everything, and AI is a subset of that.

The report at its outset cites two specific examples of overlap. Cloud storage systems are wonderful, but they consume gobs of energy: nearly two percent of all US electricity. So Google, which maintains some of the largest data centers, turned its DeepMind AI system onto the problem of reducing its data centers’ energy use. The result was a 15 percent reduction in energy use at what was already an efficient center.

Then there’s the dark side. Everyone knows that Volkswagen figured out how to cheat on its emissions testing. What’s less widely known is that they used AI to do it. By one estimate, Volkswagen’s little AI trick may cause some 1,200 premature deaths in Europe.

It would be helpful if the report recited a handy list of laws to pass in order to maximize the good that AI can do for humanity while minimizing the harm. Unfortunately, that isn’t possible. AI itself is changing every day, and the “social contract” that society must informally adopt before hard rules are promulgated is nowhere near crystallizing.

One useful suggestion the report does raise is to port the concept of an “Environmental Impact Statement” (EIS) over to the world of AI. An EIS is required by law for major projects and takes an interdisciplinary approach to predict unintended consequences. A careful attempt to make such a prediction, expanded beyond purely physical concerns, could help decision makers cope with AI.

Example: an EIS for self-driving cars. On the one hand, there’s reason to believe they can be more energy efficient, e.g., by reducing idling times and taking the most efficient route to a destination. They may someday become safer drivers than humans as well. All good—but what happens when people are freed from the annoyance of driving and are able to work productively during drive time instead? How many will be tempted to move even farther away from their jobs, thus negating whatever positive effects there might be? An EIS isn’t a magic talisman that can answer these questions definitively, but it’s better to try to think them through in advance than to lurch forward blindly.

I also separately ran across exactly the kind of danger we need to watch out for as we think through the challenges AI presents. One of the key recommendations in the ELI report favored “transparency,” which in this context means requiring AI to be able to explain how it achieves a particular result. Not all AIs can do this; as I described last fall, the AlphaGo AI that beat the human champion at the game of “Go” makes decisions in a way that is impossible for humans to trace or understand. The ELI report, like many others that have considered the issue, strongly recommends that all AI be capable of explaining itself. “Black box” AI that produces an untraceable result should be verboten, if not by law then at least by social and industry standards of acceptability. Nevertheless, a recent New York Times opinion piece argued that “black box” AI is just fine so long as it works, and that the demand for explainability creates unnecessary friction and added cost. I thought the article was mildly persuasive—until I spotted the “About the Author” segment at the bottom, which noted that the author was a general partner in a venture capital firm.

I worked with venture capital pros during my legal career. They are incredibly smart, incredibly hard-working, incredibly selfish, and incredibly single-minded in their pursuit of rate of return. They have to be this way to survive, I suppose, and that’s fine. They fill an important role in the economy. But they are the last people you want to listen to about the best way for humanity to deal with AI. I don’t agree with the ELI folks on everything, but I can trust that they’re trying to do their best for humanity as we deal with AI that is rapidly outstripping us. Venture capitalists? Not so much.

The single most important recommendation in the ELI list was also the fuzziest, calling on the public to “Raise awareness and foster critical thinking about potential applications and impacts of AI.” AI is too big to leave to the experts, be they venture capitalists, think-tank lawyers, or tech geeks. The burden of critical thinking falls especially on humanists, because relying on scripture for guidance, as some theists do, cannot yield useful results.