AI is Already Everywhere
Karen Hao received the Humanist Media Award at the American Humanist Association’s 83rd Annual Conference, held virtually in September 2024.
Hao is an award-winning journalist covering artificial intelligence. She was the first journalist to profile OpenAI and is now writing a book about the company and the AI industry for Penguin Press, to be published in 2025. She is also a contributing writer for The Atlantic and the lead of The AI Spotlight Series, a program she designed with the Pulitzer Center to train 1,000 journalists on how to cover AI.
Previously, she was a foreign correspondent at The Wall Street Journal focused on AI and China, and a senior editor at MIT Technology Review, where she wrote about the latest AI research and its social impacts. She has been a fellow with the Harvard Technology and Public Purpose program, the MIT Knight Science Journalism program, and the Pulitzer Center’s AI Accountability Network.
Her work won an American National Magazine Award in 2022 for “outstanding achievement for magazine journalists under the age of 30.” Her former weekly newsletter, The Algorithm, was named by the Webby Awards as one of the best newsletters on the internet.
Her pieces on the forced dismissal of Google’s ethical AI co-lead; Facebook’s addiction to and funding of misinformation; and OpenAI’s heavy toll on workers in Kenya were cited by Congress. In 2018, her “What is AI?” flowchart was featured in a museum exhibit in Vienna.
She regularly gives talks about AI and journalism and has guest lectured at MIT, Harvard, Yale, Columbia, Notre Dame, and HKU, among other institutions. She sits on the board of the Co-Opting AI Series, a book series from the University of California Press exploring the different social dimensions of AI, and on the Craig Newmark Graduate School of Journalism AI Advisory council. Her work is taught in universities around the world.
This text is excerpted from Ms. Hao’s acceptance speech at the Conference.
HI EVERYONE, thank you so much to the American Humanist Association for this immense honor. I am honestly still in shock that my work is being recognized by the same organization that recognized the work of so many of my greatest career heroes, including the recipient of this year’s Inquiry and Innovation Humanist Award, Ted Chiang. When I first received the email, I have to confess I thought it was a scam. I thought, how does AHA know about me? And my sincere thanks goes to the award committee and Greg Epstein in particular for nominating me for this prestigious award. I feel my career and work have only just started. And I hope for the rest of my life, I can continue to earn my place as a Humanist Media Award recipient.
I am a journalist. I write about artificial intelligence. What that means is, yes, I write about the technology itself—the artifact—the different ingredients in its machinery. But more importantly, I write about people and about power. I write about the people in power who run the companies and perpetuate the ideologies that shape AI, and the people out of power who are left grappling with its effects and who often, despite incredibly challenging circumstances, find ways to overcome them. Some of the work I am most proud of is my profiles of communities shaped by and shaping AI in unexpected places. I’ve written about the plight of Venezuelans who, as their country’s economy went into a tailspin, turned to online platforms to clean and annotate the reams of training data that are essential inputs for AI models. I’ve written about the workers in Kenya whom OpenAI contracted for less than $2 an hour to filter the violent and sexual content out of its products, including ChatGPT, and who have since sparked an international movement for better wages and labor conditions across the AI industry. I’ve written about the massive Microsoft, Google, and Facebook data centers that are taking over vast stretches of land in the greater Phoenix area and consuming enormous quantities of power and drinking water to cool their facilities, at a time when the region is shattering heat records and experiencing its worst drought in 1,000 years. And I’ve written about an indigenous couple in New Zealand who, faced with the threat of the disappearance of the Maori language, developed an AI speech-recognition tool to transcribe decades of recordings of their ancestors speaking it, to help new generations of speakers revitalize it.
Today when so many people are wrestling with how to understand AI and what it means for all of us, I find that these are the stories that best illuminate the answers to these questions.
It was by complete accident that I came to this profession and this topic. But in hindsight, perhaps the roots of my interests and my approach were always there. I grew up in New Jersey, the daughter of Chinese immigrants who, throughout my early childhood, worked as computer scientists. Because of this, I was decidedly not interested in computer science. I had no idea what computer science was, but I found it horribly boring. All I could tell as a child was that it meant you spent your days glued to a screen. I was a restless, pesky adventurer. I always wanted to be outside; I always wanted to see things and touch things. I didn't want a desk job. I wanted to go out and experience the world.
My parents were very clever. Even as I expressed disinterest in what they did, they still instilled in me a love of math and science. By simple virtue of their jobs, they taught me that technology is not some awe-inspiring, unknowable force; it is the creation of many people. In her own childhood, my mom had also wanted to be a writer, before moving to the U.S. and grappling with the realization that such a path would be too difficult in her non-native language. She cultivated in me a love of books and encouraged me as much as possible to master English in a way that had eluded her, and to command it as a tool for expression and for change.
So I grew up with a blended education: One that didn’t separate science and technology from writing and the humanities. To me they are all different expressions of the same drive to understand more about our universe and our place in it; they are all different acts of creation.
My childhood was blended in another way. I am Chinese-American, which through much of my life was a significant source of existential tension. There were so many signals all around me—in my home, in my school, on the nightly news—that seemed hell-bent on framing these two components of my identity, Chinese and American, as profound opposites and pitting them against each other. Today those signals are ever louder. And it took me many years to realize that to be born Chinese-American is in fact one of the greatest gifts my parents ever gave me. To be born the union of two cultures, two peoples, two histories from places so distant from one another is to represent a rich spectrum of humanity. It makes you realize how many false binaries there are in the world. It makes you question narrow worldviews and expand your aperture of understanding. It makes you understand that there are far more than two sides to any story. It makes you hungry to go discover them.
In college I put down my pen and studied mechanical engineering at MIT. When I graduated, I had every intention of working in the tech industry. I moved to San Francisco and joined a classic Silicon Valley startup with ambitions to change the world. Within a year, the startup began to decline rapidly due to a lack of profitability. It forced me to confront the Silicon Valley innovation model: are the best technologies really the most profitable ones? It also forced me to confront whether I wanted to be part of an industry or a culture that might think this way.
I switched into journalism on a whim, drawn back to writing as my medium for change. And in 2018, after two years of internships, I took my first full-time staff position at MIT Technology Review to cover AI. This was not motivated by any kind of insight. I did not foresee how important AI would become. I took the job because I needed a job. And after applying to dozens of openings, this one—to cover AI—was the only one that accepted me.
I was initially really disappointed. I did not want to cover AI. Just as in my childhood, AI seemed to me horribly boring. Some things never change. But I was quickly hooked by the role. I realized that covering AI is like working on a blank canvas. AI intersects with everything; its pursuit, the industry, and its impacts encapsulate so much about people, our relationships, how we organize ourselves, how we find meaning. It means that to cover AI is to cover whatever you want. I’m really lucky that I got to do this before commercial interest in AI exploded and before the AI infocalypse started.
Today many people think of AI as ChatGPT, and of ChatGPT as all of AI. What powers ChatGPT is a single massive AI model, known as a large language model, that is trained on unprecedented amounts of data with supercomputers that consume unprecedented amounts of energy. In fact, such a model needs so much data and so much energy that the world now risks running out of both. AI developers themselves have raised alarms that they are running out of internet to train on; supercomputer developers are proposing to build single data center campuses that could soon require 1,000 to 2,000 megawatts of power, as much as one and a half to three and a half San Franciscos consume. All this to develop a chatbot that companies are trying to turn into a new form of search? Except that this new search uses ten times more electricity than a Google search. And it generates answers to your queries through a probabilistic technology that is inherently unreliable and riddled with errors, or so-called hallucinations.
To believe that AI can only ever look like such a model, and that progress can only ever emerge in this way, is not only false but dangerous. Some of the AI models we desperately need more of in the world right now are small enough to train and run on a laptop: models that, in the hands of well-trained doctors, can identify different types of cancer much earlier in MRI scans; models that can optimize energy loads on a power grid to better integrate renewables and accelerate our transition away from fossil fuels.
What’s more, the predominant vision of AI development today is predicated on the very premise that there is but one truth, one worldview, that can be captured in a single massive model and become the foundation for everything. Companies like OpenAI, Microsoft, Google, Facebook—they’re all chasing after this goal, which they’ve given the name AGI, or artificial general intelligence. AGI will supposedly solve cancer, solve climate change, bring us to utopia. Or, if we are not careful, destroy us. Some people describe AGI as having godlike qualities. All of those who develop it pray at the altar of scale. I think this audience will have a unique appreciation for what this rhetoric sounds like. The pursuit of AGI has turned into a religion. And all of its promise and prophecy hides the enormous social, labor, and environmental costs that go into the production of this “one model rules the world” philosophy.
To me, this is fundamentally antithetical to the human experience. It collapses all of the beautiful diversity in the world into a singular statistical sameness, a probabilistic mush upon which the people and companies who profit from this vision want us to build our health care systems, our educational systems, our political systems, the rest of society. How does that benefit humanity? It does not.
In my work now, I am continuing to document the troubling consequences that emerge from Silicon Valley’s conception of AI development. I am continuing to travel to unexpected places to show how this conception does little to serve the greater majority of the world and, in many cases, actively harms them. I’m also excited by the search for different visions of AI. Visions like that of the indigenous couple in New Zealand, who transformed this technology from one that accrues benefit upward into one that redistributes it to those who were historically marginalized. Visions that celebrate difference; that celebrate history, context, and culture. Visions that are democratic and humanist: that do not cede our agency and power to corporations, or to false conceptions that a higher being might emerge from 1s and 0s, but that enable human potential and enable each and every one of us to flourish.
I encourage all of you to also do that in your own life. AI is already everywhere: at your work, in your doctors’ offices, in your children’s schools. Ask who is developing it, how it’s developed, who does it serve. Resist AI technologies that seem only to erode your data privacy, that consume enormous resources, that seem only ever to automate jobs, that seem to flatten everything into simple categories and teach your children a diminished view of the world. Champion AI technologies that do the opposite; that preserve your privacy and agency; that are sustainable; that defer to your expertise; that embolden the historically marginalized and push you to expand your aperture of understanding.
And when you come across amazing stories that illustrate new visions of AI, send me a note. I’d love to write about them.