Curbing Hate: It’s Time for Tech Platforms to Address the Mess They Helped Create

Months before Robert Bowers gunned down eleven Jewish congregants at the Tree of Life synagogue in Pittsburgh, Pennsylvania, he posted anti-Semitic, hate-filled vitriol on the social media platform Gab, which bills itself as “the free speech social network” and has been embraced by Far-Right extremists, white supremacists, and bigots. Before he murdered nine black Americans at the Emanuel African Methodist Episcopal Church in Charleston, South Carolina, Dylann Roof, spurred forward by Google’s search algorithm, ventured down a digital rabbit hole of racist, fearmongering misinformation that radicalized him to the point he was willing to kill. Before white supremacists and neo-Nazis brought hate and violence to the streets of Charlottesville, Virginia, they used PayPal, Facebook, Discord, and other online forums to coordinate the violent rally and spread their rhetoric.

Since the inception of the internet, insular online communities of hateful people have spread misinformation, stoked nationalist entitlement, and called for angry men to channel their rage through intimidation and outright violence. These communities have carved out a place for themselves through a combination of technological ingenuity and free-speech legal arguments, often skirting the tenuous legal boundary between hate speech and incitement to violence. Following the lead of Stormfront—the first major racial hate site on the internet, established in 1995 by former KKK leader Don Black—hate groups have been early adopters of new web services, eagerly sinking their teeth into any tools and platforms they can use to promote their hateful agenda.

The traditional response to these toxic online communities has been to ignore them: even bigots, the thinking goes, have the right to post their ideas online, and confronting hate speech only drives it further into the dark corners of the web. (Although that might actually be a good thing: better that a hate community exist in the darkness, behind a high barrier to entry, than out in the open where it can scoop up more recruits prone to Far-Right ideology.) Nevertheless, that approach overlooks the fact that the internet is more than a space to share ideas; it is a set of services that allows for efficient coordination of real-world activities while also protecting the participants with near-anonymity. Indeed, hateful online activities do culminate in real-world violence and tragedy. We need only look at last week's shootings and attempted bombings as proof. As bigots, sexists, and white supremacists are further emboldened by our political leaders, we can expect these tragedies to increase in frequency and severity. What can we do against such reckless hate?

The Change the Terms coalition, a group consisting of "civil rights, human rights, technology policy, and consumer protection organizations," has developed a set of corporate policy and terms-of-service recommendations for internet companies that aim to curb the proliferation of hateful activities both online and off. The coalition recognizes that "most tech companies are committed to providing a safe and welcoming space for all users. But when tech companies try to regulate content arbitrarily, without civil rights expertise, or without sufficient resources, they can exacerbate the problem."

The exacerbation they are referring to is driven by tech companies' attempts to use machine learning, artificial intelligence, and other algorithm-based mechanisms to find, flag, and remove content that violates their terms of service. Such methods produce biased and discriminatory results and can be skirted with coded language common among hate groups, like referring to minorities as "animals" or swapping "supremacist" for "nationalist." Using machines to handle this important task is a half-step at best. Tracking digital content that incites or engages in violence is a nuanced process that requires an understanding of past and present cultural dynamics and of civil rights law. In other words, it requires human beings. Tech platforms helped create the mess of prolific hateful activities online. It's time for them to allocate the energy and resources necessary to address it efficiently and non-arbitrarily.
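To make the limitation concrete, here is a minimal, hypothetical sketch of the kind of keyword-matching approach that automated moderation can reduce to; the blocklist, function name, and example posts are invented for illustration and do not represent any platform's actual system.

```python
# Hypothetical sketch: a naive blocklist-based content filter.
# Exact-term matching catches explicit slurs or labels but misses
# coded language and simple word swaps.

FLAGGED_TERMS = {"supremacist", "exterminate"}  # simplistic, invented blocklist

def is_flagged(post: str) -> bool:
    """Flag a post only if it contains an exact blocklisted term."""
    words = post.lower().split()
    return any(term in words for term in FLAGGED_TERMS)

example_posts = [
    "White supremacist groups are organizing a rally.",   # caught: exact term
    "White nationalist groups are organizing a rally.",   # missed: swapped term
    "Those animals don't belong in this country.",        # missed: coded language
]

for post in example_posts:
    print(is_flagged(post), "-", post)
```

The second and third posts sail through unflagged even though they carry the same intent, which is the coalition's point: context and coded meaning require human reviewers with cultural and civil rights expertise, not just pattern matching.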

It is worth noting that the Change the Terms coalition is not interested in interfering with free speech or even monitoring hate groups. The recommendations are formulated specifically to prevent the facilitation of hateful activities, defined as “activities that incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.” These recommended policies are not about censoring hate speech; they merely aim to better equip companies to combat activities that are already illegal under US law. The recommendations ask internet companies to develop fully transparent terms-of-service enforcement procedures that use the best available staff, tools, and training, including a fair and open appeals process for removed content.

The Change the Terms recommendations were released on October 25, but the campaign has already had an impact, most notably through the de-platforming of Gab (the social media site used by the synagogue shooter). Gab has long refused to place reasonable restrictions on content posted to the site, which undoubtedly played a role in Robert Bowers' decision to violently attack Jewish congregants. In the aftermath, PayPal, GoDaddy, Stripe, Medium, app stores, and other web services quickly severed ties with Gab, forcing the site offline. Gab may resurface in the future through independent web-service providers, but it won't enjoy the luxury and privilege of easy access through mainstream platforms.

With a set of standard guidelines, the Change the Terms coalition hopes to empower more internet companies to take similar actions against hateful activities on and offline, and to provide the means for consumers to hold companies accountable if they don’t.

These policies are a crucial step toward developing a peaceful, inclusive, and egalitarian society, but they are only one step. With a president who routinely attacks media, political opponents, and minority groups—the same entities that are regularly targeted with violence, intimidation, and harassment by hate groups and other members of his base—it’s clear that more must be done.

Read Change the Terms’ policy recommendations at changetheterms.org, and don’t forget to vote on Tuesday, November 6.