Tech Companies Have the Tools to Confront White Supremacy

After Charlottesville, companies like Facebook, Twitter, and the rest of Silicon Valley should take a firmer stand against white supremacy on their platforms.
Neo-Nazis, white supremacists, and white nationalists fight with counterdemonstrators near Emancipation Park in Charlottesville, Virginia, on August 12, 2017. Albin Lohr-Jones/Pacific Press/Getty Images

Say you're a white supremacist who happens to hate Jewish people—or black people, Muslim people, Latino people, take your pick. Today, you can communicate those views online in any number of ways without setting off many tech companies' anti-hate-speech alarm bells. And that's a problem.

As the tech industry walks the narrow path between free speech and hate speech, it allows people with extremist ideologies to promote their brands and beliefs on its platforms, as long as the violent rhetoric is swapped out for dog whistles and obfuscating language. All the while, social media platforms allow these groups to amass and recruit followers under the guise of peaceful protest. The deadly riots in Charlottesville, Virginia, last weekend revealed those gatherings to be anything but peaceful. Now it's up to those same tech companies to adjust their approaches to online hate—as companies like GoDaddy and Discord did on Monday, by shutting down hate groups on their services—or risk enabling more offline violence in the future.

A Platform for Hate

For the most part, as long as you’re not using an online service to directly threaten anyone or disparage groups of people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease—policies laid out by Facebook, Twitter, and YouTube—you can get away with practically anything. You can wrap your hate in lofty language about “the heritage, identity, and future of people of European descent,” as white nationalist Richard Spencer does through his supposed think tank, the National Policy Institute. On Twitter, meanwhile, sharing a gas-chamber meme garners just a one-week suspension.

“Social media has allowed [hate groups] to spread and share their messages in ways that was never before possible,” says Jonathan Greenblatt, CEO of the Anti-Defamation League, which has tracked anti-Semitism and hate for more than a century. “They’ve moved from the margins into the mainstream.”

Last weekend’s white-supremacist march in Charlottesville, which left 32-year-old Heather Heyer dead and 19 others injured after an apparent Nazi sympathizer rammed his vehicle into a crowd, was organized out in the open on the very platforms that claim to ban hate speech of any kind. The “Unite the Right” rally had its own Facebook page. On Reddit, members of the subreddit r/The_Donald promoted the event in the days leading up to it. And bigots like former Ku Klux Klan leader David Duke used Twitter to warn ominously that the torch rally was “only the beginning.”

Under the banner of free speech, these tech companies allowed the rhetoric not only to live on their platforms but to thrive there. That’s because they operate under rules that are at once fuzzy and overly narrow about what constitutes banned behavior.

Twitter overtly allows “controversial content,” including from white-supremacist accounts. It only takes action when those tweets threaten violence, incite fear in a group of people, or use explicit slurs.

Facebook, meanwhile, says that while it removes hate speech or any praise of violent acts and hate groups, it allows “people to use Facebook to challenge ideas, institutions, and practices. And we allow groups to organize peaceful protests or rallies for or against things.”

That distinction ignores social media's well-known role as a tool of mass radicalization. Without explicitly espousing violence, these white-supremacist extremists can still recruit potential followers to a set of beliefs with deeply violent roots in Nazi Germany and the Jim Crow South. It should come as no surprise that a protest anchored in hate would erupt in violence. For tech companies to defend those online discussions as peaceful protests is disingenuous at best.

“It is their responsibility to figure out a way not to be complicit with these types of violent actions—or become comfortable with the fact that they are,” says Charlton McIlwain, an associate professor at New York University who focuses on race and digital media.

Tools at Hand

These are, after all, companies, not governments, meaning they’re free to police speech in whatever way they deem appropriate. And in many cases, they already do. Twitter, Facebook, and YouTube have taken aggressive approaches to curbing ISIS activity on their platforms, a type of extremism they handle distinctly from hate speech. Facebook uses artificial intelligence to spot text that advocates for terrorism or terrorist groups, and deploys image-recognition technology to identify terror-related photos or memes. Apparently less sensitive to the free speech rights of ISIS aspirants, Facebook even works to wipe out clusters of users that might have terrorist ties. “We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account,” the company recently explained in a blog post about the approach. Facebook-owned Instagram recently introduced an algorithm to wipe away comments from trolls.
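To make the idea concrete, here is a minimal sketch, in Python, of the kind of friend-graph signal Facebook describes. The account structure, thresholds, and sample data are invented for illustration; Facebook has not published its actual code or parameters.

```python
from dataclasses import dataclass, field


@dataclass
class Account:
    """Hypothetical account record: an ID plus the IDs of its friends."""
    account_id: str
    friend_ids: set = field(default_factory=set)


def flag_suspicious_accounts(accounts, disabled_for_terrorism,
                             min_friends=10, ratio_threshold=0.3):
    """Rank accounts whose friend lists overlap heavily with accounts
    already disabled for terrorism-related activity."""
    flagged = []
    for account in accounts:
        if len(account.friend_ids) < min_friends:
            continue  # too few connections to yield a meaningful signal
        overlap = len(account.friend_ids & disabled_for_terrorism)
        ratio = overlap / len(account.friend_ids)
        if ratio >= ratio_threshold:
            flagged.append((account.account_id, ratio))
    # Highest-scoring accounts first, intended for human review rather than
    # automatic removal.
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    disabled = {"d1", "d2", "d3", "d4"}
    accounts = [
        Account("u1", {"d1", "d2", "d3", "a", "b", "c", "e", "f", "g", "h"}),
        Account("u2", {"a", "b", "c", "e", "f", "g", "h", "i", "j", "k"}),
    ]
    print(flag_suspicious_accounts(accounts, disabled))
```

Nothing about that heuristic is specific to ISIS; the same overlap-with-banned-accounts signal could, in principle, be pointed at any network of accounts a platform has already decided to remove.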

YouTube, meanwhile, has gone so far as to deploy a tool known as the Redirect Method, which serves anti-ISIS content to users searching for ISIS-related videos. Developed by Jigsaw, a think tank within YouTube’s parent company, Alphabet, the Redirect Method was designed to reach people who may be curious about extremist ideology before they become fully enveloped in it. “Let’s take these individuals who are vulnerable to ISIS’s recruitment messaging and instead show them information that refutes it,” Yasmin Green, Jigsaw’s head of research and development, recently told WIRED.
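In outline, the redirect idea is simple enough to sketch. The toy Python below maps risky search keywords to counter-messaging playlists; the keyword list and playlist names are placeholders, not Jigsaw's actual data or implementation.

```python
from typing import Optional

# Placeholder mapping from risky search keywords to counter-messaging
# playlists. A real deployment would rely on curated keyword research
# and vetted content, not a hard-coded dictionary.
COUNTER_PLAYLISTS = {
    "recruitment": "testimonies-from-defectors",
    "caliphate": "daily-life-under-isis-documentaries",
    "martyrdom": "clerics-refuting-violent-propaganda",
}


def redirect_for_query(query: str) -> Optional[str]:
    """Return a counter-messaging playlist if the query matches a risky
    keyword; otherwise return None, meaning serve ordinary results."""
    normalized = query.lower()
    for keyword, playlist in COUNTER_PLAYLISTS.items():
        if keyword in normalized:
            return playlist
    return None


if __name__ == "__main__":
    for query in ["how to join the caliphate", "cat videos"]:
        print(query, "->", redirect_for_query(query) or "regular results")
```

The design choice worth noting is that nothing is blocked: the searcher still gets content, just content chosen to rebut the message they were looking for.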

Now that the Department of Justice has deemed Heyer’s murder an act of domestic terrorism, it remains to be seen whether these companies will apply the same sort of rigor to white-supremacist groups. “These tech companies are very sophisticated. They’ve dealt with issues like child pornography or pirated content or terrorist activity,” Greenblatt says. “I don’t think any of the strategies are perfect, but applying some of those lessons learned from dealing with other public hazards would have a lot of value here.” The ADL has formed a working group of tech companies, including Facebook, Google, Microsoft, Twitter, Yahoo, and YouTube, that is focused on addressing cyber hate.

Of course, having the tools to police white supremacists is different from using them. Social media companies would inevitably face a user backlash and accusations of violating free speech. It's up to them to decide whether taking a moral stance is worth the cost.

Leading by Example

Despite the potential repercussions, some in Silicon Valley have already led the way. Ahead of the events in Charlottesville, Airbnb used background checks to block people it believed were attending the Unite the Right rally, specifically those who were organizing large events on the neo-Nazi website the Daily Stormer.

“We investigated, and in some situations could confirm that users on our platform had booked listings for large gatherings that are affiliated with this event,” an Airbnb spokesperson said, adding that the company “evaluates these matters on a case-by-case basis.” On Monday, the web-hosting company GoDaddy followed suit, announcing plans to boot the Daily Stormer from its service. Within hours, the Daily Stormer had found a new home with Google. Shortly thereafter, Google canceled the hate website’s domain registration as well.

Other popular alt-right destinations have pulled the plug too. On Monday, Discord, a gamer-focused chat platform, announced that it would shutter the popular altright.com server it had hosted. "We will continue to take action against white supremacy, nazi ideology, and all forms of hate," the company announced in a tweet.

But playing hot potato with the web’s ugliest URLs can only do so much to curb the resurgence of white supremacy in America. After all, the internet has no shortage of white-nationalist sites. Squarespace, for instance, hosts Richard Spencer’s National Policy Institute site. (Squarespace didn’t respond to WIRED’s request for comment.)

Even if tech companies were to proactively identify white supremacy as unacceptable content, catching every bad actor would require enormous resources. Some experts advocate giving the government a role in regulating these platforms, the same way it has regulated television broadcasters over the years.

“The history of the US says that if you have an entity that shapes public opinion, like a broadcaster, you were subject to a higher level of scrutiny,” says Nicco Mele, director of Harvard’s Shorenstein Center on Media, Politics, and Public Policy. “If the platforms have the power to shape public opinion, it would be astonishingly un-American and counter to the history of the country not to look at what appropriate regulation looks like.”

Given Silicon Valley’s historic rejection of regulation, its companies are unlikely to accept intervention by Washington without a fight. But the country at large shouldn’t accept the status quo without one either. Indeed, the biggest barrier to tamping down the kind of hate speech that leads to violence isn’t technological. It’s simply making a choice.