Code for tolerance: How tech companies can respond to hate but respect human rights

This week, companies including GoDaddy, Twitter, Google, Spotify, PayPal, Apple, and Cloudflare have booted hate groups off their services. The decisions came after a driver in Charlottesville, Virginia, plowed into a crowd of protesters opposing a neo-Nazi rally, killing a woman and injuring several others.

U.S. President Donald Trump initially assigned blame to “both sides”; then revised his statement to specifically condemn white supremacists and neo-Nazis; and then doubled down on his claim that “alt-left” activists were also at fault for the violence. The Daily Stormer, a hate group site, published an article shaming and insulting the woman who was killed by the driver in Charlottesville. In response, the web hosting platform GoDaddy cut off service to the publication. Other platforms soon followed suit. Leaders of major tech companies such as Apple condemned the racist acts in Charlottesville, moved to ban white supremacist groups from using their services, and donated funds to anti-racist groups.

As an organization that supports the digital rights of users at risk around the world, Access Now condemns hate groups and actively fights the racism and discrimination they advocate. We do this by supporting the fundamental rights to free expression and privacy. We advocate for free expression at all levels to create space for meaningful dialogue unfettered by governments or companies. No one is perfectly equipped to determine what should and should not be said online. Some speech can incite people to violence and may rightly be prohibited under human rights norms. But we also recognize that other speech, such as that espoused by hate groups, can make marginalized people feel physically unsafe and induce self-censorship, preventing them from exercising their human rights, online or off. Importantly, we offer free digital security advice to vulnerable people, helping them establish the security and zone of privacy that are essential to speaking and connecting freely.

U.S. government abdicates its responsibility to protect human rights

Freedom of expression is not an absolute right. However, governments appear too willing to obscure the most public and vocal face of hate while failing to combat the deeper roots of racism and violence, listen to victims, and prosecute those responsible for the most heinous and violent crimes. Hate groups in the U.S. have emerged in part because they feel emboldened by the rhetoric of U.S. authorities, but also because the government has failed to uphold its responsibility to protect human rights, especially those of minority communities. That failure includes weakening investigations into police brutality and, potentially, neglecting to effectively investigate credible threats of violence while ensuring appropriate protections for human rights. Rather than leading the nation into meaningful dialogue, opening effective avenues for civil society to engage on these questions, and backing those efforts with rights-respecting law enforcement, the Trump Administration has ignored or even allowed the rise of violent non-state actors. This abdication of the government's responsibility to protect human rights places pressure on technology companies, which are in turn responding to the presence of hate groups on their networks.

Tech companies must bake human rights into their policies

Tech companies must make human rights law a core consideration when evaluating whether to boot hate groups from their services. They currently appear to be acting based on public opinion and ad hoc interpretations of their terms of service. They are not, in general, making these judgments through established policies or consultation with human rights organizations. For example, in an internal email to employees, Cloudflare CEO Matthew Prince wrote:

This was my decision. Our terms of service reserve the right for us to terminate users of our network at our sole discretion. My rationale for making this decision was simple: the people behind the Daily Stormer are assholes and I’d had enough… Let me be clear: this was an arbitrary decision.

Prince goes on to call for a dialogue to develop policies surrounding the decision. Yet an “arbitrary decision” affords no due process or fairness, and the same power could eventually be wielded against vulnerable groups for their speech, too. A CEO could just as arbitrarily decide to ban the Southern Poverty Law Center or the Anti-Defamation League, an outcome that no one but hate groups would want and that would not respect human rights. President Trump himself has seized on this kind of logic, drawing a false equivalence between neo-Nazis and the people who oppose them.

As David Kaye, the U.N. Special Rapporteur on freedom of opinion and expression, noted in a tweet, “‘because they’re assholes’ is a terrible reason for restricting expression, even if another basis exists (such as inciting violence).”

David Kaye seems to be arguing that the justification “because they’re assholes” may not satisfy an exception under international human rights norms, especially if another basis exists for removing The Daily Stormer that would respect human rights. Later in the Twitter thread he clarifies that “it’s abt process and standards.” We agree that tech companies should incorporate human rights norms into their policies in order to make decisions about the use of their services by hate groups effectively and consistently. They may well end up with the same result of banning The Daily Stormer. Earlier this year, we closely examined Cloudflare’s abuse reporting form, which had allowed hate groups to harass people for complaining that Cloudflare was providing them services. As we previously wrote:

Companies like Cloudflare are shielded from liability of content that passes over their networks through safe harbor provisions such as Section 230 of the Communications Decency Act and the Digital Millennium Copyright Act… This compromise of shielding internet intermediaries from liability fostered the creation of the web because of the sheer volume of content that moves across internet services — for example, YouTube has 400 hours of video uploaded every minute… But this issue, in some respects, calls into question the strength of intermediary protections on offer.

Despite the protections afforded to internet companies, the government’s failure to protect human rights means that these companies may increasingly be forced to account for the content moving over their networks.

Tech companies can make decisions right now to respect human rights. They can integrate human rights into their operations and join organizations that focus on helping them do so, such as the Global Network Initiative. They should consult regularly, and in a structured manner, with human rights groups, rather than turning to them only when trouble arises. They can, in short, do better than waiting for a horrific attack and making a gut decision. We’re here to help, and so are many other organizations that have been working on these issues for some time.

Recommendations for tech companies encountering hate groups

Improve due diligence

  • Undertake a human rights impact assessment across all your platforms and services;
  • Consult with users at risk and stakeholders impacted by your company’s policies;
  • Engage with the human rights community about evolving notions of online hate speech and intermediary liability, via a public point of contact.

Update policies and practices

  • Use plain language, instead of technical terms, whenever possible to better inform users;
  • Bake human rights protections into any new policies and services, rather than relying on a model of scaling up first and addressing abuses later; and
  • Provide clear, rights-respecting, and predictable appeals mechanisms to users and clients.

Remedy any adverse impacts and ensure non-repetition

  • Acknowledge harm caused to victims when it occurs, without qualification;
  • Seek to provide an appropriate remedy to those affected; and
  • Consider joining a multistakeholder entity like the Global Network Initiative to ensure ongoing attention to human rights impacts.