Is Twitter Doing Enough to Remove Extremist Content? An Ex-Employee Weighs In

Courtesy INSITE on Terrorism blog: image circulated by IS users mocking Twitter's jihadi account suspensions.

As a former (veteran, even) employee of Twitter, I am used to people expressing shock that “just a website” needs to employ so many people, or wondering, in earnest, whether we all sit around and Tweet for a living. What I am not used to, and continue to be surprised by since moving to London to study Religion in Contemporary Society, is the question of why Twitter isn’t doing more to combat the dissemination of extremist content.

I don’t speak for Twitter, nor do I claim to represent the company’s views, but this question (in conjunction with an assignment to present on the 2014 study “Tweeting the Jihad”) has inspired me to take a closer look at this issue, and to offer a reflection based on past experience and current studies.

The closest approximation to a central Twitter ideology that I can come up with, based on my time in four different departments over nearly as many years, is an adapted version of Voltaire’s adage “I disapprove of what you say, but I will defend to the death your right to say it.” (It was S. G. Tallentyre, writing about Voltaire in The Friends of Voltaire, who actually said this, but no matter.) Twitter is fundamentally a speech platform, captured in former CEO Dick Costolo’s favorite description of the service as the “global town square.” Once that is acknowledged, it becomes clear how profoundly important the policy, execution, and precedent of content removal were, and are, to the company and its understanding of itself.

The Twitter Rules warn, in two places, that an account may be suspended if it “publish[es] or post[s] threats of violence against others or promote[s] violence against others,” or posts “excessively violent media.” This makes sense: it aligns closely with American jurisprudence, in which true threats and incitement to violence mark the boundary of First Amendment speech protections.

I remain impressed by the way Twitter’s Legal and Trust and Safety departments have devised and enforced their content removal policies as they apply to illegal, hateful, or violent speech. The Country Withheld Content (CWC) policy, which allows individual accounts or pieces of content to be withheld only in the countries where they violate local speech laws, reflects this commitment to avoiding censorship whenever possible. The approach lets Twitter as a business (and a public one at that) coexist more peacefully with local and national governments, ensuring continued access to the service while preserving as much freedom to speak as possible.

For example, CWC allows anti-Semitic commentary to be withheld in Germany while it remains available in the United States, where it may not specifically violate any laws (other than the laws of decency). To ensure the policy is applied diligently, only authorized actors like law enforcement or government agencies are allowed to submit CWC requests. You can imagine the difficulty of maneuvering within this system when dealing with a country like Syria (where ISIS is strongest), whose leadership and legal system are unstable at best. For what it’s worth, according to Twitter’s semi-annual Transparency Report, Syria has never submitted a removal request under the Country Withheld Content policy. Iraq submitted its first this year.

That said, individual users can report generally abusive content directly to Twitter; indeed, it is first on the list on Twitter’s policy page. Abusive content includes accounts or individual Tweets used to harass individuals or to promote or incite violence. These reports go to a real person, trained to evaluate the content in question and make a judgment call on whether it violates the rules and should be removed.

Note that on average about 6,000 Tweets are posted each second, totaling roughly 500 million each day. Since Twitter doesn’t (and never has, and realistically never could) actively moderate content before it is posted, even if every piece of offending material were reported using the available tools, the result would still be one of the world’s largest games of “whack-a-mole.”
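As a quick back-of-the-envelope check on those figures (my arithmetic, not an official Twitter statistic):

$$6{,}000\ \text{Tweets/second} \times 86{,}400\ \text{seconds/day} \approx 5.18 \times 10^{8}\ \text{Tweets/day}$$

which rounds to the commonly cited 500 million.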

In light of near-universal calls for Twitter, Facebook, Ask.fm, and other social media sites to suppress extremist content from public view, a number of questions arise. Some are practical: how many Twitter employees speak or read Arabic, Urdu, or Farsi? Should these reports be given priority over reports of other offensive content, like child pornography or threats of rape? Who makes that decision? How many people can a company be expected to employ to review these reports when it has other legitimate business concerns as well?

Debates on these questions could fill volumes, and these are the easy ones.

Some questions are, broadly, ideological. For accounts or Tweets that are sympathetic to, or act as amplifiers of, Muslim ‘extremism’ without specifically inciting or promoting violence, how far is too far? Since there is no accepted definition of ‘extremism’ (just as there is none for ‘religion,’ ‘culture,’ or ‘terrorism’), how can we address these questions systematically while keeping an eye on the precedent we are setting for the future?

We are living in a new world of proactive counter-terrorism programs. In the U.S., for example, there is ‘create and capture,’ an NYPD tactic that sent undercover informants to report on the behavior of Muslims attending mosques, study groups, and community events, hoping to identify potentially radical organizations and individuals. The Green Number, or Numéro Vert, is a confidential telephone hotline in France where family and friends can report someone they fear may become radicalized. And in the U.K., Prevent is a program that, in essence, asks teachers and institutions to watch for warning signs and report the potential for radicalization.

I do not mean to condemn or even criticize these programs, merely to point out that they represent a step towards a culture of policed thought. Extremism is a real threat, but it benefits no one if suspicion, or ideological discomfort, becomes the rationale for censorship or punishment.

I am not arguing here that Twitter is incapable of doing more, or that it should not do more. I do think it is important to note that Twitter employees (and executives) have been specifically threatened by ISIS in the recent past, and so feel the urgency of these issues perhaps more acutely than most.

Twitter is tackling solvable problems where it can. I believe that as a culture we need to approach this issue on two fronts: we must continue the practical, on-the-ground conversation about how to remove illegal, hateful, or violent speech from social media quickly and at scale, while also continuing to ask why this is done and how it is justified.