What The Actual F**k Is Up With Twitter's Support System?

The new Ghostbusters reboot has been controversial since its inception. Some detractors find fault with what they see as the unnecessary rehashing of a beloved franchise, while other, more devious critics take issue with the all-female cast. Most of the latter, it is safe to say, are found on the internet, where murky depths and a shield of anonymity provide the perfect breeding ground for all manner of trolls. Case in point: the movie's trailer became the most disliked movie trailer in YouTube's history, and some writers are quick to blame anonymous sexists.

While it is easy to dismiss the video’s reception as evidence of the trailer’s poor quality (rather than the Internet’s widespread misogyny), a new online controversy has erupted that is much more straightforward. This one comes after one of the stars of the movie, Leslie Jones, was inundated with racist and misogynistic tweets, eventually quitting Twitter because of the torrent of hate. For some time before leaving, Jones retweeted the offensive messages people were sending, which eventually brought the issue to the attention of the site’s CEO, Jack Dorsey, who seems to be taking action. On Tuesday, Twitter permanently suspended the account of Milo Yiannopoulos, an ultra-conservative blogger who had called the actress “barely literate,” and, some say, spearheaded the harassment campaign against Jones.

Yiannopoulos—or, as he calls himself, "the most fabulous supervillain on the internet"—actively courts controversy with outlandish right-wing claims, and even sells t-shirts that read "feminism is cancer." He's a "free speech" evangelist who is unable to understand that "free speech" and "no consequences" are not equivalent, crying foul whenever anyone responds in kind. Responding to his suspension, Yiannopoulos said in a statement that "anyone who cares about free speech has been sent a clear message: you're not welcome on Twitter."

Twitter, however, as a company, acted well within its rights and moral obligations when it censored an actively antagonistic voice on its platform. Even the speech guaranteed by the First Amendment is not entirely free: speech that incites imminent lawless action, for instance, is not protected (see Brandenburg v. Ohio). The real question Yiannopoulos' suspension raises concerns Twitter's adjudication process: not only how the site handles online harassment, but how it defines it.

Milo Yiannopoulos.

Back in 2015, in a leaked internal memo obtained by The Verge, then-CEO Dick Costolo responded to the previous year's Gamergate controversy and openly admitted that "we suck at dealing with abuse and trolls on the platform and we've sucked at it for years." What followed was yet another revision to the site's rules, the latest in a long line. Whereas Twitter had begun by emphasizing its commitment to freedom of speech, good and bad alike, the updated rules noticeably tried to balance the site's desire to be the "free speech wing of the free speech party" with the need to protect its users. An updated preamble reiterated that "we believe in freedom of expression and in speaking truth to power, but that means little… if voices are silenced because people are afraid to speak up," and specific language proscribed harassment and hate speech.

At least, seemingly specific language. Rules are open to interpretation, and Twitter's revisions, though well-intentioned, are not precise enough to be unequivocal. Here is an example from its policy on abuse:

Violent threats (direct or indirect): You may not make threats of violence or promote violence, including threatening or promoting terrorism.

Violence, in an age when the word is used to describe both police brutality and the need for trigger warnings, is an amorphous concept. And so, for that matter, are threats. Does this rule only apply, for example, when one user threatens another with physical harm, rape, or even death? Shouldn't "violence" include hate speech, which may live on the screen but affects the psyches of millions? The problem with Twitter's rules, beyond their numerous loopholes (e.g. whether the primary purpose of an account is to harass), is that they don't define the very things they seek to regulate: violence, threat, abuse, harassment.

Obviously, Twitter has its own working definitions, or it would not have suspended Milo Yiannopoulos. Such disciplinary action, however, is inconsistent at best. Earlier this year, for example, journalists Jonathan Weisman and Julia Ioffe suffered extremely hateful harassment campaigns to relative silence from Twitter. And, to humor a point from Yiannopoulos and his disciples, what about accounts with fewer followers, those that fly under the radar but nonetheless walk the line between what they call "being outspoken" and being abusive? Of course, there is a sliding scale: it makes sense that the larger a user's influence, the more easily their behavior will come to the site's attention. But that Weisman and Ioffe's reports were largely ignored is not only indefensible; it also highlights the lack of transparency with which Twitter handles these cases.

How is it possible to guarantee freedom of expression when the boundaries that delimit it are kept under lock and key? As long as Twitter denies its users transparency on the very principle it seeks to protect, a spectre will hang over every tweet: "is this allowed?" In some cases (Milo Yiannopoulos, ISIS) the answer will be straightforward, but in others the question will never receive a concrete answer, and in the end, that unexplained censorship is anything but freedom of speech. Twitter has released a statement on the whole debacle, but what concrete steps will it take in the future to make sure all of its users are safe?

The official statement:

People should be able to express diverse opinions and beliefs on Twitter. But no one deserves to be subjected to targeted abuse online, and our rules prohibit inciting or engaging in the targeted abuse or harassment of others. Over the past 48 hours in particular, we’ve seen an uptick in the number of accounts violating these policies and have taken enforcement actions against these accounts, ranging from warnings that also require the deletion of Tweets violating our policies to permanent suspension.

We know many people believe we have not done enough to curb this type of behavior on Twitter. We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it’s happening and prevent repeat offenders. We have been in the process of reviewing our hateful conduct policy to prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted. We’ll provide more details on those changes in the coming weeks.

Stay tuned to Milk for more on Twitter.
