There’s this assumption about internet trolls, one that’s been drilled into us since the first troll crawled out from beneath his bridge to terrorize an otherwise peaceful message board: that they do it for the lulz. In other words, trolls exist for no other reason than to get a rise out of us, so if we keep this in mind, we can simply ignore them and they’ll get bored and go away. Anyone who has accepted this as truth should listen to last week’s episode of This American Life, which aired a segment narrated by Lindy West, a former Jezebel writer. If anyone could become desensitized to trolling, one would think it’d be West, considering that she, along with her fellow Jezebel writers, has been subjected to a daily deluge of some of the most hateful rhetoric imaginable from the bowels of the anonymous internet.
But instead of removed indifference, West was plagued by anger, pain, and fear, a trifecta that reached its peak (or perhaps nadir) when a particularly malicious troll created a fake Twitter account pretending to be West’s recently deceased father, tweeting how disappointed he was in his daughter. The incident, coming when West’s wounds were still raw from her father’s death, sent her reeling, and she responded with a Jezebel post lambasting the “don’t feed the trolls” mantra and encouraging others to confront and expose their trolls.
West’s troll would eventually write to her and apologize for the pain he caused her (she actually interviews him in the TAL episode), but the overwhelming majority of trolls won’t be so conciliatory. It seems evident that troll culture has reached an alarming peak in the last year, and Twitter has become ground zero for trolls’ attacks. In October, I wrote about why Twitter is such an ideal platform for trolls, how its open, easily searchable nature allows trolls to organize and descend on their victims en masse in a way that would be impossible on more closed platforms like Facebook. I penned that piece in the wake of #GamerGate, a movement that drove women from their homes in fear for their lives. But even as #GamerGate has died down, the trolling has continued, intensifying to a point that any empathetic human being should find absolutely terrifying.
The coverage of said trolling, on This American Life and other mainstream outlets, has finally forced Twitter CEO Dick Costolo to respond. In an internal memo obtained by The Verge, Costolo acknowledges that “we suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years.” He then promises that “we’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.”
Costolo’s team certainly has a daunting task ahead of it. As I noted in my October piece, what makes Twitter so great is its open platform and how it enables users to plug into the real-time web. Twitter needs to address its troll problem without introducing the kind of friction that would undermine what makes the platform so useful for its millions of users. That balancing act won’t be easy.
That being said, Twitter isn’t the first platform with a troll problem, and administrators for large web forums have been developing methods for dealing with these issues for years. Here are five strategies Twitter could adopt that would sharply curtail troll abuse:
1. Invisible bans
This is a trick that’s been around for at least half a decade. Instead of simply deleting or banning a troll, you enact a ban where that troll’s content is invisible to everyone except the troll himself. This way he doesn’t simply start a new account, but instead continues to tweet his hate thinking it’s being seen by his intended victims, when in reality it’s only visible to an audience of one. Facebook has employed this method through the comments widget you see on many news websites — comments flagged as abusive or spam disappear for everyone except the logged-in troll.
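The mechanics are simple enough to sketch. Here’s a minimal, hypothetical model of an invisible ban: the names (`Tweet`, `visible_tweets`, `shadow_banned`) are my own illustrations, not anything from Twitter’s actual codebase.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str

# Accounts that have been invisibly ("shadow") banned.
shadow_banned = {"troll_account"}

def visible_tweets(viewer: str, timeline: list[Tweet]) -> list[Tweet]:
    """Return the tweets this viewer should see: a shadow-banned
    author's tweets remain visible only to the author himself."""
    return [
        t for t in timeline
        if t.author not in shadow_banned or t.author == viewer
    ]

timeline = [Tweet("alice", "hello"), Tweet("troll_account", "abuse")]

# The troll still sees both tweets, including his own...
assert len(visible_tweets("troll_account", timeline)) == 2
# ...but everyone else sees only Alice's.
assert len(visible_tweets("bob", timeline)) == 1
```

The key design choice is that the ban check happens at read time, per viewer, rather than at write time — so nothing in the troll’s own experience signals that anything has changed.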
2. Require new accounts to cross a threshold before their tweets are widely seen
This tactic is used on Wikipedia, Reddit, and Facebook’s commenting widget. Whenever a Wikipedia article becomes a target for vandalism, admins simply step in and install a rule requiring anyone who edits the article to have an account that’s more than three days old. The theory is that most vandals are fly-by users who won’t bother waiting three days just to commit their vandalism. On many of Reddit’s largest subreddits, bots automatically remove submissions from brand new accounts with no karma. New users are instead encouraged to lurk and build up karma on smaller subreddits so they don’t crowd the default subreddits with rookie mistakes. Twitter could take a similar route by making it so that tweets coming from brand new accounts aren’t seen by more popular, verified users.
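The gate described above can be expressed in a few lines. This is a hedged sketch modeled on the Wikipedia and Reddit rules just mentioned; the field names and thresholds are hypothetical placeholders, not real platform values.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    karma: int

MIN_AGE_DAYS = 3   # Wikipedia-style "account older than three days" rule
MIN_KARMA = 1      # Reddit-style "no zero-karma submissions" rule

def can_reach_strangers(account: Account) -> bool:
    """New, reputation-less accounts can still post, but their posts
    aren't surfaced to users who don't already follow them."""
    return account.age_days >= MIN_AGE_DAYS and account.karma >= MIN_KARMA

# A brand new throwaway account is gated...
assert not can_reach_strangers(Account(age_days=0, karma=0))
# ...while an established account passes.
assert can_reach_strangers(Account(age_days=10, karma=5))
```

Note that this doesn’t block posting outright — it only limits reach, which raises the cost of the throwaway-account attacks the section describes.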
3. Start dealing out lifetime bans
Though Twitter has been known to suspend accounts, it often allows users back onto the platform after a waiting period. This allows repeat offenders to continue collecting followers that can amplify their abuse. For instance, Chuck C Johnson, the conservative troll who regularly outs rape victims and publishes the personal information of those he targets, has been suspended from Twitter at least once and is often reported for abuse, yet he’s been allowed to continue to amass an influential following that he can use to terrorize his victims. Given that Twitter is his main marketing vehicle for his “journalism,” he would likely hold back on much of his abuse under threat of losing those followers forever.
4. More effective training and education of law enforcement
One of the most horrifying details to emerge from the coverage of Twitter trolls is law enforcement’s complete lack of interest in pursuing these trolls. In an article for Jezebel, Anna Merlan details how her visits to a police precinct to report the threats against her were met with indifference and shoulder shrugs. When a man uploaded videos of himself holding knives and guns that he promised to use to murder Brianna Wu, a police officer simply told Wu, “we suggest you turn off your electronic devices.” This response is unacceptable, but you can probably understand why your average police officer might not know how to deal with these threats. Twitter could respond by going on an education tour, visiting police precincts and giving presentations on how to handle these reports and request identifying information from the platform that can aid an investigation. Twitter already does this for other purposes; it has an entire staff that regularly visits media organizations and major brand advertisers to educate them on how to better use the platform. Hell, I received an email from a Twitter sales representative offering to get on the phone with me to talk about promoted tweets, and I’ve only spent a few hundred dollars on Twitter advertising.
5. Hire more staff for a quicker response team
Twitter relies in large part on its users to report abuse, but any abuse reports likely have to be reviewed by a human being before a ban is put in place. Having more staffers standing by to review abusive material and issue bans more swiftly would frustrate trolls and dissuade them from putting in the time and effort to create new accounts.
Will these five steps remove all the trollery on Twitter? No, but they will make the platform openly hostile to trolls, making it difficult for them to take root and multiply. Taking decisive action will also send a reassuring signal to Twitter’s millions of peaceful users, most of whom would like to exist in a world where the response to degrading abuse isn’t simply a suggestion that they should grow a thicker skin.