Can verified Twitter accounts help curb hate speech?
Twitter announced Tuesday that it would expand its verified accounts program and permanently ban controversial figure Milo Yiannopoulos, hours after actress Leslie Jones became a high-profile target of online hate speech.
If anonymity breeds cruelty online, will verified identities lead to a more civil conversation?
Less than 24 hours after "Ghostbusters" star Leslie Jones received a barrage of racist and misogynistic tweets, Twitter has opened the floodgates on applications for its blue verified badge, which confirms that users really are who they say they are.
Starting Tuesday, all users can apply for the verification icon, Twitter announced. Account verification was previously reserved for celebrities, journalists, and other users and organizations whose accounts were considered "of public interest." Verified accounts – signified by a blue badge to the right of a profile name – signal to users that an individual's or organization's account is authentic.
The announcement appears to be one of Twitter's attempts to curb hate speech on its micro-blogging site. While more verified accounts could help weed out online trolls, such as those who targeted Ms. Jones, it's unclear whether the move will solve the problem. Yet, as one blogger tweeted Tuesday afternoon, it's a step in the right direction.
Twitter did not explicitly state Tuesday whether the expansion of verified accounts is linked to filtering out abuse. Officially, Twitter expanded the verification process "to make it even easier for people to find creators and influencers on Twitter," said Tina Bhatnagar, Twitter's vice president of user services, in a press release.
"We hope opening up this application process results in more people finding great, high-quality accounts to follow, and for these creators and influencers to connect with a broader audience," she said.
Out of 300 million monthly Twitter users, some 187,000 are verified, according to the press release.
Twitter introduced verified accounts in 2009 following a landmark lawsuit over account impersonation, reported Motherboard, an online magazine. Tony La Russa, who was then the manager of the St. Louis Cardinals, sued Twitter over an account that he alleged impersonated him and posted disparaging messages in his name. Within a day of being served the lawsuit, Twitter removed the account in question, and it introduced verified accounts soon afterward.
In January, Twitter revoked the verified status of Milo Yiannopoulos, a highly controversial writer and free speech advocate who tweets under the @Nero handle and writes for conservative outlet Breitbart.com, for violating its policies. Buzzfeed speculated that Mr. Yiannopoulos lost his verified status for inciting harassment. According to Twitter's verified accounts guidelines, an account may lose its verified status if the profile information or the original purpose of the account changes, or if its tweets become protected.
An account whose verified status Twitter revokes is not eligible to regain it. Mr. Yiannopoulos was also an instigator of Monday's targeted attack on Jones, for which he was permanently barred from Twitter the next day.
Since its creation, Twitter has been criticized for how it addresses hate speech and abusive accounts. The company began to move away from its permissive free-speech stance in 2013, following several high-profile incidents involving online trolls. Since then, Twitter has strengthened its guidelines against abuse and, in February 2016, established the Trust & Safety Council, composed of 40 organizations and academics, including the National Domestic Violence Hotline, LGBT advocacy group GLAAD, the Anti-Defamation League, and UK-based charity Anti-Bullying Pro.