Twitter beefs up anti-abuse controls. What's changed?

Twitter unveiled new measures to crack down on abuse and harassment on Wednesday, including a shift to using algorithms to identify users in violation of the rules. 

The sign outside of Twitter headquarters in San Francisco is seen from the street below. Twitter announced Wednesday, March 1, 2017, that it is adding more new tools to curb abuse, part of an ongoing effort to protect its users from hate and harassment.

Jeff Chiu/AP/File

March 1, 2017

Twitter unveiled new measures to crack down on abusive behavior on Wednesday, the latest in a string of recent updates aimed at making the social network a safer and more pleasant place to be. 

The newest changes include an expansion of the algorithms used to identify users engaging in potentially abusive behavior, new filtering options that let users limit which types of accounts appear in their notifications, and efforts to make the abuse-reporting process more transparent. Together, the changes place more responsibility on the company itself, rather than relying mainly on users to improve the social environment.

The updates come as Twitter and other social media sites navigate mounting pressure to crack down on hateful and abusive behavior without sacrificing free speech in the process. Last month, the company announced a number of changes aimed at curbing hate speech, including barring users whose accounts have been repeatedly banned from creating new ones and hiding offensive content from conversation threads and search results. 


"Twitter has been proceeding carefully and thoughtfully in thinking through and rolling out tools designed to help harassment victims," University of Maryland law professor Danielle Citron, who advises Twitter on these issues, told USA Today. "Those tools aim to put victims in the driver's seat but also tackle how overwhelming it can be when attacked by a cyber mob. The newest tool helps ensure that a harasser's provocations of others don't fill up victims' notifications."

The increased use of algorithms to identify potentially abusive accounts marks a departure from the service's previous reliance on users to report possible violations of site rules. Last month, Twitter announced that it would begin putting accounts identified by an algorithm as potentially abusive in "time out," during which the account's tweets are shown only to its followers. 

The changes announced Wednesday are an expansion of that feature: "The company is now doing a lot more using its own technology to identify abusers' accounts and take action," writes Sarah Perez for TechCrunch. "In other words, Twitter isn't just mindlessly flagging accounts that tweet a single rude word or phrase, but is more closely examining the behavior itself and the context." 

Accounts identified by algorithms and found to be in violation of Twitter's rules will have their functions and reach limited temporarily, the company said. 

"We aim to only act on accounts when we're confident, based on our algorithms, that their behavior is abusive," Ed Ho, Twitter's vice president of engineering, wrote in a blog post. "Since these tools are new we will sometimes make mistakes, but know that we are actively working to improve and iterate on them everyday." 


Of course, users will still have the option of reporting abusive behavior themselves – and for those who do, Twitter is increasing the transparency of the process. Users who report abuse or harassment will now be notified when Twitter starts looking into the report and if the site decides to take action. 
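
In practice, that transparency amounts to notifying the reporter at two points in a report's lifecycle. Here is a minimal sketch of how such a lifecycle could be modeled; the state names and notification helper are hypothetical, not anything Twitter has published.

# An illustrative model of the report lifecycle described above: the
# reporter is notified when review begins and again if action is taken.
from enum import Enum

class ReportStatus(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    ACTION_TAKEN = "action_taken"
    NO_VIOLATION = "no_violation"

def notify(user_id: str, message: str) -> None:
    print(f"[to {user_id}] {message}")  # stand-in for a real notification service

def advance_report(report_id: str, reporter_id: str, new_status: ReportStatus) -> None:
    """Move a report to a new state, keeping the reporter informed."""
    if new_status is ReportStatus.UNDER_REVIEW:
        notify(reporter_id, f"We are reviewing your report {report_id}.")
    elif new_status is ReportStatus.ACTION_TAKEN:
        notify(reporter_id, f"We took action on your report {report_id}.")
    # The article promises no notification for other transitions.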

Another new feature lets users mute certain keywords, phrases, or entire conversations from their timelines for as long as they want. The site will also give users more control over their notifications: they can now opt not to receive notifications from the types of accounts typically created for the sole purpose of harassing others. Such accounts tend to lack a profile picture and are not associated with a verified email address or phone number. 
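
That description of throwaway accounts (no profile picture, no verified email address or phone number) amounts to a simple predicate. A minimal sketch of how such an opt-in filter could be expressed, with field names that are assumptions rather than Twitter's actual data model:

# A minimal sketch of the opt-in filter described above: hide notifications
# from accounts with no profile photo and no verified email or phone number.
from dataclasses import dataclass

@dataclass
class Account:
    has_profile_photo: bool
    email_verified: bool
    phone_verified: bool

def should_show_notification(sender: Account, filter_enabled: bool) -> bool:
    """Apply the user's opt-in filter for likely throwaway accounts."""
    if not filter_enabled:
        return True  # users must opt in; nothing is hidden otherwise
    looks_like_throwaway = (
        not sender.has_profile_photo
        and not sender.email_verified
        and not sender.phone_verified
    )
    return not looks_like_throwaway

Requiring all three signals together, rather than any one alone, matters here: plenty of legitimate accounts lack a profile photo, but far fewer also lack both a verified email address and a phone number.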

This report includes material from Reuters.