How to defeat Internet bullies

Laws against online abuse are often underenforced and many police departments need better training to confront threats on the Web. But author and privacy expert Danielle Citron says states are starting to do more, and the public is beginning to stand up against Internet trolls, bullies, and tormentors.

Students rallied on the Penn State campus on March 20 in support of women who police say were depicted on Kappa Delta Rho fraternity's private Facebook pages. (Photo: Matt Rourke/AP)

March 27, 2015

Effective solutions for ending – or even lessening – the abuse and harassment that happens on the Web have been elusive, to say the least. But the cause received a strong endorsement recently when Twitter, Reddit, and Facebook all took harder stances against deplorable behavior on their platforms. What's more, over the past 18 months, 14 states have criminalized so-called "revenge porn," the act of posting someone's nude images online without their consent.

But even though there's new attention focused on curbing online misconduct, driving that kind of behavior off the Web requires a more concerted effort. Danielle Citron, a leading expert on privacy and online harassment, says it will take enforcing existing state laws as well as a broader societal acknowledgment that what happens online has real-world effects, too.

I recently spoke about these issues with Ms. Citron, the Lois K. Macht Research Professor of Law at the University of Maryland. Edited excerpts follow. 


Selinger: Your recently published book, "Hate Crimes in Cyberspace," has received widespread praise and significant media attention. Have any misunderstandings arisen that you’d like to correct?

Citron: There’s a misperception in the public sphere that the law can’t handle online harassment, and that’s not accurate. We have a well-crafted federal cyberstalking statute and threat statutes at the state level that are very effective. I’d say approximately 50 percent of the state level harassment and stalking laws are well-designed, meaning that they reach harassment posted on third-party sites and not just abuse communicated directly to victims. Yes, there are gaps in the law. Yes, there are ways in which technology has outpaced legal protections. But it’s wrong to say that the law isn’t equipped right now, in some respects, to address harassment.

A big problem is that existing laws on the books are underenforced. Take tort law. We’ve got the right tools. But people can’t afford to use them. Or, look at the FBI. The agency needs more funding. Or, think about police departments. They need better training so that more enforcement happens on the ground. I’m working with the attorney general of California on efforts to train peace officers to address the posting of nude photos, threats, and other forms of harassment appearing online precisely because at the local level there are effective laws to invoke.

Selinger: Does this mean the issue of online harassment is less about expanding the legal imagination and more about having the political will to back viable regulatory mechanisms?

Citron: Yes. No law is perfect, but half of the country has laws that can be applied to punish and deter cyberstalking and cyberharassment. I take a state-centric approach because states have long been understood as the innovators and enforcers of stalking laws. From this perspective, it’s a mistake to rely too heavily on federal law enforcement. For the most part, federal law enforcement may not have sufficient resources or bandwidth to tackle the problem, at least not without serious help from state law enforcement.


Selinger: Given the widespread conversations that are taking place around issues related to online harassment, are you optimistic that the needed bottom-up, state-driven reform will occur? 

Citron: Yes. California’s Attorney General Kamala Harris is proof positive. She’s working on very exciting proposals along with Assemblyman Mike Gatto, including one to allow law enforcement to get a warrant to pursue misdemeanor harassment charges in cases involving the nonconsensual posting of nude images. Then there’s a proposal to amend the state’s long-arm statute that would permit prosecutors to reach outside the state to prosecute defendants who have harmed people living there. We’re also seeing important amendments to the cyberexploitation statute, and much more promising legislative movement. Twenty-five states are considering proposals to criminalize invasions of sexual privacy involving the disclosure of someone’s nude images without consent. In the last 18 months, 14 states – including my own state, Maryland – have criminalized the nonconsensual posting of nude images in violation of someone’s confidence and privacy. That is progress, indeed.

A perfect storm has occurred that brought the issue of cyberstalking to the public’s attention. Amanda Hess’s article in the Pacific Standard came out a year ago. The New York Times covered it and made clear that it’s not OK to try to force people offline and silence them. Brave victims like Holly Jacobs came forward and started talking to the press. You’ve got Jennifer Lawrence’s hacking and disclosure story seven months later. The narrative began to change, compared to earlier times when celebrities like Demi Lovato had accounts hacked but were personally blamed for taking nude photos. In her Vanity Fair piece, Lawrence said, in effect, don’t shame me; invading her sexual privacy constituted a form of sexual assault. Then we had Gamergate. Brianna Wu, Zoe Quinn, and Anita Sarkeesian bravely went public with their stories. They’re speaking out and saying enough is enough.

It’s not that folks haven’t spoken out before. But we’ve finally gotten to a moment where people are paying attention in a more sustained way.

Selinger: The perfect storm that you’re referring to is a build-up of individual cases of online harassment that’s resonating strongly with the public and creating a visceral sense of unease. This gets me thinking about the privacy harms piece that you’re working on. Is there a connection?

Citron: Yes. I’m writing an article with [George Washington Law School Prof.] Dan Solove where we discuss why it’s so difficult to get the courts to appreciate the increased vulnerability that people experience when their privacy is violated. Sure, we can quantify some privacy harms resulting from online harassment, like losing a job or being unable to get one, or suffering identity theft as the result of one’s Social Security number being hacked and posted online, or having to move. But how do you quantify the pain of feeling like you always need to look over your shoulder because you’re never safe, and strangers can confront you offline because of something they read online? Or the subtle shift that occurs where you’re so shaken that you develop a diminished view of your life’s possibilities?

To segue to my work with Solove, we see that courts dismiss these types of harm as not palpable enough, at least when it comes to data breaches. They say you can’t come up with clear financial figures for emotional distress, and end up viewing the suffering as too minor to merit consideration. They fail to recognize the harm because it is not sufficiently visceral, at least in the eyes of many courts.

Selinger: Does that mean the public is expressing greater sensitivity than the courts, at least when it comes to online harassment? 

Citron: The situation seems more complicated than that. I recently appeared on the Kojo Nnamdi show and many of the callers dismissed threats made online because they didn’t occur face-to-face. One caller even said that if a threat isn’t made in person, it isn’t real.

Chapter four of my book argues that social attitudes, including views held by police officers, often reflect the sense that ones and zeros can't hurt you, that the Internet is supposed to be a Wild West where crazy things happen, and that victims can simply choose to ignore them. Think about the fraternity brothers’ response to the Penn State Facebook scandal, where they allegedly posted nude photos of women who were asleep or passed out. Many basically said it was a joke and nobody took it seriously because it happened online. Some members of the public still have a way to go as well. So do judges, who more often than not are not well versed in networked technologies. They’re like, “What’s Facebook?” And they bring this social attitude to the bench, including a tendency to trivialize the harm.

Selinger: We’ve talked about the public and the courts, but what about tech companies? There are lots of positive changes occurring. Twitter, Reddit, and Facebook have all come out against revenge porn and now prohibit nonconsensual nude photos. Twitter is expanding the range of people who can report doxing and impersonation, and it has increased the support team dedicated to responding to abuse reports. But why did it take so long for Silicon Valley to address basic problems?

Citron: I’ve been working with these companies behind the scenes for years. As a member of the Anti-Cyber Hate Working Group and the Inter-Parliamentary Task Force Against Cyber Hate, I’ve been talking formally and informally to tech companies about online harassment. It may seem like they’ve been ignoring the issues, but they have not. We’ve had lots of conversations about cyberharassment. Companies set clear policies about different forms of cyberharassment only gradually, in part because they were wrestling with business models linked to data collection and internal narratives about information needing to be free. But, remember, at least five years ago when we began these conversations in earnest, the public was enthralled with a notion of these platforms as fundamentally pro-social, even if some of the speech was anti-social, because of the perceived value in allowing all flowers to bloom.

Ultimately, these companies realized that difficult calls about boundary thresholds needed to be made. I don’t think they got there slowly so much as they proceeded carefully and deliberately. I’ve got profound respect for these folks, even if along the way I was frustrated and felt like I was just nudging and nudging ... but we have seen some exciting changes. As you noted, Twitter and Reddit recently banned the posting of nude photos without consent, and Facebook clarified its nudity ban with a detailed explanation of the difference between nudity meant to engage with social issues, like photos of mastectomies – permitted – and images of buttocks and genitals posted without the subject’s permission – not permitted. Facebook has long worked to address online harassment; Google’s social platforms have as well. These are exciting times for the fight against online stalking and harassment.

Selinger: But don’t we know enough from psychology and design studies to predict in advance that certain environmental features will bring out the worst in people?

Citron: Yes, companies have long recognized this. Early on, Twitter took a strong stance against impersonation. They understood that anonymity could fuel people to pretend to be someone else and create reputational harm. Tech companies do grasp the social-psychological dynamics related to de-individuation, anonymity, group polarization, and so on. But they were trying to work slowly through the boundary management concern, while also dealing with advertiser pressure, public pressure, and shifting norms, and grappling with the seriousness of shifting defaults away from free speech.

Tech companies had too much confidence in us. They knew the troubles that could come, but were hopeful it wouldn’t be too bad. And, again, it made economic sense to allow all flowers to bloom. But the economic reality is changing. Facebook made changes after big advertisers, including Toyota, started pulling out because of pro-rape pages. Advertisers were saying, in effect, we’re not OK with this and we’d rather be on the side of the angels.

Selinger: Sticking with the theme of coding predictable behavior, you've lamented that Yik Yak “is the Wild West of anonymous social apps” and “is being increasingly used by young people in a really intimidating and destructive way.” Can you imagine an app like it being used in less toxic ways?

Citron: Yes. Over at Secret, David Willner, the head of Policy, Trust, and Safety, is trying to make anonymous communication safe. The key is to be clear on the front end that stalking, harassment, and revenge porn aren’t OK. We need clear rules of engagement for having safe environments with anonymity and real penalties for violating them. It’s all about the right mix of design and policy. Facebook Rooms is an attempt in this direction, too.

Fundamentally, I view all of the efforts of private intermediaries through the lens of due process. Companies should be clear about what they expect from their users. They should explain what they prohibit as clearly as possible, using good examples. They should enforce the rules they set up and establish real costs and consequences. And they should allow challenges that promote fairness. You don't just kick someone off a platform without giving them a chance to appeal. That way, users will have more buy-in, and platforms will effectively recognize and counterbalance the power that they wield.

Evan Selinger is an associate professor of philosophy at Rochester Institute of Technology. Follow him on Twitter @EvanSelinger.