How to hide your digital trail in plain sight

New York University’s Finn Brunton and Helen Nissenbaum spoke with Passcode about their book, 'Obfuscation: A user’s guide for privacy and protest.'

November 16, 2015

Use a discount card to buy a bag of Cheetos, and the supermarket will log your purchase. Look for something on the Internet, and search engines such as Google, Yahoo, or Bing will tailor the ads you see. Even your mapping software tracks your location.

But what if you could find a way to cover your tracks?

Obfuscation is the process of burying useful information in a heap of similar data, rendering the collection useless to whoever is tracking it. Users of discount card exchange networks now band together to place all of their purchases on the same cards – making it impossible to track a single buyer. The browser plugin TrackMeNot searches for random phrases in the background while users are online, turning search engine histories into lists of unconnected terms. In today’s era of digital surveillance, obfuscation can be incredibly useful.
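To make the mechanism concrete, here is a minimal Python sketch of the decoy-query idea behind tools like TrackMeNot. The phrase list, the issue_query stand-in, and the timing below are illustrative assumptions, not the plugin’s actual word lists or network code; a real tool would send the queries to the search engine itself, so that the decoys land in the same server-side log as the genuine search.

```python
import random
import time

# Illustrative decoy phrases; a real tool would draw on much larger, evolving lists.
DECOY_PHRASES = [
    "weather in oslo", "used bicycle parts", "birdwatching checklist",
    "how to fix a leaky faucet", "19th century train schedules",
    "soup recipes without onions", "local pottery classes",
]

def issue_query(query: str) -> None:
    # Stand-in for sending a search request to the engine being obfuscated.
    print(f"searching: {query}")

def obfuscated_search(real_query: str, decoys_per_query: int = 4) -> None:
    """Bury one genuine query in a shuffled batch of random decoys."""
    queries = random.sample(DECOY_PHRASES, decoys_per_query) + [real_query]
    random.shuffle(queries)  # the log carries no ordering cue for the real query
    for q in queries:
        issue_query(q)
        time.sleep(random.uniform(0.5, 2.0))  # jitter so the batch isn't an obvious burst

obfuscated_search("flights to reykjavik")
```

The point is not secrecy but dilution: the observer still records every query, yet any single entry in the log becomes an unreliable signal of the user’s actual interests.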

But, so far, it hasn’t seen much academic study. That inspired New York University Assistant Professor Finn Brunton and Professor Helen Nissenbaum’s new book, “Obfuscation: A user’s guide for privacy and protest,” the first scholarly review of the subject. Dr. Nissenbaum is a particularly well-versed guide: she spearheaded work on TrackMeNot and its sibling, AdNauseam, which undermines ad-tracking systems’ ability to analyze trends across many users by automatically clicking every ad that users’ ad blockers filter out.

Passcode spoke to Brunton and Nissenbaum about the book, the digital frontiers of obfuscation and the moral case for and against its use. Edited excerpts follow.

Passcode: What’s the difference between obfuscation and lying?

Brunton: Lying is a very powerful, but imprecise, term, and, especially given the breadth of different kinds of things that obfuscation does, I think it is much too general.

A favorite example of mine is France. French citizens love to speed, and because so many of them have radar detectors, they are able to speed with impunity. France somehow has to try to control them. For technological reasons, banning radar detectors would mean banning a lot of other kinds of devices, and it would be very difficult. So instead, the government simply places decoy devices along the roads that emit signals the radar detectors pick up. Even when French drivers realize that their radar detectors are going off when the police aren’t monitoring them, they still have to slow down, because they know that the police exist somewhere and could be anywhere.

They took up arms to fight Russia. They’ve taken up pens to express themselves.

So are those devices lying? Not really. They're obscuring the truth to create a certain sort of social good, but they're not necessarily lying. I would think that a person who is asked a direct question with some informative content is lying when they give a false response.  

Obfuscation can be deceitful to provoke a response, but the deceit doesn’t have to be believed to provoke the response.

Passcode: So, the recent debate about Facebook’s real-name policy, where some users wanted the right to use pseudonyms to protect their privacy, isn’t necessarily an example of obfuscation? Obfuscation has to have an aspect of diluting the truth with additional information?

Brunton: Right. Often when I'm trying to find a visual metaphor for obfuscation, I draw on a great “Where’s Waldo” image Helen found with Waldo hiding among hundreds of other Waldos.

Nissenbaum: Facebook is in that same big bucket, but it's different from obfuscation.

Brunton: It’s a somewhat blurry line, but obfuscation is different from concealment. Obfuscation is the production of ambiguous, confusing, or deliberately misleading information in contexts where direct observation cannot be avoided.

In the case of Facebook, people who are fighting the name policy might not be doing so because they want to conceal an identity, but because it’s very important to them to be able to maintain two different identities. But people doing exactly that might also be using obfuscation on Facebook: for example, slipping into the middle of bland updates on a real-name account a note whose actual significance only friends who understand their lives will get, so that the really salient activity is buried in a bunch of other things that all seem unimportant.

Passcode: What types of situations call for obfuscation as opposed to other privacy techniques – encryption, choosing not to use a service, or even run-of-the-mill lying?

Nissenbaum: I go back to my own experience working on and developing tools like TrackMeNot. In that situation Google was logging all our searches, and there was no constraint on what they could do with those searches. I have no way to say to Google, "Don’t do it." TrackMeNot addresses a typical type of scenario, where you need to engage with someone for something to work. You need to provide information in order for them to respond. But you have no say about what they do with that information on the other side.

Brunton: We refer to obfuscation as a “weapon of the weak.” This is a concept from the political theorist James C. Scott, who did extensive research in peasant agricultural communities in Southeast Asia. His particular interest was how people who lack access to the tools for redress that others might have, like the ability to vote, access to the law, or violence, carve out different ways to push back against inequities in their situation.

What he tried to identify were "weapons of the weak," which included things like pilfering, deliberate stupidity, and slowdowns, all these sorts of small-scale ways to resist in situations where you cannot take a noble Spartacus-like stand against injustice. Obviously, Google and consumers have a different kind of power relationship. Consumers don’t necessarily know, or aren’t even in a position to understand, what is being done with their data.

It’s not just that they can’t selectively refuse; it’s that anyone who is not a professional in data mining or machine learning is not really going to be able to grasp the potential of what can be done with these things. For us, it became really interesting to take this idea of weapons of the weak and apply it to people who are nonetheless weak in relation to the powers gathering their data, and then see what kinds of tools are available for them to use. Obfuscation really jumped out as one of those classic approaches.

Passcode: In the book, most of the examples you give are these kinds of power imbalances – where, say, a Website exerts control by collecting consumers’ data. But you also profile a few situations where it is the government doing the obfuscation – like the Russian and Mexican governments using Twitter bots that post gibberish under opposition hashtags to drown out dissent. Why would obfuscation be a weapon of the weak? Why isn’t it a weapon for the weak or the strong?

Brunton: The reason I suspect we will not see obfuscation used across the board by lots of different groups is that if you have the secret police, the best encrypted communications the NSA can provide, the red phone, diplomatic pouches, and money on your side, you don’t need to use it.

Governments aren’t in the kind of situation where they can’t avoid observation. Even with the Twitter example, it’s still the weak people, the opposition, who are being obfuscated, even though it’s the government doing the obfuscating.

Passcode:  One of the interesting things about the book is how much of it is devoted to reasons not to use obfuscation – both ethical and practical. If it works, why shouldn’t people use it?

Nissenbaum: There are two kinds of questions. One is the discussion we’ve been having, which is the use of obfuscation for purposes we consider problematic, like stifling speech. The other arises even if we assume we’re in a space where we agree that the end we’re trying to achieve is correct. That doesn’t mean that any means to that end is correct, and a lot of the critiques we work through in the book are ones that actually came up when we presented the ideas.

For example, who are we free riding on? Are we free riding on the service providers, by taking advantage of an online service we implicitly agree to give our data to in exchange for that service? Are we free riding on other people? If your use of obfuscation really relies on other people not using it, then you’re gaining an advantage at their expense. Our argument is that when you’re looking at any obfuscation tool, in order to come out at the end and say, “Yes, this is morally acceptable,” you need to analyze the specific design and implementation of that system.

Brunton: Obfuscation strategies that involve many people all hiding their identities at once as a way of concealing the activities of one person can backfire. If only one person is wearing a mask, they’re much more identifiable than if no one is wearing a mask. There are lots of situations in which a single person trying to obfuscate might be at greater risk. It’s very contingent on the threat model, on the adversary, and on the goals the user has; those are the kinds of things that shape the circumstances in which we can say, “Obfuscation is going to work better than another privacy technique here, but worse than it there.”

One of the things we dread is the understandable presumption that obfuscation is a one-shot solution to everything. That’s the way security techniques are often framed. But obfuscation is something that needs to be optimized for particular threats and particular adversaries in particular roles. No one is saying everyone should start wearing their hoodies up over their heads all the time. Part of what makes this technique exciting is that we’re still at the starting point of a larger inquiry into how to use these things.

Passcode: In the case where you know nothing about how your data is being used, does it make sense to obfuscate?

Brunton: In terms of the presumption of wrongdoing, there’s also the big issue of the future: it’s one thing to say that I trust a company I’m giving my data to will just use it to send me ads, but it’s another when that company goes out of business and all of its data is acquired by Scumbags, Ltd.

Data recirculates in many different ways you don’t necessarily know about, even if we think that particular actors are going to be good actors. One of the things that I liked about the idea of obfuscation is being able to future-proof data, so that we don’t have to trust service providers in exactly these situations.

Ideally, someone who offers a service can say, "We are going to actually obfuscate your data on our side, so we can provide the service and get paid for it, but your data will not be used against you in the future by forces outside of us."

Nissenbaum: That's kind of the dream scenario.