Opinion: If predictive algorithms craft the best e-mails, we're all in big trouble

The new Crystal app creates profiles 'for every person with an online presence' so its users can craft the ideal e-mail for every recipient. That's not only troubling for privacy, but it also threatens to strip individuality out of our digital dialogue.


April 27, 2015

It claims to be "the biggest improvement to e-mail since spell-check."

Sounds impressive, right? 

The huge leap forward promised by Crystal Project Inc., makers of the newly released Crystal app, is to give its users real-time insight into the recipients of their e-mails. In the blink of an eye, it can examine the recipient's online data trail and turn that information into a detailed personality profile. It will even offer suggestions for how to communicate better with the person you're writing to.


What Crystal is offering is something like digital clairvoyance when it comes to e-mail communications and marketing. Ostensibly, the automated interface makes it effortless to tweak your prose to conform with a recipient’s conception of an ideal note.

While that premise might excite e-mail marketers or job seekers, it should concern anyone who cares about privacy, especially if Crystal is a sign of what's to come.

For one thing, the assumptions that Crystal makes about people aren't exactly accurate, as far as I can tell from a few experiments with the app. But, more importantly, if something like Crystal can be perfected (and that may very well happen), such technology has the potential to provide nearly anyone with an intimate depiction of who we are and what we’re about – information many of us don’t want widely shared, and already worry about when it's assembled by data brokers.

Think of it this way: The little and seemingly harmless digital breadcrumbs that we’ve left here and there can be aggregated to form a portrait that’s too revealing and too accessible.

That isn't how the Crystal Project sees it, though. Here's its rationale for the product: People have different communication styles, and if we fail to appreciate them, misunderstandings and even hurt feelings can result. In the corporate world, effectively translating our thoughts into recipient-friendly formats can lead to efficient workflow. By contrast, the company contends, it can be disastrous to treat highly analytical correspondents who prefer maximum detail as if they’re trusting intuitive types. Plus, when you don't speak to people as they want to be spoken to, projects can be undermined and folks can feel as if they’re working with selfish or insensitive colleagues.


Does Crystal actually work? I tested it out by running a search on Jules Polonetsky. He’s the executive director of the Future of Privacy Forum, a group I’ll be working for full time during my sabbatical. I thought maybe the investigative technology could give me valuable insight into how to talk to my future boss.

Crystal purported to have a confident read on what makes Mr. Polonetsky tick.

Screen capture of Crystal (Evan Selinger/The Christian Science Monitor)

Here’s the detailed profile it created.

Screen captures of Crystal (Evan Selinger/The Christian Science Monitor)

I asked Polonetsky if this is an accurate report. Turns out, I’m better off trusting my instincts than Crystal’s algorithms. “I speak for a living, often to strangers, and often leading the conversation," said Polonetsky. "I probably interrupt far too often and am working to do so far less often. As a futurist of sorts, I absolutely want to hear about the future plans for products and spend a good deal of time advising leading companies on their future plans for data use. I welcome informality! So the algorithm is very wrong in its basic efforts to assess me. Unless, of course, the program knows me better than I know myself.”

OK. So the program doesn’t shine a light into everyone’s soul. But could a better version? Elana Zeide, a privacy research fellow at New York University's Information Law Institute, is skeptical. We discussed what happened when I did a search on President Obama. According to Crystal, when it comes to communication, POTUS can be "rambling."

Screen capture of Crystal (Evan Selinger/The Christian Science Monitor)

Now, maybe what the program is picking up on is that a president gives lengthy responses to major questions of international policy. But I’d characterize that as being thorough or maybe thoughtful. Ms. Zeide notes that the problem here is context. Making broad inferences without explicit reference to context, she said, can lead to an “asset” being portrayed as “a significant character flaw.”

Sociologist Erving Goffman published a landmark book, "The Presentation of Self in Everyday Life," in 1959, in which he argued that when it comes to communication, context is king because social interaction can be viewed as a theatrical performance. On the “front stage,” we engage in impression management by crafting our expressions to try to shape how audiences will respond to what we're communicating. But on the “back stage,” we try to keep things hidden that a particular audience would find objectionable, including beliefs and intentions they wouldn't want us to hold or would find inappropriate for us to express. To make our performances believable, it can help to separate one stage from the other.

For example, when I'm on social media I always convey a sense that every single class I teach is so amazing as to be flawless and inspiring. Now, in reality, some classes are and some classes aren’t; nothing can be consistently perfect. But because it’s too easy to offend people with public complaints, my front stage performance always portrays upbeat thoughts about being in the classroom and preparing for it. Consequently, if the only access you have to my views on teaching comes from reading my social media comments, you'll get a distorted account – unless, that is, you’re hip to my idealizing strategy.

But how can a program such as Crystal make sense of the front and back stage dimensions online? It can't unless it knows a lot of intimate information: what we really think, how we engage in impression management, and when we’re trying to shape what other people believe. And what about the fact that we expect different performances from different people?

Unfortunately, Crystal tries to distill what we're about and what we're looking for into essential, static characterizations – as if we desire everybody, without exception, to write to us in the same style, and as if we always expect the same style to be used in every note, no matter what the situation is. Such a reductive approach to communication can’t be a techno-fix that magically makes us all sound empathetic and sensitive.    

Ultimately, Crystal poses a threat to what Woodrow Hartzog and I call privacy by obscurity. Because the service is automated and inexpensive, it dramatically reduces the transaction costs required to scan and interpret large volumes of personal information.

Given the potential for obscurity evisceration, it’s important to take stock of the main problems on the horizon.

Chiefly, there’s the problem of distortion. If people don’t realize that a profile is inaccurate, they can be nudged to come to false conclusions. Imagine technology such as Crystal creating biased characterizations that influence how people view politicians or job candidates. It’s not hard to see how discrimination and prejudice can be inflamed in those cases.   

What's more, programs such as Crystal take our information out of context and turn it into something new – data that others can use for the express purpose of manipulating how we respond to their queries and requests, as well as data that we wish certain people or groups never got their hands on, and which we took practical steps to minimize the chance of them discovering.

The technology also offers value-laden judgments about a person's character – a fundamentally moral dimension of their identity – but is marketed in a way that rhetorically downplays this feature. After all, the tool is characterized as a means for improving communication and empathy, not as a litmus test for determining who among us is virtuous.

The software is also opaque. As Zeide laments, the inner workings of an app such as Crystal aren’t accessible to consumers. The inferences the technology generates are shrouded in secrecy, and “the app provides no way for users, or the people profiled, to assess the accuracy of the information that’s scanned and the values underlying the interpretative algorithms.”

If Crystal is just the beginning of a new category of similar apps that rely on algorithms to tell people what to say and how to write, these technologies will not only have a troubling effect on privacy but also begin to strip the character and individuality from communication in favor of banal, machine-generated prose.

Should future versions of profiling technology improve, it will be tempting to use them to save time, minimize mistakes, and feed our curiosity. But if the history of privacy problems has taught us anything, it's that there's more to the good life than efficiency, safety, and voyeurism.