Journalists and AI: Is the newsroom big enough for both?

Mathias Döpfner, CEO of Axel Springer, speaks at the opening of an online award ceremony, March 18, 2021. Mr. Döpfner wrote in an internal memo that “Artificial intelligence has the potential to make independent journalism better than it ever was – or simply replace it.”

Bernd von Jutrczenka/dpa/AP/File

March 27, 2023

European media giant Axel Springer – owner of newspapers Bild and Die Welt in Germany, Politico in the United States, and various other publications – splashily announced in February that it was preparing to lay off staff, go digital only, and reemphasize creation of original and investigative content. Perhaps most significantly, it said it was doing so in anticipation of an information future dominated by tools such as artificial intelligence chatbot ChatGPT.

“Artificial intelligence,” wrote Axel Springer CEO Mathias Döpfner in an internal memo, “has the potential to make independent journalism better than it ever was – or simply replace it.”

Axel Springer is just one of many news outlets – and their consumers – reorienting themselves to a media landscape where AI plays an increasingly large role. With the popularity of Silicon Valley company and Microsoft partner OpenAI’s increasingly sophisticated AI tools, and Google and Meta engineers hot on their heels, it is becoming quick and easy to generate AI-produced text with a minimum of prompting, leading some publications to experiment with using such tools to supplement articles, or even create entire ones.

Why We Wrote This

News media are beginning to experiment with artificial intelligence to supplement and even write articles. But AI doesn't know if what it writes is true. How can it be used for responsible journalism?

Yet the very nature of chatbots like ChatGPT – which don’t actually understand what they’re writing about, but only predict what belongs in a sentence based on patterns in the text they have “learned” from – seems to conflict with the fundamental purpose of journalism: to provide citizens with accurate information about world events. What role can, and should, AI play in the media landscape if it is unable to discern the difference between what is true and what is not? With society’s trust in what journalists put out already at an all-time low, the answer to such questions may be critical for determining whether AI enhances journalism, or diminishes it.

“AI is not for thinking, but for making lazy, rapid, intuitive decisions about the world, and for optimizing our understanding of the world not for truth, but for information that confirms our views and attitudes,” says Tomas Chamorro, author of “I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique” and a professor of business psychology at University College London. “Journalists still have a potentially really important role to educate people. They can use tools like ChatGPT to really discover or identify biases that exist in how people think, and then take this intermediate role as a filter between these tools and millions of users so that people become aware of these threats. Journalists can step in and say, ‘Hey what’s up? I’m using it as well, and here are some things it does that are inaccurate or contribute to misinformation.’”


The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, March 21, 2023, in Boston.
Michael Dwyer/AP

“It’s not actually knowledge generation”

ChatGPT, like other AI-driven chatbots that have followed it, is deceptively simple to use. Type in a question or make a request for a specific sort of prose, and the application quickly produces neatly written text in response, trying as best it can to fulfill the user’s directive. Trained on databases of text, ChatGPT can produce everything from college essays to lists of birthday party ideas to poems about scanning groceries in a self-checkout line.

Since its introduction in November 2022, ChatGPT has attracted more than 100 million users. Its next-gen iteration, released earlier this month, passed the bar exam and can solve logic challenges, and Google released its own version, dubbed Bard, last week.

But ChatGPT does not craft its prose from knowledge of the world, firsthand or otherwise. Rather, it writes sentences predictively, based on what it has been taught.

“The kind of tone of a lot of these generative AI tools is very authoritative,” says Jenna Burrell, director of research at New York City-based independent research institute Data & Society. “It will give you answers that sound very confident, but it’s in fact statistical prediction. It’s not actually knowledge generation. They look at words, and then predict a likely word that goes next.”
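The “statistical prediction” Dr. Burrell describes can be illustrated with a toy sketch. This is purely illustrative – real systems like ChatGPT use neural networks trained on vast corpora, not simple word counts – but the core idea is the same: pick a statistically likely next word, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# A made-up, three-sentence "training corpus" for illustration.
corpus = (
    "the reporter filed the story . "
    "the editor read the story . "
    "the editor liked the story ."
).split()

# Count which word follows each word in the corpus.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# "the" is followed by "story" three times in the corpus,
# so the model predicts "story" - regardless of what is true.
print(predict_next("the"))
```

A model like this will confidently emit whatever pattern dominated its training data, which is why the resulting prose can sound authoritative while being wrong.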

That is important when it comes time to apply AI to journalism, whose purpose, the American Press Institute writes, is “to provide citizens with the information they need to make the best possible decisions about their lives, their communities, their societies, and their governments.” Distilling such information requires understanding what is true about the world, and being able to weigh the statements and deeds of people against the facts.


Though wholly AI-generated articles may look like they do that, it’s an illusion, says Dr. Burrell. As humans, “if you talk to something and it responds in a human-like way, you assume all these other things about that entity – motivation and intent and reasoning. But it’s not intelligent.”

Practice has borne that out. CNET quietly began using AI to produce articles in November, but had to issue corrections in January to an article about compound interest that contained basic math errors. A recent Guardian investigation found a gender bias in artificial intelligence tools, apparently due to the AIs’ adoption of the implicit prejudices of their human teachers.

“We’ve seen lots of examples of where it gets facts wrong,” says Nic Newman, digital strategist and researcher at the Reuters Institute. “That’s obviously extremely worrying. If the basis of journalism is accuracy, and trust is critical, then we need to be very careful about how we use it, how we check the information that we know, and also how we communicate that to audiences. So the whole question of transparency and labeling is also going to be really critical over the next few years.”

Siccing AI on the “tedious tasks”

The introduction of AI into journalism, experts say, may not really be about AI replacing journalists, but rather whether and how journalists learn to use AI as a tool in their vocation.

Mr. Newman sees AI being used in three layers. The first is creating content more cheaply and efficiently, which has been going on for some time. The second is helping to identify story ideas or gather news. The third is packaging and distributing content more efficiently, through personalization and other means.

And, as news outlets begin training AI on increasingly specific and vetted content, the levels of trust in what it spits out – whether it’s timelines, backgrounders, or summaries of stories – should grow, says Mr. Newman.

“In the future, you’re going to have solutions that are tailored to a particular publisher, or it may be trained on its own databases, which is reliable content already,” says Mr. Newman. “We’re going to see the general improvement of some things that have been around for ages … to the point we have confidence in being able to use them.”

Media outlets have already been automating certain functions in recent years; AI could be the next logical step in those tasks.

The Associated Press – the venerable American news agency founded in 1846 – began automatically generating corporate earnings stories nearly a decade ago. It now automates game previews and recaps of some sports, and uses AI to help manage flows of information, transcribe audio, translate text, and create shot lists, according to Lauren Easton, AP’s vice president of corporate communications. In a sign of what’s to come, AP has partnered with local newsrooms to share its expertise in reducing a newsroom’s “tedious tasks.”

An opportunity for journalists?

But regardless what task an AI performs, everything it produces needs fact-checking – which requires preexisting knowledge about a topic, says Dr. Burrell.

Ultimately, she says, journalistic practices will only be undermined to the extent that we over-trust these tools. “As long as journalism maintains those commitments around truth – around shining light on issues, on speaking truth to power and the practices of fact-checking and investigative reporting – as long as those standards are intact, I’m not too worried about any of these tools disrupting that.”

The dawn of AI may even provide a potentially tremendous opportunity for human writers by highlighting their value, says Dr. Chamorro, the business psychologist.

“The commoditization of information is a little bit like fast food. Fast food is cheaper and more efficient, but it’s not very nutritious, and over time that has increased the number of healthy and high-end alternatives such as Michelin-star chefs,” he says. “If journalists actually start to cater to people’s thirst and need for truth, then you offer something that has value. There’s still a market and an audience for reliable sources.”