How should we talk about artificial intelligence?

It’s easier for the general public to grasp what is going on when complicated computerized processes are explained in terms of human cognition.

Staff

February 13, 2023

I had a chance to try out Meta’s chatbot BlenderBot 3, an artificial intelligence system that is designed to converse with humans. Within three or four text exchanges, I forgot I was talking to a machine. It seemed to get snarky, and I wondered whether I had somehow put it on the defensive. It used punctuation like my teenage daughter – lots of exclamation points and smiley faces. It also failed to accurately describe what a limerick is. 

OpenAI’s ChatGPT can compose newspaper articles, college admission essays, emails, and books that many readers mistake for human writing. The temptation to think of and talk about these bots as if they are human is nearly irresistible.

But should we – not so much for the machines’ sake, but for our own?


Much of the discourse around artificial intelligence (AI) anthropomorphizes it, directly or indirectly. Since ChatGPT was released to the public in November, social and print media have been full of people wondering whether it is “sentient.” 

In contemporary philosophy, sentience is the bottom rung on the ladder of consciousness – a sentient being, according to the Stanford Encyclopedia of Philosophy, is “one capable of sensing and responding to its world.” A fish is sentient, by this definition. Colloquially, though, people use “sentient” interchangeably with “conscious,” a state often thought to be unique to humanity.

It’s strikingly easy to imagine that today’s chatbots are sentient while conversing with them, and the language we use to talk about them often reinforces this impression. 

One article explains that AI is now able to “pay attention to” important words and phrases. Another article says that AI “understands” which online shoppers are most likely to make a purchase. 

When AI makes things up – as when BlenderBot 3 claimed that a limerick is a poem with an ABBA rhyme scheme (the actual scheme is AABBA) – it’s called “hallucination.”


It’s easier for the general public to grasp what is going on when complicated computerized processes are explained in terms of human cognition.

It is possible, though, to say similar things in ways that make a point of reinforcing AI’s “machineness.” Let’s bring on the iterative processing algorithms, the parallelization, the machine-learning transformers, and the bots. The more we can talk about AI with language that counteracts rather than enables our tendency to anthropomorphize, the better.
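Consider, for instance, what a transformer’s “paying attention” actually amounts to. The sketch below is a minimal, illustrative rendering of scaled dot-product attention in Python with NumPy – not the code of BlenderBot, ChatGPT, or any particular model, and the function names and toy inputs are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """What gets described as "paying attention": a weighted average.

    Q, K, and V are matrices of query, key, and value vectors. Each
    query's output is a mix of the value vectors, weighted by how well
    the query matches each key. No noticing or intent is involved -
    only matrix multiplication and normalization.
    """
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = softmax(scores)        # normalized "attention" weights
    return weights @ V               # weighted sum of value vectors

# Toy example: three token vectors attending to one another.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4)
```

Seen this way, “attention” names a weighted average, not an act of noticing – exactly the kind of framing that keeps the machineness in view.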

I am not suggesting that we should all start being rude to Siri or use disparaging terms when talking about BlenderBot. But perhaps we should be conscious of the ways we talk about AI as if it were human. Because it is not.