Why AI stories are more about humans than about machines

Madeleine Yuna Voyles stars as the robot child Alphie in “The Creator,” a film arriving Sept. 29.

20th Century Studios © 2023

September 28, 2023

Humans are at war with machines. In the near future, an artificial intelligence defense system detonates a nuclear warhead in Los Angeles. It deploys a formidable army of robots, some of which resemble people. Yet humans still have a shot at victory. So a supersoldier is dispatched on a mission to find the youth who will one day turn the tide in the war.

No, it’s not another movie in “The Terminator” series. 

In “The Creator,” opening Sept. 29, the hunter is a human named Joshua (John David Washington). He discovers that the humanoid he’s been sent to retrieve looks like a young Asian child (Madeleine Yuna Voyles). It even has a teddy bear. As Joshua bonds with the robot, he wonders whether machines are really the bad guys. 

Why We Wrote This

Representations of artificial intelligence in popular culture help push society to think more about technology’s role – and which human values it reflects.

“All sorts of things start to happen as you start to write that script where you start to think, ‘Are they real? And how would you know?’” writer and director Gareth Edwards told the Monitor during a virtual Q&A session for journalists. “‘What if you didn’t like what they were doing – could you turn them off? What if they didn’t want to be turned off?’”

Popular culture has profoundly influenced how we think and talk about artificial intelligence. Since ChatGPT’s giant leap forward, AI has often been cast as the villain. AI supercomputers go rogue in Gal Gadot’s Netflix thriller “Heart of Stone” and the latest “Mission: Impossible” movie. For dramatic effect, AI is often embodied in robots. They’re not only sentient, but also the killer who’s in the house – quite literally, in the case of “M3GAN,” the murderous high-tech doll. The message: Be kind to your Alexa, or it may set the Roomba on you.

In the 2022 horror movie “M3GAN,” a lifelike doll develops a mind of her own.
Geoffrey Short/Universal Pictures

But the more thoughtful AI stories are really more about humans than about machines. The scenarios about good AI versus evil AI push society to consider ethical frameworks for the technology: How can it represent and embody our best and highest values? 

“Sometimes we are so excited about the technology that we forget why we build the technology,” says Francesca Rossi, president of the Association for the Advancement of Artificial Intelligence (AAAI). “We want our humanity to progress in the right direction through the use of technology.” 

Before ChatGPT was a twinkle in the eye of a search engine, Arthur C. Clarke, Philip K. Dick, and William Gibson were writing about the ethics of AI. Isaac Asimov’s stories posited the Three Laws of Robotics: (1) A robot may not injure a human. (2) A robot must obey human commands, unless they conflict with the first law. (3) A robot must protect its own existence, so long as doing so doesn’t conflict with the first or second laws.
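Read as pseudocode, the laws amount to a lexicographic ordering: avoiding harm to humans always outranks obedience to orders, which in turn outranks self-preservation. A purely illustrative Python sketch, with invented names and invented candidate actions, might look like this:

```python
# Purely illustrative: Asimov's Three Laws as a lexicographic preference.
# Harm to humans is weighed before disobedience, which is weighed before
# self-destruction. All names and actions here are invented for the sketch.

def law_cost(action):
    """Lower is better; tuples compare left to right, so the First Law dominates."""
    return (
        action["injures_human"],   # First Law
        action["disobeys_order"],  # Second Law
        action["endangers_self"],  # Third Law
    )

# A human orders the robot to do something harmful. Two candidate responses:
candidates = [
    {"name": "obey",   "injures_human": True,  "disobeys_order": False, "endangers_self": False},
    {"name": "refuse", "injures_human": False, "disobeys_order": True,  "endangers_self": False},
]

best = min(candidates, key=law_cost)
print(best["name"])  # -> "refuse": the First Law outranks the Second
```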

At first, the laws sound good. A closer examination reveals that they’re a literary device with loopholes that the author could exploit for “whodunit” murder mysteries. But in an era in which the Australian military has developed combat AI robodogs – reminiscent of the machine K-9s in the “Black Mirror” episode “Metalhead” – Mr. Asimov’s framing seems freshly relevant.

“The real issue is the ethics of the people behind the robots. Do we want robots that can kill? Apparently we do, because we’re making them right now.” – Jeff Vintar, a screenwriter for the 2004 blockbuster “I, Robot”
Courtesy of Jeff Vintar

“The real issue is the ethics of the people behind the robots,” says Jeff Vintar, a screenwriter for the 2004 blockbuster “I, Robot,” named after a collection of Mr. Asimov’s short stories. “Do we want robots that can kill? Apparently we do, because we’re making them right now.” 

AI and human aims

If AI should be aligned with human goals, the question is, which ones? HAL 9000, the onboard computer in the 1968 film “2001: A Space Odyssey,” illustrates the dilemma of conflicting values. An astronaut returns to the spaceship and asks HAL to open the pod bay doors. The computer refuses. It places a higher priority on the success of the mission than on the life of the astronaut. 
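One way to picture HAL’s dilemma is as an objective function with badly chosen weights. The toy sketch below uses invented numbers and models no real system; it only shows what “mission over astronaut” looks like once it’s reduced to arithmetic:

```python
# Toy sketch of a misweighted objective, loosely inspired by HAL's choice
# in "2001." The weights and outcomes are invented for illustration.

WEIGHTS = {"mission_success": 10.0, "astronaut_survival": 1.0}  # the bug is here

def score(outcome):
    return sum(WEIGHTS[key] * value for key, value in outcome.items())

open_doors  = {"mission_success": 0.0, "astronaut_survival": 1.0}  # astronaut lives, mission at risk
keep_closed = {"mission_success": 1.0, "astronaut_survival": 0.0}  # mission safe, astronaut dies

choice = max([("open the pod bay doors", score(open_doors)),
              ("keep them closed", score(keep_closed))], key=lambda t: t[1])
print(choice)  # -> ('keep them closed', 10.0): the "optimal" action is the horrifying one
```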

The 2002 movie “Minority Report” is a more earthbound example of competing values. In the story by Mr. Dick, police can foresee crimes before they’re committed. The result is a tension between safety and privacy. In real life, police now use AI to flag potential future crime by analyzing data about previous arrests, specific locations, and events. Critics claim the algorithms are racially biased.
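A deliberately crude sketch illustrates the critics’ worry: when historical arrest counts are the dominant training signal, the “prediction” largely hands past enforcement patterns back as risk scores. The neighborhoods and numbers below are invented for illustration:

```python
# Invented example: if past arrests are the only signal, the "risk score"
# mostly echoes where police patrolled before, not the true crime rate.

historical_arrests = {"neighborhood_a": 120, "neighborhood_b": 30}

def predicted_risk(area):
    total = sum(historical_arrests.values())
    return historical_arrests[area] / total

for area in historical_arrests:
    print(area, round(predicted_risk(area), 2))
# neighborhood_a 0.8  <- heavier past enforcement yields a higher "risk"
# neighborhood_b 0.2
```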

“This does seem to be coming true, and ‘predictive policing’ doesn’t seem to be so great in that movie,” says Lee Barron, author of “AI and Popular Culture.” “[Mr. Dick] is a particularly prescient writer.”

Perhaps some of the time. The sci-fi author’s book “Do Androids Dream of Electric Sheep?” – later adapted as the movie “Blade Runner” – imagined AI robots that are indistinguishable from humans. But it also predicted that we’d have flying cars by now.

“We’re not good at futurism,” says writer Fredrik deBoer, who has covered AI for the online magazine Persuasion, speaking via Zoom. “Future forecasting is really hard for us.”

Mr. deBoer cautions that humankind is prone to overhyping the impact of new technologies, citing the Human Genome Project as an example. He wonders if AI will ultimately prove less revolutionary than imagined.

Will Smith stars as a Chicago police detective trying to solve a murder with a nonhuman suspect in the 2004 film “I, Robot,” named after a collection of stories by Isaac Asimov.
20th Century Fox

The arrival of ChatGPT certainly startled and awed the world with its astonishing grasp of language and its communicative abilities. It has amplified debates over whether AI will become sentient – or at least evolve into such a convincing simulacrum of consciousness that we will imagine it to be a living entity with a soul. Could we fall hopelessly in love with sultry-voiced AI entities on our phones, as Joaquin Phoenix does in “Her”?

Pop culture may have conditioned us to fear that AI will destroy humanity if it becomes sentient. That prevalent notion amounts to fearmongering, says Ian Watson, co-writer of the Steven Spielberg movie “A.I. Artificial Intelligence.” Nonbiological machines are heuristic algorithms, the sci-fi author says in a phone interview. It’s possible that self-aware machines may never exist, he adds. In his Pinocchio-like screenplay, which he originally wrote for Stanley Kubrick, a robot boy named David wants to become human. At the end of the movie, David discovers that’s impossible.

Daniel H. Wilson, author of the bestselling 2011 novel “Robopocalypse,” thinks that AI could someday pass the Turing test – that is, appear to think like a human. But he says there hasn’t been the requisite big breakthrough in mathematics and algorithms to make so-called artificial general intelligence possible. ChatGPT, by contrast, is known as generative AI. It lacks the ability to understand context. The technology is a predictive algorithm, trained on text scraped from the web, that calculates the most likely response to a query. He finds that worrisome.
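Stripped to its essentials, that kind of predictive algorithm is next-token prediction. The toy sketch below uses a hand-built frequency table; the corpus is invented, and real generative models learn vastly richer statistics from billions of words:

```python
from collections import Counter

# Toy next-token predictor. The "corpus" is invented; real generative
# models learn statistics like these from billions of words of web text,
# with far longer context than a single preceding word.

corpus = "the robot is kind the robot is cold the robot dreams".split()

# Count which word tends to follow each word (a bigram table).
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    return follows[word].most_common(1)[0][0]

# "Generate" by repeatedly emitting the most probable continuation.
word, output = "the", ["the"]
for _ in range(4):
    word = most_likely_next(word)
    output.append(word)
print(" ".join(output))  # -> "the robot is kind the"
```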

“Generative AI is creating humanlike intelligence by regurgitating billions of data points taken mostly from people on the internet,” says Mr. Wilson, a former robotics engineer. “Can you imagine a worse mirror to hold up to humanity than all of our moments from the internet?”

The role the public plays 

Some computer scientists are working to create healthier AI inputs. A 2023 college textbook titled “Computing and Technology Ethics: Engaging Through Science Fiction” includes reprints of short sci-fi stories that prompt students to contemplate ethical dilemmas in computer programming.

“Once you’re inside a story, thinking from another point of view, issues of motivation [and] issues of social effects are much clearer,” says Judy Goldsmith, a professor of computer science at the University of Kentucky and one of five co-authors of the textbook. The book helps students think beyond the value of utilitarianism, she adds.

Ms. Rossi from AAAI has a copy of that textbook on her desk. Her favorite sci-fi allegory is Pixar’s “WALL-E.” In the 2008 movie, obese humans aboard a spaceship have become wholly beholden to AI. They’ve forfeited meaningful connections with others because they’re constantly staring at screens.

“‘WALL-E’ is one that really brings up this concept of passively accepting the technology because it makes our life easier,” says Ms. Rossi, who is also the AI ethics global leader at IBM. “In order to keep AI safe and take care of the ethics issues, companies have to do their part. The regulators have to do their part. But every user has to use it responsibly and with awareness.”