Test for humans: How to make artificial intelligence safe
The drumbeat of warnings over the dangers of artificial intelligence is reaching a new level of intensity. While AI researchers have long worried that AI could push people out of jobs, manipulate them with fake video, and help hackers steal money and data, some are increasingly warning that the technology could take over humanity itself.
In April, leading tech figures published an open letter urging all AI labs to stop training their most powerful systems for at least six months. Last month, hundreds of AI researchers and others signed onto a statement suggesting humanity should approach the “risk of extinction” at the hands of the technology with the same priority it now gives to nuclear war and pandemics.
“The idea that this stuff will get smarter than us and might actually replace us, I only got worried about a few months ago,” AI pioneer Geoffrey Hinton told CNN’s Fareed Zakaria on June 11. “I assumed the brain is better and that we were just trying to sort of catch up with the brain. I suddenly realized maybe the algorithm we’ve got is actually better than the brain already. And when we scale it up, we’ll get things smarter than us.”
Why We Wrote This
As tools based on artificial intelligence spread, calls for regulating the technology are rising. A core question is whether we can trust AI – and what our own responsibility is in using it.
Mr. Hinton quit his job at Google in May, he says, so he could talk freely about such dangers.
Other scientists pooh-pooh such doomsday talk. The real danger, they say, is not that humanity accidentally builds machines that are too smart, but that it begins to trust computers that aren’t smart enough. Despite the big advances the technology has made and the potential benefits it offers, it still makes too many mistakes to be trusted implicitly, they add.
Yet the lines between these scenarios are blurry – especially as AI-driven computers grow rapidly more capable without having the moral-reasoning abilities of humans. The common denominator is a question of trust: How much of it do machines deserve? And how vulnerable are humans to misplaced trust in them?
In fact, the systems are so complex that not even the scientists who build them know for sure why they come up with the answers they do, which are often amazing and, sometimes, completely fake.
“It’s practically impossible to actually figure out why it is producing that string of text,” says Derek Leben, a business ethicist at Carnegie Mellon University in Pittsburgh and author of “Ethics for Robots: How To Design a Moral Algorithm.”
“That’s the biggest issue,” says Yilun Du, a Ph.D. student at the Massachusetts Institute of Technology working on intelligent robots. “As a researcher in that area, I know that I definitely cannot trust anything like that. [But] it’s very easy for people to be deceived.”
Already, examples are piling up of AI systems deceiving people:
- A lawyer who filed an affidavit citing six bogus court cases, with made-up names like Varghese v. China Southern Airlines, told a New York judge at a sanctions hearing on June 8 that he was duped by the AI system he relied on.
- A Georgia radio host has sued OpenAI, the company that makes the popular ChatGPT, claiming that the AI system fabricated a legal complaint accusing him of embezzlement.
- Suspicious that his students were using ChatGPT to write their essays, a professor at Texas A&M University-Commerce ran their papers through the same system and gave a zero to those the AI system said it wrote. But the system can’t reliably recognize what it has written. The university intervened, ensuring that none of the students failed the class or were barred from graduation.
These are just hints of the risks in store, AI scientists warn. Throw away the sci-fi visions of Terminator-type robots taking over the world – those are still far-fetched with today’s technology – and the risks of human extinction don’t disappear. Scientists point to the possibility of the technology allowing bad actors to create bioweapons, or boosting the lethality of warfare waged by nation-states. It could also enable unscrupulous political actors to use deepfake images and disinformation so effectively that a nation’s social cohesion – vital to navigating environmental and political challenges – breaks down.
The manipulation of voters and the spreading of disinformation are some of the biggest worries, especially with the approach of next year’s U.S. elections, OpenAI CEO Sam Altman told a Senate panel last month. “Regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
OpenAI’s ChatGPT has fueled much of the AI hype – both positive and negative – ever since its release to the public late last year. It has raised hopes that workers could become much more productive, researchers could make quicker discoveries, and the pace of progress generally would increase. Yet in a survey of CEOs last week, 42% said AI could potentially destroy humanity within five to 10 years, while 58% said that could never happen and that they are “not worried.”
Legislators on both sides of the Atlantic are eager to set up guardrails for the burgeoning technology. The European Union seized the lead last week by agreeing to the draft of an act that would rate AI technologies from “minimal” to “unacceptable” risk. AI deemed unacceptable would be banned and “high risk” applications would be tightly regulated. Many of the leading AI technologies today would likely be considered high or unacceptable risk.
In the United States, the National Institute of Standards and Technology has created an AI risk-management framework. But many in Congress want to go further, especially in light of the perceived failure to regulate social media in a timely manner.
“A lot of the senators [at last month’s hearing] were explicitly saying, ‘We don’t want to make the same mistakes with AI,’” says Mr. Leben of Carnegie Mellon. They said, “‘We want to be proactive about it,’ which is the right attitude to have.”
How to regulate the industry remains an open question. Many policymakers are looking for more transparency from the companies about how they build their AI systems, a requirement in the proposed EU law. Another idea being floated is the creation of a regulatory agency that would oversee the companies developing the technology and work to mitigate the risks.
“We as a society are neglecting all of these risks,” Jacy Reese Anthis, a doctoral student at the University of Chicago and co-founder of the Sentience Institute, writes in an email. “We use training and reinforcement to grow a system that is extremely powerful but still a ‘black box’ to even its designers. That means we can’t reliably align it with our goals, whether that’s the goal of fairness in criminal justice or of not causing extinction.”