Stephen Hawking calls for ‘world government’ to stop a robot uprising

Physicist Stephen Hawking reiterated his view that artificial intelligence presents both threats and possibilities. One way to address this and other global challenges, he suggested: world government.

Britain's Professor Stephen Hawking delivers a keynote speech as he receives the Honorary Freedom of the City of London during a ceremony at the Guildhall in London on Monday. Hawking was presented the City of London Corporation's highest award in recognition of his outstanding contribution to theoretical physics and cosmology.

Matt Dunham/AP

March 9, 2017

Physicist Stephen Hawking may be a proponent of artificial intelligence, but he has also been outspoken about the potential challenges it creates. In a recent interview, he struck a similar note of caution and offered a solution that conservatives may find hard to accept.

Speaking to The Times of London to commemorate being awarded the Honorary Freedom of the City of London, a title that was conferred on him on Monday, Professor Hawking expressed optimism for the future. He added, however, that he is concerned about artificial intelligence (AI), as well as other global threats. His answer: international action, and possibly world government.

"We need to be quicker to identify such threats and act before they get out of control," Hawking said. "This might mean some form of world government."

He cautioned, however, that such an approach "might become a tyranny."

As the role of artificial intelligence in society grows, computer scientists and policymakers are moving from constructing these systems to harnessing their power for the good of society. Though observers are divided on the nature and scope of AI-related challenges, there is widespread agreement that these impacts need to be addressed. Might world government provide a solution?

“Yes, I think much improved global governance may be necessary to deal with not only advanced AI, but also some of the other big challenges that lie ahead for our species,” writes Nick Bostrom, a professor at the University of Oxford who is the founding director of the university’s Future of Humanity Institute, in an email to The Christian Science Monitor. “I’m not sure we can survive indefinitely as a world divided against itself, as we continue to develop ever more powerful instruments.”

Today, AI is involved in seemingly everything. It’s behind the advances in autonomous vehicles, it powers Facebook’s ad screening processes, and it interacts with people everywhere through virtual assistants like Apple’s Siri and Amazon’s Alexa. In New York City, it’s predicting fires, and in Britain, machine learning is being deployed to get people to pay their debts. Ultimately, it could even eradicate persistent social challenges like disease and poverty, Hawking previously indicated.

But with these unique opportunities come unique problems, observers suggest. Part of the concern is about what the economic transition to a world dominated by machines will look like.

“There are two main economic risks: first, that a mismatch may develop between the skills that workers have and the skills that the future workplace demands; and second, that AI may increase economic inequality by increasing the return to owners of capital and some higher-skill workers,” Edward Felten, a professor of computer science and public affairs at Princeton University who is the founding director of the university’s Center for Information Technology Policy, tells the Monitor in an email.

Those issues, he suggests, could be addressed by adopting public policies that will distribute the benefits of increased productivity.

What Hawking was more likely alluding to in his comments, however, is the concern that AI will become hyper-powerful and start behaving in ways that humans cannot control. But not everyone is convinced that the overbearing machines of science fiction are inevitable: Professor Felten says he doesn't see “any sound basis in computer science for concluding that machine intelligence will suddenly accelerate at any point.”

Amy Webb, founder and chief executive officer of the Future Today Institute, takes these threats more seriously. One of the goals of AI, she explains to the Monitor, is to teach machines to connect the dots for themselves in “unsupervised learning” systems. That means placing a lot of trust in these AI systems’ ability to make the right decisions. She offers the analogy of a student learning math in a classroom:

“What happens if a student incorrectly learns that the answer to 1 + 1 is 3, and then teaches the next group of kids? That wrong answer propagates, and other decisions are based on that knowledge.”

And the stakes are higher than in math class, she adds: “We will be asking them to make decisions about personal identification, security, weapon deployment, and more.”
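
Ms. Webb's classroom analogy maps onto a familiar failure mode when one learned system trains on another's output. The toy Python sketch below is my own construction, not from the article, and every name in it is hypothetical; it simply shows how a single wrong "fact" learned by a teacher model propagates to a student that memorizes the teacher's answers instead of checking them against ground truth.

```python
# Hypothetical sketch of error propagation between learned systems.

def teacher(a: int, b: int) -> int:
    """A 'teacher' model that has incorrectly learned that 1 + 1 = 3."""
    if (a, b) == (1, 1):
        return 3  # the single wrong fact
    return a + b

# The student's "training data" is whatever the teacher says;
# nothing here compares the answers to ground truth.
student_knowledge = {(a, b): teacher(a, b)
                     for a in range(4) for b in range(4)}

def student(a: int, b: int) -> int:
    """Answers from memorized teacher output, with no independent check."""
    return student_knowledge[(a, b)]

print(student(1, 1))      # 3 -- the error has propagated
print(student(1, 1) + 1)  # 4 -- and later results are built on it
```

Real unsupervised and semi-supervised pipelines are far more elaborate, but the failure mode Webb describes is the same: once an unchecked output becomes the next system's input, the mistake compounds.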

Professor Bostrom frames the problem facing AI today as one of “scalable control: how to ensure that an arbitrarily intelligent AI remains aligned with the intentions of its programmers.”
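
To make that framing concrete, here is a deliberately tiny illustration, my own construction rather than Bostrom's and not a real training setup, of how an optimizer can satisfy the objective it was given while violating the one its programmers intended, because the measurable reward is only a proxy for the intent.

```python
# Hypothetical proxy-reward sketch: the programmer intends "clean the room"
# but can only reward the measurable proxy "no mess visible to the sensor".

def proxy_reward(state: dict) -> int:
    """Reward 1 whenever the sensor sees no mess."""
    return 0 if state["mess_visible"] else 1

# Two strategies an optimizer might discover:
actually_clean = {"mess_visible": False, "room_is_clean": True}
block_sensor   = {"mess_visible": False, "room_is_clean": False}

# Both earn full reward, so nothing in the objective itself favors the
# intended behavior -- and the gap matters more as the optimizer improves.
print(proxy_reward(actually_clean))  # 1
print(proxy_reward(block_sensor))    # 1
```

In this framing, control research is about closing the gap between proxy and intent in ways that keep working as the system's capability grows, which is what makes the problem "scalable."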

There is a small but growing field of research addressing these problems, these commentators explain, and world government or international harmonization of AI laws may be one approach. Though Bostrom says he does not expect “any imminent transformation in global affairs,” he suggests that world government may simply be the next phase of political aggregation.

“We’ve already come most of the way – from hunter-gatherer band, to chiefdom, to city-state, to nation-state, to the present mesh of states and international institutions,” he writes. “One more hop and we are there.”

Ms. Webb, though she agrees that international cooperation would be valuable, is skeptical it will happen soon enough to address immediate issues.

“It would be great for all countries around the world to agree on standards and uses for AI, but at the moment we can’t even get unilateral agreement on issues like climate change,” she points out. It will take time for international government cooperation to catch up with AI development, she says.

Government decisions may also be affected by unexpected changes in human behavior as AI becomes more ubiquitous, notes Scott Wallsten, president and senior fellow at the Technology Policy Institute, in an email.

“Will safer cars based on AI cause people to respond by acting more recklessly, like crossing against a light if they believe cars will automatically stop?” Dr. Wallsten asks.

With that in mind, he suggests, effective policy solutions at the local, national, or international level should start with more research into the effects of AI.

“Any initiatives to address potential challenges need to be based on a solid understanding of what problems need to be solved and how to solve them in a way that makes sense,” he concludes.