Stephen Hawking warns of mankind wiping itself out: where to find hope?

Stephen Hawking spoke of a multitude of man-made threats encircling the planet in a recent BBC lecture. But how real are the risks, and what hope is there of mitigating them?

Professor Stephen Hawking lectures on his research, life, and times at the Perimeter Institute in Waterloo, Ontario, June 20, 2010.

Dave Chidley/AP/File

January 19, 2016

Physicist Stephen Hawking has said that the chances of cataclysmic events that could threaten the survival of humanity are soaring, and that we have only ourselves to blame.

In comments made while recording the annual BBC Reith Lectures, the renowned physicist asserted that disaster befalling planet Earth in the next 1,000 to 10,000 years is a “near certainty,” and that the threats facing humankind are increasingly of our own making.

Yet he also insisted that humanity will likely survive, because by the time catastrophe strikes, we shall have colonized other worlds.


"However, we will not establish self-sustaining colonies in space for at least the next hundred years,” said Prof. Hawking, “so we have to be very careful in this period".

The threats Hawking specified included nuclear war, global warming, and genetically engineered viruses; he suggested that progress in science and technology is in some ways a gamble, improving the lives of billions but also introducing the means to end humanity.

Yet this is no new debate. The idea that man’s advancement could be his very undoing has been with us for centuries.

“The vices of mankind are active and able ministers of depopulation,” wrote the British economist and demographer Thomas Malthus in 1798. “They are the precursors in the great army of destruction; and often finish the dreadful work themselves.”

In “Human Impact on the Earth,” geographer William B. Meyer wrote in 1996: “Humankind has become a force in the biosphere as powerful as many natural forces of change, stronger than some, and sometimes as mindless as any.”


Meyer continues, “Nature has not retired from the construction (or demolition) business, but humankind has in the recent past emerged as a strong competitor.”

One area of particular concern is artificial intelligence and the autonomous weapons that huge advances in the technology are enabling: systems such as “armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria,” as an open letter published in July 2015 warned.

Hawking signed this letter along with more than 1,000 researchers, experts, and business leaders, including co-founders of Apple and Skype. The letter went on to say that “the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The letter continued: “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the end point of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

In an interview with Thought Economics, Jaan Tallinn, co-founder of Skype and of the Cambridge Centre for the Study of Existential Risk, divides AI into two categories: “sub-human” AI, which includes technologies such as autonomous weapons, and “super-human” AI, which would have the ability to reason and model the world better than humans themselves.

In the same interview, however, Sir Crispin Tickell, former diplomat and advisor to successive UK Prime Ministers, insists that any risk to humanity from the intentional misuse of technology – from people who "want to extinguish everything" – is far less of a concern than that of accidental misuse.

“Science is hard, and scientific breakthroughs are even harder, and so most scientists are not motivated to think of these negative consequences,” said Sir Crispin.

“When you are an AI researcher, for example, you’re highly motivated to improve the capability and performance of your system, rather than research the side effects those systems could have in the world.”

But while advancing technologies carry inherent risks, there are also many experts and researchers who are conscious of the dangers and working hard to mitigate them.

Many organizations have arisen to take up this responsibility, including the Future of Life Institute (funded in part by Tesla CEO Elon Musk), the Global Catastrophic Risk Institute, the Future of Humanity Institute at Oxford University, and the Centre for the Study of Existential Risk at Cambridge University, both in the UK.

And while it is ironic that so prominent a scientist as Stephen Hawking should be the one to offer these dire warnings, as the BBC notes, perhaps such eminent academics are exactly the people who need to act as guardians, or watchmen, of the technologies that most of us know so little about.