$5 million prize for A.I. targets the 'dystopian conversation'

In unveiling a new $5 million competition to spur research into A.I., IBM and X Prize say one purpose of the prize is to counter what they see as unwarranted scaremongering about artificial intelligence.

A member of a German team adjusts a humanoid robot during the 2015 Robocup finals in Hefei, Anhui Province, July 22, 2015.

Jianan Yu/Reuters/File

February 18, 2016

Developers of artificial intelligence (A.I.) now have an added incentive to pursue their work: $5 million.

The prize money was announced Wednesday at the annual TED conference as a joint initiative between tech giant IBM and X Prize, the organization behind the world’s first private race to the moon.

The competition’s backers are motivated, among other things, by a desire to demonstrate how advances in A.I. can benefit humanity, but many skeptics have yet to be convinced.


"Personally, I am sick and tired of the dystopian conversation around artificial intelligence," said X Prize founder Peter Diamandis when unveiling the prize.

The competition challenges teams to “develop and demonstrate how humans can collaborate with powerful cognitive technologies to tackle some of the world’s grand challenges,” according to an X Prize statement.

The winner will be determined at the 2020 TED conference, when three finalists will take the stage. In the meantime, teams will compete each year for interim prizes and the chance to advance to the next round.

“We believe A.I. will be the most important technology of our lifetimes, and our scientists, researchers, and developers have decades of innovation ahead of them,” stated IBM in a press release.

But it is precisely this enormous potential that leads many to pause and ask whether we should slow down and consider the implications of A.I. before pressing ahead with its development.


Probably the most dramatic manifestation of these concerns is the debate swirling around the development of autonomous weapons: machines of war able to make deadly decisions without human input.

Renowned physicist Stephen Hawking was one of thousands of researchers, experts, and business leaders to sign an open letter in July 2015, urging caution, as The Christian Science Monitor reported.

Yet even those who are most vocal in their opposition do not counsel that we abandon our A.I. ambitions.

“It’s not about destroying an industry or a whole field,” said Mary Wareham, coordinator of the Campaign to Stop Killer Robots, in a phone interview with The Christian Science Monitor. “It’s about trying to ring-fence the dangerous technology.”

And so we find ourselves at something of a crucial juncture: Can opponents and proponents of A.I. development find common ground or, at the very least, remain engaged in this critical discussion?

Some researchers have stopped communicating with the media or the public, tired of what they perceive to be “hyped headlines,” as Sabine Hauert, a robotics lecturer at the University of Bristol in the United Kingdom, wrote in the journal Nature.

“But we must not disengage,” writes Dr. Hauert. “[The public] hear a mostly one-sided discussion that leaves them worried that robots will take their jobs, fearful that AI poses an existential threat, and wondering whether laws should be passed to keep hypothetical technology ‘under control.’”

“Experts need to become the messengers,” she says.

X Prize describes itself as a “facilitator of exponential change” and a “catalyst for the benefit of humanity.”

IBM developed Watson, “a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data,” which rose to fame in 2011 after defeating human opponents on the “Jeopardy!” quiz show.

The two organizations seek to use, promote, and develop A.I. in a quest for progress, stating in their announcement that “we are forging a new partnership between humans and technology.”

But such laudable aspirations cannot eliminate the risks. And whether the risks are real or imagined can only be determined by continuing to engage in reasonable and informed discussion.