When robots roam the earth
Sometime this century, artificial intelligence may become its own species. Society will need new rules to cope.
What will be the first alien intelligence with which humans come into contact? Surprisingly, it won't come from another planet. Instead, these entities will be the work of humans – robots with an artificial intelligence that will demand new rules about their roles in society.
That's the conclusion of the European Robotics Research Network, which issued a "Roboethics Roadmap" last spring. Sometime in this century, the group figures, robots will be considered intelligent enough – even self-aware, in some sense – to be considered a species all their own. "It will be an event rich in ethical, social, and economic problems," the group concludes.
For some, the topic may warrant little more than an amused grin, dismissed as yet another science-fiction tale about human-robot interaction. But in robot-happy Japan and South Korea (which wants a robot in every home by 2013), researchers are already studying the potential impact of robots on their societies.
In the United States, Reps. Mike Doyle (D) of Pennsylvania and Zack Wamp (R) of Tennessee have formed a Congressional Caucus on Robotics to look at "this first great technology of the 21st century." Bill Gates says robotics today reminds him of the computer industry 30 years ago, when he helped launch Microsoft, with the same promise of altering everyday life.
NASA's planet-probing robots, such as the Mars rovers, are becoming more sophisticated, while the Pentagon would like robotic armed vehicles and other robot weapons to make up one-third of its total deployment by 2015. Earlier this year the US military shipped the first machine-gun-toting robots to Iraq, adapted from bomb-disposal units.
Though human soldiers control the machines remotely, robot soldiers eventually could be given more autonomy. They might even be able to make ethical decisions on the battlefield faster than humans, unhindered by fear or revenge.
Robot experts like to say that intelligence is intelligence, no matter what the material form. But that doesn't provide answers for tricky ethical questions. Would wars break out more easily, for instance, if only broken robots, not body bags, were shipped home? What would be the new rules of engagement? One theory proposes that machines should be allowed only to destroy machines and only humans should be allowed to kill.
Robots that take human-like form also present opportunities. They could act as helpers and companions for shut-ins and those with disabilities. Experiments with robotic faces that exhibit "expressions" and talk back to humans show that people begin to treat such robots as having unique identities.
Might humans someday prefer the company of robots, much as some video-game players get hooked on their game worlds? Who would be responsible if the robots make a mistake or cause harm?
If robots can mimic humans so closely that they're nearly indistinguishable from, say, a child, would they rise above being considered property, gain legal status as "sentient beings," and be granted limited rights? Might Congress pass a "Robot Civil Rights Act of 2037"?
Thinking about when a robot would be granted rights could help us better appreciate human rights.
So compute on these issues for a while. They're not the stuff of novels or movies anymore.