Experimental robot shows signs of self-awareness
Each year, artificial intelligence (AI) seeps further into the real world. From little robots that vacuum the floor to Google's self-driving cars, we are beginning to see what an “intelligent” machine might be capable of. Yet there is still much to learn about actual human intelligence, and about how to replicate it in computers and robots.
Selmer Bringsjord, a professor of computer science and cognitive science at Rensselaer Polytechnic Institute in Troy, N.Y., recently led an experiment with three programmable NAO robots in hopes of showing that these humanoid machines can be self-aware. The test was devised partly in response to an open challenge issued by Luciano Floridi, a noted professor of philosophy at the University of Oxford.
“Floridi's challenge, which we refer to as 'KG4,' requires that a robot have a form of genuine self-understanding ... a human-level justification/proof to accompany the behavior, in which the robot employs a correlate to the personal pronoun 'I,' and the inputs have to come in natural language, in real time,” says Dr. Bringsjord in an e-mail. In short, the robot has to know that it has a self, recognize when that self is acting, and respond accordingly.
In Bringsjord’s experiment, all three robots had the ability to speak, but two were programmed to stay silent. The researchers told the robots that two of them had received a "dumbing pill" that left them mute, then asked the trio to figure out which of them could still speak. One robot stood up and said that it did not know the answer, but then, upon hearing its own voice, amended its statement.
“Sorry, I know now,” said the NAO robot. “I was able to prove that I was not given a dumbing pill.”
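The underlying reasoning is straightforward to model: the robot attempts an answer, and hearing its own voice supplies a new premise ("I just spoke") from which it can deduce that it was not silenced. Below is a minimal sketch of that update in Python; it is a toy illustration, not the team's actual software (which, as Bringsjord notes below, is built on formal logic), and all class and method names here are hypothetical.

```python
# Toy model of the "dumbing pill" test: a robot that hears its own
# voice gains the premise "I spoke" and concludes it was not muted.
# An illustrative sketch, not Bringsjord's actual system.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted  # True if this robot got the "dumbing pill"

    def try_to_speak(self, sentence):
        """Attempt to speak; return the audible utterance, or None if muted."""
        return None if self.muted else f"{self.name}: {sentence}"

    def answer_puzzle(self):
        lines = []
        # Step 1: the robot cannot yet prove whether it was silenced.
        heard = self.try_to_speak("I don't know.")
        # Step 2: hearing its own voice adds the premise "I spoke",
        # from which "I was not given a dumbing pill" follows.
        if heard is not None:
            lines.append(heard)
            lines.append(self.try_to_speak(
                "Sorry, I know now. I was able to prove that "
                "I was not given a dumbing pill."))
        return lines  # a muted robot produces no audible answer


robots = [Robot("R1", muted=True), Robot("R2", muted=True),
          Robot("R3", muted=False)]
for robot in robots:
    for line in robot.answer_puzzle():
        print(line)
# Only R3 speaks: first "I don't know.", then the corrected answer.
```

The interesting part is the update in step 2: the new fact arrives through the robot's perception of its own action, which is precisely the kind of self-referential justification Floridi's challenge demands.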
The ability to analyze situations and teach oneself new behavior is a significant accomplishment for humans. We understand how complicated puzzling through new situations can be. But what does it mean that a robot – a bit of plastic with a computer inside – can learn in the moment and make new choices based on that information?
Bringsjord, who is devoted to finding out, says that this type of cognitive programming, called psychometric AI, offers untold opportunities for learning about ourselves and about how helpful robots might be in the future. But he also offers a word of caution.
Knowing exactly what outcome a scientist or engineer is aiming for helps ensure that learning, evolving AI stays within the control of humans, not machines. “The human race should not be building sophisticated robots without exquisite attention to detail in formal logics," says Bringsjord. "We are sliding as a culture and country toward AIs that learn in mysterious fashion and end up as essentially black boxes. You don't want a black-box robot driving a car, or flying itself around, etc.”