The Turing test was reportedly passed last year, computers have dominated chess for decades and recently conquered Go, and Google has a self-driving car. We’ve all read and dreamed about robots that can really think and be aware of themselves. Think about all the plot lines from The Next Generation revolving around Data gaining an understanding of what it means to be human. This has always been firmly in the realm of science fiction, but a Rensselaer professor may have taken a major stride toward making these dreams a reality.
If it takes you a few viewings of the video to sort out exactly what is going on, don’t feel bad (it took me a couple). Here’s what happens: Professor Bringsjord taps each of the three robots on the top of the head. It’s not explained in the video, but he’s giving two of them a “dumbing pill” that prevents them from speaking; the third gets a placebo. Bringsjord then asks the robots whether they know which pill they received. Two of the robots sit still, while the third stands up and says that it doesn’t know. After a moment, it apologizes, changes its answer, and announces that it was able to prove it was not prevented from speaking.
So what? Well, this is a huge development in bringing a computer to a state of self-awareness. Programming a machine to make a sound is easy. Programming a machine to recognize when a sound is made is easy. Programming a machine to recognize that it made a sound, and then to reason from that fact about its own state, is incredibly difficult. This little glimmer of self-awareness paves the way for much broader developments.
Inside, the robots are busily running a formal logic called the Deontic Cognitive Event Calculus. As they try to work out which pill they received, they reach an entirely reasonable conclusion: they haven’t been given enough information to answer the question. All three attempt to respond “I don’t know,” but two of them cannot speak, so they are prevented from standing up and responding. The one that is able to speak stands, and then realizes that it can speak. That crucial flip of awareness, the moment when the robot realizes that it made the sound and must therefore be the one given the placebo, is a revolutionary step in robotics.
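The reasoning described above can be caricatured in a few lines of Python. To be clear, this is a toy illustration, not the Deontic Cognitive Event Calculus prover the robots actually run; the function names and belief strings are invented for the sketch:

```python
# Toy sketch of the epistemic update: a robot attempts to speak, and
# hearing (or not hearing) its own voice is new evidence about which
# pill it received.

def try_to_speak(silenced):
    """A robot attempts to say 'I don't know'; a dumbing pill blocks it."""
    return None if silenced else "I don't know."

def update_belief(heard_own_voice):
    """Perceiving its own utterance is decisive evidence: only the
    placebo robot can produce sound, so a robot that hears itself
    speak can prove it was not silenced."""
    if heard_own_voice:
        return "Sorry, I know now: I was not given a dumbing pill."
    return "unknown"

# Two robots get the "dumbing pill", one gets the placebo.
robots = [{"silenced": True}, {"silenced": True}, {"silenced": False}]
for r in robots:
    utterance = try_to_speak(r["silenced"])        # two robots stay silent
    r["belief"] = update_belief(utterance is not None)
```

The interesting move is that the robot's own action feeds back in as an observation: the answer changes not because new facts arrive from outside, but because the robot attributes the sound to itself.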
The test Bringsjord put the robots through is an old one, originally a logic problem known as the King’s Wise Men. In this problem, a king tells his three advisers that he has placed on each of their heads a hat, either blue or white, and that at least one of the hats is blue. Beyond this, he tells them that the contest is fair to each of them and that they cannot take off the hats they are wearing. In one version of this problem, one of the wise men shouts “I don’t know!” and storms out of the room, at which point the other two can solve the problem. For a good explanation of the logic in the original problem and a method to solve it without requiring one of the participants to announce that he lacks information, see the article linked above.
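The puzzle's solution can be checked mechanically by enumerating possible worlds. The sketch below assumes the standard form of the puzzle (at least one hat is blue, and "fair" means no adviser may have a deductive head start over the others) and eliminates unfair worlds until the set stabilizes:

```python
from itertools import product

COLORS = ("blue", "white")
# King's announcement: every hat is blue or white, at least one is blue.
worlds = [w for w in product(COLORS, repeat=3) if "blue" in w]

def deducers(world, candidates):
    """Indices of advisers who could name their own hat color in `world`,
    given that `candidates` are the worlds still considered possible."""
    out = []
    for i in range(3):
        # Worlds adviser i cannot tell apart: same hats on the other heads.
        possible_own_hats = {w[i] for w in candidates
                             if all(w[j] == world[j]
                                    for j in range(3) if j != i)}
        if len(possible_own_hats) == 1:
            out.append(i)
    return out

# Fairness: eliminate any world where some (but not all) advisers could
# deduce their hat, and repeat until no unfair world remains.
while True:
    unfair = [w for w in worlds if 0 < len(deducers(w, worlds)) < 3]
    if not unfair:
        break
    worlds = [w for w in worlds if w not in unfair]
```

The first pass removes worlds with two white hats (the blue-hatted adviser would deduce instantly); the second removes worlds with one white hat; only the all-blue world survives, where all three advisers can deduce their hat simultaneously.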
About a decade ago the philosopher Luciano Floridi proposed this as a test of whether a robot has developed self-awareness, suggesting it would be a good way to distinguish a genuinely self-aware machine from one that merely mimics awareness. For most of the intervening time, the problem was expected to remain a significant challenge, requiring a major evolution in robotics to overcome. To everyone’s surprise, it seems that robots have achieved that degree of consciousness ahead of schedule. It’s not Skynet, but it’s a long step forward.
One last exciting thing for dedicated followers of this story: Bringsjord will be presenting his results at an IEEE conference in Kobe at the end of August. We can only hope that his presentation will feature another robot catching that momentary glimpse of itself, aware of its own identity.