We present below four demonstrations of the DAC-h3 cognitive architecture, showing how the system adapts to varying conditions (different robots, labs, and human partners). In Demonstrations 2 and 3, the robot's internal states are displayed in an inset for a better understanding of its internal dynamics. Demonstrations 1 and 4 correspond to live demonstrations performed at the review meetings of the WYSIWYD European Project. Demonstration 1 is the most complete, showing all the abilities of the current HRI system.
The robot self-regulates two drives, for knowledge acquisition and knowledge expression. The acquired information consists of labels for the perceived objects, agents, and body parts, as well as associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed on human request: passing objects, showing the learned kinematic structure, recognizing actions, and pointing to the human's body parts. A complex narrative dialog about the robot's past experiences is also demonstrated at the end of the video.
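The self-regulation of the two drives can be pictured as a simple homeostatic loop: each drive level decays over time, and when a level falls below its homeostasis threshold the corresponding behavior is triggered to restore it. The sketch below is only an illustration of this general mechanism under assumed dynamics (drive names, decay rates, and thresholds are hypothetical, not the actual DAC-h3 implementation):

```python
class Drive:
    """Illustrative homeostatic drive: a level that decays over time and
    triggers behavior when it drops below a threshold (values are assumed)."""

    def __init__(self, name, decay=0.05, threshold=0.3):
        self.name = name
        self.level = 1.0          # fully satisfied at start
        self.decay = decay        # depletion per time step
        self.threshold = threshold

    def step(self):
        # Drive level depletes as time passes without satisfying behavior.
        self.level = max(0.0, self.level - self.decay)

    def satisfy(self):
        # Executing the associated behavior restores the drive.
        self.level = 1.0

    @property
    def triggered(self):
        return self.level < self.threshold


def select_behavior(drives):
    """Winner-take-all: act on the most depleted drive below threshold."""
    active = [d for d in drives if d.triggered]
    return min(active, key=lambda d: d.level) if active else None


# Two drives as in the demonstrations; decay rates are arbitrary here.
drives = [Drive("knowledge_acquisition"),
          Drive("knowledge_expression", decay=0.03)]
for t in range(30):
    for d in drives:
        d.step()
    winner = select_behavior(drives)
    if winner:
        # e.g. ask the human to label an object, or express known facts
        winner.satisfy()
```

In this toy loop, the faster-decaying drive wins more often, so the robot alternates between acquiring and expressing knowledge rather than idling, which matches the proactive behavior visible in the videos.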
The robot self-regulates two drives, for knowledge acquisition and knowledge expression. The acquired information consists of labels for the perceived objects, as well as associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed on human request: taking an object, showing the learned kinematic structure, and expressing a narrative. The goal of taking an object is executed twice, showing how the action plan execution adapts to the initial state of the object.
The inset in the top right part of the screen displays information about the current state of the robot. From left to right:
The robot self-regulates two drives, for knowledge acquisition and knowledge expression. The acquired information consists of labels for the perceived objects, agents, and body parts, as well as associations between body-part touch and motor information. In addition, goal-oriented behaviors are executed on human request for taking and giving an object.
The inset in the top right part of the screen displays information about the current state of the robot. From left to right:
The robot self-regulates two drives, for knowledge acquisition and knowledge expression. The acquired information consists of labels for the perceived objects, agents, and body parts, as well as associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed on human request: showing the learned kinematic structure, expressing a narrative, recognizing an action, and playing with a ball.
Compared to the two previous demonstrations, this video was recorded with another iCub robot, in another lab, and with another interacting human, demonstrating the robustness of the system to varying conditions.