Human-Robot Interaction with the DAC-h3 Cognitive Architecture

Videos

We present below four demonstrations of the DAC-h3 cognitive architecture. They show how the system adapts to various environments (different robots, labs, and human partners). In Demonstrations 2 and 3, the robot's internal states are displayed in an inset to give a better understanding of its internal dynamics. Demonstrations 1 and 4 correspond to live demonstrations performed at the review meetings of the WYSIWYD European Project. Demonstration 1 is the most complete, showing all the abilities of the current HRI system.

Demonstration 1

The robot self-regulates two drives, for knowledge acquisition and knowledge expression. The acquired information consists of labels for the perceived objects, agents, and body parts, as well as associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed upon human request: passing objects, showing the learned kinematic structure, recognizing actions, and pointing to the human's body parts. A complex narrative dialog about the robot's past experiences is also demonstrated at the end of the video.
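As a rough illustration of this kind of drive self-regulation, the sketch below implements a minimal homeostatic loop in which each drive decays over time and triggers its associated behavior once it leaves its comfort zone. The drive names follow the text, but the decay rates, thresholds, and behavior stubs are illustrative assumptions rather than the actual DAC-h3 implementation.

```python
import time

# Minimal sketch of a two-drive homeostatic loop (illustrative only):
# each drive decays over time and, once it drops below its threshold,
# the corresponding behavior is triggered to restore it. Drive names
# follow the text; rates, thresholds, and behaviors are assumptions.

DRIVES = {
    "knowledge_acquisition": {"level": 1.0, "decay": 0.05, "threshold": 0.4},
    "knowledge_expression":  {"level": 1.0, "decay": 0.03, "threshold": 0.4},
}


def acquire_knowledge():
    # Placeholder: e.g. ask the human to label the most salient unknown object.
    print("Behavior: ask the human for the label of the most salient object")


def express_knowledge():
    # Placeholder: e.g. name a known object or narrate a past experience.
    print("Behavior: tell the human about a known object")


BEHAVIORS = {
    "knowledge_acquisition": acquire_knowledge,
    "knowledge_expression": express_knowledge,
}


def step():
    """Decay all drives, then satisfy the most depleted drive (if any)
    that has left its comfort zone."""
    for drive in DRIVES.values():
        drive["level"] = max(0.0, drive["level"] - drive["decay"])
    depleted = [(name, d) for name, d in DRIVES.items() if d["level"] < d["threshold"]]
    if depleted:
        name, drive = min(depleted, key=lambda item: item[1]["level"])
        BEHAVIORS[name]()
        drive["level"] = 1.0  # executing the behavior restores the drive


if __name__ == "__main__":
    for _ in range(30):
        step()
        time.sleep(0.1)
```

In the videos, this alternation is what produces the robot's spontaneous switching between asking the human about unknown entities and talking about what it already knows.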

Demonstration 2

The robot self-regulates two drives, for knowledge acquisition and knowledge expression. The acquired information consists of labels for the perceived objects, as well as associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed upon human request: taking an object, showing the learned kinematic structure, and expressing a narrative. The object-taking goal is executed twice, showing how the action plan execution adapts to the initial state of the object.
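To make concrete what adapting to the object's initial state can look like, here is a purely illustrative sketch in which the action sequence for the taking goal depends on a precondition of the object. The `reachable` predicate and the action names are assumptions for illustration, not the actual DAC-h3 planner.

```python
# Illustrative sketch of state-dependent plan execution (not the actual
# DAC-h3 planner): the "take object" goal expands to different action
# sequences depending on the object's initial state.

def plan_take(object_state):
    """Return an action sequence for taking an object, given its state.
    The predicate and action names are assumptions for illustration."""
    plan = []
    if not object_state.get("reachable", False):
        plan.append("ask_human_to_push_object_closer")
    plan.append("reach_object")
    plan.append("grasp_object")
    return plan


# First execution: the object is out of reach, so an extra step is needed.
print(plan_take({"reachable": False}))
# -> ['ask_human_to_push_object_closer', 'reach_object', 'grasp_object']

# Second execution: the object is already within reach.
print(plan_take({"reachable": True}))
# -> ['reach_object', 'grasp_object']
```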

The inset in the top right part of the screen displays information about the current state of the robot. From left to right:

  • First row: Third person view of the iCub with detected objects and human body parts; view from the iCub's left eye camera indicating the detected objects and their current associated linguistic label; perception of the human skeleton tracked by the Kinect; learned kinematic structure (only appears when requested by the human).
  • Second row: Drive dynamics (see Figure 6 of the paper for an explanation), recognition score of the most salient object, and tactile sensations of the iCub's right hand.

Demonstration 3

The robot self-regulates two drives, for knowledge acquisition and knowledge expression. The acquired information consists of labels for the perceived objects, agents, and body parts, as well as associations between body-part touch and motor information. In addition, goal-oriented behaviors are executed upon human request for taking and giving an object.

The inset in the top right part of the screen displays information about the current state of the robot. From left to right:

  • First row: Third person view of the iCub with detected objects and human body parts (not updating in this video due to a minor technical issue); view from the iCub's left eye camera indicating the detected objects and their current associated linguistic label; perception of the human skeleton tracked by the Kinect.
  • Second row: Drive dynamics (see Figure 6 of the paper for an explanation) and recognition score of the most salient object.

Demonstration 4

The robot self-regulates two drives, for knowledge acquisition and knowledge expression. The acquired information consists of labels for the perceived objects, agents, and body parts, as well as associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed upon human request: showing the learned kinematic structure, expressing a narrative, recognizing an action, and playing with a ball.

Compared to the two previous demonstrations, this video was recorded with another iCub robot, in another lab, and with another interacting human, demonstrating the robustness of the system to varying conditions.