[Figure: Rackham with its major sensors and actuators.]
Rackham is a B21r robot (iRobot). It is a cylinder, 4 feet (118 cm) tall and 20 inches (52 cm) wide, topped with a mast supporting a kind of helmet. It integrates two PCs (one single-CPU and one dual-CPU, with Pentium III processors running at 850 MHz).
The software architecture is an instance of the LAAS architecture (LAAS Architecture for Autonomous Systems). It is a hierarchical architecture in which a supervisor, written with openPRS (a Procedural Reasoning System), controls a distributed set of functional modules.
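To make the pattern concrete, here is a minimal sketch of a supervisor dispatching requests to functional modules and reacting to their reports. All names (`Module`, `Supervisor`, the `goto` and `relocate` requests) are hypothetical illustrations; this is not the openPRS or GenoM API.

```python
# Minimal sketch of the supervisor/module pattern (hypothetical names;
# not the openPRS or GenoM API). The supervisor sends requests to
# functional modules and reacts to the replies they report back.

from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Module:
    """A functional module exposing named services (requests)."""
    name: str
    services: Dict[str, Callable[..., str]] = field(default_factory=dict)

    def handle(self, request: str, **args) -> str:
        # Execute the service and return a final report ("OK", "FAILED", ...).
        return self.services[request](**args)


class Supervisor:
    """Top-level deliberative layer: selects and sequences module requests."""

    def __init__(self, modules: Dict[str, Module]):
        self.modules = modules

    def run(self, plan):
        # A plan is a list of (module, request, args) steps.
        for module, request, args in plan:
            report = self.modules[module].handle(request, **args)
            print(f"{module}.{request} -> {report}")
            if report != "OK":
                return "FAILED"   # recovery strategies would go here
        return "DONE"


# Usage: a toy tour step chaining a localization and a motion request.
motion = Module("motion", {"goto": lambda x, y: "OK"})
loc = Module("localization", {"relocate": lambda: "OK"})
sup = Supervisor({"motion": motion, "localization": loc})
sup.run([("localization", "relocate", {}),
         ("motion", "goto", {"x": 3.0, "y": 1.5})])
```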
To localize itself within its environment, the robot uses a SICK laser scanner that exports, at the required rate, the laser echoes together with line segments deduced from aligned echoes.
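The segment extraction step can be illustrated with a classic split-and-merge recursion, one common way to derive segments from aligned echoes; the paper does not specify the exact algorithm, so the function names and the 5 cm threshold below are assumptions.

```python
# Hedged sketch: extracting line segments from aligned laser echoes with a
# split-and-merge recursion (one common choice; not necessarily the exact
# algorithm used on Rackham). Points are (x, y) echoes in the scan plane.

import numpy as np


def point_line_distance(pts, a, b):
    """Perpendicular distance of each point to the line through a and b."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal
    return np.abs((pts - a) @ n)


def split(pts, threshold=0.05):
    """Recursively split the point run until every point lies within
    `threshold` meters of the chord joining the run's endpoints."""
    if len(pts) <= 2:
        return [(pts[0], pts[-1])]
    dists = point_line_distance(pts, pts[0], pts[-1])
    k = int(np.argmax(dists))
    if dists[k] < threshold:
        return [(pts[0], pts[-1])]   # run is straight enough: one segment
    return split(pts[: k + 1], threshold) + split(pts[k:], threshold)


# Usage: a synthetic scan of two walls meeting at a corner.
wall1 = np.column_stack([np.linspace(0, 2, 50), np.zeros(50)])
wall2 = np.column_stack([np.full(50, 2.0), np.linspace(0, 1.5, 50)])
segments = split(np.vstack([wall1, wall2]))
print(f"{len(segments)} segments extracted")   # -> 2
```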
These segments are matched against segments previously recorded in a map by a classical SLAM procedure.
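A hedged sketch of the matching core: observed segments are associated with map segments by midpoint proximity and orientation, and the matched midpoints yield a least-squares 2D rigid correction of the robot pose (a Procrustes/Kabsch solution). The actual SLAM procedure also maintains uncertainty estimates, which this sketch omits; the distance and angle gates below are illustrative values.

```python
# Hedged sketch of the matching step, not the system's actual SLAM code.
# Segments are summarized by their midpoints and (undirected) orientations.

import numpy as np


def match(obs_mid, obs_ang, map_mid, map_ang,
          d_max=0.5, a_max=np.radians(10)):
    """Greedy nearest-neighbour association of observed to map segments."""
    pairs = []
    for i, (m, a) in enumerate(zip(obs_mid, obs_ang)):
        d = np.linalg.norm(map_mid - m, axis=1)
        j = int(np.argmin(d))
        # Undirected angle difference, folded into [0, pi/2].
        da = np.abs((map_ang[j] - a + np.pi / 2) % np.pi - np.pi / 2)
        if d[j] < d_max and da < a_max:
            pairs.append((i, j))
    return pairs


def rigid_correction(obs_mid, map_mid, pairs):
    """Least-squares rotation + translation mapping observed midpoints onto
    their matched map midpoints (needs at least two matches)."""
    p = obs_mid[[i for i, _ in pairs]]
    q = map_mid[[j for _, j in pairs]]
    pc, qc = p - p.mean(0), q - q.mean(0)
    u, _, vt = np.linalg.svd(pc.T @ qc)
    r = (u @ vt).T
    if np.linalg.det(r) < 0:          # enforce a proper rotation
        vt[-1] *= -1
        r = (u @ vt).T
    t = q.mean(0) - r @ p.mean(0)
    return r, t
```

Greedy nearest-neighbour association is the simplest possible choice here; a gated or joint-compatibility matcher would be more robust in cluttered scans.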
Since Rackham is a guide, it must be able to take visitors to various places in the exhibition: the objective is to reach the target zone while avoiding obstacles.
Obstacle detection is a critical function, both for safety reasons and for interaction purposes. The most effective sensor is the laser; however, Rackham's laser can only look forward (over 180 degrees) in a horizontal plane.
To partially overcome these limitations, the laser data are integrated into a local map and filtered using knowledge from the global map.
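The filtering idea can be sketched as follows: laser hits are projected onto an occupancy grid, and hits already explained by the static global map are discarded, leaving only unexpected (dynamic) obstacles such as visitors. The grid resolution and the dilation margin are illustrative assumptions, not parameters from the system.

```python
# Hedged sketch of filtering laser hits against the known global map.
# Hits that coincide with known walls are dropped; the rest are treated
# as dynamic obstacles. Grid geometry below is an illustrative choice.

import numpy as np


def dynamic_hits(echoes, static_map, resolution=0.05, margin=1):
    """echoes: (N, 2) laser hit points in map coordinates (meters).
    static_map: boolean occupancy grid of the known environment
    (row index along x, column index along y, by convention here).
    Returns the echoes not explained by the (dilated) static map."""
    ij = np.floor(echoes / resolution).astype(int)
    h, w = static_map.shape
    keep = []
    for (x, y), (i, j) in zip(echoes, ij):
        if not (0 <= i < h and 0 <= j < w):
            continue                      # outside the known map
        # Tolerate small localization error: check a (2*margin+1)^2 patch.
        patch = static_map[max(i - margin, 0): i + margin + 1,
                           max(j - margin, 0): j + margin + 1]
        if not patch.any():               # not explained by a known wall
            keep.append((x, y))
    return np.array(keep)
```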
A module detects faces in real time in the color camera image. The detector uses a cascaded classifier, and the detected head is then tracked with a particle filter.
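Both components can be sketched with standard tools: OpenCV's stock frontal-face Haar cascade stands in for the cascaded classifier, feeding a minimal particle filter over the head center. The motion and measurement models below are illustrative assumptions, not the module's actual design.

```python
# Hedged sketch: a Haar cascade face detector (OpenCV's stock cascade, a
# stand-in for the paper's cascaded classifier) feeding a minimal particle
# filter that tracks the head center in pixel coordinates.

import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def detect_faces(frame_bgr):
    """Return face boxes (x, y, w, h) found in one camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)


class HeadTracker:
    """Particle filter over the head center (x, y), with a random-walk
    motion model and a Gaussian likelihood around the detection center."""

    def __init__(self, n=200, motion_noise=8.0, meas_noise=15.0):
        self.n, self.motion_noise, self.meas_noise = n, motion_noise, meas_noise
        self.particles = None

    def step(self, faces):
        if self.particles is None:
            if len(faces) == 0:
                return None
            x, y, w, h = faces[0]         # initialize on the first detection
            self.particles = np.random.normal(
                [x + w / 2, y + h / 2], self.meas_noise, (self.n, 2))
        # Predict: diffuse particles with the random-walk motion model.
        self.particles += np.random.normal(0, self.motion_noise, (self.n, 2))
        if len(faces) > 0:
            # Update: weight particles by closeness to the detection center.
            x, y, w, h = faces[0]
            z = np.array([x + w / 2, y + h / 2])
            d2 = ((self.particles - z) ** 2).sum(axis=1)
            wts = np.exp(-d2 / (2 * self.meas_noise ** 2)) + 1e-12
            wts /= wts.sum()
            idx = np.random.choice(self.n, self.n, p=wts)   # resample
            self.particles = self.particles[idx]
        return self.particles.mean(axis=0)   # tracked head center estimate
```

The particle filter lets the tracker coast through frames where the detector misses the face; between detections, only the prediction step runs.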
We use a talking head, or clone, developed by the Institut de la Communication Parlée. The clone is based on a highly accurate 3D articulatory model of the postures of a speaker, with realistic synthetic rendering thanks to 3D texture projection.
The remaining question is how to make all of these components work together.