Robotics, Action and Perception - RAP

Head: Patrick DANÈS
PhD student representative: Fadi GEBRAYEL

 

The research conducted by the Robotics, Action and Perception (RAP) team mostly falls within the fields of perception, sensor-based motion and sensor integration in robotics.  Contributions range from function design and prototyping to implementation on robots and real-world evaluation.  Most of them are oriented toward robotics interacting with humans (analysis of human-populated scenes; coordinated human-robot navigation; cobotics), or toward ubiquitous robotics and ambient intelligence in the vein of the ADREAM axis (data fusion from embedded and off-board sensors; design of integrated communicating sensors).  Signal processing underlies many of these works and is also addressed in its own right as upstream research.

  • Perception spans acquisition, filtering, detection, segmentation, tracking, identification and interpretation.  The aim is robust real-time functions built on optical (monocular, 3D, polydioptric, active PTZ, RGB-D, IR, multispectral), auditory (microphone array, binaural head) and RFID modalities.  The sensors can be static or moving, mounted on a robot, fixed in the environment, or worn by humans.  Several results rely on probabilistic data fusion (a generic filtering sketch is given after this list): vision- and RFID-based tracking and identification of multiple people; multimodal pedestrian localization in urban environments; audio-motor sound localization; simultaneous localization, mapping and moving-object tracking (SLAMMOT).  Other contributions include: vision-based detection, segmentation, tracking and recognition of multiple objects; 3D vision modeling; sound detection and azimuth estimation.
  • Research in sensor-based motion mostly concerns vision- and laser-based reactive navigation.  The underlying techniques for handling occlusions and dynamic obstacles complement the vision-based functions on humans: their combination enables a mobile robot to guide a tutor through crowds.  In addition, strategies for vision-based dual-arm manipulation are investigated.  To a lesser extent, advanced control techniques have been applied to the analysis of visual servos (a textbook servoing law is sketched after this list).
  • The integration of perception algorithms on smart sensors considers multiple targets (processors, FPGAs, GPUs…) in order to cope with latency, throughput, size, energy and memory-footprint constraints.  This implies algorithm evaluation and selection, model-driven engineering, optimization (at the algorithmic, operating-system and hardware levels), and hardware-software co-design.
  • Separate contributions relate to stochastic filtering and to operator-based transforms of dynamic problems for analysis, simulation, identification, estimation or control.
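
As a purely illustrative companion to the probabilistic data fusion and stochastic filtering topics above, the sketch below shows a textbook constant-velocity Kalman filter tracking a person's 2D position from noisy detections.  The motion model, noise levels and measurements are assumptions made up for this example; they do not reflect the team's actual models, which fuse heterogeneous modalities (vision, RFID, audio).

    # Minimal sketch (illustrative assumptions only): a constant-velocity
    # Kalman filter tracking a person's 2D position from noisy detections.
    import numpy as np

    dt = 0.1                                  # sampling period (s), assumed
    F = np.array([[1, 0, dt, 0],              # constant-velocity motion model
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],               # only the (x, y) position is measured
                  [0, 1, 0, 0]], dtype=float)
    Q = 0.01 * np.eye(4)                      # process noise covariance (assumed)
    R = 0.10 * np.eye(2)                      # measurement noise covariance (assumed)

    x = np.zeros(4)                           # state [x, y, vx, vy]
    P = np.eye(4)                             # state covariance

    def kalman_step(x, P, z):
        """One predict/update cycle for a detection z = [x_meas, y_meas]."""
        x = F @ x                             # prediction
        P = F @ P @ F.T + Q
        y = z - H @ x                         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        return x + K @ y, (np.eye(4) - K @ H) @ P

    for t in range(5):                        # a few noisy detections of someone walking along x
        z = np.array([0.5 * t, 0.0]) + 0.1 * np.random.randn(2)
        x, P = kalman_step(x, P, z)
    print("estimated position:", x[:2])

The fusion problems listed above typically involve non-linear or multimodal measurement models, for which non-linear extensions of this scheme (e.g., particle filters) are commonly used.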
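
Similarly, the visual servoing mentioned in the sensor-based motion item can be illustrated by the classical image-based control law v = -λ L⁺(s - s*), where s gathers image-point features, s* their desired values and L the interaction matrix.  The sketch below is a generic textbook formulation with made-up feature positions and depths; it is not the team's controller.

    # Minimal sketch of classical image-based visual servoing for point
    # features (all numerical values are illustrative assumptions).
    import numpy as np

    def interaction_matrix(x, y, Z):
        """Interaction matrix of one normalized image point (x, y) at depth Z."""
        return np.array([
            [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x),  y],
            [0.0,     -1.0 / Z,  y / Z, 1 + y * y, -x * y,       -x],
        ])

    def ibvs_velocity(s, s_star, depths, gain=0.5):
        """Camera velocity (vx, vy, vz, wx, wy, wz) driving features s toward s*."""
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(s.reshape(-1, 2), depths)])
        e = s - s_star                        # feature error in the image
        return -gain * np.linalg.pinv(L) @ e  # v = -lambda * L^+ e

    # Four point features: current vs desired normalized image coordinates.
    s      = np.array([0.12, 0.10, -0.11, 0.09, -0.10, -0.12, 0.11, -0.10])
    s_star = np.array([0.10, 0.10, -0.10, 0.10, -0.10, -0.10, 0.10, -0.10])
    depths = [1.0, 1.0, 1.0, 1.0]             # coarse depth estimates (m), assumed
    print("camera velocity command:", ibvs_velocity(s, s_star, depths))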

The above topics are intertwined through internal collaborations.  RAP has been involved in international and national projects, as well as in national and local collaborations, which have often led to joint PhD supervision.  Many studies are connected with applications outside robotics, e.g., video surveillance, quality control, and geolocation.  These have given rise to collaborations with industry (local VSMEs, SMEs, large groups), including many CIFRE theses.