Robot Motion Planning for Map Building

The goal of this work is to develop techniques that allow one or more robotic observers to operate with full or partial autonomy while building a model of their environment. The planning algorithm relies on simple but flexible models of the observer's sensing and actuation abilities, and we provide techniques for implementing these sensor models on top of the capabilities of the actual, off-the-shelf sensors at hand. A characteristic concern of this study is the need to satisfy perception constraints while planning motions; we therefore address the fundamental motion planning problem while taking into account the information provided by sensors. Among the questions this work tries to answer are: Which locations must a robot visit to map a building efficiently? How must a robot move to explore an environment? To answer them, we propose randomized motion planning techniques that combine geometric computation with image analysis.
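As a rough illustration of the flavor of such a randomized technique (the exact algorithm is not detailed on this page), the Python sketch below samples candidate sensing locations at random in the known free space and scores each one by a utility that trades off geometric gain, estimated here as the number of frontier points within sensor range, against travel cost. Every name, weight, and simplification in it is an assumption made for illustration.

    import math
    import random

    def sample_next_view(free_points, frontier_points, robot_pose,
                         sensor_range=4.0, n_samples=200, seed=0):
        # Randomly sample candidate sensing locations in known free space and
        # keep the one maximizing: (frontier points in range) - travel cost.
        # The utility weights and straight-line cost are illustrative guesses.
        rng = random.Random(seed)
        best, best_utility = None, -math.inf
        for _ in range(n_samples):
            cand = rng.choice(free_points)
            # Geometric gain: frontier points the sensor could reach from cand.
            gain = sum(1 for f in frontier_points
                       if math.dist(cand, f) <= sensor_range)
            cost = math.dist(robot_pose, cand)   # straight-line travel cost
            utility = gain - 0.5 * cost          # ad hoc trade-off weight
            if utility > best_utility:
                best, best_utility = cand, utility
        return best, best_utility

    # Toy usage: a 10 m x 10 m explored patch with a frontier along x = 10.
    free = [(0.5 * i, 0.5 * j) for i in range(21) for j in range(21)]
    frontier = [(10.0, 0.5 * j) for j in range(21)]
    view, utility = sample_next_view(free, frontier, robot_pose=(0.0, 0.0))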

Experiments

  • Next best view simulation: robot trajectory and sensing locations computed using genetic algorithms (see the sketch after this list)
  • Multi-robot map building simulation: omnidirectional field of view and unlimited range
  • Multi-robot map building simulation: 180-degree field of view and limited range
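The page does not say how the genetic algorithm encodes trajectories, so the following is only a minimal sketch under assumed choices: the candidate sensing locations are fixed, an individual is a visiting order over them, and fitness is total path length. The function evolve_tour, the truncation selection, the one-point ordered crossover, the swap mutation, and all parameters are hypothetical.

    import math
    import random

    def tour_length(order, points):
        # Total length of visiting the points in the given order.
        return sum(math.dist(points[order[i]], points[order[i + 1]])
                   for i in range(len(order) - 1))

    def evolve_tour(points, pop_size=60, generations=300, mut_rate=0.2, seed=1):
        # Evolve a visiting order over fixed sensing locations; shorter is fitter.
        rng = random.Random(seed)
        n = len(points)
        pop = [rng.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda o: tour_length(o, points))
            survivors = pop[:pop_size // 2]           # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)
                cut = rng.randrange(1, n)             # one-point ordered crossover
                head = a[:cut]
                child = head + [g for g in b if g not in head]
                if rng.random() < mut_rate:           # swap mutation
                    i, j = rng.sample(range(n), 2)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            pop = survivors + children
        return min(pop, key=lambda o: tour_length(o, points))

    # Toy usage: order six candidate sensing locations into a short tour.
    sites = [(0, 0), (4, 1), (2, 5), (7, 3), (6, 6), (1, 8)]
    order = evolve_tour(sites)

A genuine next best view planner would also reward the unexplored area visible from each location rather than scoring path length alone; the simulations shown here presumably used a richer fitness.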
Model matching with real laser data:
  • Laser data at times T and T+1
  • Data matching using the partial Hausdorff distance (see the sketch below)
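The partial Hausdorff distance makes scan matching robust to outliers and partial overlap by replacing the maximum in the directed Hausdorff distance with the K-th ranked nearest-neighbor distance. The sketch below is a minimal illustration, not the matcher used here: the function names and the quantile parameter are assumptions, and the brute-force search covers rotations only, whereas a real aligner would also search over translations.

    import math

    def partial_hausdorff(A, B, fraction=0.8):
        # Directed partial Hausdorff distance: the `fraction` quantile of the
        # distances from each point of A to its nearest neighbor in B.
        dists = sorted(min(math.dist(a, b) for b in B) for a in A)
        k = max(0, min(len(dists) - 1, int(fraction * len(dists)) - 1))
        return dists[k]

    def best_rotation(A, B, n_angles=72, fraction=0.8):
        # Brute-force search over rotations of scan A about the origin,
        # keeping the angle minimizing the partial Hausdorff distance to B.
        best_angle, best_d = 0.0, math.inf
        for i in range(n_angles):
            t = 2.0 * math.pi * i / n_angles
            c, s = math.cos(t), math.sin(t)
            A_rot = [(c * x - s * y, s * x + c * y) for (x, y) in A]
            d = partial_hausdorff(A_rot, B, fraction)
            if d < best_d:
                best_angle, best_d = t, d
        return best_angle, best_d

    # Toy usage: two scans of the same arc, the second rotated by 0.3 rad.
    scan_t = [(math.cos(0.1 * i), math.sin(0.1 * i)) for i in range(30)]
    scan_t1 = [(math.cos(0.1 * i + 0.3), math.sin(0.1 * i + 0.3)) for i in range(30)]
    angle, d = best_rotation(scan_t, scan_t1)   # angle should be near 0.3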


Map building with a mobile robot; this image shows my former lab at ITESM CCM


Future research: next-best-view computation in 3D

This work was done by my former students Benjamin Tovar and Claudia Esteves

Links to similar research
  • Autonomous observer project at Stanford University
Rafael Murrieta
Last modified: October 30, 2002