In this paper, we address the problem of incrementally optimizing constraint networks for maximum likelihood map learning. Our approach allows a robot to efficiently compute configurations of the network with small errors while it moves through the environment. We apply a variant of stochastic gradient descent together with a tree-based parameterization of the nodes in the network. By integrating adaptive learning rates into this parameterization, our algorithm can reuse previously computed solutions to initialize the next optimization run. Additionally, our approach updates only the parts of the network that are affected by the newly incorporated measurements and triggers optimization only when the new data reveals inconsistencies with the network constructed so far. These improvements yield an efficient solution for this class of online optimization problems.
Our approach has been implemented and tested on simulated and on real-world data. We present comparisons to recently proposed online and offline methods that address the problem of optimizing constraint networks. The experiments illustrate that our approach converges to a network configuration with small errors faster than the previous approaches.
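To make the core idea concrete, the following is a minimal, self-contained sketch of stochastic-gradient-descent-style constraint network optimization on a toy 1D pose chain with a loop closure, using a harmonic (decaying) learning rate. This is purely illustrative and not the authors' implementation: the function name, the residual-splitting rule, and all constants are assumptions, and the real method operates on 2D/3D pose graphs with a tree-based parameterization.

```python
import random

random.seed(0)  # deterministic shuffling for reproducibility

def sgd_optimize(num_nodes, constraints, iterations=100):
    """Toy SGD on a 1D pose chain (illustrative only, not the paper's method).

    Each constraint (i, j, z) asks that x[j] - x[i] == z. Every pass,
    each constraint's residual is partially corrected, with a learning
    rate that decays harmonically over iterations.
    """
    x = [float(i) for i in range(num_nodes)]  # initial guess from odometry
    for it in range(1, iterations + 1):
        lr = 1.0 / it  # harmonic learning rate (simple adaptive decay)
        random.shuffle(constraints)  # stochastic constraint ordering
        for i, j, z in constraints:
            r = (x[j] - x[i]) - z  # residual of this constraint
            d = lr * r / 2.0       # split the correction between both nodes
            x[i] += d
            x[j] -= d
    return x

# Chain of unit odometry constraints plus one loop closure that
# disagrees with the accumulated odometry (hypothetical measurements).
constraints = [(i, i + 1, 1.0) for i in range(4)]
constraints.append((0, 4, 3.6))  # loop closure: total span should be 3.6
poses = sgd_optimize(5, constraints)
```

After optimization, the span `poses[4] - poses[0]` settles on a least-squares compromise between the four unit odometry steps (suggesting a span of 4.0) and the loop closure (suggesting 3.6), illustrating how the stochastic updates distribute the inconsistency over the network.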
@inproceedings{grisetti2008icra,
  title     = {Online Constraint Network Optimization for Efficient Maximum Likelihood Map Learning},
  author    = {Giorgio Grisetti and Dario Lodi Rizzini and Cyrill Stachniss and Edwin Olson and Wolfram Burgard},
  booktitle = {Proceedings of the {IEEE} International Conference on Robotics and Automation ({ICRA})},
  year      = {2008},
}