We present an efficient graph-theoretic algorithm for segmenting a colored laser point cloud derived from a laser scanner and camera. Segmentation of raw sensor data is a crucial first step for many high-level tasks such as object recognition, obstacle avoidance and terrain classification. Our method enables the combination of color information from a wide-field-of-view camera with a 3D LIDAR point cloud from an actuated planar laser scanner. We extend previous work on robust camera-only graph-based segmentation to the case where spatial features, such as surface normals, are available. Our combined method produces segmentation results superior to those derived from either cameras or laser scanners alone. We verify our approach on both indoor and outdoor scenes.
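The abstract's core idea — extending Felzenszwalb-Huttenlocher-style graph segmentation so edge weights fuse color differences with surface-normal differences — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the linear fusion in `edge_weight` and the parameters `alpha`, `beta`, and `k` are assumptions for the sketch.

```python
import math

def edge_weight(ci, cj, ni, nj, alpha=1.0, beta=1.0):
    # Hypothetical fusion rule: weighted sum of Euclidean color distance
    # and the angle between unit surface normals. The paper's exact
    # combination may differ.
    color_d = math.dist(ci, cj)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(ni, nj))))
    return alpha * color_d + beta * math.acos(dot)

def _find(parent, x):
    # Union-find lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def segment(n, edges, k=1.0):
    """Felzenszwalb-Huttenlocher-style greedy segmentation.

    n: vertex count; edges: list of (weight, i, j); k: scale parameter
    controlling how readily small components merge.
    """
    parent = list(range(n))
    size = [1] * n
    internal = [0.0] * n  # largest edge weight inside each component
    for w, i, j in sorted(edges):
        a, b = _find(parent, i), _find(parent, j)
        if a == b:
            continue
        # Merge only if the joining edge is no heavier than each
        # component's internal variation plus the k/|C| tolerance.
        if w <= internal[a] + k / size[a] and w <= internal[b] + k / size[b]:
            parent[b] = a
            size[a] += size[b]
            internal[a] = max(internal[a], internal[b], w)
    return [_find(parent, i) for i in range(n)]

# Toy colored point cloud: two patches with distinct colors and normals.
colors = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 1.0, 1.0), (0.9, 1.0, 1.0)]
normals = [(0, 0, 1), (0, 0, 1), (1, 0, 0), (1, 0, 0)]
graph = [(edge_weight(colors[i], colors[j], normals[i], normals[j]), i, j)
         for i, j in [(0, 1), (1, 2), (2, 3)]]
labels = segment(4, graph)  # points 0,1 and 2,3 end up in separate segments
```

The `k / size` term is the size-dependent tolerance from the Felzenszwalb-Huttenlocher criterion: small components merge readily, large ones only across genuinely similar boundaries.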
@inproceedings{strom2010,
  title     = {Graph-based Segmentation for Colored {3D} Laser Point Clouds},
  author    = {Johannes Strom and Andrew Richardson and Edwin Olson},
  booktitle = {Proceedings of the {IEEE/RSJ} International Conference on Intelligent Robots and Systems ({IROS})},
  year      = {2010},
  month     = {October},
  keywords  = {laser point clouds, sensor fusion, graph segmentation, robot perception},
}