Finding multiple lanes in urban road networks with vision and lidar

Autonomous Robots, 2009



This paper describes a system for detecting and estimating the properties of multiple travel lanes in an urban road network from calibrated video imagery and laser range data acquired by a moving vehicle. The system operates in real-time in several stages on multiple processors, fusing detected road markings, obstacles, and curbs into a stable non-parametric estimate of nearby travel lanes. The system incorporates elements of a provided piecewise-linear road network as a weak prior.

Our method is notable in several respects: it detects and estimates multiple travel lanes; it fuses asynchronous, heterogeneous sensor streams; it handles high-curvature roads; and it makes no assumption about the position or orientation of the vehicle with respect to the road.
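The fusion step described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual algorithm: it represents a lane centerline non-parametrically as a polyline seeded from the road-network prior, then pulls each vertex toward the weighted mean of nearby sensor evidence (e.g., detected markings or curb points), with the prior acting weakly when evidence is strong. The function and parameter names are illustrative only.

```python
import numpy as np

def fuse_lane_estimate(prior_polyline, evidence_points, evidence_weights,
                       prior_weight=0.2, radius=2.0):
    """Blend a weak piecewise-linear road-network prior with sensor evidence.

    prior_polyline:   (N, 2) vertices sampled from the provided road network
    evidence_points:  (M, 2) detected lane-boundary/curb evidence positions
    evidence_weights: (M,)   detection confidences
    Hypothetical sketch: each prior vertex moves toward the confidence-
    weighted mean of evidence within `radius`; with no nearby evidence,
    the estimate falls back to the prior.
    """
    fused = []
    for p in prior_polyline:
        dists = np.linalg.norm(evidence_points - p, axis=1)
        near = dists < radius
        if near.any():
            w = evidence_weights[near]
            target = np.average(evidence_points[near], axis=0, weights=w)
            # The prior contributes as a fixed pseudo-weight, so strong
            # evidence dominates and weak evidence defers to the prior.
            alpha = prior_weight / (prior_weight + w.sum())
            fused.append(alpha * p + (1.0 - alpha) * target)
        else:
            fused.append(p)  # no evidence: keep the prior vertex
    return np.array(fused)
```

In this toy form, heterogeneous sensor streams would simply contribute rows to `evidence_points` with per-detector confidences; the actual system additionally handles asynchrony, obstacles, and multiple lanes.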

We analyze the system’s performance in the context of the 2007 DARPA Urban Challenge. With five cameras and thirteen lidars, our method was incorporated into a closed-loop controller to successfully guide an autonomous vehicle through a 90 km urban course at speeds up to 40 km/h amidst moving traffic.


@ARTICLE{huang2009finding,
    AUTHOR     = {Albert Huang and David Moore and Matthew Antone and Edwin Olson and
                 Seth Teller},
    TITLE      = {Finding multiple lanes in urban road networks with vision and lidar},
    JOURNAL    = {Autonomous Robots},
    VOLUME     = {26},
    NUMBER     = {2},
    PAGES      = {103--122},
    MONTH      = {April},
    YEAR       = {2009},
}