AprilTag 2.0 released
MAEBot mobile robot platform
MAEBot is a mobile robot with rich sensing capabilities, designed to give students and researchers a robust, open, and affordable platform for exploring robot control, localization, kinematics, and machine vision.
We are in the process of releasing our design files and code in order to benefit the wider robotics community. If you are interested, drop us a line — we’d love to hear from you!
APRIL Camera Calibration Suite
The APRIL camera calibration suite, AprilCal, is now available as part of the APRIL Robotics Toolkit. This interactive tool uses the current calibration state to suggest where to place the target in the next image.
AprilCal yields more reliable and accurate camera calibrations than alternatives such as OpenCV. Calibration uses a 2D grid of AprilTags, so the entire calibration target need not be visible in every image. In addition to single-camera calibration, AprilCal also supports multi-camera calibration (without interactive suggestions).
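The quality metric behind any such calibration is reprojection error: how far the projected model points land from their detected pixel locations. As a minimal NumPy sketch (not AprilCal's actual code; the intrinsic matrix, pose, and tag-corner coordinates below are hypothetical values chosen for illustration, and lens distortion is omitted):

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N x 3) through a pinhole camera.

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation/translation.
    Returns N x 2 pixel coordinates (lens distortion omitted for brevity).
    """
    Xc = X @ R.T + t               # world frame -> camera frame
    uv = Xc @ K.T                  # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

def rms_reprojection_error(K, R, t, X, observed):
    """RMS distance between projected and observed pixel coordinates."""
    d = project(K, R, t, X) - observed
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# Hypothetical example: four coplanar tag corners, tag 1 m in front of
# the camera, identity rotation.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
corners = np.array([[-0.1, -0.1, 0.0], [0.1, -0.1, 0.0],
                    [0.1, 0.1, 0.0], [-0.1, 0.1, 0.0]])
obs = project(K, R, t, corners)  # perfect observations -> zero error
print(rms_reprojection_error(K, R, t, corners, obs))  # -> 0.0
```

A calibrator adjusts K (and distortion parameters) to minimize this error over many images; AprilCal's contribution is choosing which images to ask for.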
Check out this demo video for more details.
Learning convolutional filters for interest point detection
Interest point detection is often the first step in a computer vision pipeline, reducing a raw image to a small set of key points for tasks like tracking across time in a video. These interest point detectors are typically hand-designed with the goal of maximizing a measure like repeatability of detection across viewpoint changes, a stand-in for full-system performance. Vision pipelines are typically complex, however, and differ in subtle ways. By automatically learning feature detectors, we hope to improve application performance and learn more about the properties of the top-performing detectors.
We show that it is possible to automatically learn feature detectors that perform as well as some of the best hand-designed alternatives. Our application is stereo visual odometry, with ground truth computed by instrumenting the environment with 2D fiducial markers known as AprilTags. We learn convolutional filters for interest point detection, which lead naturally to fast extraction methods that can take advantage of SIMD parallelism.
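The detection pipeline such a filter plugs into is simple: convolve the image with the filter and keep local maxima of the response above a threshold. A minimal NumPy sketch (the Laplacian filter here is only a stand-in for a learned filter, and the image is a toy example):

```python
import numpy as np

def detect_interest_points(image, filt, threshold):
    """Correlate `image` with a filter and return response peaks.

    A peak is a response-map pixel that exceeds `threshold` and all 8
    of its neighbors (a simple 3x3 non-maximum suppression).
    """
    h, w = image.shape
    fh, fw = filt.shape
    # Valid-mode 2D correlation (no kernel flip; fine for symmetric filters).
    resp = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(resp.shape[0]):
        for j in range(resp.shape[1]):
            resp[i, j] = np.sum(image[i:i + fh, j:j + fw] * filt)
    peaks = []
    for i in range(1, resp.shape[0] - 1):
        for j in range(1, resp.shape[1] - 1):
            patch = resp[i - 1:i + 2, j - 1:j + 2]
            if resp[i, j] > threshold and resp[i, j] == patch.max():
                peaks.append((i, j))
    return peaks

# Stand-in for a learned filter: a discrete Laplacian (blob response).
filt = np.array([[0.0, 1.0, 0.0],
                 [1.0, -4.0, 1.0],
                 [0.0, 1.0, 0.0]])
img = np.zeros((9, 9))
img[4, 4] = -1.0  # a single dark blob on a flat background
print(detect_interest_points(img, filt, 1.0))  # -> [(3, 3)]
```

Because the inner loop is a dense multiply-accumulate over fixed-size windows, a production version maps directly onto SIMD instructions.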
The APRIL laboratory won the Multi Autonomous Ground Robot International Challenge (MAGIC) robotics competition, besting 22 other teams from around the world. To do this, we developed a team of robots that can explore an urban environment (indoors and outdoors), identify and track people, and identify objects of interest.
See our MAGIC 2010 team page for more details.
Automated Grade Crossing Safety Inspection
A grade crossing is a place where a railway line and a road cross at the same level. In 2009 alone there were 248 deaths and 682 injuries at grade crossings in the United States. Factors like the elevation profile of a crossing or the environment and foliage around it can render it unsafe. Vehicles with low ground clearance often bottom out on a crossing with a humped elevation profile, and excessive foliage can obstruct the view of an approaching train, reducing the time a driver has to stop. Hence, ensuring safety requires regular monitoring and timely maintenance of grade crossings across the country.
We are building systems that will automatically determine whether a grade crossing is unsafe. Our system builds a 3D model of a grade crossing using LIDAR and camera data, from which it can measure the critical safety parameters of the crossing.
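One such safety parameter is whether a humped profile would ground a low vehicle. As a toy geometric sketch (not our system's actual model; the profile, wheelbase, and clearance values are hypothetical), model the vehicle as a rigid segment whose wheels ride on the measured elevation profile:

```python
import numpy as np

def bottoms_out(xs, elev, wheelbase, clearance, step=0.1):
    """Check whether a vehicle would bottom out on an elevation profile.

    The vehicle is a rigid segment of length `wheelbase` whose wheels
    ride on the profile; its underbody sits `clearance` above the
    wheel-contact line. It bottoms out if the ground rises above the
    underbody at any placement along the crossing.
    xs, elev: profile samples along the road (meters).
    """
    for front in np.arange(xs[0] + wheelbase, xs[-1], step):
        rear = front - wheelbase
        z_rear = np.interp(rear, xs, elev)
        z_front = np.interp(front, xs, elev)
        mid = np.linspace(rear, front, 20)
        ground = np.interp(mid, xs, elev)
        # Height of the wheel-contact line at each sample point.
        axle = z_rear + (mid - rear) / wheelbase * (z_front - z_rear)
        if np.any(ground - axle > clearance):
            return True
    return False

# Hypothetical humped crossing: a 0.25 m rise at the middle of a 10 m span.
xs = np.array([0.0, 4.0, 5.0, 6.0, 10.0])
hump = np.array([0.0, 0.0, 0.25, 0.0, 0.0])
print(bottoms_out(xs, hump, wheelbase=4.0, clearance=0.15))  # -> True
flat = np.zeros(5)
print(bottoms_out(xs, flat, wheelbase=4.0, clearance=0.15))  # -> False
```

The real system extracts the elevation profile from the LIDAR-derived 3D model rather than from hand-entered samples.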
Graph-based segmentation of colored 3D point clouds
Robots navigating and interpreting a complex environment depend on both its spatial layout and its appearance. Traditional sensors measure either spatial information (e.g., laser scanners) or appearance (e.g., cameras). We enable the creation of a rich 3D data source combining spatial and color information by accurately co-registering a camera with an actuated planar laser scanner.
Segmentation is an important pre-processing step for both high-level object identification and terrain classification. We demonstrate a novel segmentation method that correctly handles joint color and spatial information. Our method works on both indoor and outdoor scenes and produces segments that can include gradient regions and areas of uniform variance.
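To illustrate the flavor of graph-based segmentation on colored points (a deliberately simplified, Felzenszwalb-style greedy merge, not our published algorithm; the point cloud, radius, and threshold below are hypothetical): connect spatially nearby points with edges, then merge edges whose color difference is small, so segments grow across smooth gradients but stop at sharp color boundaries.

```python
import numpy as np

def segment(points, colors, radius, color_thresh):
    """Greedy graph segmentation of a colored point cloud (a sketch).

    Nodes are points; edges connect points within `radius` in space.
    An edge is merged when its color distance is below `color_thresh`.
    Returns one segment label per point.
    """
    n = len(points)
    parent = list(range(n))

    def find(a):  # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # Build spatial-neighbor edges, then merge in order of color distance.
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= radius:
                edges.append((np.linalg.norm(colors[i] - colors[j]), i, j))
    for w, i, j in sorted(edges):
        if w < color_thresh:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Hypothetical toy cloud: two spatially adjacent surfaces, red and blue.
pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0]])
cols = np.array([[1.0, 0, 0], [0.9, 0.1, 0], [0.0, 0, 1.0], [0.1, 0, 0.9]])
labels = segment(pts, cols, radius=0.15, color_thresh=0.3)
print(labels)  # first two points share one label, last two share another
```

Because merges follow chains of small color steps, a gradient region ends up in a single segment even though its endpoints differ strongly in color.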