MAEBot is a mobile robot with rich sensor capabilities designed to provide students and researchers with a robust, open, affordable platform on which to explore the concepts of robot control, localization, kinematics and machine vision.
We are in the process of releasing our design files and code in order to benefit the wider robotics community. If you are interested, check back soon or drop us a line — we'd love to hear from you!
In partnership with Ford and State Farm Insurance, we have begun development of a next-generation automated vehicle. On the University of Michigan side, the principal investigators are Ryan Eustice and Edwin Olson. Michigan is taking a leading role on sensing and decision-making.
APRIL Camera Calibration Suite
The APRIL camera calibration suite is now available in the master branch of the APRIL Robotics Toolkit, in the april.camera package. The suite computes accurate camera calibrations whose results are more repeatable than those of popular alternatives such as OpenCV (as determined in trials with human users). Calibration uses a 2D grid of AprilTags as the target, which removes the need to observe the entire target in every image. Interactive single-camera calibration with AprilCal suits novices and experts alike, suggesting target positions to the user to ensure a highly accurate result. Flexible multi-camera calibration is also supported, albeit without interactive suggestions.
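As an illustration of the quantity a calibration procedure typically minimizes, the sketch below computes root-mean-square reprojection error under a simple pinhole model with no distortion terms. The function names are ours, and the actual suite's camera models and optimizer are more sophisticated than this:

```python
import math

def project(point, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates through a pinhole model."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_rmse(points_3d, points_2d, fx, fy, cx, cy):
    """Root-mean-square reprojection error over a set of correspondences."""
    sq = 0.0
    for p3, p2 in zip(points_3d, points_2d):
        u, v = project(p3, fx, fy, cx, cy)
        sq += (u - p2[0]) ** 2 + (v - p2[1]) ** 2
    return math.sqrt(sq / len(points_3d))
```

In a real calibration, the detected AprilTag corners supply the 2D points, their known grid positions supply the 3D points, and the intrinsics are chosen to drive this error down.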
Learning convolutional filters for interest point detection
Interest point detection is often the first step in a computer vision pipeline, reducing a raw image to a small set of key points for tasks like tracking across time in a video. These interest point detectors are typically hand-designed with the goal of maximizing a measure like repeatability of detection across viewpoint changes, a stand-in for full-system performance. Vision pipelines are typically complex, however, and differ in subtle ways. By automatically learning feature detectors, we hope to improve application performance and learn more about the properties of the top-performing detectors.
We show that it is possible to automatically learn feature detectors that perform as well as some of the best hand-designed alternatives. Our application is stereo visual odometry, with ground truth computed by instrumenting the environment with 2D fiducial markers known as AprilTags. We learn convolutional filters for interest point detection, which lead naturally to fast extraction methods that can exploit SIMD parallelism.
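A minimal sketch of the extraction step, with a toy hand-written Laplacian-like filter standing in for a learned one (all names here are illustrative, not from our codebase): convolve the filter with the image and keep thresholded local maxima of the response as interest points.

```python
def convolve2d(img, kernel):
    """'Valid'-mode 2D correlation of an image with a square kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += img[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

def interest_points(img, kernel, threshold):
    """Interest points = local maxima of the filter response above a threshold."""
    r = convolve2d(img, kernel)
    pts = []
    for i in range(1, len(r) - 1):
        for j in range(1, len(r[0]) - 1):
            v = r[i][j]
            if v > threshold and all(
                v >= r[i + di][j + dj]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
            ):
                pts.append((i, j))
    return pts
```

Because the detector is a single convolution plus a local-maximum scan, both stages vectorize naturally, which is what makes SIMD-friendly extraction possible.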
The APRIL laboratory won the MAGIC 2010 robotics competition (the Multi Autonomous Ground-robotic International Challenge), besting 22 other teams from around the world. To do this, we developed a team of robots that can explore an urban environment (indoors and outdoors), identify and track people, and identify objects of interest.
Go to our MAGIC 2010 Team page...
Grade Crossing Safety
A grade crossing is an intersection where a railway line crosses a road at the same level. In 2009 alone there were 248 deaths and 682 injuries at grade crossings in the United States. Factors like the elevation profile of a crossing or the environment and foliage around it can render it unsafe. Vehicles with low ground clearance often bottom out on a crossing with a humped elevation profile, and excessive foliage around the crossing can obstruct the view of an approaching train, reducing the time a driver has to stop. Ensuring safety therefore requires regular monitoring and timely maintenance of grade crossings across the country.
We are building systems that will automatically determine whether a grade crossing is unsafe. Our system builds a 3D model of a grade crossing using LIDAR and camera data, from which it can measure the critical safety parameters of the crossing.
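As a sketch of one such safety parameter, the code below checks whether a vehicle with a given wheelbase and ground clearance would bottom out on a measured elevation profile, approximating the underside as a straight line between the two axle contact points. The function and its geometry are our simplification for illustration, not the deployed system:

```python
def bottoms_out(profile, wheelbase, clearance):
    """Check whether a vehicle with the given wheelbase and ground clearance
    would strike a crossing described by sorted (x, elevation) samples."""
    xs = [p[0] for p in profile]
    zs = [p[1] for p in profile]

    def elev(x):
        # linear interpolation of the elevation profile
        for k in range(len(xs) - 1):
            if xs[k] <= x <= xs[k + 1]:
                t = (x - xs[k]) / (xs[k + 1] - xs[k])
                return zs[k] + t * (zs[k + 1] - zs[k])
        raise ValueError("x outside profile")

    n = 200
    x0, x1 = xs[0], xs[-1] - wheelbase
    for s in range(n + 1):
        rear = x0 + (x1 - x0) * s / n     # rear-axle ground contact point
        front = rear + wheelbase
        zr, zf = elev(rear), elev(front)
        # sample the chassis underside line between the axles
        for m in range(1, 20):
            x = rear + wheelbase * m / 20
            underside = zr + (zf - zr) * m / 20 + clearance
            if elev(x) > underside:
                return True
    return False
```

Sweeping the vehicle across the profile like this flags humps that rise above the underside line for any axle placement.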
Graph-based Segmentation of Colored 3D Point Clouds
Robots navigating and interpreting a complex environment depend on both its spatial layout and its appearance. Traditional sensors measure either spatial information (e.g., laser scanners) or appearance (e.g., cameras). By accurately co-registering a camera with an actuated planar laser scanner, we enable the creation of a rich 3D data source that combines spatial and color information.
Segmentation is an important pre-processing step necessary for enabling both high-level object identification and terrain classification. We demonstrate a novel segmentation method which can deal correctly with joint color and spatial information. Our method works on both indoor and outdoor scenes and produces segments which can include gradient regions and areas of uniform variance.
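The following is a simplified, Felzenszwalb-style sketch of the idea rather than our actual algorithm: build a k-nearest-neighbor graph over the points and merge components across edges whose combined spatial-and-color weight falls below a threshold. The names and the particular weighting are illustrative assumptions:

```python
import math

def segment(points, colors, k, dist_thresh):
    """Greedy graph-based clustering of colored 3D points: connect each point
    to its k nearest spatial neighbors, then union components across edges
    whose joint spatial + color weight is below the threshold."""
    n = len(points)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def weight(i, j):
        ds = math.dist(points[i], points[j])          # spatial term
        dc = math.dist(colors[i], colors[j]) / 255.0  # color term, normalized
        return ds + dc

    # build k-NN edges, then merge cheapest-first below the threshold
    edges = []
    for i in range(n):
        nbrs = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))
        for j in nbrs[1:k + 1]:
            edges.append((weight(i, j), i, j))
    for w, i, j in sorted(edges):
        if w < dist_thresh and find(i) != find(j):
            parent[find(i)] = find(j)

    return [find(i) for i in range(n)]
```

Because the edge weight mixes geometry and color, nearby points with very different colors can still end up in separate segments, and vice versa.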
Probabilistic adversarial pursuit
The pursuit-evasion problem consists of a team of pursuers maneuvering to capture one or more evaders. Solutions to the pursuit-evasion problem can help search-and-rescue teams quickly find survivors, sentries protect against intruders, and law enforcement apprehend suspects.
We've developed new probabilistic methods that model the cunning of an evader. This allows pursuers to be more conservative when pursuing smart evaders, and more aggressive when pursuing naive evaders.
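One way to picture the probabilistic machinery, as a hedged sketch rather than our actual method: maintain a belief over grid cells for an unseen evader, diffuse it through an evader motion model, and condition on failed searches. An evader's cunning could be encoded in the motion model (e.g., biasing it away from pursuers); here the motion matrix is simply a parameter, and all names are ours:

```python
def update_belief(belief, searched_cell, p_detect, motion):
    """One step of a Bayesian grid filter for an unseen evader:
    1) prediction: diffuse the belief through the evader's motion model, then
    2) measurement: condition on a failed search of one cell."""
    n = len(belief)
    # motion[i][j] = P(evader moves from cell i to cell j)
    predicted = [sum(belief[i] * motion[i][j] for i in range(n)) for j in range(n)]
    # searching a cell and detecting nothing down-weights that cell
    posterior = list(predicted)
    posterior[searched_cell] *= (1.0 - p_detect)
    z = sum(posterior)
    return [p / z for p in posterior]
```

A more conservative pursuit strategy corresponds to assuming a more adversarial motion model, which keeps probability mass spread over cells a clever evader could reach.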