News

PDF
IROS 2014
Locally-weighted Homographies for Calibration of Imaging Systems


PDF
IROS 2014
PAS: Visual Odometry with Perspective Alignment Search


PDF
ICRA 2014
Robust Pose Graph Optimization Using Stochastic Gradient Descent


PDF
IROS 2013
Predicting Object Functionality Using Physical Simulations


PDF
IROS 2013
Robust Sensor Characterization via Max-Mixture Models: GPS Sensors


PDF
IROS 2013
AprilCal: Assisted and repeatable camera calibration


big win
November 18, 2010
Team Michigan wins first place in the MAGIC 2010 robotics competition, earning $750k!


editors choice
July 31, 2010
Team Michigan's booth was awarded Editor's Choice at Maker Faire in Detroit!


team photo
July 26, 2010
Team Michigan has been named a finalist in the MAGIC 2010 robotics competition. On to Australia!



Project Spotlight

AprilTag 2.0 released

Robots with AprilTags

A new version of the AprilTag library (2014-10-20) has been released! It's written in pure C, is dramatically faster than the old version, and generally has both a lower false positive rate and a higher true positive rate.

MAEBot mobile robot platform

Maebot front view

MAEBot is a mobile robot with rich sensor capabilities designed to provide students and researchers with a robust, open, affordable platform on which to explore the concepts of robot control, localization, kinematics and machine vision.

We are in the process of releasing our design files and code in order to benefit the wider robotics community. If you are interested, drop us a line — we’d love to hear from you!

Next-Generation Vehicle

NGV teaser

In partnership with Ford and State Farm Insurance, we have begun development of a next-generation automated vehicle. On the University of Michigan side, the principal investigators are Ryan Eustice and Edwin Olson. Michigan is taking a leading role on sensing and decision-making.

APRIL Camera Calibration Suite

AprilCal

The APRIL camera calibration suite is now available as part of the APRIL Robotics Toolkit. This interactive tool uses the current calibration state to suggest the position of the target in the next image.

AprilCal yields more reliable and accurate camera calibrations than alternatives such as OpenCV. A 2D grid of AprilTags serves as the calibration target, so the entire target need not be visible in every image. In addition to single-camera calibration, AprilCal also supports multi-camera calibration (without interactive suggestions).

Check out this demo video for more details.

Learning convolutional filters for interest point detection

Sample convolutional filters

Interest point detection is often the first step in a computer vision pipeline, reducing a raw image to a small set of key points for tasks like tracking across time in a video. These interest point detectors are typically hand-designed with the goal of maximizing a measure like repeatability of detection across viewpoint changes, a stand-in for full-system performance. Vision pipelines are typically complex, however, and differ in subtle ways. By automatically learning feature detectors, we hope to improve application performance and learn more about the properties of the top-performing detectors.

We show that it is possible to automatically learn feature detectors that perform as well as some of the best hand-designed alternatives. Our application is that of stereo Visual Odometry, with ground truth computed by instrumenting the environment with 2D fiducial markers known as AprilTags. We learn convolutional filters for interest point detection, which lead naturally to fast extraction methods that can take advantage of SIMD parallelism.
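To make the idea of filter-based detection concrete, here is a minimal sketch of how a single convolutional filter can be applied to an image and its local response maxima kept as interest points. This is an illustration only: the kernel weights below are a hypothetical stand-in for learned filters, and the image, function names, and threshold are all made up for the example.

```python
# Illustrative sketch of filter-based interest point detection.
# The kernel here is hypothetical; in the work above, weights are learned.

def convolve3x3(image, kernel):
    """Valid-mode 3x3 convolution over a 2D list-of-lists image."""
    h, w = len(image), len(image[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for y in range(h - 2):
        for x in range(w - 2):
            s = 0.0
            for dy in range(3):
                for dx in range(3):
                    s += kernel[dy][dx] * image[y + dy][x + dx]
            out[y][x] = s
    return out

def interest_points(response, threshold):
    """Keep pixels whose response exceeds the threshold and all 8 neighbours."""
    pts = []
    h, w = len(response), len(response[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = response[y][x]
            if v > threshold and all(
                v >= response[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            ):
                pts.append((x, y))  # coordinates in the response's frame
    return pts

# A hypothetical "learned" filter (here, a Laplacian-like spot detector).
kernel = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

# Tiny synthetic image with one bright spot.
image = [[0] * 7 for _ in range(7)]
image[3][3] = 10
pts = interest_points(convolve3x3(image, kernel), threshold=5.0)
print(pts)  # → [(2, 2)]
```

Because the detector is just a convolution followed by a local-maximum test, it maps naturally onto SIMD hardware, which is what makes the learned filters fast to evaluate in practice.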

MAGIC 2010

Team Michigan

The APRIL laboratory won the Multi Autonomous Ground Robot International Challenge (MAGIC), besting 22 other teams from around the world. To do this, we developed a team of robots that can explore an urban environment (indoors and outdoors), identify and track people, and identify objects of interest.

See our MAGIC 2010 team page for more details.

Automated Grade Crossing Safety Inspection

Humped crossing profile

A grade crossing is an intersection where a railway line crosses a road at the same level. In 2009 alone, there were 248 deaths and 682 injuries at grade crossings in the United States. Factors like the elevation profile of a crossing, or the environment and foliage around it, can render it unsafe. Vehicles with low ground clearance often bottom out on a crossing with a humped elevation profile, and excessive foliage around a crossing can obstruct the visibility of an approaching train, reducing the time a driver has to stop. Ensuring safety therefore requires regular monitoring and timely maintenance of grade crossings across the country.

We are building systems that will automatically determine whether a grade crossing is unsafe. Our system builds a 3D model of a grade crossing using LIDAR and camera data, from which it can measure the critical safety parameters of the crossing.
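One safety parameter mentioned above is the elevation profile. As a hypothetical sketch of how a hump check could work once a profile has been extracted from the 3D model: treat the vehicle's underbody as a straight line between its two wheel contact points, slide it along the profile, and flag the crossing if the road surface ever rises to within the vehicle's ground clearance. All parameter names and values below are illustrative, not the deployed system's.

```python
# Hypothetical hump check: does a vehicle with a given wheelbase and ground
# clearance bottom out on this elevation profile? Values are illustrative.

def bottoms_out(profile, spacing, wheelbase, clearance):
    """profile: elevation samples (m) taken every `spacing` m along the road.
    Return True if the straight underbody line between the two wheel
    contact points ever comes within `clearance` of the road surface."""
    span = int(round(wheelbase / spacing))  # samples between the axles
    for i in range(len(profile) - span):
        a, b = profile[i], profile[i + span]  # rear and front wheel heights
        for j in range(i + 1, i + span):
            t = (j - i) / span
            underbody = a + t * (b - a) + clearance  # interpolated underbody
            if profile[j] > underbody:
                return True
    return False

# Flat approach, a 0.3 m hump, flat exit, sampled every 0.5 m.
humped = [0.0, 0.0, 0.0, 0.15, 0.3, 0.15, 0.0, 0.0, 0.0]
flat = [0.0] * 9

print(bottoms_out(humped, 0.5, 3.0, 0.15))  # → True  (low-clearance car)
print(bottoms_out(flat, 0.5, 3.0, 0.15))    # → False
```

A real inspection system would also need the full 3D geometry (crossing skew, vehicle overhangs beyond the axles), but the same clearance-versus-profile comparison is at the core of the measurement.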

Graph-based segmentation of colored 3D point clouds

Graph-based segmentation steps

Robots navigating and interpreting a complex environment depend on both its spatial layout and its appearance. Traditional sensors measure either spatial information (e.g., laser scanners) or appearance (e.g., cameras). By accurately co-registering a camera with an actuated planar laser scanner, we enable the creation of a rich 3D data source that combines spatial and color information.

Segmentation is an important pre-processing step for both high-level object identification and terrain classification. We demonstrate a novel segmentation method that correctly handles joint color and spatial information. Our method works on both indoor and outdoor scenes and produces segments that can include gradient regions and areas of uniform variance.
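A simplified illustration of the graph-based idea (not the paper's algorithm): connect pairs of points whose combined spatial and color distance falls below a threshold, then take connected components of the resulting graph as segments. The weights and thresholds here are made up for the example.

```python
# Toy graph-based segmentation of colored 3D points: edges join points that
# are close in a joint spatial+color metric; segments are the connected
# components. Weights and thresholds are illustrative, not the paper's.
import math

def segment(points, colors, w_spatial, w_color, threshold):
    """points: list of (x, y, z); colors: list of (r, g, b) in [0, 1]."""
    n = len(points)
    parent = list(range(n))  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def dist(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in range(3)))

    for i in range(n):
        for j in range(i + 1, n):
            d = w_spatial * dist(points[i], points[j]) \
              + w_color * dist(colors[i], colors[j])
            if d < threshold:
                parent[find(i)] = find(j)  # union: merge the components

    labels = [find(i) for i in range(n)]
    remap = {}  # relabel segments as consecutive integers in first-seen order
    return [remap.setdefault(l, len(remap)) for l in labels]

# Four spatially adjacent points, but with a color boundary in the middle:
# the joint metric splits them into two segments where geometry alone would not.
pts = [(0, 0, 0), (0.1, 0, 0), (0.2, 0, 0), (0.3, 0, 0)]
cols = [(1, 0, 0), (1, 0, 0), (0, 0, 1), (0, 0, 1)]
print(segment(pts, cols, w_spatial=1.0, w_color=1.0, threshold=0.5))  # → [0, 0, 1, 1]
```

The example shows the benefit of the joint metric: a purely spatial criterion would merge all four points into one segment, while the color term correctly splits them at the red/blue boundary.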