Mechanical

Our robots were specially designed for the MAGIC competition and are manufactured at the APRIL robotics laboratory at the University of Michigan. The robots are principally made of Baltic birch plywood, laser-cut from a SolidWorks design. This approach makes it practical for us to build a relatively large fleet of around 20 robots. At the time of the site visit, we had 5 finished robots and 7 more partially completed.

The robot is driven by four DC gearhead motors, each with independent control and quadrature phase feedback. The drivetrain itself uses parts designed for a lawnmower; while it lacks a suspension (forgivable given our maximum speed of around 1 m/s), it is quite rugged.
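
As a small illustration of what the quadrature feedback gives us (a sketch with assumed encoder resolution and wheel size, not our firmware), each wheel's tick count can be differenced to estimate its linear speed:

    // Sketch: converting quadrature encoder ticks into a wheel speed estimate.
    // Ticks-per-revolution and wheel radius are assumed placeholder values.
    public class WheelOdometry {
        static final double TICKS_PER_REV = 2000.0;  // assumed encoder resolution
        static final double WHEEL_RADIUS_M = 0.10;   // assumed wheel radius (meters)

        private long lastTicks;
        private long lastTimeNs;

        /** Returns wheel speed in m/s given the latest tick count and timestamp. */
        public double update(long ticks, long timeNs) {
            double revs = (ticks - lastTicks) / TICKS_PER_REV;
            double dt = (timeNs - lastTimeNs) * 1e-9;
            lastTicks = ticks;
            lastTimeNs = timeNs;
            return dt > 0 ? (revs * 2.0 * Math.PI * WHEEL_RADIUS_M) / dt : 0.0;
        }
    }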

We also made extensive use of a Dimension uPrint rapid-prototyping printer to fabricate sensor mounts, cases, and other small parts. This printer creates models in ABS plastic, which is strong enough for most non-drivetrain purposes.

Electrical

The robot is powered by a 720 Wh battery with a nominal voltage of 24 V, which is sufficient to power all of the robot's systems (including the computer) for about four hours. The battery is roughly the size of a loaf of bread. A bank of switching DC/DC converters generates the additional voltages required by our other subsystems.
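
As a rough sanity check on those numbers, a 720 Wh pack lasting about four hours implies an average draw on the order of 720 Wh / 4 h ≈ 180 W across drive, compute, and sensors.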

We use two uOrc robotics controllers, which provide motor control and data acquisition. These controllers were developed by our lab for an earlier robotics competition; each one handles two motors. They are based around the Stellaris LM3S8962 Cortex-M3, a 32-bit ARM core running at 50 MHz, and interface to the laptop via Ethernet.
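
For illustration only, here is a minimal Java sketch of the kind of host-to-controller message that travels over that Ethernet link. The address, port, and packet layout below are hypothetical placeholders, not the actual uOrc protocol.

    // Illustrative only: the address, port, and payload layout are invented
    // placeholders, not the real uOrc wire format.
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class MotorCommandExample {
        public static void main(String[] args) throws Exception {
            InetAddress controller = InetAddress.getByName("192.168.1.10"); // hypothetical controller IP
            int port = 2378;                                                // hypothetical port

            // Hypothetical 2-byte payload: [motor index][signed duty cycle, -127..127]
            byte[] payload = new byte[] { 0, 64 }; // motor 0, ~50% forward

            try (DatagramSocket socket = new DatagramSocket()) {
                socket.send(new DatagramPacket(payload, payload.length, controller, port));
            }
        }
    }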

Sensors

Our primary sensor is a combined camera/LIDAR rig. The camera is a Point Grey Firefly MV USB camera with a 2.8 mm fixed focal length lens, giving us roughly a 90-degree field of view. The camera is mounted on a pair of AX-12 servos that allow it to pan and tilt around its focal point, making panorama generation straightforward. Integrated into the same sensor mount is a Hokuyo UTM-30LX LIDAR range finder, mounted on a third AX-12 servo, which allows us to produce 3D point clouds. Because the two sensors are well calibrated with respect to each other, we can obtain color data for laser points, or ranges for camera pixels. Note that the camera/LIDAR mount was fabricated using our uPrint rapid prototyper.
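
To make the camera/LIDAR pairing concrete, here is a minimal sketch (not our calibration code) of the last step of coloring a laser point: once a LIDAR return has been transformed into the camera frame using the extrinsic calibration, a pinhole model maps it to a pixel whose color can be read. The intrinsics below are placeholder values.

    // Sketch only: projects a 3D point already expressed in the camera frame
    // onto the image plane with a pinhole model. Intrinsics are placeholders.
    public class LaserColorLookup {
        static final double FX = 380, FY = 380;   // assumed focal lengths (pixels)
        static final double CX = 320, CY = 240;   // assumed principal point (pixels)

        /** Returns {u, v} pixel coordinates, or null if the point is behind the camera. */
        public static double[] project(double x, double y, double z) {
            if (z <= 0)
                return null;
            return new double[] { FX * (x / z) + CX, FY * (y / z) + CY };
        }
    }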

Our GPS unit is a consumer-grade Garmin GPS-18x. Our simultaneous localization and mapping (SLAM) algorithms provide the navigational information we need for operation, eliminating the need for a more complicated differential or RTK GPS system. This non-reliance on GPS gives our system an advantage in dense urban areas and in combat areas where GPS could be jammed.

We developed an inertial measurement unit (IMU) with 4 gyro axes (we sense yaw twice in order to reduce noise), 3 accelerometer axes, 3 magnetometer axes, and a barometric altimeter with a resolution of about 10 inches. The IMU is the size of a business card and is a bus-powered USB device.
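
As a quick illustration of why the redundant yaw axis helps: averaging two gyros whose noise is independent reduces the noise standard deviation by a factor of √2. A minimal sketch:

    // Sketch: averaging two independent yaw-rate measurements. If their noise
    // is uncorrelated with equal variance, the average has 1/sqrt(2) the
    // standard deviation of either sensor alone.
    public class YawFusion {
        public static double fuseYawRate(double gyroA, double gyroB) {
            return 0.5 * (gyroA + gyroB); // rad/s
        }
    }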

Compute

Processing all of the sensor data requires substantial computational power. We use a Lenovo T410 laptop with a dual-core Core i5 at 2.4 GHz. Our software runs on Ubuntu 10.04 and is primarily written in Java. Some of our software is already open source; contest-specific code will be released after the competition.

Communication

Our robots communicate with the base station using two separate radio systems: a 900 MHz XTend system (long-range, but only about 10 kB/s) and an 802.11g mesh network built from OM1P nodes. While the raw bandwidth of the 802.11g radios is very high compared to the XTend radios, the need to relay messages quickly eats away at that bandwidth.
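
A back-of-the-envelope way to see the relaying cost (an assumed model, not a measurement from our network): on a shared channel, every hop retransmits the same payload, so the usable end-to-end rate falls roughly in proportion to the hop count.

    // Rough model only: effective end-to-end throughput on a single shared
    // channel where each relay hop retransmits the full payload.
    public class MeshThroughput {
        public static double effectiveMbps(double rawMbps, int hops) {
            return hops > 0 ? rawMbps / hops : rawMbps;
        }

        public static void main(String[] args) {
            // e.g. a nominal 54 Mbps 802.11g link relayed over 3 hops
            System.out.println(effectiveMbps(54.0, 3)); // ~18 Mbps
        }
    }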

Coordination and Control

Task allocation is handled centrally at the ground control station, either via a reward-based planner or manually by a human operator. The tasks are fairly high level, such as "travel to (x, y)", and can send a robot well beyond its sensor horizon. While traveling, the robots autonomously identify obstacles, plan paths, and detect objects of interest. Exceptions are reported back to ground control, which can result in either a new task assignment or human intervention.
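
To make the division of labor concrete, here is a hypothetical sketch (not our actual message definitions) of a high-level task and the kinds of outcomes a robot might report back to ground control.

    // Hypothetical message types for illustration; the real task and status
    // formats used by our system are not shown here.
    public class DriveToTask {
        public enum Outcome { COMPLETED, BLOCKED_BY_OBSTACLE, OBJECT_OF_INTEREST_FOUND }

        public final int robotId;
        public final double goalX, goalY;   // goal position in the global frame, meters

        public DriveToTask(int robotId, double goalX, double goalY) {
            this.robotId = robotId;
            this.goalX = goalX;
            this.goalY = goalY;
        }
    }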

Mapping

Getting the robots to agree on a common coordinate system is one of the central challenges of the competition. In our system, each robot maintains a map in its own private coordinate system. This map is constructed using a combination of odometry, IMU data, and 3D laser scan matching.
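
In 2D, maintaining that private frame amounts to composing each new relative motion estimate (from odometry, the IMU, or scan matching) onto the previous pose. A minimal sketch of that composition:

    // Sketch: accumulating relative motion estimates into a pose (x, y, theta)
    // expressed in the robot's own private map frame.
    public class LocalFrame {
        double x, y, theta;

        /** Compose the current pose with a motion (dx, dy, dtheta) measured in the robot frame. */
        public void apply(double dx, double dy, double dtheta) {
            x += Math.cos(theta) * dx - Math.sin(theta) * dy;
            y += Math.sin(theta) * dx + Math.cos(theta) * dy;
            theta = Math.atan2(Math.sin(theta + dtheta), Math.cos(theta + dtheta)); // wrap to [-pi, pi]
        }
    }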

Robots share information about their coordinate systems in two ways. The first is by observing one another: 2D barcodes mounted on each robot allow robots to recognize each other and thus register each other's coordinate systems. This system is based on the AprilTag visual fiducial system, which our lab has made open source.
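
A simplified 2D sketch of the bookkeeping behind such a registration (our actual implementation works with full poses and is not shown here): if robot A, at pose pA in its own map, observes robot B at relative pose rel, and B reports its pose pB in its own map, then composing pA with rel and the inverse of pB yields the transform taking B's map coordinates into A's map.

    // 2D sketch only; poses and transforms are {x, y, theta}.
    public class FrameRegistration {
        static double[] compose(double[] a, double[] b) {
            double c = Math.cos(a[2]), s = Math.sin(a[2]);
            return new double[] { a[0] + c * b[0] - s * b[1],
                                  a[1] + s * b[0] + c * b[1],
                                  a[2] + b[2] };
        }

        static double[] invert(double[] t) {
            double c = Math.cos(t[2]), s = Math.sin(t[2]);
            return new double[] { -c * t[0] - s * t[1],
                                   s * t[0] - c * t[1],
                                  -t[2] };
        }

        /** Transform mapping robot B's map frame into robot A's map frame. */
        static double[] mapBtoMapA(double[] poseAinA, double[] bSeenFromA, double[] poseBinB) {
            return compose(compose(poseAinA, bSeenFromA), invert(poseBinB));
        }
    }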

Robots can also register coordinate frames by matching 3D scans from other robots. This method can be much more accurate, but requires significantly more radio bandwidth and produces outliers that must be rejected. We'll be writing more about how this system works at the end of the competition.