Want all the videos? Download them using this torrent. (Videos are .ogv, 241 MB total). Alternatively, you can watch our videos on YouTube.
Site visit footage. This footage was taken during our site visit and shows each of the three major tasks: outdoor OOI detection, pedestrian tracking, and indoor OOI neutralization. Direct download:
(ogv)(m4v)
Sensor visualization at beginning of phase 1. The camera and LIDAR data are plotted at the beginning of the site visit. The camera is almost always in motion, looking for OOIs or other robots. The other robots are also clearly visible in the LIDAR data. Direct download:
(ogv)(mp4)
Ground surface estimation from LIDAR data. The first step in interpreting sensor data is determining where the ground is; a simple ground plane model isn't expressive enough to support robust obstacle detection. We use a non-parametric ground surface estimate derived from 3D LIDAR data. Direct download:
(ogv)(avi)
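The non-parametric idea above can be illustrated with a minimal sketch: bin LIDAR returns into an XY grid and take the lowest return per cell as the local ground height, then flag points well above that estimate as obstacles. The function names, grid resolution, and the per-cell-minimum rule are assumptions for illustration, not the team's actual estimator.

```python
import math

def estimate_ground(points, cell=0.5):
    """Non-parametric ground surface: minimum return height per XY grid cell.

    points: iterable of (x, y, z) LIDAR returns in the robot frame.
    Returns {(ix, iy): ground_z}.
    """
    ground = {}
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        if key not in ground or z < ground[key]:
            ground[key] = z
    return ground

def is_obstacle(point, ground, cell=0.5, clearance=0.2):
    """Flag returns that sit well above the local ground estimate."""
    x, y, z = point
    key = (math.floor(x / cell), math.floor(y / cell))
    return key in ground and (z - ground[key]) > clearance
```

Because each cell keeps its own height, the estimate follows sloped or uneven terrain where a single global plane fit would misclassify points.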
Inertial measurement unit auto-calibration. Our IMU corrects our system's attitude estimate by measuring the observed acceleration of the robot and comparing it to the force of gravity. Direct download:
(ogv)(avi)
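The gravity-comparison idea can be sketched briefly: when the robot is not accelerating, the accelerometer measures only gravity, so roll and pitch can be recovered from the measured vector and used to correct the drifting gyro estimate. This is a generic complementary-filter sketch under that assumption, not the team's actual calibration code.

```python
import math

def attitude_from_gravity(ax, ay, az):
    """Roll and pitch (radians) from a measured specific-force vector (m/s^2).

    Valid when the robot is not accelerating, so the accelerometer
    observes gravity alone.
    """
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def complementary_update(gyro_angle, accel_angle, alpha=0.98):
    """Blend the drifting gyro integral with the drift-free gravity estimate."""
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

A high `alpha` trusts the smooth gyro signal over short timescales while letting the gravity measurement slowly pull out accumulated drift.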
Building a map of the play field. Each robot maintains a map of the world according to its own sensors. Our maps are high-quality due to laser scan matching and fast non-linear optimization methods. Direct download:
(ogv)(avi)
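The core of scan matching is scoring candidate poses by how well the transformed scan overlaps the map. As a rough sketch, the snippet below scores a scan against a set of occupied grid cells and searches a small neighborhood of the odometry guess; the entry above says the real system uses fast non-linear optimization rather than this brute-force search, and all names and parameters here are illustrative assumptions.

```python
import math

def score_pose(scan, occupied, pose, cell=0.1):
    """Count scan endpoints that, transformed by pose=(x, y, theta),
    land in occupied map cells."""
    x0, y0, th = pose
    c, s = math.cos(th), math.sin(th)
    hits = 0
    for px, py in scan:
        wx = x0 + c * px - s * py
        wy = y0 + s * px + c * py
        if (round(wx / cell), round(wy / cell)) in occupied:
            hits += 1
    return hits

def match_scan(scan, occupied, guess, step=0.1, astep=0.05):
    """One refinement pass: try small perturbations of the odometry guess
    and keep the best-scoring pose."""
    best, best_score = guess, score_pose(scan, occupied, guess)
    for dx in (-step, 0.0, step):
        for dy in (-step, 0.0, step):
            for dth in (-astep, 0.0, astep):
                cand = (guess[0] + dx, guess[1] + dy, guess[2] + dth)
                sc = score_pose(scan, occupied, cand)
                if sc > best_score:
                    best, best_score = cand, sc
    return best
```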
Tag-based robot tracking. In order for robots to coordinate with each other, they must understand each other's coordinate systems. Using the 2D barcodes on each robot, robots can recognize each other and thus align their coordinate systems. Direct download:
(ogv)(avi)
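The alignment step reduces to rigid-transform bookkeeping: if robot A observes B's tag at some pose in A's map frame while B believes it is at another pose in its own map frame, composing one with the inverse of the other gives the transform between the two map frames. Here is a minimal SE(2) sketch of that computation; the helper names are assumptions for illustration.

```python
import math

def compose(a, b):
    """SE(2) composition: pose b expressed through frame a. Poses are (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by, ay + s * bx + c * by, ath + bth)

def invert(p):
    """SE(2) inverse: (-R^T t, -theta)."""
    x, y, th = p
    c, s = math.cos(th), math.sin(th)
    return (-c * x - s * y, s * x - c * y, -th)

def align_frames(tag_in_A_map, B_in_B_map):
    """Transform taking B's map frame into A's map frame, given that A observed
    B's tag at tag_in_A_map while B believed its pose was B_in_B_map."""
    return compose(tag_in_A_map, invert(B_in_B_map))
```

Once this transform is known, anything B reports in its own map can be re-expressed in A's map by a single `compose`.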
Robots autonomously registering their coordinate systems. This video superimposes the maps from each robot at the very beginning of the mission: their coordinate systems start out arbitrarily aligned. However, the robots soon begin recognizing each other, which causes their coordinate systems to snap into place. This is all done without human intervention. Direct download:
(ogv)(mp4)
Robots exploring during task 1. A robot executes an autonomous exploration task in the phase 1 area. Along the way, it uses LIDAR data to identify obstacles and autonomously plans paths around those obstacles. It is also watching for OOIs, and will alert the human controllers if it finds one. Direct download:
(ogv)(mp4)
Panorama-aided situational awareness. It is often difficult for a human to understand the environment around a robot from the camera data alone. We project this data into a 3D space, making it easier for a human to visualize the robot's surroundings. As a bonus, our method makes better use of radio bandwidth by eliminating the need to stream nearly identical images. Direct download:
(ogv)(avi)
Pedestrian tracking. Our robots track people using a combination of LIDAR and camera data. The black circles indicate pedestrian tracks. Direct download:
(ogv)(avi)
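Maintaining tracks like those shown requires associating each new frame's detections with the existing tracks. A minimal greedy nearest-neighbor association sketch, with a distance gate to reject implausible matches, might look like the following; the real tracker fuses LIDAR and camera cues and is certainly more sophisticated.

```python
def associate(tracks, detections, gate=1.0):
    """Greedy nearest-neighbor data association.

    tracks, detections: lists of (x, y) positions.
    Returns a list of (track_index, detection_index) pairs whose
    distance is within the gate; unmatched detections can seed new tracks.
    """
    pairs = []
    used = set()
    for ti, (tx, ty) in enumerate(tracks):
        best, best_d = None, gate
        for di, (dx, dy) in enumerate(detections):
            if di in used:
                continue
            d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = di, d
        if best is not None:
            pairs.append((ti, best))
            used.add(best)
    return pairs
```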
3D point cloud segmentation. Our LIDAR and camera data can be used to segment objects in the world, which is the basis for robust object detection. Direct download:
(ogv)(wmv)
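A common way to segment a point cloud (once ground points are removed) is Euclidean clustering: flood-fill groups of points that lie within a distance tolerance of each other. The sketch below is a deliberately simple O(N²) version under that assumption; production systems use a k-d tree for the neighbor queries, and this is not presented as the team's exact method.

```python
def euclidean_clusters(points, tol=0.3, min_size=2):
    """Segment a point cloud into clusters by flood-fill over a distance threshold.

    points: list of (x, y, z). Returns a list of clusters, each a sorted
    list of point indices; clusters smaller than min_size are discarded.
    """
    unvisited = set(range(len(points)))
    clusters = []

    def near(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= tol * tol

    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            neighbors = [j for j in unvisited if near(i, j)]
            for j in neighbors:
                unvisited.discard(j)
            cluster.extend(neighbors)
            frontier.extend(neighbors)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters
```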
Autonomous detection and neutralization of an OOI. The robot was commanded to travel forward, but when it rounded a corner, it autonomously detected a dangerous OOI. The robot stopped automatically and illuminated the object with a laser pointer. Direct download:
(ogv)(wmv)
Autonomous path planning through a doorway. This video shows the real-time LIDAR data arriving, superimposed over the terrain costmap. Red regions indicate areas that the robot is not allowed to travel into. The robot proceeds to a doorway and autonomously passes through the opening. Direct download:
(ogv)(avi)
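Planning over a costmap with forbidden regions is a classic grid search. As a sketch, the A* planner below treats cells marked `None` as the "red" keep-out regions and adds finite cell costs to the path length; the grid connectivity, cost convention, and function name are assumptions for illustration, not the team's planner.

```python
import heapq

def plan_path(costmap, start, goal):
    """A* over a 2D grid costmap (4-connected).

    costmap[r][c] is an extra traversal cost, or None for forbidden cells.
    start, goal: (row, col). Returns the path as a list of cells, or None.
    """
    rows, cols = len(costmap), len(costmap[0])
    openq = [(0, start)]
    g = {start: 0}
    parent = {start: None}

    def h(cell):  # Manhattan-distance heuristic, admissible for unit steps
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    while openq:
        _, cur = heapq.heappop(openq)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cur[0] + dr, cur[1] + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if costmap[nr][nc] is None:  # forbidden ("red") region
                continue
            ng = g[cur] + 1 + costmap[nr][nc]
            if (nr, nc) not in g or ng < g[(nr, nc)]:
                g[(nr, nc)] = ng
                parent[(nr, nc)] = cur
                heapq.heappush(openq, (ng + h((nr, nc)), (nr, nc)))
    return None
```

In the doorway scenario, the wall cells would be `None` and the opening the only finite-cost gap, so the planner funnels the path through it.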
Rainy-day backup footage. This video shows the performance of our system on each of the three major tasks several days before the site visit. Direct download:
(ogv)(mp4)