Camera suite


The APRIL camera calibration suite is now available in the master branch of the APRIL Robotics Toolkit. This suite computes accurate camera calibrations whose results are more repeatable than popular alternatives like OpenCV (as determined by human trials). Calibration uses a 2D mosaic of AprilTags, which removes the need to observe the entire calibration target in every image. Interactive single-camera calibration via AprilCal suits novices and experts alike, providing target position suggestions to ensure highly accurate calibrations. Flexible multi-camera calibration is also supported, without interactive suggestions.

Paper and Citations

The AprilCal paper, AprilCal: Assisted and repeatable camera calibration, will appear at IROS 2013. Please see this link for the paper and citation.

Video overview

A video explanation of AprilCal is available for new users. This video is available in three resolutions: 1080p (83M), Large (42M), and Mobile (18M).

Still image from the AprilCal overview video. Use the links above to view the video.

Calibration targets

We recommend mounting these targets on foam poster board. This can usually be done cheaply and quickly (about $5/sq ft), for example at FedEx Office (formerly Kinko's). If you can find the email address for your local FedEx Office, you can use the following as a template to order by email:

Hi, I would like to print ## copies of the
8.5x11in-tag-mosaic-1.5in.pdf on foam-mounted matte paper. (I was
quoted $5/sq ft for the foam mounting, plus printing.)  The file is a
72dpi raster; it's important that if it is resized (e.g. to 300dpi),
no interpolation/resampling is done, to ensure a crisp transition
from black to white on the borders.

Please let me know when I can get them, and/or contact me if you have
questions. Also, I won't need to see a proof, assuming the final
output preserves crisp transitions from black to white.

Your Name
Cell: +1 (555) 555-5555
Attachments: 8.5x11in-tag-mosaic-1.5in.pdf

Camera drivers

Connecting to your camera can be done in one of two ways: 1) use a camera driver included in the APRIL software stack, or 2) use your own camera driver and send imagery over TCP.

APRIL JCam basics (hardware camera library)

The APRIL toolkit uses april.jcam to interact with camera hardware. Each camera type implements the abstract class ImageSource. Drivers written in C are used via JNI bindings and the ImageSourceNative implementation of ImageSource. Additional drivers for your application can be added using the C image_source API, or conveniently in Java using ImageSourceReflect, which allows you to host your Java ImageSource in another source tree and specify the class name for reflection.

Examples of supported camera drivers/cameras include:

* v4l2     - Video4Linux version 2. This includes most webcams.
* dc1394   - IEEE 1394 using libdc1394. Only Format7 is supported.
* pgusb    - A Point Grey USB driver that uses libusb directly, avoiding libdc1394.
* file     - A specific file (e.g. a png)
* dir      - A directory of files
* islog    - ImageSourceLog format. A raw log of the byte buffer returned by
             the camera
* islog-lcm - An ISLog with image pointers (byte offsets) published over LCM as
             the log is written, and received over LCM for playback control.

A camera URL typically has the following format:

   driver://location?feature0=value0&feature1=value1

Quotes are needed if specifying more than one property; otherwise bash will interpret the ampersand instead of passing it through. Property order matters (e.g. framerate can affect the valid shutter range). april.jcam.JCamView is a nice interactive tool for discovering feature names. If you use the "print URL" button in JCamView, note that it prints all features, which is not what you want. If a feature is disabled (e.g. exposure-mode=[0|1] or exposure-enabled=0), the corresponding value (e.g. exposure=3) should be omitted. Otherwise, some drivers will enable the feature when you attempt to set the value.
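The ordering rule above can be illustrated with a small sketch. This helper is hypothetical (not part of april.jcam); it only shows why an order-preserving container is appropriate when assembling a camera URL:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CameraUrlBuilder {
    // Build a camera URL of the form driver://location?k1=v1&k2=v2.
    // A LinkedHashMap preserves insertion order, which matters because
    // some features (e.g. framerate) constrain the valid range of others.
    public static String build(String driver, String location,
                               LinkedHashMap<String, String> features) {
        StringBuilder sb = new StringBuilder(driver).append("://").append(location);
        char sep = '?';
        for (Map.Entry<String, String> e : features.entrySet()) {
            sb.append(sep).append(e.getKey()).append('=').append(e.getValue());
            sep = '&';
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        LinkedHashMap<String, String> f = new LinkedHashMap<>();
        f.put("fidx", "0");          // framerate first: it can affect the shutter range
        f.put("shutter-mode", "2");  // enable manual shutter before setting its value
        f.put("shutter", "0.010");
        System.out.println(build("pgusb", "", f));
    }
}
```

Remember to quote the resulting URL on the command line, since bash would otherwise interpret the ampersands.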

One known bug in jcam is that features set with the camera URL are set before the camera is started. However, the bus is allocated on starting the camera, and the bus allocation determines the maximum framerate. This doesn't affect most users and should be fixed eventually.

Custom drivers with a TCP connection

For easy integration with custom video drivers, the APRIL software stack includes an "ImageSource" that listens for images sent over TCP in a simple format.

If your Linux machine includes a standard webcam, the following commands should test this interface. Run them concurrently in separate terminals.

   java april.jcam.JCamExample tcp-server://7001
   java april.jcam.JCamTCPExample v4l2:///dev/video0 tcp://localhost:7001

In this example, JCamExample will listen for images through an ImageSourceTCP object on port 7001. The JCamTCPExample will open the specified camera (using the v4l2 driver) and send the images over the TCP stream to localhost over port 7001.

Example code for C is available in File:Tcp image streamer.c. This file does not connect to a camera (that's left to the user), but will create an image with random pixel values and serialize it into the appropriate format. Compile and run with:

   gcc -o tcpstream -std=gnu99 tcp_image_streamer.c -lc
   ./tcpstream localhost 7001

The C example above generates and sends a grayscale image (8 bits per pixel). The appropriate format string for this is "GRAY8". Other supported format strings can be found in april/java/src/april/jcam/. Most users will probably use one of the following formats: "GRAY8", "RGB", "BAYER_RGGB", or "BAYER_GBRG".
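The exact wire format is defined by the C example and the ImageSourceTCP implementation; the sketch below (a hypothetical helper, not library code) only shows packing a "GRAY8" pixel buffer, i.e. one byte per pixel in row-major order:

```java
public class Gray8Buffer {
    // Pack a grayscale image (one byte per pixel, row-major) for streaming
    // with the "GRAY8" format string. Values are clamped to [0, 255].
    public static byte[] pack(int width, int height, int[][] pixels) {
        byte[] buf = new byte[width * height];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                int v = Math.max(0, Math.min(255, pixels[y][x]));
                buf[y * width + x] = (byte) v;
            }
        return buf;
    }

    public static void main(String[] args) {
        int[][] img = { { 0, 128 }, { 255, 300 } }; // 300 clamps to 255
        byte[] buf = pack(2, 2, img);
        System.out.println(buf.length); // 4 bytes for a 2x2 GRAY8 image
    }
}
```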


AprilCal

AprilCal is an interactive camera calibrator that suggests calibration target positions to the user to minimize the worst-case model error, measured in pixels. We recommend using the 1.5" tag calibration target with AprilCal.

The AprilCal interface

Launch AprilCal with a command like the following. The camera used here is a Point Grey USB camera using the pgusb driver. Most of these settings can actually be omitted, as they are application defaults (launch with -h to see the defaults).

   java -u "pgusb://?fidx=2&shutter-mode=2&shutter=0.010&gain-mode=2&gain=10" -c -p kclength=4 -m 0.0381

To use AprilCal, show the calibration target to the camera and follow the on-screen directions. Images are left-right mirrored for ease-of-use and shown in grayscale to highlight the tag suggestions (shown in bright colors). The shape of the rectified image is shown as an outline in the top-left. At any point, you can click within this outline to switch to a live-rectified view for inspecting your calibration.

AprilCal will display a suggested mosaic position as squares with colored outlines. Live AprilTag detections are shown as filled, colored squares. Each AprilTag corresponds to a unique ID, and each tag is persistently assigned a color based on this ID. AprilCal instructs the user to place the mosaic in a specific position by showing the colored outlines and expects the user to find the target position that aligns these colors. Note that colors are repeated, but they are repeated in a semi-random ordering that should be easy to decipher. Further, if the mosaic is rotated by roughly 90 or more degrees, text will appear on the display to point this out to the user.

Sometimes the first suggestion is very hard or impossible to achieve, as the camera calibration is not well known at this point. Get the mosaic reasonably close and hit the spacebar to force an image capture.

AprilCal includes a simple terminal emulator. Mouse-over the OpenGL canvas and type ":<Tab>" to see a list of commands. These include options like popping-up a plot of the mosaic extrinsics, saving the calibration, and performing model selection.

The save directory is printed to the terminal and is typically in /tmp/cameraCalibration/. The files with the calibration parameters are human-readable and typically named calibration.config or, if using model selection, something like AngularPolynomialCalibration,kclength=4.config.

A note on technique

AprilCal can present a learning curve for some users. It relies on strong visual-spatial reasoning to align the target to the suggestion in a game-like fashion. Beta users of AprilCal were timed and seemed to take around 8 minutes for the first calibration, and 2.5 minutes after that. The main difference between the runs was that they had learned what visual cues to align and focused on them.

Here are two viable techniques:

  * Align a small section of the mosaic first, before trying to bring in the far corners of the target.
  * Align one edge first, then move the target closer to get the right scale and height, before finally rotating the physical target about the matched edge until it aligns.

Here is a video of AprilCal in action using the above technique.

Still image from the AprilCal technique video. Use the link above to view the video.


MultiCameraCalibrator

The MultiCameraCalibrator is a multi-camera calibration tool for experts who do not need mosaic position suggestions. All users are strongly encouraged to use AprilCal a few times before using the MultiCameraCalibrator; the quality of your input images has a big impact on calibration accuracy.

A command like the following will instantiate multiple cameras for calibration:

   java -u "v4l2:///dev/video0?fidx=4;pgusb://?fidx=2&shutter-mode=2&shutter=0.005&gain-mode=2&gain=3"

Initially, every camera is rendered in its own coordinate system. In the first image below, you can see a black and dark-gray background, each showing a colored axis to denote the camera position.

The MultiCameraCalibrator interface

Eventually, the coordinate systems will merge after observing images that jointly constrain the calibration and after all constrained cameras are initialized. After adding six images, the mosaics are shown as below. The distortion function is plotted for each camera in the lower-left corner. The mean reprojection error (MRE) and mean-squared reprojection error (MSE) are shown in the top-right.

The calibration after 6 images
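The two statistics shown in the GUI can be computed from the per-corner reprojection errors. A minimal sketch, assuming the errors are Euclidean pixel distances (class and method names are hypothetical, not the toolkit's API):

```java
public class ReprojectionStats {
    // Mean reprojection error (MRE): average Euclidean pixel distance
    // between observed and predicted tag corners.
    public static double mre(double[] errors) {
        double sum = 0;
        for (double e : errors) sum += e;
        return sum / errors.length;
    }

    // Mean-squared reprojection error (MSE): average of the squared distances.
    // This penalizes outlier corners more heavily than the MRE.
    public static double mse(double[] errors) {
        double sum = 0;
        for (double e : errors) sum += e * e;
        return sum / errors.length;
    }

    public static void main(String[] args) {
        double[] errs = { 0.5, 1.5 };
        System.out.println(mre(errs)); // (0.5 + 1.5) / 2 = 1.0
        System.out.println(mse(errs)); // (0.25 + 2.25) / 2 = 1.25
    }
}
```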

This GUI also lets you perform model selection with the stock set of models, as well as save results. The save directory is printed to the terminal and is typically in /tmp/cameraCalibration/. The files with the calibration parameters are human-readable and typically named calibration.config or, if using model selection, something like AngularPolynomialCalibration,kclength=4.config.

Note: Images are captured from multiple cameras in an ad-hoc fashion. No time synchronization is done. For high-quality results, take images of stationary targets to ensure that all images correspond to the same mosaic position.

Batch Calibration

Batch calibration can be performed on a directory of images, including the images saved from a previous calibration with either AprilCal or the MultiCameraCalibrator. Simply cd into the image directory in question and run the following command

   /usr/bin/java -Xmx2048M -Xms2048M -ea -server -u "dir://`pwd`?loop=false&fps=1" -a

This command raises the Java memory allocation limits, which is necessary when calibrating with large numbers of high-resolution images. The dir ImageSource is used with pwd to determine the current directory; it is also possible to fully specify the directory (relative paths may not be fully supported at present). The loop=false and -a options ensure that all images in the directory are used exactly once. The fps option is strictly optional, but may be desirable because it allows inspection of the tag detections.


Pixel coordinates

For ease of use in most image pipelines, integer pixel coordinates are taken to be the center of the pixel. When drawing a texture, the texture should span from (-0.5, -0.5) to (w-0.5, h-0.5) so that points corresponding to integer image indices are properly centered in each pixel. This convention is followed in the rasterization classes (NearestNeighborRasterizer and BilinearRasterizer).

Note: Make sure to turn image interpolation off when verifying plot alignment (e.g. NO_MAG_FILTER in april.vis.VisTexture)

Pixel coordinate convention
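Under this convention, a bilinear lookup at integer coordinates must return the stored pixel value unchanged. The sketch below is a simplified stand-in for the interpolation inside BilinearRasterizer (not the library code itself):

```java
public class PixelCenterSample {
    // Bilinear sample of a grayscale image under the pixel-center convention:
    // integer coordinates (x, y) land exactly on pixel centers, so sampling
    // there returns the stored value. Coordinates are clamped at the border.
    public static double bilinear(double[][] img, double x, double y) {
        int h = img.length, w = img[0].length;
        int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
        int x1 = Math.min(x0 + 1, w - 1), y1 = Math.min(y0 + 1, h - 1);
        x0 = Math.max(0, Math.min(x0, w - 1));
        y0 = Math.max(0, Math.min(y0, h - 1));
        double fx = x - Math.floor(x), fy = y - Math.floor(y);
        double top = (1 - fx) * img[y0][x0] + fx * img[y0][x1];
        double bot = (1 - fx) * img[y1][x0] + fx * img[y1][x1];
        return (1 - fy) * top + fy * bot;
    }

    public static void main(String[] args) {
        double[][] img = { { 0, 10 }, { 20, 30 } };
        System.out.println(bilinear(img, 1.0, 0.0)); // exactly on a pixel center: 10.0
        System.out.println(bilinear(img, 0.5, 0.5)); // midpoint of four centers: 15.0
    }
}
```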

Recommended camera model

There are three primary camera models in the library. This may change, but at the time of writing, one model is sufficient for the lenses tested in the APRIL lab (from webcams to fisheye lenses): the AngularPolynomialCalibration.

The AngularPolynomialCalibration is reminiscent of both the standard radial distortion formulation and the lens model of Kannala and Brandt. In the Caltech model, points are projected using pinhole projection (e.g. x_px is proportional to X_meters / Z_meters), then distorted using a polynomial function of the radius after pinhole projection. The reduced version of the Kannala-Brandt model used in this work instead uses a polynomial function of one of the angles in spherical coordinates to map into the distorted image. This supports points behind the image plane (z < 0) and avoids the explosion of tan(theta) = X/Z as theta approaches pi/2 radians. In our tests, this Kannala-Brandt-like model performs as well as or better than most radially-formulated models and seems to yield a more reasonable extrapolation outside the constrained part of the image (in the corners, where no calibration constraints were made).

The mathematical representation of the AngularPolynomialCalibration, inspired by the Kannala-Brandt model.
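The referenced figure is not reproduced here. As a sketch of the Kannala-Brandt-style formulation the text describes (symbols are assumptions, not necessarily the exact APRIL parameterization):

```latex
% For a 3D point (X, Y, Z), take the spherical angles; \theta stays finite
% even for points behind the image plane (Z < 0), unlike \tan\theta = X/Z:
\theta = \arccos\!\left(\frac{Z}{\sqrt{X^2 + Y^2 + Z^2}}\right), \qquad
\phi = \operatorname{atan2}(Y, X)
% The distorted radius is a polynomial in \theta:
r(\theta) = \theta \left(1 + k_1\theta^2 + k_2\theta^4 + \cdots\right)
% Pinhole-style intrinsics then map to pixel coordinates:
u = f_x\, r(\theta)\cos\phi + c_x, \qquad
v = f_y\, r(\theta)\sin\phi + c_y
```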

For those who need the standard radial[-tangential] models used by Caltech and similar systems, see the RadialPolynomialCalibration or the CaltechCalibration, respectively. Unlike the RadialPolynomialCalibration, the CaltechCalibration includes skew and tangential distortion. Note that the Caltech model combines its distortion coefficients into one vector, which we do *not* do, for clarity and expandability. This is documented in

Of course, we also include a distortion-free model, DistortionFreeCalibration.

CalibrationInitializers are companion classes to Calibrations (e.g. AngularPolynomialCalibration and AngularPolynomialInitializer). The initializers contain factory methods to instantiate an instance of the calibration from either observations (calibration images) or a list of parameters (e.g. for copying). Initializers are typically short classes (<100 lines) and exist because some calibrations may be sensitive to initialization.

All models support, to varying degrees, a variable number of distortion parameters. Caltech models in practice use 2-3 distortion coefficients; 4 is recommended with the AngularPolynomialCalibration. These are specified in a key-value-pair string, comma-separated if multiple parameters are required. The parameters are class-specific, so check the *initializer* to see which substrings are extracted from the parameterString. For most models, only one parameter is necessary and the default is sufficient ("kclength=4").
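Parsing such a parameterString can be sketched as follows. This is a hypothetical helper for illustration; in the toolkit, the actual extraction lives in each initializer:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ParameterString {
    // Parse a comma-separated key=value string such as "kclength=4" into a
    // map. Each initializer would extract only the keys it understands.
    public static Map<String, String> parse(String s) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String pair : s.split(",")) {
            String[] kv = pair.trim().split("=", 2);
            if (kv.length == 2)
                out.put(kv[0].trim(), kv[1].trim());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> p = parse("kclength=4");
        System.out.println(p.get("kclength")); // 4
    }
}
```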

Our main calibration tools support model selection as a post-processing step, essentially allowing you to take the images you have collected and re-calibrate with a number of pre-specified models. The tools that support this include a variable number of distortion parameters for each model. Please note that making a responsible model choice requires more images than are typically collected for calibration: a user can achieve a good calibration with 10 images, but more should be taken to make a confident decision about model selection, due to the risk of over-fitting.

Model Selection

If use of an alternate camera model is desired, the recommended approach is to calibrate with AprilCal and the AngularPolynomialCalibration (the default, which typically yields the best results), then run model selection after calibrating by selecting the GUI and typing :model-selection. If using the MultiCameraCalibrator GUI, click the appropriate button to run model selection. In either case, a series of batch optimizations will be performed, each with a different lens model or number of distortion parameters. Mean reprojection error, mean-squared reprojection error, and max reprojection error statistics will be printed for each model so the user can make an informed decision and pick the appropriate model. Configuration files for each model will be saved alongside the images in the directory specified, or in /tmp/cameraCalibration/imageSetX if unspecified.

Compatibility with OpenCV

Two calibration models are compatible with Caltech's camera calibration toolbox and the OpenCV calibrator:

  1. CaltechCalibration (and CaltechInitializer)
  2. RadialPolynomialCalibration (and RadialPolynomialInitializer)

The difference between these models is simply that the RadialPolynomialCalibration does not contain skew or tangential distortion terms; otherwise, the radial distortion model is the same between the RadialPolynomialCalibration and CaltechCalibration models.

When converting one of these calibrations to a format compatible with OpenCV, note the order of the distortion terms. Caltech and OpenCV order the terms as follows: (k1, k2, p1, p2, k3). AprilCal supports a variable number of polynomial terms for the radial distortion models, including the CaltechCalibration. For simplicity, the radial and tangential distortion terms are stored in separate arrays.

When using a RadialPolynomialCalibration model with three distortion terms (kclength=3), use the following distortion array for OpenCV: (kc[0], kc[1], 0, 0, kc[2]).

When using a CaltechCalibration model with three distortion terms (kclength=3), use the following distortion array for OpenCV: (kc[0], kc[1], lc[0], lc[1], kc[2]).

If using fewer than three distortion parameters, set unknown parameters to zero.
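The ordering rules above can be captured in a small conversion helper. This is a hypothetical sketch, not part of the toolkit; it assumes the radial terms are stored in kc and the tangential terms in lc, as described above:

```java
import java.util.Arrays;

public class OpenCVDistortion {
    // Convert separate radial (kc) and tangential (lc) arrays into OpenCV's
    // (k1, k2, p1, p2, k3) ordering. Missing terms are zero-filled; at most
    // three radial and two tangential terms fit this five-element layout.
    public static double[] toOpenCV(double[] kc, double[] lc) {
        double[] out = new double[5]; // zero-initialized by Java
        if (kc.length > 0) out[0] = kc[0];               // k1
        if (kc.length > 1) out[1] = kc[1];               // k2
        if (lc != null && lc.length > 0) out[2] = lc[0]; // p1
        if (lc != null && lc.length > 1) out[3] = lc[1]; // p2
        if (kc.length > 2) out[4] = kc[2];               // k3
        return out;
    }

    public static void main(String[] args) {
        // RadialPolynomialCalibration, kclength=3: no tangential terms
        System.out.println(Arrays.toString(toOpenCV(new double[] { 0.1, -0.2, 0.05 }, null)));
        // CaltechCalibration, kclength=3 plus two tangential terms
        System.out.println(Arrays.toString(
            toOpenCV(new double[] { 0.1, -0.2, 0.05 }, new double[] { 0.001, -0.002 })));
    }
}
```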

Obtaining the parameters for these models is as simple as running model selection as a post-processing step, and picking the appropriate file.

Image rectification

Each camera model implements the View interface, which specifies methods to convert pixel coordinates (distorted coordinates, if appropriate for the model) to 3D rays. Classes conforming to the radial distortion model typically return rays with z == 1, while the AngularPolynomialCalibration returns rays on the unit sphere (mag == 1).

Resampling images uses a *rasterizer* class, such as the BilinearRasterizer. This class takes a pair of models and produces an image for the output model using the following conversion:

   pixel for output -> 3D ray -> pixel for input

The NearestNeighborRasterizer simply looks up the appropriate pixel index, while the bilinear variant performs bilinear interpolation to reduce image artifacts.

Images can be freely rectified or synthetically distorted using this class. They can also be rotated about the focal center, as is appropriate in stereo rectification.
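The output-pixel -> ray -> input-pixel chain can be sketched with minimal stand-ins for the View interface. Everything here is an assumption for illustration (an ideal pinhole with rays normalized to z == 1); the library's actual View implementations and rasterizers are more general:

```java
public class RasterizerSketch {
    // Minimal stand-in for the library's View interface: a view maps
    // pixel coordinates to 3D rays and back.
    interface View {
        double[] pixelToRay(double u, double v); // returns (x, y, 1)
        double[] rayToPixel(double[] ray);       // expects ray[2] != 0
    }

    // An ideal pinhole with focal length f and principal point (cx, cy).
    static View pinhole(double f, double cx, double cy) {
        return new View() {
            public double[] pixelToRay(double u, double v) {
                return new double[] { (u - cx) / f, (v - cy) / f, 1.0 };
            }
            public double[] rayToPixel(double[] r) {
                return new double[] { f * r[0] / r[2] + cx, f * r[1] / r[2] + cy };
            }
        };
    }

    // Nearest-neighbor resampling: for each output pixel, trace a ray
    // through the output view and look up the corresponding input pixel.
    static int[][] resample(int[][] in, View input, View output, int w, int h) {
        int[][] out = new int[h][w];
        for (int v = 0; v < h; v++)
            for (int u = 0; u < w; u++) {
                double[] p = input.rayToPixel(output.pixelToRay(u, v));
                int iu = (int) Math.round(p[0]), iv = (int) Math.round(p[1]);
                if (iu >= 0 && iu < in[0].length && iv >= 0 && iv < in.length)
                    out[v][u] = in[iv][iu];
            }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = { { 1, 2 }, { 3, 4 } };
        View view = pinhole(1.0, 0.5, 0.5);
        // Identical input and output views: the mapping is the identity.
        int[][] same = resample(img, view, view, 2, 2);
        System.out.println(same[1][0]); // 3
    }
}
```

With two different views (e.g. a distorted input and a distortion-free output), the same loop performs rectification.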

Rectifying images requires an implementation of the View interface that specifies the new camera intrinsics matrix and width/height of the new image. These can be created by hand (e.g. DistortionFreeCalibration), but typically users will use one of the following classes:

* MaxRectifiedView 
    Contains the whole rectified image as determined by iteratively rectifying
    the border of the distorted image to compute the appropriate bounds
* MaxInscribedRectifiedView 
    Like the MaxRectifiedView, but uses a simple heuristic to find the largest
    rectangle inside the rectified border. This should result in an image
    without undefined pixels
* MaxGrownInscribedRectifiedView 
    Like the MaxInscribedRectifiedView, except the region is computed with a
    simple region-growing algorithm. This is necessary for stereo rectified
    pairs where one camera was significantly rotated

Both the input and output view can be passed through a ScaledView if appropriate for your application. If the input view is a scaled view, the input image must have already been scaled appropriately (the dimensions will be checked with assertions). ScaledViews are best for advanced users -- you should be careful to check your model if you are, for example, making an image pyramid.


Source image


java -c calibration.config -s aprilCameraCalibration.camera0000 -r -i image0002.png

Max rectified view (shows entire image)


Here is an inscribed rectified view. Note that the image is essentially the MaxRectifiedView above, but cropped by an appropriate amount to remove the black padding that comes with the MaxRectifiedView. This is done using a heuristic that appears to work in single-camera cases. For multi-camera cases where one camera is rotated, like stereo rectification, use the MaxGrownInscribedRectifiedView.

java -c calibration.config -s aprilCameraCalibration.camera0000 -r -i image0002.png

Max inscribed rectified view (shows image without undefined areas)

Stereo rectification

Here is an illustrative example of stereo rectification. The red lines are included by the tool for human inspection and can be omitted with a command-line option.

java -c calibration.config -l camera0/image0002.png -r camera1/image0002.png