ArmLab

From EECS467

Overview

Your deliverables consist of:

  • A baseline level of performance in the challenge on 2/10
  • A lab report, due 2/10
  • A poster (in PDF format), due 2/10

Please note: this lab goes by really fast! Please waste no time in getting started.

Submitting

Your problem set should be emailed in the following format:

TO: eecs467-submit@april.eecs.umich.edu
SUBJECT: ArmLab: <uniqname1> <uniqname2> <uniqname3> ...

Your email should contain exactly three attachments.

  1. A PDF of your typeset write-up. We will not accept any other formats (including .doc). We encourage you to use LaTeX, since it will make typesetting math easy. (Your write-up should briefly restate the question being answered so that it is "self-contained".) The use of figures, screenshots, code snippets, etc., is strongly encouraged and is generally critical in obtaining full marks.
  2. A JPEG (at reasonable resolution, say 200kB) of your team collaboration form, stating the dates that your team met, and your signatures.
  3. A PDF of your poster. (Do NOT print a poster to submit to the staff.)

NB: Your email should not exceed 3 MB in size. If you cannot reduce your attachments below this size, our mail server may reject the message. Instead, post your file on your UMICH web space and email us a link AND the md5sum of the PDF; the md5sum will serve to timestamp your submission.

Deviations from this format will be penalized. You should receive a "moderation pending" message because you are not a member of the list; this indicates successful receipt of your email.

This is a team assignment. Your team must complete it together, collectively participating on each component. Please review the course policies for additional information.

Task 1. Kinematics and Inverse Kinematics

Objective: Understand how to compute the position of the end-effector given joint angles (kinematics) and how to compute joint angles in order to position the end-effector at a desired location (inverse kinematics). Build and use a 3D model of the arm to aid with debugging. Gain familiarity with modular software development and LCM.

Template code:

cd ~/eecs467/src/                   
git pull [class-remote-name]    # Replace [class-remote-name] with origin, olson, admin, etc. (whatever you named the class repo remote)
make clean all

In this lab, communication with the arm will occur via Lightweight Communications and Marshalling (LCM), a useful tool for interprocess communication. LCM allows you to send messages of predefined types to other processes using a publish/subscribe model. For example, in this project, an arm driver program will generate status messages on a channel called "ARM_STATUS" that will tell you the current state of the servos in the arm, including position, load, error state, etc. Likewise, you will publish commands to a different channel called "ARM_COMMAND" to order the arm to new positions. We've supplied some code to get you started, but follow the link above for more detailed documentation.

Likewise, in this lab, we will be working with matrices a lot. There is a simple matrix library which can be found in common/matd.[h/c]. You might find this useful.


The arm can be damaged by collisions with the table and/or itself. Implement a set of rules that prevent any commands from being transmitted to the arm driver that would cause a collision.

Q1.1. Describe your collision rules (both what type of collision is being tested for, e.g., end-effector-to-table, and how each rule was implemented).

Beginning with the template above (see rexarm_example.c), create a 3D kinematic model for the arm. You'll need to measure the lengths of the various parts of the arm. As sliders are moved, both the physical arm and the 3D model of the arm should move simultaneously. If a servo encounters an error state, turn the servo red. Note: before your arm ever moves, be sure to complete Q1.1. Do not send commands with max_torque > 0 until you are confident that your collision rules are correct!

Communicating with the Robot Arm: To communicate with the arm, first plug in the appropriate wires (power, servo signal cables, USB to USB port on your computer). Now, type in

ls /dev/ttyUSB*

The arm shows up as /dev/ttyUSB0 or /dev/ttyUSB1. To communicate and listen for commands over LCM, we need to have the rexarm_driver module running in the background (run this in a separate shell).

cd ~/eecs467/bin
./rexarm_driver -d /dev/ttyUSB0        # Or /dev/ttyUSB1, if appropriate

You can now subscribe to the arm status messages over LCM (see rexarm_example.c::status_handler() for an example). You can publish your own messages over LCM to set new servo target angles. Here's a code snippet to do that, which can also be seen in rexarm_example.c::command_loop().

dynamixel_command_list_t cmds;
cmds.len = NUM_SERVOS;
cmds.commands = malloc(sizeof(dynamixel_command_t)*NUM_SERVOS);
// Command all servos to position 0
for (int id = 0; id < NUM_SERVOS; id++) {
    cmds.commands[id].utime = utime_now();
    cmds.commands[id].position_radians = 0;
    cmds.commands[id].speed = 0.5;
    cmds.commands[id].max_torque = 0.0; // At 0, the arm remains limp, preventing you from damaging it.
}

dynamixel_command_list_t_publish(lcm, "ARM_COMMAND", &cmds);


Q1.2. Submit a picture of your GUI and arm and a table showing the lengths of the arm segments that you used.

Add an event handler to your GUI that listens for a mouse click. When this occurs, project the 3D ray onto the XY plane. Compute joint angles that position the arm and gripper as though it was about to pick up a ball centered on the point you clicked. To do this, you will need to implement your own vx_event_handler_t. This looks a little different in C than it does in C++, Java, or other object oriented languages. As can be seen in vx_event_handler.h, the event handler type actually just holds a variety of function pointers which the user must implement. Thus, to create your own event handler, you need to make an event handler object and point these function pointers to appropriate implementations. You can see an example of this process in default_event_handler.c::default_event_handler_create(...).

You may also notice a void* to something called impl. If you need to store additional state beyond what your event handler pointer can store, pack it into a custom struct and point the impl pointer at an instance of that struct. For example, if I wanted to store the x and y coordinates of my last mouse click for use in my event handler, I could do something like this:

typedef struct event_state event_state_t;
struct event_state {
    int x, y;
};

static int custom_mouse_event(vx_event_handler_t *vh, vx_layer_t *vl, vx_camera_pos_t *pos, vx_mouse_event_t *mouse)
{
    event_state_t *state = vh->impl;

    // Handle the event how you see fit

    // Store the last mouse click
    state->x = mouse->x;
    state->y = mouse->y;

    return 0; // Returning 0 says that you have consumed the event. If the event is not consumed (return 1), then it is passed down the chain to the other event handlers.
}

// Somewhere in your code...
event_state_t *event_state = malloc(sizeof(event_state_t));
vx_event_handler_t *my_event_handler = malloc(sizeof(vx_event_handler_t));
my_event_handler->dispatch_order = 0; // Lower number here means that you get higher priority in processing events.
my_event_handler->impl = event_state;
my_event_handler->mouse_event = custom_mouse_event;
// Set other functions ...

Q1.3. How did you compute your inverse kinematics? Include a diagram and provide equations/pseudo-code. We are particularly interested in how you resolved cases in which multiple solutions are possible, and how you handle cases in which no ideal "down-wrist" solution is possible. Also note that commanding the servos to travel between distant angular positions may result in the arm traveling through an undesirable range of motion; describe how you deal with this problem.

Consider a simple planar 2D badminton-playing robot with a servo at the origin, a rigid segment of length L1, a second servo, and another rigid segment of length L2.

[Figure: Badmintonrobot.png, the two-link badminton robot]

Q1.4. Suppose that the lower servo has a maximum angular velocity of V1 and the upper servo has a maximum angular velocity of V2. What is the maximum speed at which the head could be made to hit the shuttlecock? (You may assume that the servo can accelerate instantaneously, and you can pick the initial configuration.).


Q1.5 [*Bonus (not required)]. Assume that the servos have finite accelerations A1 and A2. Compute a motion plan that results in the head achieving maximum velocity at the moment the racket is fully extended upwards. You can pick the initial configuration.

Task 2. Blob Detection

Objective: Understand how to detect an object with known characteristics by using blob detection. Understand how to project from pixel coordinates into physical (e.g. arm-relative) coordinates.

You can implement a golf ball detector using blob detection: locate blobs in the image and keep the blobs that fit your object model (e.g., average color). For example, you might initialize your blob detector by clicking on a ball on screen and using the average color value of the nearby pixels as your goal average color (+/- some threshold).

Create a user interface that acquires images from the camera, displays them on screen, and allows a user to identify a golf ball in the current image (e.g. by "painting" golf ball pixels with your mouse) to initialize a blob detector. Use your initialized blob detector to find all of the golf balls currently in the image. (Hint: this would be an excellent place to use the union-find algorithm discussed in lecture.)
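The union-find hint can be sketched as follows: run union-find over the binary mask of "ball-colored" pixels so that each 4-connected blob ends up with a single root label. The color test itself (and any image I/O) is up to you.

```c
#include <stdlib.h>

// Union-find over pixel indices, for grouping adjacent mask pixels
// into 4-connected blobs.
static int uf_find(int *parent, int i)
{
    while (parent[i] != i) {
        parent[i] = parent[parent[i]];  // path halving
        i = parent[i];
    }
    return i;
}

static void uf_union(int *parent, int a, int b)
{
    parent[uf_find(parent, a)] = uf_find(parent, b);
}

// mask[y*w + x] is 1 for pixels matching the color model.
// Writes a root label per matching pixel into labels (or -1 for
// non-matching pixels) and returns the number of distinct blobs.
static int label_blobs(const int *mask, int w, int h, int *labels)
{
    int n = w * h;
    int *parent = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) parent[i] = i;

    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int i = y*w + x;
            if (!mask[i]) continue;
            if (x > 0 && mask[i-1]) uf_union(parent, i, i-1); // left
            if (y > 0 && mask[i-w]) uf_union(parent, i, i-w); // up
        }

    int nblobs = 0;
    for (int i = 0; i < n; i++) {
        if (!mask[i]) { labels[i] = -1; continue; }
        labels[i] = uf_find(parent, i);
        if (labels[i] == i) nblobs++;   // each root counted once
    }
    free(parent);
    return nblobs;
}
```

From the labels you can accumulate per-blob statistics (pixel count, bounding box, centroid) and then apply your object model to each blob.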

Q2.1. A blob detection system is only as good as your object model. If your model is too weak, you'll encounter false positives. If your model is too strong, you'll have false negatives, meaning you'll miss some balls. What are some blob characteristics you could use to more reliably model golf balls in the scene? Identify at least three such characteristics and use them in your blob detector. Why did you choose these characteristics? What failure cases did they address?


Q2.2. In the warmup lab, you implemented template matching. This is also an effective way to identify golf balls. Use your experience (and hopefully code!) from the warmup lab to make a golf ball template matcher. Compare the performance of your golf ball template matcher to your golf ball blob detector. Include screenshots of the detections from each for an identical scene. Discuss why you might choose one detector over the other.

Task 3. Putting everything together.

Objective: Integrate inverse kinematics from Task 1 with ball detection from Task 2 in order to create a ball-collecting robot. You can use whichever detector you like from Task 2. Understand the ways in which failures occur, and develop methods to improve the robustness of the system.

Your arm will be set up with a number of balls (more than 4, less than 20), and your goal is to pick them up and put them into the basket as quickly as possible. Suppose that you collect c balls in t seconds; your arm will score:

points(c,t) = \frac{c^2}{\frac{t}{30}+1}

No human intervention is allowed. Time ends when the program announces that it is finished. (Consequently, your user interface must *conspicuously* provide a "FINISHED" display. You are also encouraged to add your own timer, which will serve as a redundant backup to the staff.)
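The scoring function is worth internalizing when tuning your strategy (collecting more balls helps quadratically; taking longer only divides). A one-liner, useful for the redundant on-screen timer you are encouraged to add:

```c
// Score as defined above: points(c, t) = c^2 / (t/30 + 1),
// where c is balls collected and t is elapsed seconds.
static double points(int c, double t_seconds)
{
    return (double)(c * c) / (t_seconds / 30.0 + 1.0);
}
```

For example, 9 balls in 30 seconds scores 81/2 = 40.5 points.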

The first problem you must solve during integration is that your ball detector returns balls in PIXELS, which you need to convert to arm-relative coordinates (i.e., meters).

The "right" way to do this is by computing a homography. However, we will approximate a solution using linear regression. (we can get away with this because all of our balls are at roughly the same distance from the camera. We'll talk more about this in class.) We want to find a 3x3 transformation that maps pixel coordinates (px,py) into arm coordinates (ax, ay). We'll use 2D homogeneous coordinates; in short, we want to solve for the unknown parameters a, b, c, d, e, f:


\begin{bmatrix} a & b & c\\ d & e & f \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} p_x \\ p_y \\ 1 \end{bmatrix}
=
\begin{bmatrix} a_x \\ a_y \\ 1 \end{bmatrix}

Given four correspondences (click, in the image, on four points whose arm-relative positions you know), we develop eight equations relating a, b, c, d, e, f. Rewrite these equations so that the unknown variables are put in a column:


A
\begin{bmatrix} a \\ b \\ c \\ d \\ e \\ f \end{bmatrix}
=
b

We can then solve this Ax = b problem, recovering the unknown parameters.
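The whole fit can be sketched in one function: build the two rows of A per correspondence, form the normal equations, and solve the resulting 6x6 system by Gaussian elimination. This is a self-contained sketch; the matd library in common/matd.[h/c] can do the linear algebra for you as well.

```c
#include <math.h>
#include <stdbool.h>

// Fit the affine map [a b c; d e f; 0 0 1] from n >= 3 pixel->arm
// correspondences by least squares. out = {a, b, c, d, e, f}.
static bool fit_affine(int n, const double px[], const double py[],
                       const double ax[], const double ay[], double out[6])
{
    double M[6][6] = {{0}}, v[6] = {0};
    for (int k = 0; k < n; k++) {
        // Two rows of A per correspondence:
        // [px py 1 0 0 0] -> ax   and   [0 0 0 px py 1] -> ay
        double r1[6] = { px[k], py[k], 1, 0, 0, 0 };
        double r2[6] = { 0, 0, 0, px[k], py[k], 1 };
        for (int i = 0; i < 6; i++) {
            for (int j = 0; j < 6; j++)
                M[i][j] += r1[i]*r1[j] + r2[i]*r2[j];   // A^T A
            v[i] += r1[i]*ax[k] + r2[i]*ay[k];          // A^T b
        }
    }
    // Gauss-Jordan elimination with partial pivoting on [M | v].
    for (int col = 0; col < 6; col++) {
        int piv = col;
        for (int r = col+1; r < 6; r++)
            if (fabs(M[r][col]) > fabs(M[piv][col])) piv = r;
        if (fabs(M[piv][col]) < 1e-12) return false;    // degenerate points
        for (int j = 0; j < 6; j++) {
            double t = M[col][j]; M[col][j] = M[piv][j]; M[piv][j] = t;
        }
        double t = v[col]; v[col] = v[piv]; v[piv] = t;
        for (int r = 0; r < 6; r++) {
            if (r == col) continue;
            double f = M[r][col] / M[col][col];
            for (int j = 0; j < 6; j++) M[r][j] -= f * M[col][j];
            v[r] -= f * v[col];
        }
    }
    for (int i = 0; i < 6; i++) out[i] = v[i] / M[i][i];
    return true;
}
```

With the parameters in hand, converting a future click is just (ax, ay) = (a*px + b*py + c, d*px + e*py + f).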

Q3.1. Construct a user interface that allows the user to click on the four calibration dots on the arm board, computes the parameters a, b, c, d, e, f, and allows future clicks to be converted into arm-relative coordinates. Create a visualization that shows the resulting coordinate frame by overlaying a virtual set of coordinate axes on top of the camera image. Show a screenshot of your interface. How accurate is your calibration (microns? millimeters? centimeters?) Support your claim with data.

A good way to set up your program is as a state machine that nominally 1) identifies a ball, 2) picks it up, 3) drops it in the bucket. Of course, you may add additional states--- picking up a ball is likely to be multiple states.
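A minimal sketch of that state machine in C follows. The states and transition triggers (detection, gripper torque, a failure counter) are stand-ins for whatever your own system observes; a real implementation will need more states, especially within the pick-up phase.

```c
#include <stdbool.h>

// Minimal sketch of the suggested state machine. Transition triggers
// are HYPOTHETICAL stand-ins for your own sensing.
typedef enum { SEARCH, PICK_UP, DROP, FINISHED } state_t;

typedef struct {
    bool ball_detected;   // from the blob detector
    bool ball_in_gripper; // e.g. gripper torque above a threshold
    bool over_bucket;     // arm reached the drop pose
    int  pick_failures;   // consecutive failed grabs on this ball
} observations_t;

static state_t next_state(state_t s, const observations_t *obs)
{
    switch (s) {
    case SEARCH:
        return obs->ball_detected ? PICK_UP : FINISHED; // no balls left
    case PICK_UP:
        if (obs->ball_in_gripper) return DROP;
        // Give up on a stubborn ball instead of looping forever.
        return (obs->pick_failures >= 3) ? SEARCH : PICK_UP;
    case DROP:
        if (!obs->ball_in_gripper) return SEARCH;       // ball fell out
        return obs->over_bucket ? SEARCH : DROP;
    case FINISHED:
    default:
        return FINISHED;
    }
}
```

Note how the failure counter and the gripper check give the machine its robustness: both the "stuck in a loop" and "dropped the ball" failures route back to SEARCH instead of wedging the system.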

Q3.2. Draw your state machine, including what triggers state transitions. (Passage of time? Detection of a ball? Reaching a certain torque threshold?) We are particularly interested in how you make your system robust to failures, such as a ball falling out of the gripper, or a robot that gets stuck in an endless loop trying to pick up a ball that it can't manage. Comment on any other aspects of your system that are interesting.

Task 4. Technical Communication via Posters.

Objective: Develop skills for communicating technical content in the form of a poster.

Design a poster for a target size of 48" by 36". (You can adjust the dimensions and aspect ratio, but the sizing of your fonts (etc.) should reflect a poster of roughly this size.) The target audience of your poster is technically savvy engineers who are NOT particularly knowledgeable about robotics. For example: your undergrad peers, or a judge at a poster symposium.

You should focus your poster on some aspect of your system that is especially interesting (perhaps a more elaborate ball detection system, or a better motion planner, or a more robust way of picking up balls, or how you tuned the whole system end-to-end). You can choose to present your work either in the context of the ArmLab competition or as a stand-alone problem--- whichever you think is most effective.

Be sure to follow good poster-designing guidelines (a well-designed header, appealing "bait", a claim, technical content that can be absorbed at multiple levels of detail, a quantitative evaluation that supports your claim, all while following the basic rules of graphic design that we discussed in class.)

Remember that your poster should address the 4 basic questions:

  1. What is the problem?
  2. Why is it important?
  3. Why is it hard?
  4. What did I do about it?

Q4.1. Design the poster and include it as a PDF in your submission.

Do not leave the poster to the last minute! Consider setting an internal milestone for a draft of the poster. In particular, you may need to collect data in order to provide an evaluation... and so you need to have time to design and run the necessary experiment.

Task 5. Certification and Peer Evaluation [required for credit, 0 points]

Print or write the following on a sheet of paper:

"I participated and contributed to team discussions on each problem, and I attest to the integrity of each solution. Our team met as a group on [DATE(S)]."

Each team member should sign the paper (physically, not digitally), and a photo of the paper should be included with your problem set. By doing so, you are asserting that you were materially involved in each answer, and that you certify that every solution complies with the collaboration policy of this course. In the event of a violation of the course's guidelines, your certification will be forwarded to the honor council.

If the statement requires qualification (i.e., it's not quite true), please describe the issue and the steps you took to mitigate it. If a team member did not materially contribute to a particular solution, then their score on that problem will generally be reduced. The staff will use their discretion based on the specific circumstances you identify, and we are happy to discuss potential problems prior to the problem set due date.

If your team works together, as required, this will be painless! :)

Each member of your team must also complete a peer evaluation, accessible from the apps page.

Your evaluations of your teammates will remain private. If problems develop with your team, it is up to your team to resolve them: we will not intercede except in the most extreme situations.