Ball Tracking

One common way to track the movement of the ball is to use optical mice.  This method has some disadvantages: the mice need to be positioned carefully near the ball surface in each experiment, at least two of them are needed to recover all three rotation parameters, and it is not trivial to synchronize their data with the imaging.

To circumvent some of these issues we opted instead to draw a bunch of dots on the surface of the Styrofoam ball using a black marker that absorbs light in the infrared.  After trying a few, it turns out the Expo dry-erase marker works just fine.  We use a Dalsa Genie GigE camera to image the ball under infrared illumination (an IR ring light from Advanced Illumination).  We image a 2x2 cm area of the ball from a working distance of about 60 cm.  Once adjusted and focused, the system does not need to be calibrated or re-positioned between experiments; it is just ready to go.

[Video: segment of ball images collected at 30 Hz, with the estimated in-plane velocity.]

Importantly, frame acquisition is triggered in hardware by the Scanbox camera synchronization signals.  Thus, camera frames correspond to those of the microscope.  Sometimes the camera may need to run at a multiple of the imaging frequency; the Scanbox card provides this option in its firmware as well.  Above we see an example segment showing the actual images collected (at 30 Hz) along with the estimated velocity in the imaging plane (which can be converted to rotational speed, since we know the size of the ball and the dimensions of the surface imaged).  Note that rotation about the line of sight can also be measured from these data, although doing so requires longer processing times due to the increased complexity of the registration.  This can be done offline, as the raw imagery of the ball is saved by the microscope as well.
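The in-plane translation between successive frames can be estimated with phase correlation, a standard FFT-based registration technique, and the resulting pixel shift converted to rotational speed from the imaging geometry.  Here is a minimal sketch; the frame size, frame rate, calibration, and ball radius are illustrative assumptions, not the actual values used in our rig:

```python
import numpy as np

def estimate_shift(frame, prev_frame):
    """Estimate the (dy, dx) pixel translation of `frame` relative to
    `prev_frame` by phase correlation."""
    cross = np.fft.fft2(frame) * np.conj(np.fft.fft2(prev_frame))
    cross /= np.abs(cross) + 1e-12          # keep phase only; epsilon avoids /0
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap peaks past half the frame size into signed shifts
    if dy > frame.shape[0] // 2:
        dy -= frame.shape[0]
    if dx > frame.shape[1] // 2:
        dx -= frame.shape[1]
    return dy, dx

def shift_to_angular_speed(d_pixels, cm_per_pixel=2.0 / 200, fps=30.0,
                           radius_cm=10.0):
    """Convert a per-frame pixel displacement into rotational speed (rad/s).

    Assumes a 200x200-pixel image covering a 2x2 cm patch of the ball and a
    hypothetical ball radius of 10 cm.
    """
    surface_speed = d_pixels * cm_per_pixel * fps   # cm/s along the ball surface
    return surface_speed / radius_cm                # omega = v / R
```

This recovers only integer-pixel shifts; sub-pixel accuracy can be obtained by interpolating around the correlation peak, at some extra cost per frame.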

I am sure others have come up with their own solutions — which one is yours?

5 comments

  1. We just keep it dirty 🙂

    The big question – what is your closed loop latency? (from movement of ball to update of display, measured by photodiode). We are in the low 40s ms – we use it for sensorimotor tasks, so latency is important for us.

    1. We are not doing any closed-loop experiments, so I have not measured. A guess? The camera is running at 60Hz, so we have 16ms per frame, plus the transfer of data over GigE (~1ms for a 200×200 pixel image); the FFT translation estimation is negligible… but to update something on the screen you may need to wait on average another 1/2 frame for the next VSYNC of the screen (8ms). But yeah… I think we are around 32ms or so. For shorter latency you would have to use one of the optical mouse sensors and interface with it directly over I2C. Something like this — http://www.photonics.philips.com/application-areas/sensing

      How are you keeping it dirty?
