Month: April 2014

Eye Tracking

In addition to tracking the movement of the ball, one may want to track eye position and pupil size. Once again, we use a Dalsa Genie GigE camera to image the eye. Conveniently, it turns out that when imaging near 920 nm, enough light makes its way through the brain and out of the pupil that the pupil is clearly visible during imaging, with no need for additional illumination. The short video below shows one such example of images collected during imaging in primary visual cortex.

Detecting the center and radius of the pupil in these data is not difficult. The biggest obstacles are the occasional movement of whiskers in front of the eye, blinks, and periods of grooming when the eyes are closed and/or occluded. Sample traces of pupil area and eye position, representing 7 min of data, are shown below. The parameters are estimated by a straightforward use of Matlab's imfindcircles function.
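Frames where no circle is found (blinks, whisker occlusions) come back as NaN from the function below. As a minimal post-processing sketch, assuming a vector area assembled from those per-frame estimates, one can interpolate across the gaps and apply a short median filter:

   valid = ~isnan(area); % frames where a pupil was actually detected
   t = 1:numel(area);
   area = interp1(t(valid),area(valid),t,'linear','extrap'); % bridge blinks and occlusions
   area = medfilt1(area,5); % suppress brief whisker artifacts (Signal Processing Toolbox)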

As with the ball tracking (described below), frame acquisition is triggered by hardware using Scanbox's camera synchronization signals. Thus, in our system, the measurements of ball movement, (x,y) eye position, and pupil size are all synchronized to the frames of the microscope.

The code is based on Matlab’s imfindcircles():

function eye = sbxeyemotion(fn,varargin)

   load(fn,'-mat'); % should be a '*_eye.mat' file containing the variable 'data'

   if ~isempty(varargin)
      rad_range = varargin{1};
   else
      rad_range = [12 32]; % range of radii (pixels) to search for
   end

   W = 40; % half-width of the central crop around the eye (pixels); adjust to your setup

   data = squeeze(data); % the raw images...
   xc = round(size(data,2)/2); % image center (rounded so the crop indices are integers)
   yc = round(size(data,1)/2);

   warning off; % imfindcircles warns when Sensitivity is set this high

   for n = 1:size(data,3)
      [center,radii,metric] = imfindcircles(data(yc-W:yc+W,xc-W:xc+W,n),rad_range,'Sensitivity',1);
      if isempty(center)
         eye(n).Centroid = [NaN NaN]; % could not find anything (blink, grooming, occlusion)
         eye(n).Area = NaN;
      else
         [~,idx] = max(metric); % pick the circle with the best score
         eye(n).Centroid = center(idx,:); % (x,y), in coordinates of the cropped window
         eye(n).Area = pi*radii(idx)^2; % area of a circle of radius r is pi*r^2
      end
   end

   save(fn,'eye','-append'); % append the eye-tracking estimates to the file
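A minimal usage sketch, assuming a hypothetical file name and a camera triggered at 30 Hz with one frame per microscope frame (so the frame index doubles as the time base):

   eye = sbxeyemotion('xx0_000_000_eye.mat'); % or sbxeyemotion(fn,[10 40]) to override the radius range
   t = (0:numel(eye)-1)/30; % seconds; camera frames are locked to the microscope
   c = reshape([eye.Centroid],2,[])'; % (x,y) eye position per frame
   subplot(2,1,1); plot(t,[eye.Area]); ylabel('Pupil area (px^2)');
   subplot(2,1,2); plot(t,c); xlabel('Time (s)'); ylabel('Eye position (px)');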

Ball Tracking

One common way to track the movement of the ball is to use optical mice. The disadvantages of that method are that the mice need to be positioned carefully near the ball surface in each experiment, that at least two of them are needed to recover all 3 rotation parameters, and that synchronizing such data to the imaging is not trivial.

To circumvent some of these issues, we opted instead to draw a number of dots on the surface of the Styrofoam ball using a black marker that absorbs in the infrared. After trying a few, it turns out that the Expo dry erase marker works just fine. We use a Dalsa Genie GigE camera to image the ball under infrared illumination (an IR ring light from Advanced Illumination). We image a 2 x 2 cm area of the ball from a working distance of about 60 cm. Once adjusted and focused, the system does not need to be calibrated or re-positioned between experiments; it is just ready to go.

Importantly, frame acquisition is triggered by hardware using the Scanbox camera synchronization signals, so the frames of the camera correspond to those of the microscope. Sometimes the cameras may need to run at a multiple of the imaging frequency; the Scanbox card's firmware provides for this option as well. Above we see an example segment showing the actual images collected (at 30 Hz) along with the estimated velocity in the imaging plane (which can be converted to rotational speed given the size of the ball and the dimensions of the surface imaged). Note that rotation about the line of sight can also be measured from such data, although doing so requires longer processing times due to the increased complexity of the registration. This can be done offline, as the raw image data of the ball are saved by the microscope as well.
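For the in-plane component, a minimal sketch of the frame-to-frame registration by phase correlation is given below; the variable ball holding the camera frames, along with the field of view, frame rate, and ball radius, are all placeholders for your own calibration:

   % estimate the (dx,dy) shift between consecutive ball frames
   fov_cm = 2; fps = 30; R_cm = 10; % 2 cm field of view, 30 Hz, ball radius (placeholders)
   a = double(ball(:,:,n)); b = double(ball(:,:,n+1));
   X = fft2(a).*conj(fft2(b)); % cross-power spectrum
   c = abs(ifft2(X./max(abs(X),eps))); % phase correlation surface
   [~,idx] = max(c(:));
   [dy,dx] = ind2sub(size(c),idx);
   dy = dy-1; dx = dx-1; % peak location -> zero-based shift
   if dy > size(c,1)/2, dy = dy-size(c,1); end % unwrap shifts past half the image
   if dx > size(c,2)/2, dx = dx-size(c,2); end
   v = hypot(dx,dy)*(fov_cm/size(a,2))*fps; % surface speed (cm/s)
   w = v/R_cm; % rotational speed (rad/s)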

I am sure others have come up with their own solutions. What is yours?