Image registration

New alignment and segmentation tools

Improved alignment and segmentation tools have been released in the latest version of Scanbox; they retain much of the functionality of the previous version.

sbxaligntool. The new alignment tool, shown below, adds batch processing of files, including the processing of eye and ball motion if those data are present.  A region-of-interest (ROI) can optionally be selected manually or automatically.  For file entries where manual selection was specified, the program stops and presents a rectangle on the screen for the user to specify the ROI.  Typically, automatic ROI selection works well and does not require the user to stand by the computer to specify the ROI each time a new file starts processing.

[Figure: the sbxaligntool interface]

As the files are aligned, the Status column and status message display the progress. The alignment procedure can also be visualized by clicking the Live update checkbox, which displays the mean of the entire image stack as the process moves along.  Pan and Zoom buttons allow the user to inspect details in the live image, such as fine branches, while the system carries out the alignment. The tool performs rigid alignment and stores the result in a *_rigid.sbx file; the original data are left untouched. It aligns images relatively fast (about 65 frames/sec on my computer), but it will take a few minutes to compute the reference image if the sequence is 15 min or longer (please be patient). Alignment improves with the number of passes requested.  Usually one pass is very good, but you can try two or more passes by changing the appropriate entry in the column. The alignment algorithm itself has also been improved.

sbxsegmenttool. The segmentation tool works much as it did before. After loading the aligned *_rigid.sbx file, it displays the correlation map.  Segmentation then proceeds as in the previous version.

[Figure: the sbxsegmenttool interface]

Once a number of cells are selected, you must save the segmentation and then extract the signals by pressing the corresponding buttons. After the signals are extracted, you can select a cell with the pull-down menu on the bottom left, and the traces corresponding to that cell (now highlighted in green) will be displayed.  The blue trace represents the average signal within the cell, the gray trace is the neuropil, and the remaining trace is the estimated spike rate computed with the Vanilla algorithm using parameters optimized for GCaMP6f.

Improvements include:

  • An Undo button, which removes the last segmented cell.
  • The ability to load a previous segmentation (it loads automatically after you select the *_rigid.sbx file) and continue adding cells to it.
  • The ability to define an ROI in the correlation map, which automatically increases the contrast of the map as the most salient cells are selected.
  • A zoomed version of the mean image on the right to go along with the correlation map.
  • The tool now saves the neuropil and deconvolved signals as well.
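For downstream analysis you will want to load the saved signals back into Matlab. Below is a minimal sketch; the *.signals extension and the variable names (sig, np, spks) are assumptions about how your version of the tool names its outputs, so inspect the actual file first.

% Hypothetical example of loading extracted signals for one experiment.
% The *.signals extension and variable names (sig, np, spks) are
% assumptions -- check your file with whos('-file',...) first.
fname = 'xx0_000_000';                        % experiment name (no extension)
whos('-file',[fname '.signals']);             % list what the tool actually saved
S = load([fname '.signals'],'-mat');          % load the saved signals

cellid = 1;                                   % pick a cell, as in the pull-down menu
figure; hold on;
plot(S.sig(:,cellid),'b');                    % average signal within the cell
plot(S.np(:,cellid),'Color',[0.5 0.5 0.5]);   % neuropil signal
plot(S.spks(:,cellid),'r');                   % deconvolved (estimated) spike rate
legend('cell','neuropil','spike rate');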

Give these tools a try. Report back any suggestions for improvements or problems you encounter.

Using auto-stabilization

Real-time auto-stabilization in Scanbox Yeti is achieved by tracking the displacement of 2D features in a number of small windows relative to a reference image.  We prefer tracking cells within a handful of small windows because slow gradients in the image can easily bias the resulting estimates for large regions.

Here are some step-by-step instructions on how to use the auto-stabilization feature:

  • Start by collecting ~100 frames or more during a period of relative quiescence using the “Accumulate” mode under the “Image Display” panel.  When you stop the acquisition, Scanbox will display the average image.
  • In the “Real-time processing” panel click the “Ref” button.  This will bring up, one after another, a series of small local windows that you should position (drag) over image areas that have two-dimensional structure and do not change much over time.
  • Once a location has been selected for a window, double-click inside the window to place it.  A new window will then appear in the top left corner of the image.  Repeat this process until the last window is placed.  When you double-click the last window, your reference images are defined.  Typically, 4 windows are sufficient for accurate tracking.
  • The number and size of these regions are set by the variables nroi_auto and nroi_auto_size in the scanbox_config.m file (see the sketch after this list).
  • Once all the reference images are defined, you can click the “Stabilize” checkbox in the “Real-time processing” panel to enable tracking.
  • Stabilization will then take place during both focusing and grabbing.  However, you should not move the microscope with stabilization enabled, as the result will be chaotic.
  • Scanbox uses FFT-based motion estimates from these regions to compute an optimal global rigid translation, which it uses to align individual frames to the reference image.  This alignment is saved to a *.align file; the locations of the windows and their estimated movements are saved at the end of the acquisition session.
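For reference, the corresponding entries in scanbox_config.m might look like the sketch below. The exact syntax and the values shown are assumptions for illustration; follow the conventions of your own config file.

% Hypothetical scanbox_config.m entries -- the values are placeholders,
% not recommended settings.
nroi_auto = 4;        % number of reference windows used for stabilization
nroi_auto_size = 64;  % size (in pixels) of each reference window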

Tips:

  • You can use the “Accumulate” mode with stabilization enabled.  This will produce a new accumulated image that is much sharper than the original one, as the images are now stabilized.  You can then refine the reference images by clicking the “Ref” button and re-positioning the windows one more time.  This process takes another minute or so, but it will result in better motion estimates and stabilization.
  • Image stabilization is typically a first step before proceeding with the definition of ROIs to be tracked or with segmentation, which will be explained in separate posts.
  • Important: do not try to run the microscope without finishing the placement of the reference windows, as you will get an error.  Future releases will enforce completing the placement first.


Recursive Image Alignment and Statistics

If it is awake, it will move.  And if it moves, your images are likely to move too.  This means that, almost surely, the first step in processing your data will be to align (or register) all the images in the sequence. This is such a basic problem that there are probably as many solutions as readers of this blog, and I invite you to share yours in the comments below.

Below is our approach, which seems to work reasonably well.  (At some later time I will discuss the possibility of doing this alignment in real-time as images are being acquired.)

Suppose we have a stack of images x_i to align.  We will write a function that takes this set of images as input and returns a corresponding set of transformations t_i that, when applied to each image, results in their alignment.  The function also returns the mean image after alignment, \mu.

The general strategy consists in splitting the sequence into two sets, A and B.  The set A holds the first half of the sequence and B the second half.  Then, we recursively call our alignment function to register the set A, which returns (\mu_A, {t_i^A}), and then the set B, which returns (\mu_B, {t_i^B}).

The final step is to find the transformation t_{AB} required to align the mean image of A with the mean image of B, and return the mean after alignment, \mu = (|A| t_{AB}\mu_A + |B| \mu_B)/(|A| + |B|), along with the transformations (t_{AB} \odot t_i^A, t_j^B).  Here, \odot means the composition of transformations.  That is, the transformation t_{AB} is composed with all the transformations of set A.  The recursion ends when the function is called with a single image, in which case it returns the image itself as the mean and the identity transformation.
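In symbols, the combination step of the recursion can be summarized as:

\mu = \frac{|A|\, t_{AB}\mu_A + |B|\, \mu_B}{|A| + |B|},
\qquad
t_i =
\begin{cases}
t_{AB} \odot t_i^A & \text{if } x_i \in A,\\[2pt]
t_i^B & \text{if } x_i \in B.
\end{cases}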

It turns out that you can compute not only the mean recursively, but all the other moments as well. Thus, in a single pass through the data, the algorithm can align the images and compute the mean, second, third and fourth moments of the aligned sequence.  From there it is easy to compute the images corresponding to the variance, skewness and kurtosis of the distributions of the signal at each location.
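For example, with n_A and n_B images in the two halves and \delta = \mu_B - \mu_A, the pairwise update for the second central moment (from the Pébay paper cited in the code below) is:

M_2 = M_2^A + M_2^B + \delta^2 \, \frac{n_A n_B}{n_A + n_B},

and analogous, longer expressions handle the third and fourth moments.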

Note that the algorithm works for any class of transformations, but it helps to keep things simple.  If you are using a resonant two-photon microscope then, because of the relatively high imaging rate, the distortions from one frame to the next are well approximated by a simple translation.  Feel free to try other transformations, but beware that you will pay a heavy price in processing time.

The resulting code is only a few lines long:

function r = sbxalign(fname,idx)

% Recursively align the frames of fname indexed by idx.
% Returns the aligned mean (r.m{1}), the 2nd-4th central moment images
% (r.m{2:4}), the per-frame translations (r.T), and the frame count (r.n).

if(length(idx)==1)

 A = sbxread(fname,idx(1),1); % read the frame
 S = sparseint; % sparse interpolant for...
 A = squeeze(A(1,:,:))*S; % spatial correction of green channel

 r.m{1} = A; % mean
 r.m{2} = zeros(size(A)); % 2nd moment
 r.m{3} = zeros(size(A)); % 3rd moment
 r.m{4} = zeros(size(A)); % 4th moment

 r.T = [0 0]; % no translation (identity)
 r.n = 1; % # of frames

else

 idx0 = idx(1:floor(end/2)); % split into two groups
 idx1 = idx(floor(end/2)+1 : end);

 r0 = sbxalign(fname,idx0); % align each group recursively
 r1 = sbxalign(fname,idx1);

 [u,v] = fftalign(r0.m{1},r1.m{1}); % align their means

 for i = 1:4 % shift the mean image and moments of group A
  r0.m{i} = circshift(r0.m{i},[u v]);
 end

 delta = r1.m{1}-r0.m{1}; % online update of the moments (read the Pebay paper)
 na = r0.n; % number of frames in A
 nb = r1.n; % number of frames in B
 nx = na + nb;

 r.m{1} = r0.m{1} + delta*nb/nx;
 r.m{2} = r0.m{2} + r1.m{2} + delta.^2 * na * nb / nx;
 r.m{3} = r0.m{3} + r1.m{3} + ...
  delta.^3 * na * nb * (na-nb)/nx^2 + ...
  3 * delta / nx .* (na * r1.m{2} - nb * r0.m{2});
 r.m{4} = r0.m{4} + r1.m{4} + ...
  delta.^4 * na * nb * (na^2 - na*nb + nb^2) / nx^3 + ...
  6 * delta.^2 .* (na^2 * r1.m{2} + nb^2 * r0.m{2}) / nx^2 + ... % nb^2 weights A's 2nd moment
  4 * delta .* (na * r1.m{3} - nb * r0.m{3}) / nx;

 r.T = [(ones(size(r0.T,1),1)*[u v] + r0.T) ; r1.T]; % compose [u v] with A's transformations
 r.n = nx; % number of images in A+B

end

The algorithm lends itself to parallel processing by enclosing the two recursive calls (the lines computing r0 and r1) in a parfor loop, as sketched below. However, the gains (or not) will depend largely on the disk array you have.
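A minimal sketch of that change, assuming the Parallel Computing Toolbox is available, would replace the two sequential recursive calls inside sbxalign with:

% Hypothetical parallel variant of the two recursive calls in sbxalign.
% Requires the Parallel Computing Toolbox; without a parallel pool the
% loop simply runs serially.
parts = {idx(1:floor(end/2)), idx(floor(end/2)+1:end)}; % the two halves
rr = cell(1,2);
parfor k = 1:2
 rr{k} = sbxalign(fname,parts{k}); % align each half independently
end
r0 = rr{1};
r1 = rr{2};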

The optimal translation (with single pixel resolution) is found using FFTs:

function [u,v] = fftalign(A,B)

% Find the integer translation [u v] that best aligns B to A by locating
% the peak of their cross-correlation, computed with FFTs over a centered
% square patch of side N (the smaller image dimension).

N = min(size(A));

yidx = round(size(A,1)/2) - N/2 + 1 : round(size(A,1)/2) + N/2;
xidx = round(size(A,2)/2) - N/2 + 1 : round(size(A,2)/2) + N/2;

A = A(yidx,xidx); % central N-by-N patch of each image
B = B(yidx,xidx);

C = fftshift(real(ifft2(fft2(A).*fft2(rot90(B,2))))); % circular cross-correlation
[~,i] = max(C(:)); % location of the correlation peak
[ii,jj] = ind2sub(size(C),i);

u = N/2 - ii; % vertical shift
v = N/2 - jj; % horizontal shift

You can allow for more complex transformations by using Matlab’s imregtform but, I warn you once more, you will have to wait substantially longer. Remember that in a typical experiment you will be aligning tens of thousands of images (18,000 in the example below).
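As a rough sketch (not the method used here), registering two images by pure translation with imregtform might look like the lines below; this assumes the Image Processing Toolbox and is far slower than fftalign.

% Hypothetical use of imregtform to register image B to image A by
% translation. Requires the Image Processing Toolbox.
[optimizer, metric] = imregconfig('monomodal'); % same modality, same sensor
tform = imregtform(double(B), double(A), 'translation', optimizer, metric);
shift = tform.T(3,1:2); % estimated [tx ty] in pixels
Baligned = imwarp(B, tform, 'OutputView', imref2d(size(A))); % apply it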

So here is an example of the estimated translations along with the movement of the ball, which is obtained via a camera synchronized to the frames of the microscope. Not surprisingly, when the mouse runs, relative movement is detected in the estimated translations of the alignment.

[Figure: estimated alignment translations plotted alongside ball movement]

Also note that the movement at this magnification has an extent of about 5-6 pixels, which is comparable to the radius of a cell body. A typical result of aligning the images using this method is shown below, where you can still discern individual processes after a 20 min imaging session.

[Figure: mean of the aligned stack after a 20 min imaging session]

The outcome of the alignment is the mean image, along with the moments and the sequence of transformations.  We do not store a new set of aligned images, as doing so would require too much space. Instead, the next time we read images from the raw data file, if the alignment data are present, we apply the stored transformations on the fly, effectively reading an aligned stack.
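A minimal sketch of that read-time correction follows. The *.align file name, its contents (the translation matrix T returned by sbxalign), and the helper name sbxreadaligned are all assumptions for illustration.

function A = sbxreadaligned(fname,k)

% Hypothetical helper: read frame k and apply its stored translation.
% Assumes sbxalign's r.T was saved as variable T in the file fname.align.
load([fname '.align'],'-mat','T'); % per-frame translations, one row each
A = sbxread(fname,k,1); % read raw frame k (0-indexed)
A = squeeze(A(1,:,:)); % green channel
A = circshift(A,T(k+1,:)); % apply the stored [u v] shift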

What do you do?  Do you have a different method to share?