scanbox

Mesoscope features: real-time panoramas, ROI selection, zoom/pan, and dynamic ROIs

Here are some of the features we have been developing to make it easy to build panoramas, pick regions of interest, and visualize ROI location and data during acquisition with the NLW Mesoscope (aka the Kraken) on the latest version of Scanbox.

The video below shows a brief demo of the following:

  1. The automatic creation of panoramas on-the-fly as data are collected.
  2. Selection of regions of interest (ROIs) directly on the panorama view.
  3. Visualizing ROI data embedded in the panorama.
  4. Dynamically zooming and panning during data collection.
  5. Dynamically changing the location of ROIs by dragging them on the panorama during the collection of data.

Here are some raw data from two separate ROIs, one on each hemisphere.

The ROIs (full panorama FOV in this example is 4.0 x 3.5 mm)


And a short movie with raw data:

Thinking about getting a mesoscope?  Add your wish list of features in the comments below.  Better now than later, as some features can dictate design considerations.

 

 

Finding the optimal sample clock phase

Some time ago we talked about the advantages of synchronizing the digitizer sample clock to the laser.  Our new system incorporates an on-board PLL multi-phase clock that allows Scanbox to automatically measure the contrast in a test image as a function of sample phase delay and select the optimal value for your setup.

A typical measurement showing the change in normalized contrast of a target image as a function of phase delay (here the range from -8 to 8 covers the entire 12.5 ns period of the laser).

[Figure: normalized contrast vs. sample phase delay]

Scanbox automatically shows the resulting images, on the same scale, for each phase delay value as well:

[Figure: test images at each phase delay value]

Scanbox will also plot the normalized images obtained for the settings that yield the lowest and highest contrasts:

[Figure: images at the best and worst phase settings]

One can clearly see from the images that phase does not simply scale the contrast of the images, but has an obvious effect on their SNR.  (Exactly why this occurs is still a matter of debate here, but the data rule and the results are clear.)
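For those curious, the sweep-and-select logic is simple to express. Below is a minimal Python sketch, not the actual Scanbox code: `acquire` is a hypothetical stand-in for whatever call grabs a test image at a given phase step, and RMS contrast (standard deviation over mean) is one reasonable contrast metric.

```python
import numpy as np

def image_contrast(img):
    """RMS contrast: std of pixel values normalized by the mean."""
    img = np.asarray(img, dtype=float)
    return img.std() / img.mean()

def best_sample_phase(acquire, phases):
    """Acquire one test image per phase step and return the phase with
    maximal contrast, along with the normalized contrast curve.
    `acquire(phase)` is a hypothetical stand-in for the hardware call."""
    contrasts = np.array([image_contrast(acquire(p)) for p in phases])
    contrasts /= contrasts.max()          # normalize, as in the plot above
    return phases[int(np.argmax(contrasts))], contrasts
```

With a real acquisition function in hand, you would sweep the full period, e.g. `best_sample_phase(acquire, np.arange(-8, 9))` for the -8 to 8 range shown above.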

This option will be automatically available to those who adopt the new tower system.  What can you do if you have one of the older systems?  You can simply change the sample clock phase by extending the length of the cable running from the laser SYNC OUT to the external clock input of the digitizer.  Extensions of 50 cm in length can be chained together to yield phase steps of about 1/8th of the laser period.  So if you want to optimize the contrast and SNR of your images, set aside a day to find your optimal delay and improve the quality of your data.
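To get a feel for the numbers, here is a small Python sketch (an illustration only, not part of Scanbox) converting cable length into phase delay. A velocity factor of 0.66 is typical for solid-dielectric coax; the exact fraction of the laser period per 50 cm extension depends on your cable, so check its datasheet.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def cable_delay_ns(length_m, velocity_factor=0.66):
    """Propagation delay added by a cable extension, in nanoseconds.
    velocity_factor ~0.66 is typical for solid-PE coax (e.g. RG-58),
    but varies by cable type."""
    return length_m / (velocity_factor * C) * 1e9

def phase_step_fraction(length_m, laser_period_ns=12.5, velocity_factor=0.66):
    """Fraction of the laser period that one extension shifts the clock."""
    return cable_delay_ns(length_m, velocity_factor) / laser_period_ns
```

With these assumptions a 50 cm extension adds roughly 2.5 ns, i.e. about a fifth of a 12.5 ns period; cables with a lower velocity factor give finer steps.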

Using the correct range and bias for PMT amplifiers

You may have one of two different PMT amplifiers in your setup.  One has a variable gain.

[Image: DHPCA-100 variable gain amplifier]

The typical settings for this amplifier are: leftmost switch on GND, middle switch on L or H with the knob set to a gain of 10^4, next switch on FBW, and the rightmost switch on DC.

If you have this amplifier, measure the output with an oscilloscope, making sure the channel is in high-impedance mode (>1 MOhm).  Adjust the output with the offset screw on the left (do not confuse it with the bias screw!) so that the reading is around 380 mV.  Then edit the scanbox_config file and set pmt_amp_type to ‘variable’.

Alternatively, you may have a fixed gain amplifier that is a bit smaller and looks like this.

[Image: HCA-400M-5K fixed gain amplifier]

Here the gain is fixed (your version will read 5 x 10^4 V/A and 100 MHz bandwidth).  Follow the same instructions as above to set the bias to 1.9 V by means of the offset screw.  Then make sure pmt_amp_type is set to ‘fixed’ in the configuration file.

That’s all.  These settings will allow you to maximize the dynamic range of your digitizer card and the contrast of the images.

Note: the pmt_amp_type variable is only present in the latest release of the software.  So you will need to update to use this feature.

Taking the mesoscope for a test drive

The Neurolabware mesoscope is up and running using our new tower system and today we took it for our first test drive with a tetO mouse.

Here is the very first movie sequence of a large FOV with single-cell resolution.

 

We can also build a panorama of a large region using Lissajous scanning (other raster methods are available as well).  The panorama below covers 3 × 3.6 mm and was off-center (my fault).  The microscope is designed for a 4 × 4 mm field of view.

[Image: Lissajous panorama]

Here is one example of the software constructing such a panorama by stitching adjacent frames together.  This comes from a lily bud:

 

One can then pick different ROIs to scan fast at a higher magnification from within the panorama.
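As an aside, the core of stitching adjacent frames is estimating the translation between them. Below is a minimal phase-correlation sketch in Python; it is an illustration of the general technique, not how Scanbox actually does it.

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer (row, col) translation that maps frame `b`
    onto frame `a` using FFT phase correlation."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                  # keep only the phase
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame to negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Once the pairwise shifts are known, placing each frame at its accumulated offset yields the panorama.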

As it happens, the three ROIs were somewhat overlapping in this case (my fault), but it should be enough to show that things are working.

The figure below shows the locations of the ROIs within the panorama.

[Image: ROI locations within the panorama]

So here is a small segment of a movie showing the fast, sequential imaging of the three ROIs.  The movie does not look as sharp as on my computer (it is Vimeo’s fault this time), but you get the idea.  I hope you can easily see single cells firing by eye.

So things are progressing nicely.  With electronics and optics out of the way, development will now focus on a GUI to allow quick visualization of the panoramas and selection of ROIs, and on the integration of ROIs into alignment and segmentation tools.

If you are interested in finding out more about the mesoscope technical specifications, please contact Adrian.

 

Intrinsic & epi-fluorescence imaging using the port camera

Some colleagues have asked about the possibility of integrating image acquisition from the port camera to do intrinsic and/or epi-fluorescence imaging ahead of targeting a region of interest with two-photon experiments in Scanbox.

We have now added this option to Scanbox.

To use this option you will need to wire the I2C port on the faceplate of Scanbox.  As nobody seems to be making use of I2C sensors, we changed the functionality of these pins.

You will first need a camera with an output TTL signal that provides a rising edge on each frame. Most advanced cameras provide such an option.  That signal needs to be connected to pin #2 (second from the left).  We also need a synchronization signal from the visual stimulus, so we know during which frames it was presented.  A rising edge on pin #3 will be timestamped by the system with the frame number at which it occurred.

That’s all you need in terms of hardware connections.

Now, you will find an extra line in the scanbox_config.m file that allows you to select the format you want to use for the port camera.  Some cameras have an 8-bit depth format that is convenient and sufficient if all you are using the camera for is navigating around the sample. For imaging, you will probably want to use formats with pixel depths ranging from 12 to 16 bits.  You can use Matlab’s imaqtool to see what formats are available for your camera.  Set the pathcamera_format variable to a string reflecting your selection.  For example:


sbconfig.pathcamera_format = 'Mono14';

In Scanbox, you will see a new panel which, at the moment, is sparsely populated with only three buttons, but this should be enough to get things going.  Two buttons behave the same way as for the eye and ball cameras: they allow you to define a ROI that will be saved.  The third button, “Grab”, allows you to manually start and stop acquisition.  You can only use this button when the port camera is active; otherwise, you will get a message complaining about it. When the port camera is active, sending a command to start and stop sampling will engage this button as well.  So the same experimental scripts you now use for two-photon imaging can be used for imaging with the camera port without any change at all.

The data are stored in two separate files, one that contains the images themselves and the other containing the TTL data from the visual stimulation.  To read them, there are two separate functions: sbx_get_ttlevents() and sbx_read_camport_frame().

The first function takes the file name as input and simply returns a list of frame numbers during which a rising edge was present in the “stimulation” input on the I2C connector above.

So, for example:


>> ttl = sbx_get_ttlevents('xx0_111_222')'

ttl =

    200  240  280  320  360  400

This means the onset of six stimuli occurred during those frames.

The second function takes the file name and a vector of frame numbers.  It returns a volume of data where the third dimension corresponds to the selected frames.

So, for example:


>> data = sbx_read_camport_frame('xx0_111_222',ttl(1)-10:ttl(1)+10);

This will read the volume from ten frames before the onset of stimulus #1 to ten frames after.
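With these two functions it is easy to build simple analyses, such as a trial-averaged response. Here is a Python sketch of a peri-stimulus average; `read_frames` is a hypothetical stand-in for `sbx_read_camport_frame` with the file name already baked in.

```python
import numpy as np

def peristimulus_average(read_frames, ttl, pre=10, post=10):
    """Average responses around each stimulus onset frame in `ttl`.
    `read_frames(frames)` is a hypothetical stand-in for
    sbx_read_camport_frame and must return a (height, width, nframes)
    volume for the requested frames."""
    trials = []
    for onset in ttl:
        vol = read_frames(range(onset - pre, onset + post + 1))
        trials.append(np.asarray(vol, dtype=float))
    # average across trials -> (height, width, pre + post + 1)
    return np.mean(trials, axis=0)
```

The result is a single volume whose third dimension spans the peri-stimulus window, averaged over all stimulus presentations.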

As things are now, you have to control illumination externally.  We will work on integrating illumination control and histograms of ROI values soon.

Sampling on a Surface

Not long ago I mentioned Scanbox’s new ability to sample on a surface. Now you can access this new feature in the GUI by navigating to the Surface Sampling panel.

Surface sampling allows you to link lines of the resonant scan to depths determined by the optotune setting.  In other words, it allows you to sample on a surface along the galvo axis (the vertical axis in Scanbox).  Of course, limits are imposed by the range of the optotune and its dynamic response.

Below is an example of how the process works.  Here, I am imaging a slide of pollen grains that is tilted along the vertical axis.

Because the slide is tilted, different settings of the optotune bring the grains in different lines into focus, as shown at the beginning of the video.

To compensate for the tilt we can establish a link between lines in the scan and depth. To do this, change the optotune setting while focusing, hit the Link button, and then click on the grains that are in focus.  In this example, I repeat this a handful of times (three).

Now, when the Enable button is clicked, Scanbox interpolates a value of the optotune for each line given the established links and uploads the resulting values to the Scanbox firmware.  When we image the slide with the link active we see all the grains in focus. In other words, Scanbox is now sampling on a slanted surface and compensating for the tilt of the slide.
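The interpolation step itself is straightforward. Here is a Python sketch of the idea, assuming simple linear interpolation between link points, clamped at the ends; the actual interpolation inside the Scanbox firmware may differ.

```python
import numpy as np

def line_to_optotune(links, n_lines=512):
    """Interpolate an optotune value for every scan line from a few
    (line, optotune) link points, holding the end values flat beyond
    the outermost links.  A sketch of the Link/Enable behavior, not
    the actual firmware code."""
    links = sorted(links)                 # order links by line number
    lines, values = zip(*links)
    return np.interp(np.arange(n_lines), lines, values)
```

For example, two links at lines 100 and 300 produce a linear ramp of optotune values between them, so each scan line is imaged at its own depth.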

This feature is useful for compensating for the curvature of a structure being imaged, or for tilting the imaging plane without physically tilting the objective.

Try it and let me know if you run into any problems.

The use of this feature requires an update of Scanbox and the firmware to version 4.0.

 

Tiling with Knobby

The Knobby scheduler allows the user to move the microscope by specifying a list of relative changes in position at given frame numbers.  One quick way to fill in the table is provided by a set of text fields in the Knobby panel.  These allow one to perform (x,y,z) tiling with a given range and step size.

[Image: tiling fields in the Knobby panel]

The first row of entries corresponds to the range (in um) to be covered along the x, y and z axes respectively.  The second row specifies the step size in each case.  The last entry at the bottom tells Knobby how many frames to sample at each location.

In the example above, Knobby will perform a z-stack with 40 um range and 20 um steps (that is, a total of 3 locations), on an (x,y) grid of 3×3 with 200 um horizontal and vertical range and steps of 100 um.
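The table Knobby fills in amounts to walking a grid of relative moves. Here is a Python sketch of that expansion; it illustrates the idea only, and the real table/firmware format may differ.

```python
from itertools import product

def tiling_schedule(ranges, steps, frames_per_site):
    """Build a list of (frame, dx, dy, dz) relative moves covering an
    (x, y, z) grid, sampling `frames_per_site` frames per location.
    A sketch of the Knobby scheduler table, not its actual format."""
    def axis(rng, step):
        # positions 0, step, 2*step, ... up to the requested range
        return [i * step for i in range(int(rng // step) + 1)] if step else [0]
    sites = list(product(axis(ranges[0], steps[0]),
                         axis(ranges[1], steps[1]),
                         axis(ranges[2], steps[2])))
    sched, prev = [], (0, 0, 0)
    for i, site in enumerate(sites):
        move = tuple(c - p for c, p in zip(site, prev))
        sched.append((i * frames_per_site, *move))   # relative move at this frame
        prev = site
    return sched
```

For the example above, `tiling_schedule((200, 200, 40), (100, 100, 20), 30)` yields 27 entries (3 × 3 × 3 sites), one relative move every 30 frames.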

As you input the desired values the table will update automatically.

Once ready, click the “Arm” checkbox (and the “Return” checkbox if you want Knobby to go back to its initial position at the end) and start your acquisition.

Here is the resulting scan for the example above:

Spatial Calibration for Multiple Objectives

Multiple users of the scope may be running projects that require different types of objectives.  How to keep a spatial calibration for each and switch between them when necessary?

Scanbox now includes an “objective” configuration variable — a cell array of strings each with the name of a different objective.  Right now I have:


% objective list
sbconfig.objectives = {'Nikon 16x','Nikon 25x'};

Now, within the Knobby panel you will see a pull-down list containing the objective names:

[Image: objective pull-down list in the Knobby panel]

Select the desired objective before performing a calibration.  Once the calibration is finished it will be saved.  If a calibration is not present for a given objective, the calibration button will read “No Calibration” and mouse control will be disabled. If you change the objective, simply select the corresponding entry in the pull-down list to apply the new calibration (no need to restart Scanbox).
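Conceptually, this is just a per-objective lookup. Here is a toy Python model of the behavior described above (hypothetical names throughout; this is not the actual Scanbox code):

```python
class CalibrationStore:
    """Keep one spatial calibration per objective and swap between them
    on selection.  A toy model of the per-objective calibration behavior."""
    def __init__(self, objectives):
        self.cal = {name: None for name in objectives}  # None = not calibrated
        self.current = objectives[0]

    def save(self, objective, calibration):
        self.cal[objective] = calibration

    def select(self, objective):
        """Switch objectives and return its calibration (None if missing)."""
        self.current = objective
        return self.cal[objective]

    def button_label(self):
        return 'Calibrated' if self.cal[self.current] is not None else 'No Calibration'
```

Selecting an uncalibrated objective returns nothing, which corresponds to the “No Calibration” state in the GUI.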

When collecting data the info structure will include the objective name and calibration data:


info =

struct with fields:

resfreq: 7930
postTriggerSamples: 5000
recordsPerBuffer: 512
bytesPerBuffer: 10240000
channels: 2
ballmotion: []
abort_bit: 0
scanbox_version: 2
scanmode: 1
config: [1×1 struct]
sz: [512 796]
otwave: []
otwave_um: []
otparam: []
otwavestyle: 1
volscan: 0
power_depth_link: 0
opto2pow: []
area_line: 1
calibration: [1×13 struct]
objective: 'Nikon 25x'
messages: {}
usernotes: ''

Visualizing Individual Slices during Volumetric Imaging

During volumetric imaging, Scanbox displays all images as they are acquired. This can be inconvenient if we are trying to visually assess the activity of neurons within any one optical slice.  One solution is to display the images separately in a montage by means of a Scanbox plug-in.  However, it would be better to have such an option integrated into Scanbox.  A new feature offers this possibility.

Turning on the “Slice” checkbox within the Optotune panel will make a pull-down menu appear within the display.  This menu allows selection of the optical slice you want to visualize. Unchecking the checkbox returns Scanbox to its normal operation of showing all images in the incoming stream.
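If you post-process volumetric data yourself, picking out one slice amounts to de-interleaving the frame stream. Here is a Python sketch assuming a simple round-robin ordering of slices; the actual ordering depends on your optotune waveform style.

```python
import numpy as np

def select_slice(frames, slice_idx, n_slices):
    """Pull out one optical slice from an interleaved volumetric stream,
    assuming frame i belongs to slice i % n_slices (a round-robin
    ordering; other waveform styles interleave differently)."""
    return frames[slice_idx::n_slices]
```

For instance, with three slices, slice 1 occupies frames 1, 4, 7, 10, and so on.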

Here is a brief demo showing this feature.  Enjoy…

 

 

 

Automatic Control of Laser Power

A new checkbox within the Laser panel (labeled AGC) allows you to turn an automatic control of laser power on and off.

When AGC is on, Scanbox checks the distribution of pixel values on the image every T seconds, and increases or decreases the laser power by a certain factor if the fraction of pixels above a threshold is outside the desired range.

The values of these parameters are found in a new section of the sbconfig.m file:

% Laser AGC
sbconfig.agc_period = 1;            % adjust power every T seconds
sbconfig.agc_factor = [0.93 1.08];  % factor to change laser power down or up if outside prctile bounds
sbconfig.agc_prctile = [1e-5 1e-3]; % bounds on percent pixels saturated wanted
sbconfig.agc_threshold = 250;       % threshold above which is considered saturated (255 is max value)
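Using those parameters, one AGC update can be sketched in Python as follows; this illustrates the rule described above, not the actual Scanbox implementation.

```python
import numpy as np

def agc_step(frame, power, factor=(0.93, 1.08),
             prctile=(1e-5, 1e-3), threshold=250):
    """One AGC update, mirroring the config above: measure the fraction
    of saturated pixels and, if it falls outside [lo, hi], scale the
    laser power down or up by the corresponding factor."""
    frac = np.mean(np.asarray(frame) > threshold)
    if frac > prctile[1]:
        return power * factor[0]   # too many saturated pixels -> lower power
    if frac < prctile[0]:
        return power * factor[1]   # image too dim -> raise power
    return power                   # within bounds -> leave power alone
```

Run every `agc_period` seconds, this drives the saturated-pixel fraction back into the desired range.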

 

Below is a video showing AGC in action.

At the beginning of the video,  I focus on pollen grains using low power. When the AGC is turned on, it brings the power up. Then, if I increase the PMT gain, the laser power is decreased in response.  If the laser power is changed manually, AGC will re-adjust it to bring the pixel distribution within the desired limits.

 

AGC of laser power is useful when running a z-stack with a range larger than 100 um or so.  In that case, engaging AGC will make Scanbox adjust laser power as a function of depth. Another situation where AGC may be useful is in very long experiments/sessions, where water may evaporate slowly, leading to a reduction of the signal. In that case, turning AGC on would compensate and could keep the data usable.