Month: October 2017

Taking the mesoscope for a test drive

The Neurolabware mesoscope is up and running using our new tower system and today we took it for our first test drive with a tetO mouse.

Here is the very first movie sequence of a large FOV with single-cell resolution.

 

We can also build a panorama of a large region using Lissajous scanning (other raster methods are available as well).  The panorama below covers 3 by 3.6 mm and is off-center (my fault).  The microscope is designed for a 4 x 4 mm field of view.


Here is one example of the software constructing such a panorama by stitching adjacent frames together.  This comes from a lily bud:

 

One can then pick different ROIs from within the panorama and scan them quickly at a higher magnification.

As it happens, the 3 ROIs were somewhat overlapping in this case (my fault), but it should be enough to show that things are working.

The figure below shows the locations of the ROIs within the panorama.


So here is a small segment of a movie showing the fast, sequential imaging of the three ROIs.  The movie does not look as crisp as on my computer (it is Vimeo’s fault this time), but you get the idea.  I hope you can easily see single cells firing by eye.

So things are progressing nicely.  With the electronics and optics out of the way, development will now focus on a GUI that allows quick visualization of panoramas and selection of ROIs, and on integrating ROIs into the alignment and segmentation tools.

If you are interested in finding out more about the mesoscope technical specifications, please contact Adrian.

 

Intrinsic & epi-fluorescence imaging using the port camera

Some colleagues have asked about the possibility of integrating image acquisition from the port camera to do intrinsic and/or epi-fluorescence imaging ahead of targeting a region of interest with two-photon experiments in Scanbox.

We have now added this option to Scanbox.

To use this option you will need to wire the I2C port on the faceplate of Scanbox.  As nobody seems to be making use of I2C sensors, we have repurposed these pins.

You will first need a camera with a TTL output signal that provides a rising edge on each frame.  Most advanced cameras provide such an option.  That signal needs to be connected to pin #2 (second from the left).  We also need a synchronization signal from the visual stimulus, so we know on which frames it was presented.  A rising edge on pin #3 will be timestamped by the system with the frame number at which it occurred.

That’s all you need in terms of hardware connections.

Now, you will find an extra line in the scanbox_config.m file that allows you to select the format you want to use for the port camera.  Some cameras have an 8-bit depth format that is convenient and sufficient if all you are using the camera for is to navigate around the sample.  For imaging, you will probably want to use formats with pixel depths ranging from 12 to 16 bits.  You can use Matlab’s imaqtool to see what formats are available for your camera.  Set the pathcamera_format variable to a string reflecting your selection.  For example:


sbconfig.pathcamera_format = 'Mono14';
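
If you prefer the command line to imaqtool, the same information is available programmatically through the Image Acquisition Toolbox.  A minimal sketch (the 'winvideo' adaptor name and device index are only examples; substitute whatever imaqhwinfo reports for your camera):

```matlab
% List installed adaptors, then the pixel formats the first device supports.
info = imaqhwinfo;                  % struct with InstalledAdaptors field
disp(info.InstalledAdaptors)        % e.g. {'winvideo'} -- adaptor is an assumption
dev = imaqhwinfo('winvideo', 1);    % first device on that adaptor
disp(dev.SupportedFormats')         % pick one, e.g. 'Mono14', for the config file
```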

In Scanbox, you will see a new panel which, at the moment, is sparsely populated with only 3 buttons, but should be enough to get things going.  Two buttons behave in the same way as for the eye and ball cameras: they allow you to define a ROI that will be saved.  The third button, “Grab”, allows you to manually start and stop the acquisition.  You can only use this button when the port camera is active; otherwise, you will get a message complaining about it.  When the port camera is active, sending a command to start and stop sampling will engage this button as well.  So the same experimental scripts you are now using for two-photon imaging can be used for imaging with the port camera without any change at all.

The data are stored in two separate files, one that contains the images themselves and the other containing the TTL data from the visual stimulation.  To read them, there are two separate functions: sbx_get_ttlevents() and sbx_read_camport_frame().

The first function takes the file name as input and simply returns a list of frame numbers during which a rising edge was present in the “stimulation” input on the I2C connector above.

So, for example:


>> ttl = sbx_get_ttlevents('xx0_111_222')'

ttl =

    200  240  280  320  360  400

This means the onsets of six stimuli occurred at those frames.

The second function takes the file name and a vector of frame numbers.  It returns a volume of data where the third dimension corresponds to the selected frames.

So, for example:


>> data = sbx_read_camport_frame('xx0_111_222',ttl(1)-10:ttl(1)+10);

This will read the volume from ten frames before the onset of stimulus #1 to ten frames after.
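Putting the two functions together, one can already compute a simple stimulus-triggered average of the camera signal.  This is a sketch, not part of Scanbox: it assumes only the sbx_get_ttlevents() and sbx_read_camport_frame() functions described above, and the window sizes and baseline correction are our own choices.

```matlab
% Stimulus-triggered average response from the port camera (sketch).
fname = 'xx0_111_222';            % example file name from the text
ttl   = sbx_get_ttlevents(fname); % frame numbers of stimulus onsets

pre  = 10;                        % frames before each onset
post = 10;                        % frames after each onset

resp = 0;
for k = 1:length(ttl)
    vol  = sbx_read_camport_frame(fname, ttl(k)-pre : ttl(k)+post);
    resp = resp + double(vol);    % accumulate across stimuli
end
resp = resp / length(ttl);        % mean volume: rows x cols x (pre+post+1)

% Response relative to the pre-stimulus baseline, averaged over frames:
base = mean(resp(:,:,1:pre), 3);
dR   = mean(resp(:,:,pre+1:end), 3) - base;
imagesc(dR); axis image; colorbar
```

For intrinsic imaging one would typically normalize dR by the baseline (dR ./ base) to obtain a fractional reflectance change, but the raw difference is enough to localize an active region.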

As things stand now, you have to control the illumination externally.  We will work to integrate illumination control and histograms of ROI values soon.