
Quick start guide to the Optotune and volumetric scanning

Scanbox uses an on-board current source to control the Optotune lens, so that changes in depth are synchronized to the beginning of each microscope frame.

Once the lens is installed, the easiest way to verify its function is to start imaging a sample and move the slider in the Optotune panel to change the plane of focus.  The value of the slider can be read at the bottom, ranging from 0 to 4095.  In the example below, the slider is set at 1879. (*)

ot_panel

To image a volume at approximately constant intensity, Scanbox allows linking the laser power to different settings of the focal distance.

Start by moving the slider to the top of your imaging volume and adjust the laser power to the desired level.  By pressing the Link button to the right of the slider, you establish a link between the current depth and power level.

Repeat the process, moving the slider to focus at deeper planes, which will likely require increasing the laser power to achieve the same intensity as before.  It is typically sufficient to define 3-5 points bracketing the range of depths you will image; Scanbox linearly interpolates among these points to cover the entire range.
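The interpolation itself is plain linear interpolation between the linked points. A minimal sketch in Python (Scanbox itself is Matlab code; the function and table names below are illustrative, not Scanbox's API):

```python
# Sketch of the depth-to-power lookup described above: linear interpolation
# between the linked points.  (Names here are illustrative, not Scanbox's API.)
import numpy as np

def power_for_depth(depth, link_table):
    """Interpolate laser power for a given Optotune slider value.

    link_table: (slider_value, power) pairs defined with the Link button.
    """
    depths, powers = zip(*sorted(link_table))
    return float(np.interp(depth, depths, powers))

# Example: three linked points bracketing the imaging range
table = [(500, 10.0), (1500, 18.0), (3000, 35.0)]
power_for_depth(1000, table)  # halfway between the first two points -> 14.0
```

Values outside the bracketed range are clamped to the nearest defined point, which is one more reason to define points at both ends of your imaging volume.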

Finally, to activate the link between focal plane and laser power click the Enable checkbox right below the “Link” button.  If at any point you want to define a new link between depth and power you must first clear the existing table by clicking on Clear.

Once a link between focal plane and laser power is activated, it is used automatically by the system.  Try imaging a volume and changing the depth with the Optotune slider: you should see the laser power change on the fly, keeping the illumination approximately uniform. If so, you are ready to start volumetric imaging.

Finally, you can use the controls on the right of the panel to choose a z-scanning method.  The three boxes define the minimum value of the current, the maximum value, and the period in microscope frames. The pull-down menu lets you switch among different waveforms: square, triangular, sinusoidal and sawtooth.  Once you have finished defining the parameters of the waveform, click the Upload button to send it to the Scanbox card, then check Enabled in the check-box below to make the waveform active.
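The standard waveforms are simple periodic functions of the frame number, scaled between the minimum and maximum current values. A hedged Python sketch of one period of each (the actual waveform is played out by the Scanbox card; the function and argument names here are hypothetical):

```python
# Illustrative generation of the four standard z-scanning waveforms, one
# sample per microscope frame, scaled between a minimum and maximum value.
# (Function and argument names are hypothetical, not Scanbox's API.)
import numpy as np

def z_waveform(shape, cmin, cmax, period_frames):
    """Return one period of the waveform, one sample per frame."""
    t = np.arange(period_frames) / period_frames   # phase in [0, 1)
    if shape == "square":
        u = (t < 0.5).astype(float)
    elif shape == "sawtooth":
        u = t
    elif shape == "triangular":
        u = 1.0 - np.abs(2.0 * t - 1.0)
    elif shape == "sinusoidal":
        u = 0.5 * (1.0 + np.sin(2.0 * np.pi * t))
    else:
        raise ValueError("unknown waveform: " + shape)
    return cmin + (cmax - cmin) * u

# A triangular wave with a 30-frame period gives a 1 Hz scan at 30 frames/s
wave = z_waveform("triangular", 0, 3000, 30)
```

Note that the period is specified in microscope frames, so the scan rate in Hz depends on your frame rate.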

Now you are ready to go!

If you start scanning, you should see the system sweep the desired volume at approximately constant intensity throughout, following the waveform you selected.

You can use the Optotune in both unidirectional and bidirectional scanning.

All the Optotune parameters used to collect data are stored in the info structure saved by Scanbox.

The video below illustrates the process described above:

(*) Presently the slider saturates at around 3000 due to the maximum current the present version of the board can generate (this will be corrected in a future version of the board.)


Fast volumetric scanning with an electrically tunable lens

We have implemented volumetric scanning in Scanbox by means of Optotune's electrically tunable lens, which allows for fast changes in focal plane without moving parts.

Some salient features of this particular implementation are:

  • Laser power and focus can co-vary, allowing the brightness of the images to remain approximately constant while performing volumetric imaging.  This is achieved by the user setting the laser power at a handful of depths and the computer interpolating for other values.
  • Changes in focal plane are synchronized to the flyback time for each frame, and constant throughout each frame, making the stacking and volumetric visualization of images straightforward.
  • Arbitrary waveforms can be loaded to control the lens. Standard waveforms include sinusoidal, sawtooth, triangular and square waves, but custom waveforms can be uploaded to the controller as well.
  • Volumetric scanning works in both unidirectional and bidirectional scanning modes.
  • We are working on a closed-loop feedback system to stabilize an optical section along the z-axis, which is particularly important when there can be relative movement between the sample and the objective.

Below is an example of imaging with a 512-line image, an 8 kHz resonant mirror, and bidirectional scanning, resulting in 30 frames per second, along with a triangular scanning waveform at 1 Hz.  The range of the scan is about 150 um, and you can see the brightness of the images is relatively constant.

[vimeo 129498546]

The slight translation of the image as depth changes is due to coma produced by gravity deforming the Optotune lens, which is currently mounted vertically in the microscope.  A new design will provide for horizontal mounting, thereby minimizing this artifact.

The latest GitHub release already includes these updates.  A more detailed post on how to work with volumetric scanning will follow.

Setting up the GigE cameras

The following explains how to connect GigE cameras to your Scanbox.  If you have trouble following the instructions below just let me know — I can help remotely.

ScanBox has the ability to acquire images from cameras synchronized to the frames of the microscope.  We typically use one camera to monitor eye movements and a second to monitor the movement of the ball, but you can add more if you desire, so long as your hardware can keep up with the data stream.

Any GigE camera supported by the Matlab Image Acquisition Toolbox should work, but we have been using the Dalsa Genie series successfully.  In our current setup, we use an M640 camera to monitor the ball and an M1280 to monitor the eye.

We have a dedicated Ethernet port for each camera, both set up with static (persistent) IPs.  To begin, follow the instructions in the Genie manual to set up a persistent IP address for each NIC and camera.  As an example, in our setup one NIC has the IP 10.20.1.1 and is connected to a camera at 10.20.1.2 (mask 255.255.0.0), and the second pair has a NIC at 10.19.1.1 and a camera at 10.19.1.2 (mask 255.255.0.0).

Once you are done, start the Dalsa Network Configuration tool; you should see both of your cameras show up in blue, as shown in the figure below.  If either of them appears in red, or does not appear at all, something is wrong.  The Dalsa Configuration Tool can be found under All Programs -> DALSA -> Sapera Networking Package.

network_setting


If you get to this stage, you can test whether the cameras are working within Matlab.  First, make sure you have the Image Acquisition Toolbox.  You should also have downloaded the GigE Vision hardware support package by following these instructions.  Restart Matlab after downloading the GigE support package.  Launch the image acquisition tool (by typing imaqtool at the Matlab console), and you should see both cameras listed at the top left.

imaqtool


If this is the case, you are ready to move to the next step.  Quit Matlab, launch CamExpert, and configure the cameras so that acquisition can be triggered.  This is done by setting Trigger to True and Trigger Source to Input 2, as shown below:

trigger

After this configuration is set up, you should save it as “Camera Configuration 1” and make it the default configuration when the camera is powered up, by going to the Camera Information section on the left and clicking on Power-up config setting:

config

The next step is to let Matlab know which camera is which.  You do this by editing the configuration file, providing part of the name of the camera in charge of eye and ball tracking, and turning on the flags for the cameras you want to use.

camera_config

In this case, the M640 will be used for the ball and the M1280 for the eye.  After editing the scanbox_config.m file you can start ScanBox and both camera boxes should be enabled.  (These fields will show up only if you are running the latest version of Yeti.) If you hit the preview button you should see a live feed of the corresponding camera. If both cameras work, you can proceed to the final test.

Finally, make sure both cameras are working together.  Select an ROI for each camera by clicking the appropriate button and dragging the ROI to the desired location.  Check both of the ‘Enable’ boxes to tell ScanBox that you want to acquire data from the cameras.  Then collect 100 frames by setting the total frames field to 100 and hitting the grab button.  You should get two additional files — a *_ball.mat file and an *_eye.mat file.  These should contain the imagery, and the total number of entries should match the number of frames collected (they may contain an excess of 1 or 2 frames at the end that you may ignore.)
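The count check above can be sketched as a small helper (an illustrative Python sketch; Scanbox saves Matlab files, and the slack value reflects the 1-2 excess trailing frames mentioned in the text):

```python
# Hedged sketch of the frame-count sanity check described above: the number
# of entries a camera saved should match the frames grabbed, allowing for
# the 1-2 excess trailing frames mentioned in the text.
def frame_count_ok(n_saved, n_grabbed, slack=2):
    """True if the camera saved one entry per microscope frame (plus slack)."""
    return n_grabbed <= n_saved <= n_grabbed + slack

frame_count_ok(101, 100)  # one excess trailing frame -> True
```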

A note about the trigger sources: the rightmost connector on the ScanBox provides a trigger signal at the same frequency as the frames of the microscope.  We use it for the eye camera so we get an eye position at each frame.  The connector next to it provides a signal at twice the frequency; we use it to trigger the ball camera.

Happy tracking…

Sorting calcium imaging signals

xcorr: comp neuro

Calcium imaging can record from several dozen neurons at once. Analyzing the raw data is expensive, so one typically wants to define regions of interest corresponding to cell bodies and work with the average calcium signal within each.

Dario has a post on defining polygonal ROIs using the mean fluorescence image. Doing this manually is fairly time-consuming and it can be easy to miss perfectly good cells. Automated sorting methods still require some oversight, which can quickly become as time-consuming as defining the ROIs manually.

I’ve worked on an enhanced method that makes defining an ROI as simple as two clicks. The first enhancement is to use other reference images in addition to the mean fluorescence image: the correlation image, standard deviation over the mean, and kurtosis. The correlation image, discussed earlier on Labrigger, shows how correlated each pixel is with its neighbours. When adjacent pixels are strongly correlated, that’s a good sign that the pixel belongs to a potential…
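For concreteness, a minimal NumPy version of a correlation image — each pixel's average temporal correlation with its 4-connected neighbours — can be sketched as follows (an illustrative sketch, not the code from the original post):

```python
# Minimal NumPy version of a correlation image: each pixel's average temporal
# correlation with its right and lower neighbours, accumulated symmetrically
# so every pixel in a pair gets credit.  (Illustrative only.)
import numpy as np

def correlation_image(movie):
    """movie: array of shape (time, rows, cols)."""
    m = movie - movie.mean(axis=0)
    m = m / (m.std(axis=0) + 1e-12)            # z-score each pixel's trace
    t = m.shape[0]
    corr = np.zeros(m.shape[1:])
    count = np.zeros(m.shape[1:])
    for dr, dc in ((0, 1), (1, 0)):            # right and lower neighbours
        a = m[:, : m.shape[1] - dr, : m.shape[2] - dc]
        b = m[:, dr:, dc:]
        c = (a * b).sum(axis=0) / t            # correlation of the two traces
        corr[: c.shape[0], : c.shape[1]] += c
        corr[dr:, dc:] += c
        count[: c.shape[0], : c.shape[1]] += 1
        count[dr:, dc:] += 1
    return corr / count
```

Pixels inside a cell body tend to share the same calcium transients, so they light up together in this image even when they are dim in the mean fluorescence image.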


Real-time signal extraction, visualization and processing in Scanbox 2.0

Here is a sneak preview of the new features to be released with Scanbox 2.0.

Some of the salient additions include:

Automatic stabilization: The system can automatically correct for rigid (x,y) translation in real time.
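Rigid (x, y) translation of this kind is commonly estimated by phase correlation. A hedged Python sketch of the idea (not Scanbox's actual real-time implementation):

```python
# Sketch of rigid translation estimation by phase correlation (illustrative;
# not Scanbox's real-time code).
import numpy as np

def estimate_shift(ref, frame):
    """Return the (row, col) shift such that np.roll(frame, shift, axis=(0, 1))
    aligns frame to ref."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    f /= np.abs(f) + 1e-12           # keep phase only
    corr = np.abs(np.fft.ifft2(f))   # sharp peak at the displacement
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    # Map large positive shifts to their negative equivalents
    if r > ref.shape[0] // 2:
        r -= ref.shape[0]
    if c > ref.shape[1] // 2:
        c -= ref.shape[1]
    return int(r), int(c)
```

Because the estimate reduces to two FFTs and a peak search, it is cheap enough to run per frame, which is what makes real-time correction feasible.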

Selection of regions of interest (ROIs): Allows the selection of regions of interest to be tracked in real time.  Once defined, the polygons can be edited by translating them or moving individual vertices.  The ROI calculations can be used in conjunction with automatic stabilization.

Real-time calculation of ROI traces: Scanbox computes the mean signal (other statistics are also possible) in each ROI and displays its z-scored version in real time on a rolling graph, where a vertical, dashed red line marks the present time.
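The z-scoring itself is straightforward; an illustrative Python sketch (not Scanbox's code):

```python
# Illustrative z-scoring of an ROI trace, as used for the rolling display
# (a sketch, not Scanbox's code).
import numpy as np

def zscore_trace(trace):
    t = np.asarray(trace, dtype=float)
    return (t - t.mean()) / (t.std() + 1e-12)  # zero mean, unit variance
```

Z-scoring puts every ROI on the same vertical scale, so traces from bright and dim cells can share one rolling graph.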

Stimulus onset/offset marking: The beginning and end of stimuli are displayed on the trace graph by a background of a light blue color, allowing one to easily verify the cells are responding to a stimulus.

On-line processing: The ROI trace data can be mapped into analog output channels, streamed over the network, or streamed to the disk for further on-line processing.  So be ready to compute your tuning curves on the fly!

Here is a snapshot of how the GUI is looking at the moment:

Scanbox real time

Here is a movie without actual cells, but showing the ability to mark the location of the stimuli and the analog output.  The signal is generated by waving my finger near the front of the objective.


If you need any other features, please leave your suggestions below (no, the system will not generate the figures and write the paper.)

Non-rigid deformation for calcium imaging frame alignment

xcorr: comp neuro

Oh yeah!

It’s been a while since I last updated the blog – I graduated from McGill, went on a trip to Indonesia where I did a lot of diving (above), came back to Montreal for 16 hours to gather my coffee machine and about three shirts – all I need to survive, really – then moved to sunny California to do a postdoc in Dr. Dario Ringach’s lab at UCLA. Living the dream.

We’ve been doing calcium imaging – GCamp6 – in mice via a custom-built microscope – you can read more about the microscope over at the Scanbox blog.

If you’re used to working with single electrode or even MUAs, calcium imaging will blow your freaking mind – you can get more than a hundred shiny neurons in a given field of view, so you can realistically ask and answer questions about how a local circuit works knowing that you’re not massively undersampling the population.

Inter-frame motion is an issue in awake…


Scanbox prototype (circa 2013)

After more than a year of honorable service, we had to say our goodbyes to one of only two working models of the original Scanbox prototype. Today we transitioned the two-photon microscope to the latest version of the Scanbox card.  Some students and postdocs watched with much concern as cables were pulled apart. One of them, who could not bear the spectacle and wondered about the future of his experiments, decided to retreat to his office.  In the end, however, everything worked as expected and everyone is now back in business.

The last working version of the original Scanbox prototype is now in a nice laboratory at Hopkins.  I am sure our colleagues will take good care of it and provide a proper burial when the time comes.
