I’ve made substantial progress in reading real-time DeepLabCut Live data (see the paper by Kane et al here) from Scanbox. As you can see in the example below, I got to the point where we can do two-photon imaging and pose estimation in real time.
As you recall, the behavior cameras in Scanbox are triggered once per frame, which leads to the same number of behavior and microscope frames. The real-time pose data also gets displayed and saved directly in the info structure of Scanbox. Here is how it looks for the case above, where we have the pose for 8 different locations along the iris.
ans =
struct with fields:
pose: [3×8 single]
>> info.dlc_eye_data(1).pose
ans =
3×8 single matrix
72.7508 75.4426 84.9112 63.3472 80.4150 68.1809 67.1091 82.9950
19.3464 72.1103 43.8801 31.7922 23.4480 49.3159 21.9147 51.1657
0.9999 0.2841 0.9270 0.9900 0.9998 0.6011 1.0000 0.3656
>>
Let me share how I did this, as some of the methods may be useful to others who want to do the same thing with DLC Live outside of Scanbox.
My approach consists of doing image acquisition with the imaq toolbox in Matlab and sharing the data with a Python process via memory-mapped files. One advantage of this mechanism is that it allows any of the camera models supported by the Image Acquisition Toolbox to feed data to DLC Live. This is convenient because, as of today, DLC Live does not support a huge range of camera models.
To provide a concrete example of how this approach works, consider the following 16 lines of Python:
import numpy as np
from dlclive import DLCLive

# memory-mapped files shared with Matlab
image_in = np.memmap('c:/2pdata/sbx2dlc', dtype='uint8', mode='r+', shape=(112,160))   # the image
data_out = np.memmap('c:/2pdata/dlc2sbx', dtype='float32', mode='r+', shape=(8,3))     # the pose data

dlc_live = DLCLive('C:/Users/dario/Documents/eye2p-Dario-2021-03-21/exported-models/DLC_eye2p_resnet_50_iteration-0_shuffle-1')
dlc_live.init_inference(image_in)   # initialize the network with a first frame
print('DLCLive Ready to process Matlab data stream')

while True:
    if image_in[0,0] != 0:                    # wait for a new image from Matlab
        pose = dlc_live.get_pose(image_in)    # estimate the pose
        data_out[:] = pose[:]                 # share the estimates first...
        image_in[0,0] = 0                     # ...then report we are ready for the next one
It simply opens two memory-mapped files: one to read the incoming images and another to write the results of the pose estimation. The handshake is rather primitive: I use the first pixel as a rudimentary semaphore. When Matlab writes a new image, it sets that pixel to 1. When DLC Live finishes estimating the pose, it tells Matlab it is done by sharing the estimates and setting the pixel back to 0.
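One practical note: neither script creates the shared files. Matlab’s memmapfile and np.memmap with mode 'r+' both expect the backing files to already exist and to be at least as large as the mapped region, so you need to create them once before starting. A minimal one-time setup in Matlab (not part of the original scripts; the paths and sizes simply match the ones used here) could look like this:

% one-time setup: create the two backing files at the expected sizes
fid = fopen('c:\2pdata\sbx2dlc','w');             % image buffer (112 x 160 uint8)
fwrite(fid, zeros(112*160,1,'uint8'), 'uint8');
fclose(fid);
fid = fopen('c:\2pdata\dlc2sbx','w');             % pose buffer (3 x 8 single)
fwrite(fid, zeros(3*8,1,'single'), 'single');
fclose(fid);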
You can run the above script in the DLC-Live environment. It will first print a bunch of CUDA gibberish, and finally you will see “DLCLive Ready to process Matlab data stream”. At that point the script is just sitting there, waiting for data from Matlab.
On the Matlab side you can have something like:
% Closing the DLC Live loop with Matlab (dlr 3/22/2021)

% memory-mapped files shared with the Python process
image_out = memmapfile('c:\2pdata\sbx2dlc','Writable',true,'Format',{'uint8', [112 160], 'img'});
data_in = memmapfile('c:\2pdata\dlc2sbx','Writable',false,'Format',{'single', [3 8], 'pose'});

% I am reading from an old file here, but you can get the data from a
% camera instead
vr = VideoReader('bmi00_063_001_eye.mj2');

% create a figure showing the frames with the estimated pose on top
figure(1)
h_img = image(vr.readFrame);
axis image off
colormap gray
hold on
h_data = plot(zeros(1,8),zeros(1,8),'r.','markersize',12);
hold off;

image_out.Data.img(1,1)=0; % nothing written yet.

while hasFrame(vr)          % run until the movie ends
    tic
    x = vr.readFrame;
    x(1)=1;                                  % raise the semaphore pixel
    image_out.Data.img = x';                 % hand the frame to DLC Live
    while(image_out.Data.img(1,1) ~=0)       % wait for it to finish
    end
    h_img.CData = x;
    h_data.XData = data_in.Data.pose(1,:)+1; % update pose in graph
    h_data.YData = data_in.Data.pose(2,:)+1;
    drawnow;
    fprintf("Frame rate = %.1f\n",1/toc);
end
In this example, I just send images from an old movie file I have, but you can easily do the same thing with live data by hooking into the preview function in Matlab, which is what I did for the Scanbox video above.
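For reference, one way to set up such a hook is the Image Acquisition Toolbox’s 'UpdatePreviewWindowFcn' mechanism, where preview is given a custom image object and a callback that sees every new frame. The sketch below is not the actual Scanbox code; the 'winvideo' adaptor, the 112 x 160 size, and the push_frame helper are placeholders you would adapt to your own camera:

% sketch: push live camera frames into the shared buffer from the preview callback
vid = videoinput('winvideo', 1);                     % placeholder adaptor/device
h_preview = image(zeros(112,160,'uint8'));
axis image off; colormap gray
setappdata(h_preview,'UpdatePreviewWindowFcn', ...
    @(obj,event,himage) push_frame(event.Data, himage));
preview(vid, h_preview);

function push_frame(frame, himage)
    % write the new frame to the shared buffer and raise the semaphore pixel
    % (for simplicity this sketch does not wait for the previous pose first)
    persistent image_out
    if isempty(image_out)
        image_out = memmapfile('c:\2pdata\sbx2dlc','Writable',true, ...
            'Format',{'uint8', [112 160], 'img'});
    end
    frame(1,1) = 1;                  % flag "new frame available"
    image_out.Data.img = frame';     % same transpose trick as in the loop above
    himage.CData = frame;            % keep the preview window updated
end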
In this example, just for benchmarking purposes, I am also displaying the images and the estimated pose. This code runs at 50-60 fps on an old Quadro M6000. One would expect it to do much better on newer cards.
Thanks to the Mathis Lab for producing these tools and answering questions. If you have comments and/or suggestions about the code above, let me know (can you think of a solution that does not involve polling?). If you have questions about DeepLabCut, the best people to answer them are in the Github forums.
Thank you for sharing your approach and the code!
Have you also considered using the Matlab Engine for the exchange of variables between the Matlab and Python workspaces? (mathworks.com/help/matlab/matlab_external/use-the-matlab-engine-workspace-in-python.html)
Your solution might however be easier to maintain (and maybe even faster).
I tried importing DLC-Live into Matlab (https://www.mathworks.com/help/matlab/call-python-libraries.html?s_tid=CRUX_topnav), but it failed while importing Keras. The import mechanism seems limited, so I gave up. The data are in Matlab and I simply need to call DLC-Live to estimate the pose. The Matlab Engine seems designed to do the reverse: have Python call Matlab functions. I am not sure how that would work out (did you have something in mind?). But I suspect this might be the fastest way to do it, as all the other methods are more general and do a lot of variable type checking and conversion, etc.
I tried using the Matlab Engine, but it looks like Matlab does not service engine requests while it is running (it seems it has to return to the command window to do so). Inserting a pause() or drawnow() did not help either. Do you happen to know if you can somehow force Matlab to service engine requests?
Sorry, I have not encountered this problem with non-serviced engine requests before. It seems like it is probably not possible to solve (https://ch.mathworks.com/matlabcentral/answers/442381-servicing-python-matlab-api-queries). Executing the Scanbox Matlab code from within Python sounds like it would definitely fail… Maybe your solution to share data between Matlab and Python is the best after all!
Yes, I confirmed with Mathworks that this can’t be done. Yes, maybe memory mapping is the simplest solution after all.