We previously described how DeepLabCut models can run concurrently with Scanbox to provide real-time pose estimation while imaging.
Of course, the mechanism for communicating with DeepLabCut is general enough to allow processing by other means. In some simple cases, such as detecting the center of the pupil during two-photon imaging, basic image processing techniques are sufficient to provide good estimates.
Given that OpenCV Python is now an official OpenCV project, I thought I would share a simple snippet of code that detects the center of the pupil using standard OpenCV functions. For tracking the center of the pupil, a simple call to its blob detection function works well.
import cv2
import numpy as np
from scipy.spatial import distance
import sys
import win32gui, win32con

# argv[1] - sbx2dlc file (shared buffer for the incoming image)
# argv[2] - dlc2sbx file (shared buffer for the outgoing data)
# argv[3] - image rows
# argv[4] - image cols
# argv[5] - # pose positions

w = win32gui.GetForegroundWindow()   # minimize the console window...
win32gui.ShowWindow(w, win32con.SW_MINIMIZE)

nrow = int(sys.argv[3])
ncol = int(sys.argv[4])
nbody = int(sys.argv[5])

image_in = np.memmap(sys.argv[1], dtype='uint8', mode='r+', shape=(nrow, ncol))    # the image
data_out = np.memmap(sys.argv[2], dtype='float32', mode='r+', shape=(nbody, 3))    # the data
data_out[:] = np.nan
# Set area filtering parameters
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True   # keep only pupil-sized blobs
params.minArea = 30          # radius ~3
params.maxArea = 450         # radius ~12

# Create a blob detector
detector = cv2.SimpleBlobDetector_create(params)

print('\nOpenCV Eye Tracker Ready')

while True:
    if image_in[0, 0] != 0:                     # wait for a new image_in
        im2 = cv2.bitwise_not(image_in)         # invert
        kp = detector.detect(im2)               # detect blobs
        pts = cv2.KeyPoint_convert(kp) + 1      # keypoints as (x, y), 1-based
        if len(pts) == 1:
            data_out[0, 0:2] = pts[0, 0:2]      # if only one point, return it
        elif len(pts) > 1:                      # otherwise, report the one closest to the center
            d = distance.cdist(pts, np.array([[ncol/2, nrow/2]]))  # keypoints are (x, y), so the center is (ncol/2, nrow/2)
            data_out[0, 0:2] = pts[np.argmin(d), 0:2]
        else:
            data_out[0, 0:2] = np.nan           # nothing detected in this frame
        image_in[0, 0] = 0                      # report we are ready for the next image
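If you want to exercise the script outside Scanbox, a minimal test driver along the following lines works. Everything here is made up for illustration: the file names sbx2dlc_test and dlc2sbx_test, the 100×100 frame, and the synthetic bright disk standing in for the pupil (bright, because the script inverts the frame and the detector's default settings look for dark blobs). The driver creates the two shared files, writes a frame, raises the flag pixel, and times the round trip until the tracker clears it.

import time
import numpy as np

# Hypothetical file names and sizes; pass the same ones to the tracker, e.g.
#   python cv2_eyetracker.py sbx2dlc_test dlc2sbx_test 100 100 1
SBX2DLC, DLC2SBX = 'sbx2dlc_test', 'dlc2sbx_test'
NROW, NCOL, NBODY = 100, 100, 1

# Create the shared files first (mode='w+'); the tracker opens them with mode='r+',
# so it must be launched after this point.
image = np.memmap(SBX2DLC, dtype='uint8', mode='w+', shape=(NROW, NCOL))
data = np.memmap(DLC2SBX, dtype='float32', mode='w+', shape=(NBODY, 3))
image[:] = 0
input('Start the tracker, then press Enter...')

# Synthetic frame: a bright disk on a dark background stands in for the pupil
yy, xx = np.mgrid[0:NROW, 0:NCOL]
frame = np.full((NROW, NCOL), 50, dtype='uint8')
frame[(yy - 60)**2 + (xx - 40)**2 < 8**2] = 220
frame[0, 0] = 0                    # keep the flag pixel clear while writing

image[:] = frame                   # copy the frame into the shared buffer
t0 = time.perf_counter()
image[0, 0] = 255                  # raise the flag: a new frame is ready
while image[0, 0] != 0:            # the tracker clears it when done
    pass
print('round trip: %.2f ms, pose (x, y): %s'
      % (1e3 * (time.perf_counter() - t0), data[0, 0:2]))  # expect roughly (41, 61): disk center plus the script's 1-pixel offset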
Similarly, we can use a second camera to image a Styrofoam ball and estimate its motion in real time using OpenCV's image registration functions. One difference is that here we need a pair of frames to estimate motion. The code below receives and stores pairs of images, then computes and reports back their relative displacement.
import cv2
import numpy as np
import sys
import win32gui, win32con

# argv[1] - sbx2dlc file (shared buffer for the incoming image)
# argv[2] - dlc2sbx file (shared buffer for the outgoing data)
# argv[3] - image rows
# argv[4] - image cols
# argv[5] - # pose positions

w = win32gui.GetForegroundWindow()   # minimize the console window...
win32gui.ShowWindow(w, win32con.SW_MINIMIZE)

nrow = int(sys.argv[3])
ncol = int(sys.argv[4])
nbody = int(sys.argv[5])

image_in = np.memmap(sys.argv[1], dtype='uint8', mode='r+', shape=(nrow, ncol))    # the image
data_out = np.memmap(sys.argv[2], dtype='float32', mode='r+', shape=(nbody, 3))    # the data
data_out[:] = np.nan

# Some prelims
sz = image_in.shape
warp_mode = cv2.MOTION_TRANSLATION
warp_matrix = np.eye(2, 3, dtype=np.float32)
number_of_iterations = 5000
termination_eps = 1e-2
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, number_of_iterations, termination_eps)
nimg = 0                             # how many images so far...
mask = 255 * np.ones(sz, dtype='uint8')

print('\nOpenCV Ball Tracker Ready')

while True:
    if image_in[0, 0] != 0:                     # wait for a new image_in
        if nimg < 1:
            img0 = image_in + 0                 # save the first image (avoid lazy copying)
            nimg = 1
        else:
            try:
                if (nimg % 2) != 0:             # double buffering: alternate which buffer holds the newest frame
                    img1 = image_in + 0
                    (cc, warp_matrix) = cv2.findTransformECC(img0, img1, warp_matrix, warp_mode, criteria, mask, 1)
                else:
                    img0 = image_in + 0
                    (cc, warp_matrix) = cv2.findTransformECC(img1, img0, warp_matrix, warp_mode, criteria, mask, 1)
                data_out[0, 0:2] = warp_matrix[:, 2]   # report the estimated (dx, dy) translation
                nimg = nimg + 1
            except cv2.error as e:
                data_out[0, 0:2] = np.nan       # registration failed...
                nimg = 0                        # force two new images
                print(e)
        image_in[0, 0] = 0                      # report we are ready for the next image
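To convince yourself that the registration step behaves as expected, independent of the shared-memory plumbing, you can run findTransformECC on a pair of synthetic frames with a known shift. The texture, size, and displacement below are arbitrary choices for illustration:

import cv2
import numpy as np

# Synthetic textured frame and a copy shifted by a known (dx, dy)
rng = np.random.default_rng(0)
img0 = cv2.GaussianBlur(rng.integers(0, 255, (100, 100)).astype(np.uint8), (7, 7), 0)
M = np.float32([[1, 0, 3], [0, 1, -2]])   # shift content by dx=3, dy=-2
img1 = cv2.warpAffine(img0, M, (100, 100))

warp_matrix = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 5000, 1e-2)
cc, warp_matrix = cv2.findTransformECC(img0, img1, warp_matrix,
                                       cv2.MOTION_TRANSLATION, criteria, None, 1)
print('recovered (dx, dy):', warp_matrix[:, 2])   # should be close to (3, -2)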
Benchmarking both algorithms on ~100×100 pixel images yields a processing time of ~6 msec round trip (from copying the image to receiving the results). So both eye and ball tracking can be performed in real time while Scanbox is running.
For each behavior camera, one can now choose a DLC model or an OpenCV Python script to perform the image processing. If both are specified, the DLC model takes precedence. The ROI size for each camera and the plotting style can also be specified.
sbconfig.balltracker = true; % enable ball tracker (0 - disabled, 1- enabled)
sbconfig.ballcamera = 'M1280'; % model of ball camera
sbconfig.ballstream = true;
sbconfig.ballroi = [150 150]; % roi size ball cam (width x height)
sbconfig.balldlcmodel = []; % deeplabcut model
sbconfig.ballpymodel = which('cv2_balltracker.py');
sbconfig.ballstyle = 'ro'; % style of pose markers for this camera
sbconfig.ballsize = 14; % size of pose markers for this camera
sbconfig.sbx2dlcball = 'h:/2pdata/sbx2dlcball'; % DLC-Live interface for ball camera
sbconfig.dlc2sbxball = 'h:/2pdata/dlc2sbxball';
sbconfig.eyetracker = true; % enable eye tracker (0 - disabled, 1- enabled)
sbconfig.eyecamera = 'M1280'; % model of eye camera
sbconfig.eyestream = true;
sbconfig.eyeroi = [160 112]; % roi size eye cam (width x height)
% sbconfig.eyedlcmodel = 'h:\dlc_live\eye2p-Dario-2021-03-21\exported-models\DLC_eye2p_resnet_50_iteration-0_shuffle-1';
sbconfig.eyedlcmodel = []; % deeplabcut model (empty, so the Python script below is used)
sbconfig.eyepymodel = which('cv2_eyetracker.py');
sbconfig.eyestyle = 'r+'; % style of pose markers for this camera
sbconfig.eyesize = 14; % size of pose markers for this camera
sbconfig.sbx2dlc = 'h:/2pdata/sbx2dlc'; % DLC-Live interface for eye camera
sbconfig.dlc2sbx = 'h:/2pdata/dlc2sbx';
So, in simple cases like these, where standard image processing is sufficient to get good results, writing your own OpenCV processing script may be preferable to developing a DeepLabCut model.