A number of sophisticated deconvolution algorithms have been proposed in the literature, most of which involve solving an optimization problem.
While trying to develop a fast, on-line spike detection algorithm for Scanbox, I tried a simple idea that seems to work reasonably well, and the resulting algorithm is trivial.
In all likelihood this must have been published somewhere. If you happen to know a relevant reference, just let me know in the comments below.
Suppose the calcium signal, $y_n$, is the response of a linear system with an exponential impulse response to a train of impulses $u_n$.
Then, the response satisfies $y_n = \alpha\, y_{n-1} + u_n + \epsilon_n$, where we assume $\epsilon_n$ is Gaussian noise.
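As a quick sanity check (the time constant, kernel length and spike rate below are just illustrative choices, not the ones used later in the post), you can verify in Matlab that convolving a spike train with an exponential kernel of time constant tau is equivalent to the recursion above with alpha = exp(-1/tau):

% sanity check: exponential convolution vs. first-order recursion
tau   = 10;                            % decay time constant, in samples
alpha = exp(-1/tau);                   % equivalent recursion coefficient
u     = double(rand(1,1000) < 0.05);   % a random spike train
y_rec = filter(1,[1 -alpha],u);        % recursion y_n = alpha*y_{n-1} + u_n
y_cnv = filter(alpha.^(0:200),1,u);    % convolution with exponential kernel
max(abs(y_rec - y_cnv))                % ~0, up to kernel truncation error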
One simple way to estimate $u_n$ is to first find the value of $\alpha$ that minimizes the error $E\!\left[(y_n - \alpha\, y_{n-1} - u_n)^2\right]$.
A few calculations yield $\alpha = \dfrac{m^2 - m_{12}}{m^2 - m_{02}}$, where $m = E[y_n]$, $m_{02} = E[y_n^2]$ and $m_{12} = E[y_n\, y_{n-1}]$.
As all these quantities are easily estimated recursively as new data arrive, one can also maintain a recursive estimate of the optimal $\alpha$.
Then, the input to the system can be estimated by computing $\hat{u}_n = y_n - \alpha\, y_{n-1}$. So, all we need to do is to keep a recursive estimate of $\alpha$ and the last two samples of the signal.
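For instance, a minimal on-line version might look as follows (this is just an illustrative sketch using simple running means, not the actual Scanbox implementation; early estimates of alpha will be noisy until enough samples accumulate):

% on-line sketch: y is the measured calcium signal (row vector)
N = length(y);
m = y(1); m02 = y(1)^2; m12 = 0;              % running moment estimates
uhat = zeros(1,N);
for n = 2:N
    m   = m   + (y(n)        - m)   / n;      % running mean of y
    m02 = m02 + (y(n)^2      - m02) / n;      % running mean of y.^2
    m12 = m12 + (y(n)*y(n-1) - m12) / (n-1);  % running mean of y(n)*y(n-1)
    a = (m^2 - m12) / (m^2 - m02);            % current estimate of alpha
    uhat(n) = y(n) - a*y(n-1);                % estimated input at time n
end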
Below is an example of how this simple algorithm performs. A simulated spike train, $u_n$ (second panel from top), is convolved with an exponential impulse response and corrupted by additive Gaussian white noise to yield a simulated signal $y_n$ (top panel). The third panel shows the estimated $\hat{u}_n$. Finally, we threshold this signal to obtain the estimated spike train shown in the bottom panel (thresholding is done by means of Otsu's algorithm, implemented by Matlab's graythresh).
Here is the code if you want to play with it:
% generate ca-trace
u = double(rand(1,10000)<0.1);          % original spikes
h = exp(-(0:40)/10);                    % exponential impulse response
y = conv(h,u);                          % convolve
y = y+0.15*randn(size(y));              % additive gaussian noise

% estimate alpha, uhat and input spike train
m = mean(y);
m02 = mean(y.^2);
m12 = mean(y(2:end-1) .* y(1:end-2));
a = (m^2-m12) / (m^2-m02);
uhat = [0 y(2:end) - a * y(1:end-1)];   % uhat
shat = uhat>graythresh(uhat);           % estimated spike train
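To reproduce the panels described above, you can append something along these lines (a plotting sketch, not part of the original figure code):

figure
subplot(4,1,1), plot(y),                title('simulated calcium signal')
subplot(4,1,2), stem(u,'.'),            title('original spike train')
subplot(4,1,3), plot(uhat),             title('estimated input')
subplot(4,1,4), stem(double(shat),'.'), title('estimated spike train (thresholded)')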
Here is another example with a higher noise level and some bursting:
Here is how it runs on some of the cases of the Chen et al dataset that have varying degrees of difficulty. The red dots represent the spikes detected as described in the original paper; the black dots represent the spikes detected by the deconvolution algorithm above.
Given the simplicity of the algorithm, it performs remarkably well and can be run across many ROIs in real time.
But wait… there is more. One may expect the impulse response not to be purely exponential. In such a case, it may be helpful to generalize the above to a higher-order model. You can then estimate $\hat{u}_n$ in just two lines of Matlab as follows:
a = lpc(y,10);                          % order = 10
uhat = y - filter([0 -a(2:end)],1,y);
The above should bring back memories to anyone who studied speech compression and recognition at some point in time.
The graph below shows what happens when the impulse response is a gamma function (left). Now, the LPC coefficients are non-zero for lags larger than one.
Still, simple thresholding of $\hat{u}_n$ recovers the original spike train even in the presence of substantial noise:
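Here is a small simulation along these lines (the gamma kernel, noise level and spike rate below are my own illustrative choices, not the ones used to generate the figure):

u = double(rand(1,10000) < 0.05);       % simulated spike train
t = 0:60;
h = t.*exp(-t/8); h = h/max(h);         % gamma-shaped impulse response
y = conv(h,u);
y = y + 0.2*randn(size(y));             % additive gaussian noise
a = lpc(y,10);                          % order-10 linear predictor
uhat = y - filter([0 -a(2:end)],1,y);   % prediction residual ~ estimated input
shat = uhat > graythresh(uhat);         % threshold with Otsu's method, as before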
Go ahead, load some of your calcium imaging data and plot your signal $y_n$ along with the $\hat{u}_n$ estimated by the two lines above. Here are a couple of data segments, again, from the Chen et al dataset.
Note: There are other pre-processing steps I took (like projecting out the neuropil signal) that I have not described above.
The calculation:

We want to minimize $E\!\left[(y_n - \alpha\, y_{n-1} - u_n)^2\right]$. Taking the derivative with respect to $\alpha$ and equating to zero leads to $E\!\left[(y_n - \alpha\, y_{n-1} - u_n)\, y_{n-1}\right] = 0$, or $m_{12} - \alpha\, m_{02} = \bar{u}\, m$, where $m_{12} = E[y_n\, y_{n-1}]$ and $m_{02} = E[y_{n-1}^2]$. (Here we use the fact that the input $u_n$ is uncorrelated with the earlier response $y_{n-1}$, so $E[u_n\, y_{n-1}] = \bar{u}\, m$ with $\bar{u} = E[u_n]$ and $m = E[y_n]$.)

But taking expectations on both sides of $y_n = \alpha\, y_{n-1} + u_n + \epsilon_n$ leads to $m = \alpha\, m + \bar{u}$, or $\bar{u} = (1-\alpha)\, m$.

Substituting in the equation above and solving for $\alpha$ gives

$\alpha = \dfrac{m^2 - m_{12}}{m^2 - m_{02}}$.
I could not fully follow the calculations above (especially how u_n comes into the equation), but why don't you simply use alpha = m02 / m12 as an estimate? Am I missing an important point?
Otherwise, I like the simplicity of the idea!
Oh… u_n only comes into play if its mean is not zero. If you assume u_n has zero mean then alpha = m12 / m02.
Added the calculation at the end.
Thanks for providing the calculation – I overlooked that u_n and y_(n-1) are (mostly) uncorrelated, but now it's clear to me.
I tried your algorithm on some of my ca imaging data. The outcome looks quite plausible, but that is not worth much without ground truth – maybe I can record some ground truth in the near future and come back to your algorithm.
The input at time n cannot be correlated with the response at earlier times (n-1).
I added a few samples of the Chen et al database with ground-truth… but it is not clear what would be a good measure of similarity (I guess some of the spike-distance metrics proposed in the literature).
Apparently the guys at Janelia have a competition of these algorithms coming up soon. It will be nice to see what works.
I think that is the basis of a first-order AR model described in Pnevmatikakis et al. for the calcium dynamics (under experimental procedures).
Here’s the link to the paper:
http://www.cell.com/neuron/abstract/S0896-6273(15)01084-3
Indeed, many studies use higher-order AR models and encourage sparsity of the solution by means of L1 penalty terms. However, the approach above does not require solving such a problem. I added an example showing how you can extend the approach to higher-order models using linear predictive coding, along with some more examples.