Monday, April 11, 2016

Getting as close to continuous variables as possible

When collecting functional magnetic resonance imaging (fMRI) data, a functional image of a subject’s brain is repeatedly acquired at intervals ranging from a few hundred milliseconds to a few seconds. Functional MRI data therefore always comes in the form of discrete samples, which means that information is inevitably left out – namely, everything that happens between successive image acquisitions.

In fact, to collect whole-brain functional data at a reliable spatial resolution, the time between successive image acquisitions in the MRI scanner often has to be set to a few seconds – typically ~2.5 s. This means that a lot of information about brain activation is lost and the data are highly discretized. Faster temporal resolution is possible, but it comes at the cost of reduced spatial resolution. This is especially a problem because recent advances in fMRI data analysis involve detecting dynamic spatiotemporal patterns in the brain, which require both high spatial and high temporal resolution.

However, while developing a spatiotemporal pattern-finding algorithm (Fig 1), a potential solution to this problem suggested itself – one that can reduce the discretization of a spatiotemporal pattern in the brain. Though the result will never be truly continuous, it would be a significant improvement over the temporal resolution currently achievable for patterns in fMRI data.


Once a pattern that is known to repeat across a functional scan has been detected in the fMRI data, a sliding window correlation can be run between that pattern and the rest of the functional timeseries. This yields a vector describing how strongly the pattern correlates with the data at each point along the timeseries. Peaks in this correlation vector can be labelled as points in the functional data where the pattern repeats itself. We can then store the segment of data detected at each of these points and convolve them all together with our original pattern. When convolving, it is important to decrease the discretization so that additional timepoints are added to the convolved pattern at each step. This shifts our pattern from a highly discretized dataset to something that looks a lot more continuous. A rough sketch of this procedure is given below.
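The sketch below is a minimal, hypothetical illustration of this pipeline in Python (using NumPy and SciPy), not the actual implementation of the algorithm. It assumes the functional data is a 2-D array of shape (voxels x timepoints) and the known repeating pattern is a 2-D array of shape (voxels x window length); the names data, template, the correlation threshold, and the upsampling factor are all illustrative. Because the exact mechanics of the convolution step aren't spelled out here, the last function substitutes a simpler combination – interpolating each detected repetition onto a finer time grid and averaging – purely to show one way the repetitions could be pooled into a less discretized pattern.

import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import interp1d


def sliding_window_correlation(data, template):
    """Pearson correlation of the template with every window of the timeseries.

    data: array of shape (n_voxels, n_timepoints)
    template: array of shape (n_voxels, window_len)
    """
    n_voxels, n_timepoints = data.shape
    window_len = template.shape[1]
    flat_template = template.ravel()
    corrs = np.empty(n_timepoints - window_len + 1)
    for t in range(corrs.size):
        window = data[:, t:t + window_len].ravel()
        corrs[t] = np.corrcoef(flat_template, window)[0, 1]
    return corrs


def collect_occurrences(data, template, corr_threshold=0.3):
    """Extract the data segment at each correlation peak, i.e. each repetition of the pattern."""
    window_len = template.shape[1]
    corrs = sliding_window_correlation(data, template)
    peaks, _ = find_peaks(corrs, height=corr_threshold)  # threshold is an illustrative choice
    return [data[:, p:p + window_len] for p in peaks]


def combine_occurrences(occurrences, upsample_factor=4):
    """One possible reading of the 'decrease the discretization' step (an assumption,
    not the post's exact method): interpolate each detected repetition onto a finer
    time grid and average them, giving a pattern sampled more densely than the
    original acquisition interval."""
    window_len = occurrences[0].shape[1]
    coarse_t = np.arange(window_len)
    fine_t = np.linspace(0, window_len - 1, window_len * upsample_factor)
    upsampled = [interp1d(coarse_t, occ, axis=1, kind='cubic')(fine_t)
                 for occ in occurrences]
    return np.mean(upsampled, axis=0)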

