1 Introduction

The aim in spatio-temporal processing is to *recover* signals coming from a direction of interest, while *attenuating* signals from other directions. The processing element that allows such selective recovery/attenuation is known as a *spatial filter* or *beamformer*^{26}.

Although new implementations of spatial filters may improve their poor resolution when separating signals originating from closely located regions^{2}, they still suffer from a fundamental limitation: their performance depends directly on the number of sensing elements (the aperture), independently of the number of time samples or the signal-to-noise ratio (SNR).

One might think that the limitations on aperture could be overcome by using a large number of
sensors (a dense sensor array) or large-sized sensors. However, this cannot always be
achieved in practice^{18}: first, because sensors might be costly, but mainly because
adding many of them increases the computational cost. This is a consequence of the
output of the spatial filter being a linear combination of the data acquired by *M*
sensors at *N* time samples.

The computational cost grows with the number of sensors because spatial filters depend on the covariance matrix and its inverse, which are calculated from the raw data. Thus, the more sensors are considered, the larger the data matrix becomes, and the more expensive it is to compute the covariance matrix and its inverse.
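To make the cost argument concrete, here is a minimal numpy sketch of the two operations involved; the sensor and sample counts are arbitrary illustrative sizes, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 1000                   # sensors, time samples (illustrative sizes)
Y = rng.standard_normal((M, N))   # raw M x N data matrix

# Sample covariance: O(M^2 N) multiply-adds
R = (Y @ Y.T) / N

# Inverse needed by the spatial filter: O(M^3)
R_inv = np.linalg.inv(R)

print(R.shape, R_inv.shape)  # (64, 64) (64, 64)
```

Doubling *M* quadruples the cost of forming *R* and multiplies the cost of inverting it by eight, which is why dense arrays quickly become computationally demanding.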

Given the limitations mentioned above, spatial filters in biomedical applications were
initially used only for interference removal, as in the case of fetal heart
monitoring^{28}. A spatial
filtering method for localizing sources of brain electrical activity suited for MEG
recordings was first described and analyzed in^{27}. However, that analysis considered only a spherical
head model, given the limitations in computing power at the time.

The advent of high-speed digital computers has led to the emergence of many numerical techniques, and with rapidly growing computing capabilities, numerous problems of real engineering interest can now be solved with relative ease and in much shorter time. Applying new numerical techniques to the solution of the inverse problem with a realistic conductive model of the head would increase resolution, making the estimation of the magnitude and location of a current source within the brain more accurate.

This paper presents a new perspective on how today's computers make it possible to handle the mathematical complexity involved in MEG array signal processing, to the point that new and ever more complex neural activity analysis methods can be developed and realistic geometries can be used to model the head.

2 Methods

This section briefly reviews the concepts related to spatial filters, then the processing steps involved in their use for MEG source localization are explained.

2.1 Measurement Model

MEG is a non-invasive technique that allows the measurement of ongoing brain activity
produced by the simultaneous activation of multiple neurons (i.e., 50,000-100,000) in a
specific area, which generates a measurable but extremely small magnetic field
oriented in an orthogonal direction outside the head. Therefore, MEG requires an
array of extremely sensitive superconducting quantum interference devices
(SQUIDs) that can detect and amplify the magnetic fields generated by neurons a
few centimeters away from the sensors. MEG is an attractive technology for studying
brain activity since magnetic fields pass unimpeded through the skull, resulting
in an undistorted signature of neural activity that can be recorded at the scalp
level^{13}^{,}^{10}.

MEG measurements are assumed to be produced by a neural source that can be modeled by an equivalent current dipole (ECD) at location *r*_{s}, whose magnitude is given as in^{19}. Then, the MEG data at the *k*th trial can be grouped into an *M* x *N* matrix such that

*Y*_{k} = [*y*_{m}(*t*)],

where *y*_{m}(*t*) is the measurement at the *m*th sensor, for *m* = 1,2,…,*M*, acquired at time sample *t* = 1,2,…,*N*. Under these conditions, the following measurement model can be proposed:

*Y*_{k} = *A*(*r*_{s})*S*_{k} + *V*_{k},

where *A*(*r*_{s}) is the *M* x 3 *array response* matrix, *S*_{k} is the 3 x *N* dipole moment matrix, and
*V*_{k} is the noise matrix. The array response matrix is derived using the
quasi-static approximation of Maxwell’s equations, which connect time-varying
electric and magnetic fields produced by an equivalent current dipole on a
volume that approximates the head’s geometry (see^{15}^{,}^{7}, and references therein). Then, in a physical
sense, A(*r*
_{s}) represents the material and geometrical properties of the medium
in which the sources are submerged.
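The measurement model can be sketched numerically as follows; the array response, dipole moments, and noise below are random stand-ins for illustration, not a physical forward solution:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 275, 600                 # sensors and time samples (illustrative)

# Stand-in array response for one ECD at r_s: M x 3
A = rng.standard_normal((M, 3))

# Dipole moment time series: 3 x N
S_k = rng.standard_normal((3, N))

# Sensor noise: M x N
V_k = 0.1 * rng.standard_normal((M, N))

# Measurement model: Y_k = A(r_s) S_k + V_k
Y_k = A @ S_k + V_k
print(Y_k.shape)  # (275, 600)
```

In a real pipeline, `A` would come from the quasi-static forward solution for the chosen head model rather than from random numbers.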

2.2 Spatial Filtering

A spatial filter produces an estimate of the dipole moment at location *r* through a linear combination of the measurements:

*Ŝ*_{k}(*r*) = *W*^{T}(*r*)*Y*_{k},

where *W*(*r*) is an *M* x 3 filtering matrix. Note that the unit response in the recovery band is enforced by

*W*^{T}(*r*_{s})*A*(*r*_{s}) = *I*,

while zero response at any point *r* in the attenuating band implies

*W*^{T}(*r*)*A*(*r*) = 0.

There are many ways to compute *W*(*r*). A popular choice is the linearly constrained minimum variance (LCMV) criterion, which minimizes the variance at the filter's output,

tr{*W*^{T}(*r*)*R* *W*(*r*)},

where *R* = E{*Y*_{k}*Y*_{k}^{T}} is the data covariance matrix.

Therefore, the LCMV spatial filtering problem is posed mathematically as

min_{*W*} tr{*W*^{T}(*r*)*R* *W*(*r*)}  subject to  *W*^{T}(*r*)*A*(*r*) = *I*.   (8)

The solution to (8) may be obtained using Lagrange multipliers (which is the classical method for finding local minima of a function subject to equality constraints) and completing the square, which results in^{25}

*W*(*r*) = *R*^{-1}*A*(*r*)[*A*^{T}(*r*)*R*^{-1}*A*(*r*)]^{-1}.   (9)

For the case of unknown *R*, a consistent estimate of this covariance matrix (denoted by *R̂*) can be obtained from the sample covariance of the measurements and used in its place.

Applying (9) to the original MEG measurements provides an estimate of the dipole moment at location *r*_{s}. Furthermore, the estimated variance or strength of the activity at *r*_{s} is the value of the cost function in (8) at the minimum. Then, after some algebra, the estimated variance of the neural source is given by

V̂ar(*r*) = tr{[*A*^{T}(*r*)*R*^{-1}*A*(*r*)]^{-1}}.   (10)

It is also useful to estimate the amount of variance that can be credited to the noise. Hence, in a similar way as in (10), the noise variance is given by

V̂ar_{n}(*r*) = tr{[*A*^{T}(*r*)*Q*^{-1}*A*(*r*)]^{-1}},   (11)

where *Q* is the noise covariance matrix. The location of the neural source can then be estimated as

*r̂*_{s} = arg max_{*r*} V̂ar(*r*)/V̂ar_{n}(*r*),   (12)

which is equivalent to maximizing the source's variance (normalized by the variance of the noise) as a function of *r*.

Equation (12) provides an accurate estimate of *r*
_{s} under the assumption that regions of large variance presumably have
substantial neural activity^{27}.
For that reason, the ratio in (12) is referred to as the *neural activity
index*. However, the sensitivity of the LCMV filter to imperfections
in the model knowledge is a well-documented fact^{21}. The main approach to remedy this problem is to
improve the conditioning of the covariance matrix^{20}^{,}^{24}:

*R̃* = *U*_{0}Λ_{0}*U*_{0}^{T},   (13)

where *U*_{0} is the matrix whose columns are the orthonormal eigenvectors of *R̂* associated with its largest eigenvalues, and Λ_{0} is the diagonal matrix of those eigenvalues^{9}:

*W̃*(*r*) = *R̃*^{+}*A*(*r*)[*A*^{T}(*r*)*R̃*^{+}*A*(*r*)]^{+}.   (14)

Note that in (14) the inverse of the matrix has been replaced by the generalized inverse (denoted by ^{+}), as the reduced-rank matrix *R̃* is no longer invertible^{8}.
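The conditioning step can be sketched as a rank truncation of the estimated covariance followed by a generalized (Moore-Penrose) inverse; the matrix sizes and the retained rank below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(3)
M, rank = 50, 5
B = rng.standard_normal((M, rank))
R_hat = B @ B.T + 1e-6 * np.eye(M)   # nearly rank-deficient covariance estimate

# Keep the eigenvectors of the `rank` largest eigenvalues (signal subspace)
vals, vecs = np.linalg.eigh(R_hat)   # eigenvalues in ascending order
U0, lam0 = vecs[:, -rank:], vals[-rank:]
R_tilde = (U0 * lam0) @ U0.T         # reduced-rank covariance

# R_tilde is singular, so a generalized inverse replaces the inverse
R_pinv = np.linalg.pinv(R_tilde)
print(np.linalg.matrix_rank(R_tilde))  # 5
```

Discarding the small eigenvalues removes the directions in which the covariance estimate is least reliable, at the cost of making the matrix singular, hence the pseudoinverse.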

While (10) uses the classical neural activity index based on an estimate of the signal's variance, a similar calculation based on (14) will produce an estimate of the *sparsity* as a function of the position^{30}. Such an estimate, here denoted by Ŝp(*r*), is obtained by replacing the covariance estimate in (10) with its reduced-rank counterpart from (14), while a similar sparsity measure Ŝp_{n}(*r*) can be defined for the case of the noise by using the noise covariance *Q* instead of *R*. The location of the neural source is then estimated as

*r̂*_{s} = arg min_{*r*} Ŝp(*r*)/Ŝp_{n}(*r*),   (17)

which is equivalent to minimizing the source's sparsity (normalized by the sparsity of the noise) as a function of *r*. Hence, the ratio in (17) is referred to as the *neural sparsity index*.

Going back to the properties of the "classical" index in (12), they have been thoroughly investigated in^{27} and derived works. Its main drawback has been found to be its sensitivity to correlated-source cancellation and its poor performance under low SNR conditions. To circumvent this difficulty, a multi-source extension has been recently proposed in^{14}. Namely, the following multi-source activity index (MAI) has been proposed for the case of *l* neural sources:

MAI = tr{*S*^{-1}*G*} - *l*,   (18)

where

*S* = *A*^{T}*R*^{-1}*A*,   (19)

and

*G* = *A*^{T}*Q*^{-1}*A*,   (20)

with *A* the composite *M* x 3*l* array response matrix of the *l* sources.
The applicability of the MAI has been demonstrated in^{14}. Nevertheless, it shall be noted that small changes in an ill-conditioned matrix can produce large changes in its inverse^{5}. Furthermore, ill-conditioning can also be the result of using the estimates *R̂* and *Q̂* in place of *R* and *Q*, respectively, which is a common practice in source localization techniques based on EEG/MEG recordings. In order to alleviate the aforementioned shortcomings of the multi-source activity index defined in (18), a reduced-rank extension has been introduced in^{16} as follows:

where *p* is a natural number defining the rank of the approximation, *l* is the unknown number of concurrently
active sources, and the approximation retains the *p* eigenvectors corresponding to the largest eigenvalues of the matrix involved^{17}^{,}^{29}. Based on that, another
reduced-rank activity index can be defined as^{16}

where, again, the *p* eigenvectors corresponding to the largest eigenvalues of the matrix involved are retained.
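Assuming the MAI takes the trace form tr{S⁻¹G} − l of the multi-source index in^{14}, its evaluation at a candidate source configuration reduces to a few matrix products. The sketch below uses synthetic data and a random stand-in for the composite array response:

```python
import numpy as np

rng = np.random.default_rng(4)
M, l, N = 80, 2, 4000
A = rng.standard_normal((M, 3 * l))   # stand-in composite response of l dipoles

S_src = rng.standard_normal((3 * l, N))
R = np.cov(A @ S_src + rng.standard_normal((M, N)))   # data covariance
Q = np.cov(rng.standard_normal((M, N)))               # noise covariance

S = A.T @ np.linalg.inv(R) @ A              # eq. (19)
G = A.T @ np.linalg.inv(Q) @ A              # eq. (20)
mai = np.trace(np.linalg.solve(S, G)) - l   # eq. (18): tr{S^-1 G} - l
print(mai > 0.0)                            # positive when the l sources are active
```

Using `np.linalg.solve(S, G)` instead of forming `inv(S)` explicitly is the numerically preferable way to evaluate the trace term.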

3 Numerical Examples

In this section, the applicability of the neural activity indexes in (12), (17), (18), and (22) is shown through numerical examples using real MEG data corresponding to measurements of visual evoked responses. The goal of these experiments is to show the use of those spatial filters in finding the location of neural sources from MEG data.

The data used for these experiments are available at the MEG-SIM portal, a repository that contains an extensive series of real and simulated MEG measurements freely available for testing purposes^{1}. The data were acquired at a sampling rate of 1200 Hz and band-pass filtered between 0.5 and 40 Hz with a sixth-order Butterworth filter. The data correspond to the response of Subject #1 to visual patterns of two sizes (1.0 and 5.0 degrees of visual angle), which were presented at 3.8 degrees of eccentricity in the right and left visual fields, respectively. The small visual pattern was designed to activate a focal region of the primary visual cortex.

The MEG measurements were obtained with an array of *M* = 275 channels with the spatial distribution of the VSM MedTech MEG system considered at the MEG-SIM portal, where the anatomical MRI data of the subject are provided as well. Hence, a realistic head model can be created by first segmenting the MRI images with BrainVISA^{3} and then generating tessellated meshes from the segmented volumes with Brainstorm^{23}. If a more refined and specific segmentation of brain structures is required as an aid in source localization, brain atlases may be used to find homologous points or structures^{11}. Here, the head model was composed of three tessellated meshes nested one inside the other in order to approximate the geometry of the scalp, skull, and brain, which were given homogeneous conductivities of 0.33, 0.0041, and 0.33 S/m, respectively. In particular, the volume corresponding to the brain was constructed with 11520 triangles. A full rendering of the head model and the position of the magnetometers is shown in Figure 1.

Based on those conditions, the beamformers were evaluated at the positions *r* corresponding to the centroids of the triangles comprising the brain mesh. In both cases, the array response matrix *A* was calculated using the computer implementation provided in^{22}, which corresponds to a solution of the quasi-static approximation of Maxwell's equations based on the boundary element method (BEM). BEM is a numerical method for solving partial differential equations (in this case, Maxwell's equations) with the advantage of reformulating them as discrete integral equations that are then solved on simple geometrical elements of a boundary mesh^{12}. The data covariance matrix was estimated from the measurements, and *l* = 1 active source was assumed in the calculation of MAI and RRMAI_{T2}.

Figures 2 and 3 show the results of computing the index values for each of the beamformers. Given that the magnitudes of the indexes are very different, we decided to compare them in terms of their distributions. Therefore, we show the histogram of each index, where the red bars indicate the percentiles that had to be displayed so that the positions *r* with the most significant index values (the minimum for the sparsity-based index and the maximum for the others) had an anatomical correspondence with the expected position of the neural source. Note that most of the positions indicated with red dots on the surface of the brain mesh coincide with a neural activation located at the primary visual cortex, but different portions of the respective distributions of the indexes had to be accounted for in order to achieve such correspondence. In all cases, we show the mesh modeling the head in an orientation such that the occipital lobe is fully seen from the back of the head.
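The percentile-based selection of significant positions can be sketched as follows; the synthetic index values and the 99th-percentile threshold are arbitrary illustrations of the procedure:

```python
import numpy as np

rng = np.random.default_rng(6)
index = rng.standard_normal(11520)   # one index value per brain-mesh triangle

# Mark positions whose index falls in the upper tail of the distribution
thresh = np.percentile(index, 99.0)
significant = np.flatnonzero(index >= thresh)   # candidate source positions
print(significant.size)   # about 1% of the 11520 triangles
```

For the sparsity-based index the selection would instead use the lower tail (`index <= np.percentile(index, 1.0)`), since its most significant values are minima.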

In the case of a large visual stimulus presented to the left visual field (Figure 2), both MAI and RRMAI_{T2} provide very good results in terms of focalization of the source, but RRMAI_{T2} outperforms MAI as its most significant index values correspond to the tail of the distribution. The classical beamformer also achieves good results in those terms, but the position of the estimated source location is biased. Finally, the sparsity-based index is accurate in detecting the region with the least sparse sources (i.e., those more likely to be related with the stimulus), but fails in terms of focalization. Clearly, an extreme-value distribution would be the most desirable outcome of the index calculation process, but the sparsity-based index tends to be better described by a Gaussian distribution.

For a small visual stimulus presented to the right visual field (Figure 3), we obtained results similar to those previously described in terms of the shape of the distributions. However, in this case RRMAI_{T2} fails to estimate the source location (an ipsilateral patch is detected instead). This can be credited to the fact that we maintained the same value of *p* in (22) for all our calculations, while it is well known that this parameter needs to be adjusted on a case-by-case basis (see^{16} for a full account of that issue).

In terms of computational cost, the calculation of each of the indices tested here was fully implemented in Matlab^{®}, and the computer used was an HP ProLiant ML110 G7 server with a Xeon E3-1220 processor running at 3.1 GHz and with 6 GB of RAM. Under those conditions, computing the sparsity neural index (which is the most mathematically complex of the four) took 42.3095 minutes. Nevertheless, the distance between two neighboring candidate solutions (i.e., the distance between the centroids of two triangles sharing a side) was 4.7 millimeters, which is much larger than the error allowed in source localization for clinical applications, such as neurosurgery, where sources must be located with a precision of at least 1 mm. Hence, for clinical applications, a much denser brain mesh (i.e., more triangles in the tessellation) must be used. Still, such an increase in computing complexity can be handled through many different types of hardware (e.g., graphics processing units) and through different algorithms to implement the beamformer (see, e.g.,^{4}). In fact, thanks to the increase in computer power, beamforming has reemerged as a suitable technique for the analysis of brain activity.

4 Conclusions

The use of spatial filters in the solution of the neuroelectric inverse problem involves very complex mathematical calculations. However, it is possible to manage such calculations with today’s computer power. Furthermore, new spatial filters, such as those based on eigenspace projections, can be used to improve the classical LCMV solution originally proposed in^{27}.

Here, different indices of neural activity (all based on beamforming) were compared in terms of their ability to provide a focalized and anatomically correct estimation of a neural evoked response. Therefore, we looked for a beamformer that generates significant index values (i.e., at the tail of the distribution) with an accurate correspondence to the expected location of the neural activity (the primary visual cortex in this case). However, the methods analyzed here showed little consistency; that is, no single method always provided good results for the same type of data.

Nevertheless, we do not expect to find a spatial filter that performs well for all types of data. For example, in the case of the sparsity-based index, we believe it did not provide good results for the evoked responses tested here because it is better suited for data with low SNR (i.e., with larger sparsity). Another example is the MAI, which is known to perform better when correlated sources are involved and which can be computed within a region of interest (ROI) in order to provide a focalized estimation. Therefore, it is necessary to continue developing new indexes based on more consistent estimates, as well as to characterize the response of different beamformers for different types of data.

Finally, it is worth mentioning that the methods tested here are not mutually exclusive, and information obtained from a combination of methods may improve the overall result. An example of such an approach has already been presented in a preliminary version of this paper (see^{6}), in which the solutions of the classical neural activity index and the sparsity-based index were combined in order to improve the focalization in the estimation of auditory evoked responses. Therefore, we believe new brain source localization techniques may benefit from hybrid approaches that take advantage of complementary information. Such complementarity could be further extended to the joint analysis of electroencephalographic (EEG) and MEG data. While only MEG data were available for this paper, new acquisition systems allow for the simultaneous measurement of EEG and MEG.