
as ERP timepoints could not be used instead. With this formulation, the linear regression function is given by

f_s(X; w_s, a_s) = a_s^T X w_s,

where X ∈ ℝ^{E×F} denotes the matrix of features for each channel for a given trial. This causes the number of parameters in the decoding model to be reduced from EF to E + F.
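To make the decomposition concrete, the short NumPy sketch below shows the bilinear decoding function and the resulting parameter count; the dimensions E = 32 and F = 10 are illustrative values only, not taken from the experiments, and the code is a toy example rather than the reference implementation linked later in this article.

import numpy as np

# Illustrative sizes only: E electrodes, F spectral features per electrode.
E, F = 32, 10
rng = np.random.default_rng(0)

X = rng.standard_normal((E, F))   # one trial: E x F feature matrix
w = rng.standard_normal(F)        # weights over spectral features
a = rng.standard_normal(E)        # weights over electrodes

# Bilinear decoder a^T X w in place of a full E*F weight matrix.
y_hat = a @ X @ w

# The prediction can be grouped either around w or around a, which is what
# allows the two weight vectors to be updated in alternation later on.
assert np.isclose(y_hat, (X.T @ a) @ w)   # features filtered spatially by a
assert np.isclose(y_hat, a @ (X @ w))     # electrodes aggregated by w

print("parameters, full model:", E * F)        # 320
print("parameters, decomposed model:", E + F)  # 42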

The new optimization problem is now over W, A, μ_w, μ_a, Σ_w, and Σ_a, where A = [a_1, …, a_S]^T. However, it can easily be seen that a^T X w = a^T X̃, where X̃ = Xw, and thus that y, instead of being a function of the features, can now be considered a function of the aggregated features for each electrode. As this formulation assumes that a and w are independent, the prior over model parameters can be incorporated as the product of independent priors for both w and a. As such, the same arguments used to define a prior of w can be applied to a to define a new distribution for y_is and a new optimization problem (for readability we define the parameters of the Gaussian priors over w and a as θ_w and θ_a, respectively):

p(y_is | X_is, w_s, a_s, θ_w, θ_a, λ) ∝ 𝒩(y_is; a_s^T X_is w_s, λ) 𝒩(w_s; θ_w) 𝒩(a_s; θ_a)    (13)

LP(W, A, θ_w, θ_a | D, λ) = (1/λ) ∑_s ∑_i (a_s^T X_is w_s − y_is)² + ∑_s 𝒳(w_s; μ_w, Σ_w) + ∑_s 𝒳(a_s; μ_a, Σ_a) + C    (14)
where again, 𝒳(x; μ, Σ) is the negative log prior probability of the vector x given the Gaussian distribution parametrized by (μ, Σ). It is easy to see that w and a function identically except for a transpose. The updates for the weights over the features and the channels are linked, so we first iterate until convergence within each subject/session before continuing on to update the prior parameters, which leads to Algorithm 2.
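Algorithm 2 itself is not reproduced here; as a rough illustration of the inner loop the text describes, the NumPy sketch below alternates MAP updates of w_s and a_s for a single subject, assuming each conditional step is the standard MAP solution of a linear-Gaussian model with Gaussian prior (μ, Σ) and noise variance λ. The function and variable names are our own illustrative choices, not the article's.

import numpy as np

def map_linear(Z, y, mu, Sigma, lam):
    # MAP estimate of v for y ~ N(Z v, lam) with prior v ~ N(mu, Sigma):
    # v = (Sigma^-1 + Z^T Z / lam)^-1 (Sigma^-1 mu + Z^T y / lam)
    A = np.linalg.inv(Sigma) + (Z.T @ Z) / lam
    b = np.linalg.solve(Sigma, mu) + (Z.T @ y) / lam
    return np.linalg.solve(A, b)

def inner_loop(X_trials, y, mu_w, Sigma_w, mu_a, Sigma_a, lam,
               n_iter=100, tol=1e-6):
    # Alternate updates of w and a for one subject/session until convergence.
    w, a = mu_w.copy(), mu_a.copy()
    for _ in range(n_iter):
        w_old, a_old = w.copy(), a.copy()
        # With a fixed, each trial reduces to the feature vector X^T a ...
        Zw = np.stack([X.T @ a for X in X_trials])
        w = map_linear(Zw, y, mu_w, Sigma_w, lam)
        # ... and with w fixed, to the per-electrode aggregate X w.
        Za = np.stack([X @ w for X in X_trials])
        a = map_linear(Za, y, mu_a, Sigma_a, lam)
        if max(np.abs(w - w_old).max(), np.abs(a - a_old).max()) < tol:
            break
    return w, a

# Tiny synthetic usage with illustrative sizes.
rng = np.random.default_rng(0)
E, F, n = 8, 5, 40
X_trials = [rng.standard_normal((E, F)) for _ in range(n)]
y = rng.standard_normal(n)
w, a = inner_loop(X_trials, y, np.zeros(F), np.eye(F),
                  np.zeros(E), np.eye(E), lam=1.0)

In Algorithm 2 this subject-level loop sits inside an outer loop that updates the prior parameters across subjects, as described above.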
This reduces the size of the feature space from EF to E + F, which simplifies learning the regression parameters and also reduces computation time. The more degrees of freedom, the more data a model requires to find a good fit, so by reducing the number of parameters we also reduce the number of necessary training trials. Also, for a model with EF parameters, the matrix inversion necessary to compute a decision rule is O(E³F³), whereas for a model with E + F parameters it is O((E + F)³). We also note that the initialization of Algorithm 2 is non-informative. Our experiments have suggested that the alternative method shown in Algorithm 3 works more effectively in some cases.

2.5. Adaptation to Novel Subjects

In Section 2, we outlined a simple yet effective approach to infer the subject-invariant BCI model, given by learning the parameters of a Gaussian distribution over the weights. This model can be used successfully on novel subjects immediately via f(x; θ) = μ^T x in the case of regular linear regression, or f(X; θ_w, θ_a) = μ_a^T X μ_w in the case of feature decomposition, though depending on the covariance of the learned priors this can result in poor performance. It is possible to further improve the performance of this model by adapting to the subject as more subject-specific data becomes available, by simply using the learned priors and considering the problem independently as discussed in Section 2.2. The standard regression case is discussed there; for the feature decomposition method, we consider n trials X_i, where each X_i ∈ ℝ^{E×F} is a matrix with columns denoting features and rows denoting electrodes. In this setting the update equations are identical to the inner loop of Algorithm 2. We emphasize that w and a are linked, so the update steps must be iterated until convergence. The parameter λ is determined in practice through cross-validation over the training data.
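As an illustration of this adaptation scheme, the sketch below reuses the hypothetical inner_loop function from the previous sketch: before any subject-specific data is available, the model predicts with the prior means alone; once trials arrive, λ is chosen by cross-validation and w and a are refit under the learned priors. The grid of λ values, the fold count, and all names are illustrative assumptions, not values from the article.

import numpy as np

def predict(X, w, a):
    # Feature-decomposition decoder a^T X w; with w = mu_w and a = mu_a this
    # is the zero-shot prediction f(X; theta_w, theta_a) = mu_a^T X mu_w.
    return a @ X @ w

def adapt_to_new_subject(X_trials, y, mu_w, Sigma_w, mu_a, Sigma_a,
                         lam_grid=(0.1, 1.0, 10.0), n_folds=5):
    # X_trials: list of E x F arrays, y: numpy array of n labels.
    # Choose lam by cross-validation on the subject's training data, then
    # refit w and a with the priors learned from the other subjects.
    n = len(y)
    folds = np.array_split(np.arange(n), n_folds)
    errors = []
    for lam in lam_grid:
        err = 0.0
        for held_out in folds:
            train = np.setdiff1d(np.arange(n), held_out)
            w, a = inner_loop([X_trials[i] for i in train], y[train],
                              mu_w, Sigma_w, mu_a, Sigma_a, lam)
            err += sum((predict(X_trials[i], w, a) - y[i]) ** 2
                       for i in held_out)
        errors.append(err)
    best_lam = lam_grid[int(np.argmin(errors))]
    return inner_loop(X_trials, y, mu_w, Sigma_w, mu_a, Sigma_a, best_lam)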


Online Resource for Multitask Learning
Supplementary materials, appendix, and MATLAB and Python implementations of all three algorithms described here can be found at http://brain-computer-interfaces.net/.
3. Experiments

We conducted two experiments with real-world data sets. The first used both the initial multitask learning algorithm and the version with decomposition of spectral and spatial features, while the second used only the version with feature decomposition (hereafter referred to as FD). The first is an example of subject-to-subject transfer with a motor imagery dataset recorded from ten healthy subjects, and the second is an example of session-to-session transfer for a neurofeedback paradigm recorded in a single subject with ALS.
3.1. Subject-to-Subject Transfer

As an initial test of this algorithm, we considered how it performs on the most common paradigm in spectral BCIs: motor
imagery. Specifically, subjects were placed in front of a screen
with a centrally displayed fixation cross. Each trial started with
a pause of three seconds. A centrally displayed arrow then
instructed subjects to initiate haptic motor imagery of either
the left or right hand, as indicated by the arrow's direction.
After a further seven seconds the arrow was removed from the
screen, marking the end of the trial and informing subjects to
cease motor imagery.

