
regression and was shown to outperform a non-adaptive classifier on the
BCI Competition III data set [57].
2) Two-Classifier Co-Training Approach
Panicker et al. [18] extended the idea of naïve labeling by using two classifiers, Fisher's linear discriminant analysis and Bayesian linear discriminant analysis, which co-train each other. To do so, both
classifiers are first initialized on a labeled
training data set. Then both classifiers
determine the labels for a chunk of
unseen and unlabeled data points. These
points with corresponding estimated
labels are then added to the current
training data set of the other classifier and
both classifiers are retrained. This procedure is repeated until convergence or
until the improvements (measured by a
confidence score) are minimal. The
authors evaluated this approach using an
offline visual ERP speller study with
data from five healthy subjects. For this
relatively small number of subjects, it was
found that the co-training approach outperforms the naïve labeling strategy of a single classifier in most situations; however, runaway errors may still occur.
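A minimal sketch of this co-training loop is shown below, using two scikit-learn classifiers as stand-ins for the FLDA/BLDA pair of [18]. The chunk size and the omission of the confidence-based stopping criterion are simplifications chosen here for brevity, not the original settings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

def co_train(X_lab, y_lab, X_unlab, chunk_size=50, max_iter=20):
    """Two classifiers label chunks of unseen data for each other and are
    retrained on their enlarged training sets (confidence-based stopping
    as in [18] is omitted for brevity)."""
    # Stand-ins for the FLDA / Bayesian LDA pair used in [18].
    clf_a = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    clf_b = LogisticRegression(max_iter=1000)

    # Each classifier keeps its own growing training set,
    # initialized with the labeled calibration data.
    data = {clf_a: (X_lab.copy(), y_lab.copy()),
            clf_b: (X_lab.copy(), y_lab.copy())}
    for clf, (X, y) in data.items():
        clf.fit(X, y)

    for start in range(0, min(len(X_unlab), chunk_size * max_iter), chunk_size):
        chunk = X_unlab[start:start + chunk_size]
        # Each classifier estimates labels for the chunk; the pseudo-labeled
        # points are added to the *other* classifier's training set.
        for labeler, learner in ((clf_a, clf_b), (clf_b, clf_a)):
            pseudo = labeler.predict(chunk)
            X_old, y_old = data[learner]
            data[learner] = (np.vstack([X_old, chunk]),
                             np.concatenate([y_old, pseudo]))
            learner.fit(*data[learner])
    return clf_a, clf_b
```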
3) Pooled Mean and Covariance Adaptation
Vidaurre et al. [20] suggested an unsupervised adaptation method of a linear
discriminant analysis (LDA) classifier.
LDA assumes a class-wise normal distribution with class means ($\mu_1$, $\mu_2$) and a shared covariance matrix $\Sigma_c$. It finds a linear hyperplane defined by the orthogonal vector $w$ by computing [58]:

$$ w = \Sigma_c^{-1} (\mu_2 - \mu_1) \qquad (1) $$

Empirical data shows that the assumptions made by the LDA are closely met
by ERP data [58] and hence, LDA is a
widely used and competitive classifier in
BCIs [58], [59]. One can show that
replacing the shared covariance matrix $\Sigma_c$ by the global covariance matrix $\Sigma$, which disregards label information, leads to the same direction of $w$ given correctly recovered class means: the global covariance equals $\Sigma_c$ plus a rank-one term in the class-mean difference, so by the Sherman-Morrison formula its inverse merely rescales $\Sigma_c^{-1}(\mu_2 - \mu_1)$. Technically, this can be understood as a least squares classifier with re-scaled outputs [60]. For that reason, Vidaurre et al. proposed an adaptation scheme which adapts either only the common class means or both the class means and the global covariance matrix in an unsupervised fashion. This
approach was shown to outperform a
fixed supervised classifier on motor
imagery data both in simulations and
online. It can readily be applied to ERP
data as demonstrated in [21].
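To make Eq. (1) and the label-free update concrete, the sketch below first trains an LDA in the usual supervised way and then adapts its pooled mean and global covariance from unlabeled feature vectors, roughly in the spirit of [20]. The exponential forgetting with rate eta, the bias placed at the pooled mean, and all variable names are illustrative assumptions rather than the exact update rules of Vidaurre et al.

```python
import numpy as np

def train_lda(X, y):
    """Supervised LDA following Eq. (1). X: (n_trials, n_features), y in {0, 1}."""
    mu1, mu2 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    # Pooled (shared) within-class covariance Sigma_c.
    centered = np.vstack([X[y == 0] - mu1, X[y == 1] - mu2])
    sigma_c = np.cov(centered, rowvar=False)
    w = np.linalg.solve(sigma_c, mu2 - mu1)     # w = Sigma_c^{-1} (mu_2 - mu_1)
    return w, mu1, mu2

class AdaptiveLDA:
    """Unsupervised adaptation of the pooled mean and the global (label-free)
    covariance; the class-mean difference is kept fixed from calibration."""

    def __init__(self, mu1, mu2, sigma_global, eta=0.05):
        self.delta = mu2 - mu1                  # fixed class-mean difference
        self.mu = 0.5 * (mu1 + mu2)             # pooled mean, adapted without labels
        self.sigma = sigma_global               # global covariance of all data
        self.eta = eta                          # forgetting factor (illustrative)
        self.w = np.linalg.solve(self.sigma, self.delta)

    def update(self, x):
        """Update with one new, unlabeled feature vector x."""
        self.mu = (1 - self.eta) * self.mu + self.eta * x
        d = (x - self.mu)[:, None]
        self.sigma = (1 - self.eta) * self.sigma + self.eta * (d @ d.T)
        # Using the global instead of the pooled covariance only rescales w,
        # so the decision direction remains valid.
        self.w = np.linalg.solve(self.sigma, self.delta)

    def decision(self, x):
        return self.w @ (x - self.mu)           # > 0: class 2, < 0: class 1
```

Here, `sigma_global` would be initialized with `np.cov(X, rowvar=False)` on the calibration data; adapting only the pooled mean and leaving the covariance fixed corresponds to the mean-only variant, and neither variant needs labels during operation.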
4) Adaptation Based on Error-Related Potentials
When the user perceives a mistake, e.g., when an incorrectly spelled letter was shown to the user, a time-locked error-related potential (ErrP) can be observed. These ErrPs can be decoded with an accuracy of around 80% [61]-[63] and, depending on the application, may be useful to automatically correct detected errors [64]. In an approach initially proposed for code-modulated visual evoked potentials, Spüler et al. [65] suggested ignoring the data if an ErrP is detected after showing the predicted character, since the true class label is unknown and the estimated class label is suspected to be wrong. Other
groups used ErrPs to adapt the policy of
a virtual or real robot in order to achieve
a certain goal [33]-[38]. In Section
II-B1, we review an approach that can
jointly learn to decode ErrPs and to
adapt its policy to control a device.
Recently, Zeyl et al. [19] compared an
adaptation of the decoder based on (a)
ErrPs, (b) a naïve-labeling approach
based on target confidence and (c) a
hybrid approach which combines (a) and
(b) in a visual ERP speller. The problem
with exploiting ErrPs in the context of
the classical visual ERP speller is that
feedback signals are only shown at the
end of each trial, and hence, ErrPs are
harvested rarely compared to the number of presented stimulus events. To alleviate this mismatch, Zeyl and colleagues
proposed to show both the row and the
column selection as two separate decisions to the user to collect ErrPs more
frequently. Interestingly, an offline analysis and a simulated online experiment
with 11 healthy subjects showed that the
naïve-labeling approach performed best,
with the hybrid approach close behind and the pure ErrP approach significantly
worse. This indicates that additional
information from the ErrPs could not contribute to improving the adaptation in this specific experimental scenario.
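To make the three adaptation policies compared in [19] explicit, the fragment below sketches how a finished selection could be turned into a training label, or discarded. The mode names, the confidence threshold, and the overall decision logic are hypothetical simplifications for illustration, not the exact rules used by Zeyl et al.

```python
def select_training_label(predicted_label, confidence, errp_detected, mode,
                          conf_threshold=0.9):
    """Decide which (pseudo-)label, if any, a finished selection contributes
    to decoder adaptation; returns a label or None (trial discarded).
    'mode' is one of 'naive', 'errp', 'hybrid'; the threshold is illustrative."""
    if mode == "naive":
        # Naive labeling by target confidence: keep confident predictions only.
        return predicted_label if confidence >= conf_threshold else None
    if mode == "errp":
        # ErrP-based: discard the trial when an error potential follows the
        # feedback, since the true label is unknown and the prediction is suspect.
        return None if errp_detected else predicted_label
    if mode == "hybrid":
        # Hybrid: require both a confident prediction and the absence of an ErrP.
        if errp_detected or confidence < conf_threshold:
            return None
        return predicted_label
    raise ValueError(f"unknown adaptation mode: {mode}")
```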
5) Alternately Training a Spatial Filter and Riemannian Classifier
Barachant and colleagues proposed an information-theoretic framework which allows measuring distances between trial covariance matrices based on concepts of Riemannian geometry [66]. This representation and the Riemannian distance have the advantage of being invariant under affine transformations, which would not be the case in the original Euclidean space. Supervised classifiers operating on Riemannian distances have been successfully applied to ERP signals [55]. Although
mentioned as an option, unsupervised
adaptation was not implemented in their
work on EEG-based ERP data [55], but it
was implemented successfully on magnetoencephalography (MEG) data by Bolagh and colleagues from the same group [22]. Again, the premise is that labeled historic data from earlier subjects is available, which is used to obtain an initial estimate of the labels of the novel unlabeled data.
An iterative two-step procedure for
estimating these labels is at the core of
their approach. It makes use of a widely used spatial filtering method, Common
Spatial Patterns (CSP) [67]. As this algorithm requires labels, which are not available in an unsupervised adaptation
approach, the current label estimates are
used in every iteration of the procedure.
The first step replaces the original trials by new "super trials". These are
formed by CSP-filtered original trials,
enlarged by the two CSP-filtered class
means. Super trials are then used to calculate the so-called feature covariance
matrices (one per trial). The second step
takes place in Riemannian space, where
distances between these novel feature
covariance matrices and mean covariance
matrices can be computed. A Riemannian classifier based on labels of the last
iteration (or on labels of historic data in
case of the first iteration) is used to update
the label estimate of each trial. These two
steps are repeated until convergence.
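A compressed sketch of this two-step iteration is given below. For readability it uses a hand-rolled CSP and log-Euclidean (rather than affine-invariant Riemannian) means and distances, so it should be read as an illustration of the control flow in [22], not as a faithful re-implementation; `y_init` stands for the initial label estimate obtained from a classifier trained on historic data, and both classes are assumed to stay populated across iterations.

```python
import numpy as np
from scipy.linalg import eigh

def _logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(C)
    return (vecs * np.log(np.maximum(vals, 1e-12))) @ vecs.T

def csp(X, y, n_comp=4):
    """Minimal CSP. X: (trials, channels, samples), y: binary labels in {0, 1}."""
    covs = [np.mean([t @ t.T / t.shape[1] for t in X[y == c]], axis=0) for c in (0, 1)]
    vals, vecs = eigh(covs[0], covs[0] + covs[1])       # generalized eigenvalue problem
    picks = np.r_[np.arange(n_comp // 2),
                  np.arange(len(vals) - n_comp // 2, len(vals))]
    return vecs[:, picks].T                             # spatial filters, (n_comp, channels)

def super_trial_cov(trial, filters, class_means, ridge=1e-8):
    """Covariance of a 'super trial': the CSP-filtered trial stacked with
    the two CSP-filtered class-mean ERPs."""
    stacked = np.vstack([filters @ class_means[0],
                         filters @ class_means[1],
                         filters @ trial])
    C = np.cov(stacked)
    return C + ridge * np.eye(C.shape[0])               # keep it positive definite

def iterate_labels(X, y_init, n_iter=10):
    """Alternate between (1) building super-trial covariances from CSP filters
    fitted with the current label estimates and (2) re-labeling every trial by
    its distance to the class-mean covariances (log-Euclidean approximation)."""
    y = y_init.copy()
    for _ in range(n_iter):
        filters = csp(X, y)
        class_means = [X[y == c].mean(axis=0) for c in (0, 1)]
        logs = np.array([_logm_spd(super_trial_cov(t, filters, class_means)) for t in X])
        centers = [logs[y == c].mean(axis=0) for c in (0, 1)]   # log-Euclidean class means
        y_new = np.array([np.argmin([np.linalg.norm(L - M, "fro") for M in centers])
                          for L in logs])
        if np.array_equal(y_new, y):                    # converged
            break
        y = y_new
    return y
```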
