
is imposed, or the achievable training set error is far from the optimum. In regression, the user has no control over the correlation among the bases (it is dictated by the data source), and in MLPs this control is likewise nonexistent, because backpropagation is blind to how large the weights become. Therefore, regularized cost functions are the widespread solution [1].
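As a concrete illustration, the snippet below sketches the most common such regularizer, ridge (L2-penalized) least squares; numpy and the penalty weight lam are our own illustrative choices, not prescriptions from [1].

```python
import numpy as np

def ridge_fit(Phi, d, lam=1e-2):
    """Regularized least squares: minimizes ||Phi w - d||^2 + lam ||w||^2.
    Phi: (N, K) matrix of basis outputs; d: (N,) desired response."""
    K = Phi.shape[1]
    # Closed-form solution: w = (Phi^T Phi + lam I)^{-1} Phi^T d;
    # the penalty keeps the weight norm under control.
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(K), Phi.T @ d)
```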
CULMs with Random Projections
Echo State Networks (ESN)

The first attempts to design CULMs appeared in the 1990s but never took off [18], [34], [44]. Jaeger [22] simplified the hidden layer training with the ESN, as did Maass with the Liquid State Machine (LSM) [23]; both approaches are commonly referred to as reservoir computing [24]. Both are recurrent networks, one (ESN) using continuous-valued inputs and the other (LSM) using spike train inputs. The inputs of these learning machines are functions (e.g. time series) instead of static data as in most applications of MLPs. Both have a hidden layer of parameterized recurrent connections, and both use a sparse, fixed first layer and fixed feedback parameters in the hidden layer, obtained by random number selection to simplify the training. The only adaptation occurs in the output weights. We will concentrate here on the analysis of the ESN (Fig. 4), but a similar discussion could be developed for LSMs.
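To make this architecture concrete, here is a minimal sketch of an ESN-style reservoir in Python (our own illustration, not code from [22] or [23]); it assumes tanh PEs, and all names and constants are hypothetical. The first layer and the recurrent weights are drawn once from an RNG and frozen; only the read-out is trained, as discussed next. Stability of the recurrent weights is addressed below.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # illustrative seed

def make_reservoir(n_in, n_res, sparsity=0.1):
    """Fixed random first layer (W_in) and sparse random recurrent weights (W)."""
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W[rng.random((n_res, n_res)) > sparsity] = 0.0  # sparse connectivity
    return W_in, W

def run_reservoir(W_in, W, u):
    """State update x(n) = tanh(W x(n-1) + W_in u(n)) over an input series u."""
    x = np.zeros(W.shape[0])
    states = []
    for u_n in u:                        # u has shape (T, n_in)
        x = np.tanh(W @ x + W_in @ u_n)  # fixed weights, no training here
        states.append(x.copy())
    return np.asarray(states)            # shape (T, n_res)
```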
The framework established in the
previous section is useful to understand
the ESN because the mapper is still a
recurrent MLP. The outputs of the
hidden PEs can still be interpreted as the bases of the projection space, while the optimal projection, controlled by the trained output weights, can be found using least squares or gradient descent learning. The difference is that the location of the bases is no longer a function of the desired response. This strategy avoids training the first layer weights, but it brings a new problem: the selection of the basis functions, i.e. the fixed recurrent connections. The proponent of ESNs [22] suggests using random weights and a high number of hidden PEs to create a "large" projection space. One can understand these choices because the ESN is still a recurrent MLP (RMLP), so the higher the projection space dimensionality, the better the approximation ability, because the RMLP is closer to a universal mapper. Since the hidden parameters are obtained as realizations of a random number generator (RNG) that defines the placement of the basis vectors, the problems are potential instability caused by the feedback, and the correlation among the basis vectors, i.e. not all reservoirs that span a space of the same dimensionality (i.e. with the same number of hidden PEs) are equally good for learning.
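Because the bases are fixed, finding the read-out reduces to a convex least-squares problem over the collected reservoir states. Below is a minimal sketch using the reservoir sketched above; the washout that discards the initial transient is our own illustrative choice.

```python
import numpy as np

def train_readout(states, d, washout=100):
    """Least-squares read-out: the bases (reservoir states) are fixed,
    so this optimization is convex with a unique global optimum.
    states: (T, n_res) reservoir trajectory; d: (T,) or (T, n_out) targets."""
    Z, D = states[washout:], d[washout:]   # drop the initial transient
    w_out, *_ = np.linalg.lstsq(Z, D, rcond=None)
    return w_out                            # prediction: y = states @ w_out
```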
Regarding stability, if we choose the free parameters of the RNG arbitrarily, the reservoir dynamics can diverge, and the reservoir outputs become saturated and independent of the input values. Therefore the largest absolute eigenvalue (the spectral radius) of the recurrent connections' weight matrix must be smaller than 1 (the echo state condition).
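In practice this condition is commonly enforced by rescaling the randomly drawn recurrent matrix, as in the sketch below; the target value rho = 0.9 is an illustrative choice.

```python
import numpy as np

def scale_to_echo_state(W, rho=0.9):
    """Rescale W so its spectral radius (largest absolute eigenvalue)
    equals rho < 1, satisfying the echo state condition."""
    return W * (rho / np.max(np.abs(np.linalg.eigvals(W))))
```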

Experience also shows that the performance of the reservoir varies with the spectral radius of the recurrent weight matrix. This is a little more difficult to understand, but it has to do with two things: the memory depth associated with the basis functions, and the "richness" of the basis functions, i.e. how different they are in time. RMLPs are myopic mappers [25], and as such their finite memory affects the quality of the approximation. The pole locations of the linearized system control the memory depth, as electrical engineers will recognize, because the real part of a pole controls the decay time of the impulse response.
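This argument can be made concrete for the discrete-time linearization x(n+1) ≈ W x(n): each mode with eigenvalue lam decays as |lam|^n, so its memory time constant is roughly -1/ln|lam| samples (in discrete time the pole magnitude plays the role that the real part plays in continuous time). A minimal sketch of this bookkeeping, our own illustration:

```python
import numpy as np

def memory_time_constants(W):
    """Decay times (in samples) of the linearized modes x(n+1) = W x(n):
    a mode with eigenvalue lam decays as |lam|**n, i.e. tau = -1/ln|lam|."""
    mags = np.abs(np.linalg.eigvals(W))
    mags = mags[(mags > 0) & (mags < 1)]  # keep stable, nonzero modes
    return -1.0 / np.log(mags)            # larger spectral radius -> deeper memory
```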
Moreover, when one looks at the response of the hidden layer PEs when an input is applied, we see that even for nonlinear PEs their correlation over time is large, and the more the PEs saturate, the less correlated their time responses become. Both of these aspects are controlled by the norm of the recurrent weights, which is in fact a user-defined free parameter. Therefore different reservoirs provide different generalization, as we explained in the previous section. The proponents also suggest sparsifying the connectivity of the feedback layer weights, which is a second free parameter that the user has to select [22]. In spite of these three aspects, when the parameters are "well tuned for the problem", the ESN works very well for function approximation. This was an eye-opener, because it was effectively the first instance of a dynamic learning machine that was a CULM, i.e. at once a universal function approximator whose free trainable parameters (the projection in the space) are determined via a convex optimization algorithm. However, the ESN was not studied from a statistical learning point of view. We proposed an information theoretic metric to describe the richness of the ESN dynamics, called the average state entropy (ASE), and an adaptive bias that avoids the weight norm normalization and effectively eliminates one of the free parameters in the design [26].
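To convey the flavor of such a metric, the sketch below computes a time-averaged entropy of the instantaneous reservoir state from a Parzen-based estimate of Renyi's quadratic entropy; the kernel size sigma and the estimator details are our own assumptions for illustration and need not match the exact construction in [26].

```python
import numpy as np

def renyi_quadratic_entropy(samples, sigma=0.1):
    """Parzen-window estimate of Renyi's quadratic entropy for 1-D samples:
    H2 = -log( (1/N^2) * sum_ij G(x_i - x_j) ), Gaussian kernel of size sigma."""
    diffs = samples[:, None] - samples[None, :]
    kernels = np.exp(-diffs**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
    return -np.log(kernels.mean())

def average_state_entropy(states, sigma=0.1):
    """ASE sketch: average over time of the entropy of the PE outputs at
    each step; richer reservoirs spread their states more uniformly.
    states: (T, n_res) reservoir trajectory."""
    return float(np.mean([renyi_quadratic_entropy(x, sigma) for x in states]))
```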
Figure 4 Block diagram of a simplified echo state network (ESN). The input layer (weights w_in) feeds the dynamic reservoir of recurrent PEs, whose states z are combined by the read-out as y = w_out · z. Notice that the hidden layer is recurrent, so this is also called a recurrent MLP.

Extreme Learning Machine (ELM)


The ELM [27] was proposed in 2004 and exploits the same basic idea of the ESN's


