Computational Intelligence - August 2015 - 35

reduced layer by layer. Due to the rounding-down (floor) operation ⌊·⌋ in Eq. (12), the dimensionality of the data eventually converges to a fixed number regardless of the original dimensionality, which ensures the stability of the latent representation learned by this stacked structure. The final dimension is conditioned only on the number m. The number of layers can be large, since the computation of SRP is cheap owing to the low input dimensionality. When the stacked SRP with L layers is trained, the latent representation of the data is given by the output of the last layer, sigmoid((W_L)^T H_{L-1}). We refer to this stacked extension of SRP as SSRP. There are three hyper-parameters in SSRP: the dimension-relationship parameter m, the regularization parameter in each SRP layer, and the number of SRP layers L. The overall structure of our proposed stacked SRP is shown in Fig. 3.
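The layer-by-layer computation can be sketched as follows. This is a minimal illustration rather than the trained model: plain Gaussian random projections stand in for the trained SRP layers, and the dimension schedule `dims` is a hypothetical stand-in for the floor rule of Eq. (12).

```python
import numpy as np

def ssrp_forward(X, dims, rng):
    """Forward pass of a stacked projection network in the spirit of SSRP.

    Each layer computes H_l = sigmoid(H_{l-1} W_l). The Gaussian W_l here
    stand in for trained SRP layers, and `dims` is a hypothetical dimension
    schedule standing in for the floor rule of Eq. (12).
    """
    H = X
    for d_out in dims:
        # Random projection matrix for this layer, scaled by input width
        W = rng.standard_normal((H.shape[1], d_out)) / np.sqrt(H.shape[1])
        H = 1.0 / (1.0 + np.exp(-(H @ W)))  # sigmoid activation
    return H

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 256))   # 5 samples, original dimension 256
dims = [64, 16, 8, 8, 8]            # early layers shrink d; later layers keep it fixed
H_L = ssrp_forward(X, dims, rng)
print(H_L.shape)                    # (5, 8)
```

Note how the later layers leave the dimension unchanged, mirroring the dimension-stability behavior of the stacked structure.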

V. Semi-Random Projection for Extreme Learning Machine
In this section, we first review Extreme Learning Machine
(ELM) and then discuss the connection between RP and ELM.
Based on our analysis and work described in Section III, we
modify the basic ELM model, and this leads to the Partially
Connected ELM.
A. Extreme Learning Machine (ELM)

ELM is an effective machine learning algorithm. It can be regarded as a single-hidden-layer feedforward neural network whose hidden-layer nodes need not be tuned [10], [11]. Given a set of training data (x_i, y_i), i = 1, 2, ..., N, where x_i ∈ R^d and y_i ∈ {−1, 1}, the output of ELM for the binary classification task can be given as follows:
    f(x) = sign( Σ_{i=1}^{V} β_i h_i(x) ) = sign( h(x)β )    (13)
where h(x) = [h_1(x), ..., h_V(x)] represents the output of the hidden layer containing V nodes with regard to the input sample x, and β = [β_1, ..., β_V]^T denotes the weights between the hidden layer and the output layer. ELM tries to minimize both the training error and the norm of the vector β. The output weights β can be calculated as follows:
    β = H^T ( I/C + H H^T )^{-1} y    (14)

where C is the regularization parameter, y = [y_1, ..., y_N]^T is the label vector of the training data, and H is the hidden-layer output matrix:
        [ h(x_1) ]   [ h_1(x_1) ... h_V(x_1) ]
    H = [   ⋮    ] = [    ⋮      ⋱      ⋮    ]    (15)
        [ h(x_N) ]   [ h_1(x_N) ... h_V(x_N) ]
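Eq. (14) can be checked numerically on toy data. The sketch below computes β as written and also via the equivalent "primal" form (I/C + H^T H)^{-1} H^T y, obtained from the push-through identity, which is cheaper when V < N; the data here are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
N, V, C = 50, 20, 10.0                 # samples, hidden nodes, regularization
H = rng.standard_normal((N, V))        # hidden-layer output matrix, Eq. (15)
y = rng.choice([-1.0, 1.0], size=N)    # binary labels

# Eq. (14): beta = H^T (I/C + H H^T)^{-1} y
beta = H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, y)

# Equivalent primal form via the push-through identity,
# (I/C + H^T H)^{-1} H^T y -- a V x V solve instead of N x N
beta_primal = np.linalg.solve(np.eye(V) / C + H.T @ H, H.T @ y)

print(np.allclose(beta, beta_primal))  # True
```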


Figure 3 Architecture of the stacked SRP with L layers. Training proceeds layer by layer, with SRP applied recursively. The first several layers project the data onto a low-dimensional space, while the last several layers keep the dimension unchanged. After training, the latent representation is given by H_L.



Different from conventional feed-forward neural networks, ELM uses random nodes in the hidden layer regardless of the input data. For an input x, the output of the hidden layer is given as:

    h(x) = [G(w_1, b_1, x), ..., G(w_V, b_V, x)]    (16)

where G denotes a nonlinear piecewise continuous activation function. The commonly used activation function G is the sigmoid function:

    G(w_i, b_i, x) = 1 / (1 + exp(−z))    (17)
    z = w_i x + b_i    (18)

where {(w_i, b_i)}_{i=1}^{V} are randomly sampled according to any continuous probability distribution.
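Putting Eqs. (13)-(18) together, a minimal ELM for binary classification can be sketched as follows. The class name, toy data, and parameter values are illustrative choices, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ELMSketch:
    """Minimal ELM for labels in {-1, +1}, following Eqs. (13)-(18).

    The hidden parameters (w_i, b_i) are drawn once at random and never
    tuned; only the output weights beta are solved for in closed form.
    """
    def __init__(self, V=100, C=1.0, seed=0):
        self.V, self.C = V, C
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Eqs. (16)-(18): h(x) = [G(w_1, b_1, x), ..., G(w_V, b_V, x)]
        return sigmoid(X @ self.W + self.b)

    def fit(self, X, y):
        d, N = X.shape[1], X.shape[0]
        self.W = self.rng.standard_normal((d, self.V))  # w_i as columns of W
        self.b = self.rng.standard_normal(self.V)
        H = self._hidden(X)                             # Eq. (15)
        # Eq. (14): beta = H^T (I/C + H H^T)^{-1} y
        self.beta = H.T @ np.linalg.solve(np.eye(N) / self.C + H @ H.T, y)
        return self

    def predict(self, X):
        # Eq. (13): f(x) = sign(h(x) beta)
        return np.sign(self._hidden(X) @ self.beta)

# Toy usage: two well-separated Gaussian blobs
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-2, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.concatenate([-np.ones(50), np.ones(50)])
elm = ELMSketch(V=50, C=10.0).fit(X, y)
acc = (elm.predict(X) == y).mean()
print(acc)  # training accuracy; near 1.0 on this separable toy set
```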
B. Random Projection and Extreme Learning Machine

ELM can be divided into three parts:
1) Linear Random Mapping: the data are mapped from the original space to the latent space according to Eq. (18).
2) Nonlinear Activation: the output of the hidden layer is obtained by applying the sigmoid function to the output of the previous step, as described in Eqs. (16) and (17).
3) Linear Model Learning: a linear decision function is applied to the output of the hidden layer, where the parameters of the decision function are trained via the minimum-norm least-squares method of Eq. (14).
This structure of ELM is illustrated in Fig. 4. Since the parameters {(w_i, b_i)}_{i=1}^{V} are randomly generated, the first step, "Linear Random Mapping," can be regarded as RP. Mathematically, each w_i in Eq. (16) can be regarded as the ith column of the transformation matrix W in Eq. (10) in RP. But unlike RP, the feature mapping in ELM does not focus on dimensionality reduction. In fact, ELM needs to use a large number of hidden



