Computational Intelligence - August 2016 - 47

Two common empirical estimators are the empirical error, $\hat{L}_{\mathrm{emp}}(f) = \frac{1}{n}\sum_{z \in D_n} \ell(f, z)$ [22], and the leave-one-out error, $\hat{L}_{\mathrm{loo}}(f) = \frac{1}{n}\sum_{i \in \{1, \ldots, n\}} \ell(A(D_n^{\setminus i}, \mathcal{H}), z_i)$ [46].

IV. Extreme Learning Machines

The ELM approach [9], [47], [48] was introduced to overcome problems posed by the back-propagation training algorithm [49]; specifically, potentially slow convergence rates, the critical tuning of optimization parameters, and the presence of local minima that call for multi-start and re-training strategies. In this section, we recall the conventional ELM and then adapt it to the Big Data framework. ELM was originally developed for single hidden layer feedforward neural networks [50], [51] and was then generalized to cope with cases where ELM is not neuron-like:

	$f(x) = \sum_{j=1}^{h} w_j g_j(x)$,	(1)

where $g_j : \mathbb{R}^d \rightarrow \mathbb{R}$, $j \in \{1, \ldots, h\}$, is the hidden layer output corresponding to the input sample $x$, and $w$ is the output weight vector between the hidden layer and the output layer. In our case, the input layer has $d$ neurons and connects to the hidden layer (having $h$ neurons) through a set of weights

	$v_j \in \mathbb{R}^d$, $j \in \{1, \ldots, h\}$;	(2)

the $j$-th hidden neuron embeds a bias term,

	$v_j^0$, $j \in \{1, \ldots, h\}$,	(3)

and a nonlinear activation function, $\varphi : \mathbb{R} \rightarrow \mathbb{R}$. Thus the neuron's response to an input stimulus, $x$, is:

	$\varphi(v_j \cdot x + v_j^0)$, $j \in \{1, \ldots, h\}$.	(4)

Note that Eq. (4) can be further generalized to include a wider class of functions [50], [51], [52]; therefore, the response of a neuron to an input stimulus $x$ can be generically represented by any nonlinear piecewise continuous function characterized by a set of parameters. In ELM, these parameters ($v_j$ and $v_j^0$) are set randomly. A vector of weighted links, $w \in \mathbb{R}^h$, connects the hidden neurons to the output neuron without any bias. The overall output function, $f(x)$, of the network is:

	$f(x) = \sum_{j=1}^{h} w_j \varphi(v_j \cdot x + v_j^0)$.	(5)

It is convenient to define an activation matrix, $V \in \mathbb{R}^{n \times h}$, such that the entry $V_{i,j}$ is the activation value of the $j$-th hidden neuron for the $i$-th input pattern. The $V$ matrix is:

	$V = \begin{pmatrix} \varphi(v_1 \cdot x_1 + v_1^0) & \cdots & \varphi(v_h \cdot x_1 + v_h^0) \\ \vdots & \ddots & \vdots \\ \varphi(v_1 \cdot x_n + v_1^0) & \cdots & \varphi(v_h \cdot x_n + v_h^0) \end{pmatrix} = \begin{pmatrix} \varphi^T(x_1) \\ \vdots \\ \varphi^T(x_n) \end{pmatrix}$.	(6)

In the ELM model, the quantities $\{v_j, v_j^0\}$ in Eq. (4) are set randomly and are not subject to any adjustment, and the quantity $w$ in Eq. (5) is the only degree of freedom. Hence, the training problem reduces to the minimization of the convex cost:

	$w^* = \arg\min_{w} \| V w - y \|^2$.	(7)

A matrix pseudo-inversion yields the unique $L_2$ solution, as proven in [51], [53]:

	$w^* = V^+ y$.	(8)

The simple, efficient procedure to train the ELM therefore involves the following steps:
1)	Randomly generate the hidden node parameters (in our case, the weights $v_j$ and the biases $v_j^0$) for each hidden neuron;
2)	Compute the activation matrix $V$ of Eq. (6);
3)	Compute the output weights by solving the pseudo-inverse problem of Eq. (8).
Despite the apparent simplicity of the ELM approach, the crucial result is that even random weights in the hidden layer endow a network with notable representation ability. Moreover, the theory derived in [53] proves that regularization strategies can further improve the approach's generalization performance. As a result, the cost function of Eq. (7) is augmented by a regularization factor as follows:

	$w^* = \arg\min_{w} \| V w - y \|^2 + \lambda \| w \|$,	(9)

where $\| w \|$ can be any suitable norm of the output weights [53]. A common approach is then to use the $L_2$ regularizer

	$w^* = \arg\min_{w} \| V w - y \|^2 + \lambda \| w \|^2$,	(10)

and consequently the vector of weights $w^*$ is obtained as follows:

	$w^* = (V^T V + \lambda I)^{-1} V^T y$,	(11)

where $I \in \mathbb{R}^{h \times h}$ is the identity matrix.
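The three training steps above, combined with the regularized solution of Eq. (11), can be sketched in a few lines. The sketch below is a minimal illustration, assuming a sigmoid activation and Gaussian-initialized hidden parameters (both illustrative choices; the text allows any nonlinear piecewise continuous activation), with single-output regression:

```python
import numpy as np

def elm_train(X, y, h, lam=1e-3, seed=None):
    """Train an ELM: random hidden layer (Eqs. 2-4), then solve Eq. (11)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, h))   # hidden weights v_j (Eq. 2), set randomly
    b = rng.standard_normal(h)        # hidden biases v_j^0 (Eq. 3), set randomly
    V = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # activation matrix V (Eq. 6)
    # Regularized output weights: w* = (V^T V + lam I)^{-1} V^T y  (Eq. 11)
    w = np.linalg.solve(V.T @ V + lam * np.eye(h), V.T @ y)
    return W, b, w

def elm_predict(X, W, b, w):
    """Network output f(x) of Eq. (5) for a batch of inputs."""
    V = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return V @ w
```

Note that only the $h \times h$ system of Eq. (11) is solved; the random hidden parameters are never adjusted, which is what makes training a single linear-algebra step.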
V. ELM for Big Data on Spark

Spark [17] is a state-of-the-art framework for high-performance in-memory parallel computing, designed to efficiently handle iterative computational procedures that recursively perform operations over the same data [14], [17], [54]. One recent solution for Big Data analytics is cloud computing [19], [55], [56], which makes hundreds or thousands of machines available to provide services such as computing and storage.
Various cluster management options are available for running Spark [57]. In this work, we chose to deploy Spark on a Hadoop cluster. The selected Hadoop architecture was composed of $N_M$ slave machines and two additional machines running as masters: one controlling HDFS and the other handling resource management.
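The reason Eq. (11) maps naturally onto a cluster is that $V^T V$ (only $h \times h$) and $V^T y$ (only $h$-dimensional) decompose into sums of per-partition contributions, so each worker can process its local block of rows and a reduce step merges the small accumulators. The sketch below illustrates this decomposition in plain Python, with `functools.reduce` standing in for a Spark-style map/reduce over partitions (the function names and the partition representation are illustrative, not the authors' actual implementation):

```python
import numpy as np
from functools import reduce

def local_normal_eq(part, W, b):
    """Map step: one partition's contribution to V^T V and V^T y."""
    X_blk, y_blk = part
    V = 1.0 / (1.0 + np.exp(-(X_blk @ W + b)))   # local block of rows of V (Eq. 6)
    return V.T @ V, V.T @ y_blk

def merge(acc1, acc2):
    """Reduce step: sum the h x h and h-dimensional accumulators."""
    return acc1[0] + acc2[0], acc1[1] + acc2[1]

def elm_fit_partitioned(partitions, W, b, lam):
    """Solve Eq. (11) from per-partition sums; only h x h data reaches the driver."""
    VtV, Vty = reduce(merge, (local_normal_eq(p, W, b) for p in partitions))
    h = VtV.shape[0]
    return np.linalg.solve(VtV + lam * np.eye(h), Vty)
```

Because addition of the accumulators is associative and commutative, the reduce can be performed in any order across workers, and the result coincides exactly with the single-machine solution of Eq. (11).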
