
into this framework. In [149], SpicyMKL was proposed, which iteratively solves smoothed minimization problems without solving an SVM, LP, or QP internally. Experiments showed that SpicyMKL is faster than other methods, especially when the kernel ensemble is large (several thousands of kernels). Kloft et al. [150] extended MKL to arbitrary norms to allow for robust kernel mixtures that generalize well. Experiments showed the advantage of this Lp-norm MKL over the conventional L1-norm based sparse MKL method.
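To make this concrete, here is a minimal NumPy sketch of the basic MKL building block: a weighted sum of base Gram matrices, with the weights normalized under an Lp-norm constraint in the spirit of [150]. The base kernels, the hand-fixed weights, and the kernel ridge regression used as the downstream learner are all illustrative assumptions; a real MKL solver learns the weights jointly with the predictor.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of an RBF kernel with bandwidth parameter gamma."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combine_kernels(kernels, beta, p=2):
    """Weighted kernel sum with weights normalized to unit Lp norm,
    mirroring the Lp-norm constraint of [150]."""
    beta = np.asarray(beta, dtype=float)
    beta = beta / np.linalg.norm(beta, ord=p)
    return sum(b * K for b, K in zip(beta, kernels))

# Toy data; in actual MKL, beta would be optimized, not fixed by hand.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

kernels = [rbf_kernel(X, g) for g in (0.1, 1.0, 10.0)]   # base kernel "ensemble"
K = combine_kernels(kernels, beta=[1.0, 1.0, 1.0], p=2)  # hypothetical weights

# Kernel ridge regression on the combined kernel: alpha = (K + lam*I)^{-1} y
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
y_hat = K @ alpha
print("train MSE:", np.mean((y_hat - y) ** 2))
```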
Motivated by the structure of deep learning, the authors of [151] proposed a two-layer MKL that differs from conventional 'shallow' MKL methods. In their work, the multi-layer MKL (MLMKL) offers higher flexibility through multiple feature mappings. In [152], the authors proposed to train MKL with sequential minimal optimization (SMO), which is simple, easy to implement and adapt, and scales efficiently to large problems. The 'Support Kernel Machine', based on reformulating the dual of the quadratically constrained quadratic program (QCQP) as a second-order cone program, was proposed in [153]. This work also shows how to exploit Moreau-Yosida regularization to yield a formulation that can be combined with SMO. In [154], SimpleMKL was proposed, in which the MKL problem results in a smooth and convex optimization problem that is actually equivalent to other MKL formulations available in the literature. In [155], the authors proposed MKL for joint feature maps, which provides a convenient and principled way to employ MKL for solving multi-class problems.
There are other types of ensemble-based kernel learning research in the literature. In [156], boosting was used to design a kernel with better generalization ability. In [157], the authors showed that each decision tree is actually a kernel; an MKL algorithm was then employed to prune the decision tree ensemble.
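As a hedged illustration of this view, the sketch below builds the kernel induced by a tree ensemble: two samples are similar in proportion to the number of trees that route them to the same leaf. The helper name tree_ensemble_kernel and the use of a random forest are our own choices, not details from [157], which would additionally run MKL over the per-tree kernels to prune the ensemble.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def tree_ensemble_kernel(forest, X):
    """Kernel induced by a tree ensemble: K[i, j] is the fraction of
    trees in which samples i and j land in the same leaf."""
    leaves = forest.apply(X)            # shape (n_samples, n_trees)
    same_leaf = leaves[:, None, :] == leaves[None, :, :]
    return same_leaf.mean(axis=-1)

X, y = make_classification(n_samples=100, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
K = tree_ensemble_kernel(forest, X)     # PSD kernel with K[i, i] == 1
print(K.shape, K.diagonal().min())
```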
G. Deep Learning Based Ensemble Methods
Recently, deep learning [158] has been a hot topic in computational intelligence research. In deep learning, a deep structure composed of multiple layers of non-linear operations is able to learn high-level abstractions. Such high-level abstraction is a key factor behind the success of many state-of-the-art systems in vision, language, and other AI-level tasks. Complex training algorithms combined with carefully chosen parameters (e.g., learning rate, mini-batch size, number of epochs) may lead to deep neural networks (DNNs) with high performance. Ensemble methods have successfully boosted the performance of DNNs in various scenarios.
Convolutional neural networks (CNNs) [159] have been successfully applied to solve many tasks, such as digit, object, and speech recognition. A CNN combines three architectural ideas to incorporate shift, scale, and distortion invariance: shared weights, sub-sampling, and local receptive fields. A simple CNN is shown in Fig. 3.

[Figure 3. Basic structure of a convolutional neural network: convolutional layers (Filter 1, ..., Filter n-1) alternate with sub-sampling layers to produce feature maps (Map 1, ..., Map n). The output is the input of a pre-defined classifier.]

We can easily stack this architecture into deep architectures by setting the output of one CNN to be the input of the next. CNNs employ ensemble methods in their inner structure.
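A minimal NumPy sketch of these three ideas, assuming an arbitrary 28x28 input and a single hand-rolled 5x5 filter (no trained parameters from any real CNN):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution: one shared filter slides over local
    receptive fields, so the same weights are reused at every position."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def subsample2x2(fmap):
    """2x2 average pooling (the classic sub-sampling layer)."""
    H, W = fmap.shape
    fmap = fmap[:H - H % 2, :W - W % 2]
    return fmap.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

img = np.random.rand(28, 28)                 # toy input "map"
filt = np.random.randn(5, 5)                 # one shared 5x5 filter
fmap = np.tanh(conv2d_valid(img, filt))      # convolutional layer + nonlinearity
pooled = subsample2x2(fmap)                  # sub-sampling layer
print(fmap.shape, pooled.shape)              # (24, 24) -> (12, 12)
```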
Recently, researchers have successfully demonstrated the power of an ensemble of CNNs. In [160], [161], 'Multi-column Deep Neural Networks' were proposed, where each 'column' is actually a CNN and the outputs of all columns are averaged. The proposed method improved state-of-the-art performance on several benchmark data sets.
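The combination step is simple enough to state directly in code. The sketch below averages per-column class posteriors in the spirit of [160], [161]; the random Dirichlet draws are stand-ins for the outputs of independently trained CNN columns.

```python
import numpy as np

def average_columns(column_probs):
    """Multi-column combination as in [160], [161]: each 'column' emits
    class probabilities for a batch; the ensemble simply averages them."""
    stacked = np.stack(column_probs, axis=0)   # (n_columns, n_samples, n_classes)
    return stacked.mean(axis=0)

# Hypothetical outputs of three columns on a batch of 4 samples, 10 classes.
rng = np.random.default_rng(1)
cols = [rng.dirichlet(np.ones(10), size=4) for _ in range(3)]
avg = average_columns(cols)
pred = avg.argmax(axis=1)                      # final class decisions
print(pred)
```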
The autoencoder [162] is also a popular building block of deep learning structures. An autoencoder can be decomposed into two parts: an encoder and a decoder. The encoder is a deterministic mapping that maps the input $x$ to the hidden representation $y$ through $f_\theta(x) = s(Wx + b)$, where $\theta = \{W, b\}$ and $s$ is some non-linear activation function such as the sigmoid. In the decoder, the hidden representation is then mapped back to reconstruct the input $x$; this mapping is achieved by $g_{\theta'}(y) = s(W'y + b')$.
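A minimal NumPy sketch of these two mappings, with random placeholder weights standing in for trained parameters (the class name and the reconstruction demo are ours, not from [162]):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Autoencoder:
    """Direct transcription of the encoder f_theta(x) = s(Wx + b) and the
    decoder g_theta'(y) = s(W'y + b'). Weights are random placeholders;
    training (minimizing reconstruction error) is not shown."""
    def __init__(self, n_in, n_hidden, rng):
        self.W  = rng.normal(scale=0.1, size=(n_hidden, n_in))   # theta
        self.b  = np.zeros(n_hidden)
        self.Wp = rng.normal(scale=0.1, size=(n_in, n_hidden))   # theta'
        self.bp = np.zeros(n_in)

    def encode(self, x):     # f_theta(x) = s(Wx + b)
        return sigmoid(self.W @ x + self.b)

    def decode(self, y):     # g_theta'(y) = s(W'y + b')
        return sigmoid(self.Wp @ y + self.bp)

rng = np.random.default_rng(2)
ae = Autoencoder(n_in=8, n_hidden=4, rng=rng)
x = rng.random(8)
x_hat = ae.decode(ae.encode(x))   # reconstruction of the input
print(np.round(x_hat, 3))
```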

Fig. 4 shows the structure of a denoising autoencoder. In a denoising autoencoder, the input is first corrupted by some noise, and the autoencoder aims to reconstruct the 'clean' input. One can easily generalize this basic denoising autoencoder to a deep structure by repeatedly mapping one hidden representation to another in the encoder and then decoding each mapping in the decoder. In [163], the author proposed an ensemble of stacked sparse denoising autoencoders [164]. In that work, the weight given to the output of each base stacked sparse denoising autoencoder (or each 'column') is optimized by a separate network.
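As a rough sketch of the final combination step only: the snippet below forms a convex combination of base-learner outputs. In [163] the combination weights are produced by a separately trained network; the softmax over arbitrary scores here is merely a stand-in for that step.

```python
import numpy as np

def combine_ssda_outputs(outputs, scores):
    """Convex combination of the outputs of several base stacked sparse
    denoising autoencoders. softmax(scores) is a stand-in for weights
    that [163] would obtain from a separately trained network."""
    w = np.exp(scores - scores.max())
    w = w / w.sum()                                  # nonnegative, sums to 1
    return sum(wi * oi for wi, oi in zip(w, outputs))

rng = np.random.default_rng(3)
outputs = [rng.random(16) for _ in range(4)]         # 4 hypothetical columns
scores = rng.normal(size=4)                          # hypothetical weight logits
print(combine_ssda_outputs(outputs, scores))
```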

Ensemble methods can also be employed in the
contexts of ensemble clustering, ensemble feature
selection, and ensembles of evolutionary optimizers.


