Computational Intelligence - August 2015 - 22

$W_i = \varphi(c_i)$, so we will be able to predict a new point's label according to its similarity to those $W_i$:

$$cl(c) = \mathrm{sign}\Big(\sum_{i=1}^{h} \beta_i\, J(\varphi(c_i), \varphi(c))\Big),$$

where $\beta_i \in \mathbb{R}$ are some (possibly negative) weights associated with a given $c_i$. We will refer to such units as Tanimoto neurons. We can now define a Tanimoto ELM as an ELM with Tanimoto neurons in the hidden layer (see Fig. 2):

$$\text{T-ELM}_{W,\beta}(c) = \mathrm{sign}\Big(\sum_{i=1}^{h} \beta_i\, J(W_i, \varphi(c))\Big).$$
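To make the construction concrete, here is a minimal sketch (our own illustration, not code from the paper; the neuron and weight values are arbitrary) of the Tanimoto coefficient $J$ on binary fingerprint vectors and the resulting decision rule:

```python
import numpy as np

def tanimoto(a, b):
    """Jaccard/Tanimoto coefficient J(A, B) of two binary indicator vectors."""
    inter = np.dot(a, b)                      # |A intersect B|
    return inter / (a.sum() + b.sum() - inter)

def t_elm_predict(W, beta, phi_c):
    """Decision rule sign(sum_i beta_i * J(W_i, phi(c))), one neuron per row of W."""
    return np.sign(sum(b * tanimoto(w, phi_c) for w, b in zip(W, beta)))

# hypothetical neurons W_i = phi(c_i) and weights beta_i (illustrative values only)
W = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0]])
beta = np.array([1.0, -0.5])
label = t_elm_predict(W, beta, np.array([1, 1, 1, 0]))  # → 1.0
```

Here both neurons have Tanimoto similarity $2/3$ to the query, so the weighted sum is $2/3 - 1/3 = 1/3 > 0$.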
More generally, for given sets $X_i$ and our predefined sets $W_j$, we build a pairwise set similarity matrix

$$H_{ij} = J(X_i, W_j) = \frac{|X_i \cap W_j|}{|X_i| + |W_j| - |X_i \cap W_j|},\tag{3}$$

and (in the binary case) classify $X_i$ according to the sign of $\text{T-ELM}_{W,\beta}(X) = H\beta$.
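For instance (a toy example of ours), with $X_i = \{1,2,3\}$ and $W_j = \{2,3,4\}$, (3) gives $2/(3+3-2) = 1/2$; in code:

```python
def jaccard(a: set, b: set) -> float:
    """Set form of (3): |A ∩ B| / (|A| + |B| - |A ∩ B|)."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

print(jaccard({1, 2, 3}, {2, 3, 4}))  # → 0.5
```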
If we are given our sets $X_i$, $W_i$ in the form of binary matrices $X$, $W$, we may put

$$T(X_i, W_j) = \frac{X_i^T W_j}{\mathbf{1}X_i + \mathbf{1}W_j - X_i^T W_j},\tag{4}$$

where $\mathbf{1}$ is a matrix of ones of appropriate dimension and $A_i$ denotes the $i$th column of matrix $A$. From (3) and (4), we obtain

$$H_{ij} = J(X_i, W_j) = T(X_i, W_j),$$

and consequently

$$H = \frac{X^T W}{\mathbf{1}X + \mathbf{1}W - X^T W},\tag{5}$$

where $W \in \{0,1\}^{d \times h}$ and $X \in \{0,1\}^{d \times N}$. The very same equation can be used for $W \in [0,1]^{d \times h}$, but as we show in Section VI and in Theorem 1, it is preferable to use $W \sqsubset X$.¹ Thus, as opposed to the classical ELM, we select the hidden neurons randomly from the training set instead of from some arbitrary continuous distribution. This ensures, on the one hand, that the weights used are binary and sparse and, on the other, that they correctly span the (possibly highly degenerate) sample space. This is conceptually similar to taking orthogonal features [28], which ensures a better spanning of the whole $\mathbb{R}^d$, but here the focus is on the very small subset of the input space that is actually filled with data points.

B. From the Generalized SLFN Perspective

One can see the proposed method as a generalized SLFN with a non-neuron-like, unbiased activation function

$$G(x, w, b) = G(x, w) = \frac{\langle w, x \rangle}{\mathbf{1}x + \mathbf{1}w - \langle w, x \rangle} = \frac{\langle w, x \rangle}{\|w\|_1 + \|x\|_1 - \langle w, x \rangle}.$$

Now let us think about an SLFN consisting of hidden neurons computing the Tanimoto coefficient between the input signal and the set (encoded as a binary vector) in a particular neuron.
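The matrix form (5) can be evaluated with a single matrix product. Below is a sketch of ours (the toy data, the neuron selection, and the pseudo-inverse fit of $\beta$ are our additions, following standard ELM practice), assuming samples and neurons are stored as 0/1 columns:

```python
import numpy as np

def tanimoto_matrix(X, W):
    """H from (5): H_ij = J(X_i, W_j) for binary matrices with sets as columns."""
    inter = X.T @ W                    # pairwise |X_i ∩ W_j|
    sx = X.sum(axis=0)[:, None]        # |X_i| as a column vector
    sw = W.sum(axis=0)[None, :]        # |W_j| as a row vector
    return inter / (sx + sw - inter)

rng = np.random.default_rng(0)
X = (rng.random((32, 10)) < 0.5).astype(float)    # toy binary data: d=32, N=10
W = X[:, rng.choice(10, size=5, replace=False)]   # h=5 neurons drawn from the training set
H = tanimoto_matrix(X, W)                         # (N, h) activation matrix
t = np.where(rng.random(10) < 0.5, -1.0, 1.0)     # toy ±1 labels
beta = np.linalg.pinv(H) @ t                      # standard ELM closed-form output weights
pred = np.sign(H @ beta)
```

Since each neuron is a copy of a training column, every neuron has similarity exactly 1 to its source sample, so $H$ contains a 1 in each column.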

[Figure 2: the network has an embedding (input) layer with units $z_k(c) = \varphi^{(k)}(c)$ for $k = 1, \dots, d$, a Tanimoto (hidden) layer with units $T(W_i, \varphi(c))$ for $i = 1, \dots, h$, and a decision (output) layer combining them with weights $\beta_1, \dots, \beta_h$ into the output $cl(c)$.]
Figure 2 T-ELM as a neural network with $h$ hidden Tanimoto neurons for the classification of the compound $c$, where $W_i = \varphi(c_i)$. All weights are either randomly selected (dashed) or are the result of closed-form optimization (solid).

$$\text{T-ELM}_{W,\beta}(X) = \mathrm{sign}\left(\frac{X^T W}{\mathbf{1}X + \mathbf{1}W - X^T W}\,\beta\right).\tag{6}$$

We will refer to this function as the Tanimoto activation function. First, let us show some of its basic properties, which are of interest to ELM theory [22], [29].

Remark 1. The Tanimoto activation function is nonconstant, bounded, continuous and infinitely differentiable for $x, w \in [0,1]^d$ such that $x + w \neq 0$.

Proof. Continuity is a direct consequence of the continuity of the scalar product and of division. The only point of discontinuity is where the denominator is zero, which can occur only if both $w$ and $x$ are zero vectors, which is outside the function's domain.² Similarly, the Tanimoto activation function is $C^\infty$, as it is a composition of $C^\infty$ functions. ∎
Now we proceed to the main theoretical result of our paper. We will show that, under simple restrictions, a Tanimoto ELM with $N$ hidden neurons can perfectly learn any set of $N$ binary vectors using only sparse, binary weights.

Theorem 1. Given $N$ arbitrary distinct samples $x_i \in \{0,1\}^d$ with fixed sparseness $\|x_i\|_1 = m$, some labeling $t_i \in \mathbb{R}^p$, and a hidden layer of $N$ distinct Tanimoto neurons $w_i$ selected from the training set, the Tanimoto ELM hidden-layer activation matrix $H$ is invertible and $\|H\beta - T\| = 0$.
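The theorem's claim can be checked numerically on a toy case (our own sanity check, not an experiment from the paper): take all $\binom{5}{2} = 10$ distinct binary vectors in $\{0,1\}^5$ with exactly two ones, use them as both samples and neurons, and verify that $H$ has full rank, so that any labeling is fit exactly:

```python
import numpy as np
from itertools import combinations

d, m = 5, 2
supports = list(combinations(range(d), m))   # all 10 distinct vectors with m ones
X = np.zeros((d, len(supports)))
for j, idx in enumerate(supports):
    X[list(idx), j] = 1.0

W = X                               # hidden neurons = the training samples themselves
inter = X.T @ W                     # pairwise scalar products x_i^T w_j
H = inter / (2 * m - inter)         # Tanimoto matrix: every column has exactly m ones
print(np.linalg.matrix_rank(H))     # → 10, i.e. H is invertible

t = np.where(np.arange(10) % 2 == 0, 1.0, -1.0)  # an arbitrary ±1 labeling
beta = np.linalg.solve(H, t)                      # exact fit: ||H beta - t|| ≈ 0
```

Here $H$ has ones on the diagonal and off-diagonal entries of $0$ or $1/3$, and is well conditioned, so the solve recovers the labels to machine precision.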
¹ $A \sqsubset A'$ means that $\forall i\ \exists j\ \forall k:\ A_{ki} = A'_{kj}$.
² One could also redefine the function to be continuous at zero by simply adding a small constant $\epsilon > 0$ to both the numerator and the denominator: $G(x, w, b) = (\langle w, x \rangle + \epsilon)/(\|w\|_1 + \|x\|_1 - \langle w, x \rangle + \epsilon)$.

