Computational Intelligence - August 2015 - 33

generated randomly. Experimental results in [5], [6] have shown that RP can achieve a significant speedup in computation time with little distortion of the pairwise information in the data. RP is emerging as a popular technique; for example, it is applied to sentiment analysis in [13]. However, as pointed out in [5], if the distances or similarities are not useful in the original space, the performance of RP is questionable. It was proven in [14] that for any pair of points in a high-dimensional space, the similarity measures may become meaningless. This indicates that for pattern classification tasks, the low-dimensional space generated by RP may not guarantee good performance. From another perspective, without data-based parameter tuning of the transformation vectors, RP may not capture the task-related information underlying the original data.
As suggested in [6], one typical random assignment for the transformation matrix $W$ is based on i.i.d. sampling described by:

$$w_{i,j} = \sqrt{c}\,\begin{cases} +1 & \text{with prob. } \tfrac{1}{2c} \\ 0 & \text{with prob. } 1 - \tfrac{1}{c} \\ -1 & \text{with prob. } \tfrac{1}{2c} \end{cases} \qquad (5)$$

where $c$ is set to $\sqrt{d}$.
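The sampling scheme in Eq. (5) is easy to implement. Below is a minimal sketch (function and variable names are our own) that draws a sparse random projection matrix with $c = \sqrt{d}$ and checks on a random pair of samples that pairwise distance is roughly preserved after projection, as claimed in [5], [6]:

```python
import numpy as np

def sparse_rp_matrix(k, d, rng):
    """Sample a k x d sparse random projection matrix per Eq. (5),
    with c = sqrt(d): entries are sqrt(c) * {+1, 0, -1} drawn with
    probabilities 1/(2c), 1 - 1/c, 1/(2c)."""
    c = np.sqrt(d)
    entries = rng.choice([1.0, 0.0, -1.0], size=(k, d),
                         p=[1 / (2 * c), 1 - 1 / c, 1 / (2 * c)])
    return np.sqrt(c) * entries

rng = np.random.default_rng(0)
d, k, n = 1000, 200, 50
X = rng.standard_normal((d, n))  # data matrix: one column per sample
W = sparse_rp_matrix(k, d, rng)
# Entry variance is 1, so dividing by sqrt(k) preserves squared
# norms (and hence pairwise distances) in expectation.
Y = W @ X / np.sqrt(k)

orig = np.linalg.norm(X[:, 0] - X[:, 1])
proj = np.linalg.norm(Y[:, 0] - Y[:, 1])
print(orig, proj)  # the two distances should be close
```

Each entry is zero with probability $1 - 1/c$, so only about $d/c = \sqrt{d}$ of the $d$ entries in each row are non-zero, which is what makes the projection cheap to compute.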
To demonstrate the limitation of RP, Fig. 1 shows a simple 4-dimensional example visualized in a 2-dimensional subspace, where the positive and negative samples are denoted by circles and triangles, respectively. According to Eq. (5), only two features out of the original four will contribute to the creation of a new dimension in the reduced-dimensional space. Suppose the projection vector found by RP is [1, -1]; the assignment probabilities of 1 and -1 are both equal to 1/4 according to Eq. (5). The corresponding projected direction is shown in Fig. 1 by the dashed line. It is obvious that the data projected onto the newly created dimension are not separable. However, the other projected direction, denoted by the solid line, can preserve and even emphasize the discriminative information. This simple example illustrates the limitation of RP and motivates the development of SRP in this paper.
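The failure mode of Fig. 1 can be reproduced numerically. The sketch below uses toy data invented for illustration (not the actual figure data): both classes lie along the diagonal, so the direction [1, -1] collapses every sample to the same point, while [1, 1] keeps the classes apart.

```python
import numpy as np

# Toy 2-D data (illustrative only): both classes lie along the diagonal,
# positives on one side of the origin, negatives on the other.
pos = np.array([[1.0, 1.0], [2.0, 2.0]])      # circles in Fig. 1
neg = np.array([[-1.0, -1.0], [-2.0, -2.0]])  # triangles in Fig. 1

w_random = np.array([1.0, -1.0])   # the dashed direction picked by RP
w_learned = np.array([1.0, 1.0])   # the solid, discriminative direction

# Projection onto [1, -1] collapses every sample to 0: inseparable.
print(pos @ w_random, neg @ w_random)    # [0. 0.] [0. 0.]
# Projection onto [1, 1] keeps the classes on opposite sides: separable.
print(pos @ w_learned, neg @ w_learned)  # [2. 4.] [-2. -4.]
```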

III. Semi-Random Projection for Dimensionality Reduction

Motivated by the analysis in Section II, we propose a novel dimensionality reduction framework called Semi-Random Projection (SRP). We aim to learn a latent space in which the beneficial task-related information can be preserved, while the running speed remains comparable to that of RP. As illustrated in Eq. (5), approximately $\sqrt{d}$ elements in each row of the transformation matrix will be assigned 1 or -1. To reduce the randomness of RP, these $\sqrt{d}$ elements will not be assigned 1 or -1, but will be learned from the data. SRP consists of two phases: one is the random sampling of features and the other is the transformation vector learning. The term "Semi-Random" originates from the fact that the first phase is random, while the second one is not.

A. Random Feature Subsampling and Transformation Vector Learning

In RP, only part of the weights in the transformation matrix are non-zero. This is equivalent to random subsampling of features in SRP. Different from RP, whose non-zero weights are set to either 1 or -1, the weights for the chosen attributes in the transformation vectors of SRP are learned from data. Assuming the number of chosen features is $d_s$ ($d_s$ is an integer close to $\sqrt{d}$), the original data matrix $X \in \mathbb{R}^{d \times N}$ is reduced to a sub-matrix $\tilde{X}_i$ of smaller size $d_s \times N$ in the $i$th iteration. The transformation vector $\tilde{w}_i \in \mathbb{R}^{d_s \times 1}$ maps the data $\tilde{X}_i$ into a one-dimensional space as follows:

$$h_i = \tilde{w}_i^T \tilde{X}_i \qquad (6)$$

where $h_i \in \mathbb{R}^{1 \times N}$ is a vector in which each element is the projection of a sample onto the new dimension, and $\tilde{w}_i$ is the transformation vector. We adopt LDA to derive the transformation vector. Following the graph embedding framework described in Section II, the transformation vector can be learned according to Eq. (2), in which the matrices $L$ and $B$ are calculated as:

$$\mathrm{SRP}: \begin{cases} L = \sum_{c=1}^{N_c} n_c \,(\bar{x}^{c} - \bar{x})(\bar{x}^{c} - \bar{x})^T \\[4pt] B = \sum_{i=1}^{N} (\tilde{x}_i - \bar{x}^{c_i})(\tilde{x}_i - \bar{x}^{c_i})^T + \eta I_{d_s} \end{cases} \qquad (7)$$

where SRP denotes the transformation vector learning, $I_{d_s}$ denotes an identity matrix of size $d_s \times d_s$, the accented quantities are computed on the randomly selected feature subset of the original high-dimensional space, and $\eta$ represents the regularization weight. In SRP, LDA is conducted in the randomly generated low-dimensional subspace of the data, so the computational cost of the learning scheme is much lower than that in the original space. Here, the second term in $B$ of Eq. (7) is a regularization term.

The optimization problem in Eq. (2) can be reformulated as the following generalized eigenvalue problem:

$$L\varphi = \lambda B \varphi \qquad (8)$$
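The two phases of SRP (random feature subsampling, then LDA-based learning of the transformation vector) can be sketched as follows. This is a minimal illustration of one SRP iteration under the formulation of Eqs. (6)-(8); function and variable names are our own, and the generalized eigenvalue problem is solved here via the eigen-decomposition of $B^{-1}L$, which is valid because the regularized $B$ is positive definite.

```python
import numpy as np

def srp_iteration(X, y, d_s, eta=1e-3, rng=None):
    """One SRP iteration (sketch of Eqs. (6)-(8)): randomly subsample
    d_s of the d features, run regularized LDA on the subsample, and
    return the selected feature indices and the learned vector.
    X is d x N (one column per sample); y holds class labels."""
    rng = rng if rng is not None else np.random.default_rng()
    d, N = X.shape
    idx = rng.choice(d, size=d_s, replace=False)  # random feature subsampling
    Xs = X[idx, :]                                # d_s x N sub-matrix

    mean_all = Xs.mean(axis=1, keepdims=True)
    L = np.zeros((d_s, d_s))                      # between-class scatter, Eq. (7)
    B = eta * np.eye(d_s)                         # within-class scatter + eta*I
    for c in np.unique(y):
        Xc = Xs[:, y == c]
        mean_c = Xc.mean(axis=1, keepdims=True)
        diff = mean_c - mean_all
        L += Xc.shape[1] * diff @ diff.T          # n_c (mean_c - mean)(...)^T
        Bc = Xc - mean_c
        B += Bc @ Bc.T                            # sum over samples in class c

    # Generalized eigenvalue problem L*phi = lambda*B*phi, Eq. (8).
    vals, vecs = np.linalg.eig(np.linalg.solve(B, L))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return idx, w

# Usage on synthetic two-class data: d = 10, so d_s is chosen near sqrt(d).
rng = np.random.default_rng(1)
X = rng.standard_normal((10, 60))
y = np.repeat([0, 1], 30)
X[:, y == 1] += 2.0                 # shift class 1 away from class 0
idx, w = srp_iteration(X, y, d_s=3, rng=rng)
h = w @ X[idx, :]                   # Eq. (6): one projection per sample
print(abs(h[y == 0].mean() - h[y == 1].mean()))  # class means well separated
```

Repeating the iteration with fresh random feature subsets yields one learned dimension per iteration, which is how SRP builds the full reduced representation.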

Figure 1 Illustration of the possible ineffectiveness of RP: the random projection (dashed line) leaves the positive and negative data inseparable, while the designed projection (solid line) keeps them separable.
