Computational Intelligence - August 2015 - 50

rate of 100%. Moreover, a 3-D T1 flrFAM classifier (with J_I) performed clearly better than the corresponding 2-D T1 (interval) flrFAM classifier (with J_I).
3) Experiments with the JAFFE dataset: The dHMs was the best feature selected as explained above. All flrFAM classifiers performed rather poorly. An inclusion measure v typically produced better results than a Jaccard similarity measure J_I.
4) Experiments with the TRIESCH I dataset: The HOG was the best feature selected as explained above. An flrFAM classifier on average performed as well as or better than most classifiers. For the 2-D T1 (interval) flrFAM classifier, an inclusion measure v on average produced clearly larger generalization rates than a Jaccard similarity measure J_I; the opposite held for the 3-D T1 flrFAM classifier.

All computational experiments, for all benchmark datasets, with a T2 flrFAM classifier produced a generalization rate around 5%-15% lower than the generalization rate of its corresponding T1 flrFAM classifier. On average, a 3-D T2 flrFAM classifier clearly outperformed its 2-D T2 (interval) counterpart.

In order to show the significance of the results for each classifier we carried out Receiver Operating Characteristic (ROC) curve analysis [16]. For lack of space, we only display the corresponding Area Under Curve (AUC) statistics (i.e., average and standard deviation) in Table 3 for all classifiers and all datasets in "10-fold cross-validation" computational experiments. Recall that the closer a classifier's AUC is to 1, the better the classifier performs. Table 3 confirms comparatively the high confidence regarding the good performance of a 3-D T1 flrFAM classifier for all datasets but the JAFFE dataset.

Table 3. Area under the curve (AUC) statistics "(average, standard deviation)" in 10 computational experiments by several classifiers on all datasets.

Classifier Name               | Yale        | TERRAVIC    | JAFFE       | TRIESCH I
01. MDC (Chi-square)          | 0.73 (0.11) | 0.99 (0.02) | 0.68 (0.13) | 0.95 (0.03)
02. MDC (Euclidean)           | 0.72 (0.11) | 1.00 (0.01) | 0.64 (0.12) | 0.95 (0.03)
03. MDC (Manhattan)           | 0.73 (0.11) | 1.00 (0.01) | 0.65 (0.10) | 0.94 (0.03)
04. kNN (k = 1)               | 0.72 (0.10) | 1.00 (0.00) | 0.87 (0.07) | 0.94 (0.03)
05. Naïve-Bayes               | 0.78 (0.12) | 1.00 (0.00) | 0.81 (0.10) | 0.97 (0.03)
06. RBF ELM                   | 0.90 (0.08) | 0.83 (0.24) | 0.80 (0.18) | 0.92 (0.11)
07. Neural network (Backprop) | 0.54 (0.09) | 0.93 (0.05) | 0.67 (0.09) | 0.70 (0.09)
08. Linear SVM                | 0.86 (0.13) | 1.00 (0.00) | 0.84 (0.07) | 0.88 (0.06)
09. Polynomial SVM            | 0.76 (0.16) | 1.00 (0.00) | 0.86 (0.04) | 0.88 (0.07)
10. RBF SVM                   | 0.87 (0.10) | 0.95 (0.08) | 0.76 (0.17) | 0.92 (0.09)
11. 2-D T1 flrFAM (v)         | 0.72 (0.16) | 1.00 (0.01) | 0.71 (0.10) | 0.95 (0.03)
12. 2-D T1 flrFAM (J_I)       | 0.72 (0.17) | 1.00 (0.01) | 0.73 (0.13) | 0.95 (0.03)
13. 3-D T1 flrFAM (v)         | 0.75 (0.15) | 0.99 (0.04) | 0.67 (0.12) | 0.96 (0.03)
14. 3-D T1 flrFAM (J_I)       | 0.75 (0.15) | 0.99 (0.03) | 0.65 (0.12) | 0.96 (0.02)

D. Discussion of the Results
In this section we studied experimentally the performance of two different image representations, namely pF and pFV, in image pattern classification applications using a number of flrFAM classifiers applicable on INs induced from vectors of (image) features; alternatively, traditional classifiers were applied comparatively on the aforementioned vectors of features. In the context of this work, we implemented all the classifiers in order to compare them fairly on the same data for training/testing.
The pF image representation engaged one (no. of features)-tuple of T1 INs per class, whereas the pFV image representation engaged one T2 IN per class. A pF (respectively, pFV) image representation was processed by a T1 (respectively, T2) flrFAM classifier. For any flrFAM classifier we tested all combinations of (2-D interval)/3-D INs with the v and J_I functions. On the one hand, T1 flrFAM classifiers typically demonstrated a competitive image pattern recognition capacity compared to traditional classifiers. On the other hand, the T2 flrFAM classifiers on average performed clearly (5%-15%) worse than their corresponding T1 counterparts, thus confirming previous work [24]; that is, this work confirmed that pF is a better image representation than pFV. Our explanation is that the pFV representation mingles features from different data dimensions, thus deteriorating their discriminative power.
Based on recorded experimental evidence, we confirmed that a 3-D T1 flrFAM classifier clearly outperformed its 2-D T1 (interval) counterpart. Our explanation is that a 3-D T1 IN represents more data statistics. More specifically, given that an IN represents a distribution, a 3-D T1 IN represents a distribution of (image-feature) distributions. Likewise, for the same reason, the (recorded) generalization rates of a 3-D T2 flrFAM classifier were clearly better than those of its 2-D T2 (interval) counterpart.
An inclusion measure v(·,·) in general produced better generalization rates than a Jaccard similarity measure J_I(·,·). The latter was attributed to the fact that J_I(A, B) equals zero for non-overlapping intervals A and B. In additional experiments we confirmed that the inclusion measure v+(A, B) produced results very similar to the ones reported above for J_I(A, B), for the same reason.
The average generalization rate of a 3-D T1 flrFAM classifier was not (statistically) significantly different from the corresponding average of the best of ten traditional classifiers in three benchmark image pattern recognition problems, namely YALE, TERRAVIC and TRIESCH I. Only for the JAFFE benchmark did the performance of any 3-D T1 flrFAM classifier clearly lag behind the performance of three traditional classifiers, namely kNN (k = 1), RBF ELM and (polynomial) SVM.
We point out that our work in [16] reported flrFAM classifier performance competitive with that of a kNN (k = 1) classifier for two reasons: first, the flrFAM classifier in [16] employed 6-tuple INs induced from six different types of features (moments), concatenated; second, a different FLR classifier in [16] induced more than one (N-tuple of INs) granule per class. Here, in contrast, we used only one type of features as well as only one (N-tuple of INs) granule per class.
There is one more reason for characterizing the generalization capacity of a 3-D T1 flrFAM classifier here as "remarkable". More specifically, recall that no flrFAM classifier was used for selecting the "best" feature type per benchmark dataset. Hence, the generalization rate of an flrFAM classifier here truly demonstrates its capacity for generalization.
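The point above that a Jaccard similarity is zero for any pair of non-overlapping intervals can be illustrated with a minimal sketch. The function `sigma` below is a generic join-based degree of inclusion in the spirit of fuzzy lattice reasoning, shown only for contrast; it is not necessarily the v(·,·) used in this work, and the intervals are made up for illustration.

```python
# Jaccard on intervals collapses to 0 for all disjoint pairs, whereas a
# join-based inclusion degree still grades how far apart two intervals are.
# `sigma` is an illustrative measure, NOT necessarily the article's v(.,.).

def jaccard(a, b):
    """Jaccard similarity of closed intervals a = (a1, a2), b = (b1, b2)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def sigma(a, b):
    """Illustrative inclusion degree of a in b: len(b) / len(join(a, b)),
    where join(a, b) is the smallest interval containing both."""
    join = (min(a[0], b[0]), max(a[1], b[1]))
    return (b[1] - b[0]) / (join[1] - join[0])

A = (0.0, 1.0)
B = (2.0, 3.0)   # disjoint from A
C = (9.0, 10.0)  # disjoint from A, much farther away

print(jaccard(A, B), jaccard(A, C))  # both 0.0: Jaccard cannot tell B from C
print(sigma(A, B), sigma(A, C))      # graded: the closer interval scores higher
```

This is exactly the failure mode the text attributes to J_I: once two intervals stop overlapping, every disjoint pair looks identical to the Jaccard measure, while an inclusion measure still yields a usable ordering.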

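The "(average, standard deviation)" AUC entries reported in Table 3 can be sketched as follows: compute one ROC AUC per cross-validation fold from classifier scores, here via the rank-based (Mann-Whitney) identity, then summarize over folds. The fold data below are invented purely for illustration and do not come from the article's experiments.

```python
# Minimal sketch of the per-classifier AUC statistic: one ROC AUC per fold
# via the Mann-Whitney identity, then "average (standard deviation)" over
# folds, as in Table 3. The scores/labels are made-up illustration data.
from statistics import mean, stdev

def roc_auc(scores, labels):
    """AUC = P(score of a positive > score of a negative); ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Pretend these are (scores, labels) from 3 cross-validation folds.
folds = [
    ([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]),  # perfectly ranked fold -> AUC 1.0
    ([0.7, 0.4, 0.6, 0.2], [1, 1, 0, 0]),  # one inversion        -> AUC 0.75
    ([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0]),  # uninformative fold   -> AUC 0.5
]
aucs = [roc_auc(s, y) for s, y in folds]
print(f"{mean(aucs):.2f} ({stdev(aucs):.2f})")  # prints "0.75 (0.25)"
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to a perfect ranking, which is why the text reads Table 3 entries as "the closer to 1, the better".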