
Table 1 Statistics of the IMDB, Yelp 2014 and Yelp 2013 datasets. #review, #user and #prod denote the number of reviews, users and products, respectively.

DATASET     TRAINING SET                DEVELOPMENT SET             TEST SET                    CLASSES
            #REVIEW   #USER    #PROD    #REVIEW   #USER    #PROD    #REVIEW   #USER    #PROD
IMDB        67,426    1,310    1,635    8,381     1,310    1,635    9,112     1,310    1,635    10
YELP 2013   62,522    1,631    1,633    7,773     1,631    1,633    8,671     1,631    1,633    5
YELP 2014   183,019   4,818    4,194    22,745    4,818    4,194    25,399    4,818    4,194    5

where gold_i is the true rating for the i-th review, predicted_i is the predicted rating, and N is the total number of reviews in our test set. Smaller values indicate more accurate predictions and hence a better model.
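
The formula these terms refer to is not reproduced here; assuming the metric is the standard root mean squared error (RMSE) commonly reported for this rating-prediction task, it would read:

\[
\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N} \left(\mathit{gold}_i - \mathit{predicted}_i\right)^2}{N}}
\]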

For training the CNN, we use ReLU as the activation function, filter windows of sizes 3, 4 and 5 with 100 feature maps each, an Adadelta decay parameter of 0.95, a dropout rate of 0.5, and initial word vectors of size 300. For training the RNN-GRU, we use ReLU as the activation function, a dropout rate of 0.25, a time-distributed fully connected layer with softmax, Adam as the stochastic optimization method, categorical cross-entropy as the loss function, and 300 hidden units.
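
As an illustration only, the RNN-GRU settings listed above can be assembled roughly as follows; this is a minimal sketch in the Keras API, not the authors' code, and the vocabulary size and number of classes are assumptions.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, Dropout, TimeDistributed, Dense

VOCAB_SIZE = 20000   # assumption: vocabulary size is not reported in the text
NUM_CLASSES = 10     # 10 classes for IMDB, 5 for the Yelp datasets

model = Sequential([
    Embedding(VOCAB_SIZE, 300),                     # 300-dimensional word vectors
    GRU(300, activation="relu",                     # 300 hidden units, ReLU activation
        return_sequences=True),
    Dropout(0.25),                                  # dropout rate of 0.25
    TimeDistributed(                                # time-distributed fully connected
        Dense(NUM_CLASSES, activation="softmax")),  # layer with softmax
])

# Adam as the optimizer and categorical cross-entropy as the loss, as stated above
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])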


4.2 Baseline Methods

We benchmark the following baseline methods, which were previously used in [11] for document-level sentiment classification: Trigram, TextFeature, Trigram+UPF, TextFeature+UPF, JMARS, AvgWordvec+SVM, SSWE+SVM, Paragraph Vector, RNTN+Recurrent, UPNN (no UP), and UPNN (full).
Trigram, TextFeature, Trigram+UPF, and TextFeature+UPF are methods based on hand-crafted text features. In Trigram, unigrams, bigrams and trigrams are used as features to train an SVM classifier. In TextFeature, word/character n-grams, sentiment lexicon features, and negation features are used [28]. In Trigram+UPF and TextFeature+UPF, user-leniency features [52] and the corresponding product features from the training data are concatenated with the features of Trigram and TextFeature, respectively.
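
To make the Trigram setup concrete, the following is a minimal sketch rather than the implementation evaluated in [11]; the tf-idf weighting, the toy data and the regularization constant are assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# toy data standing in for review texts and their rating labels
train_texts = ["great food and friendly staff", "slow service, bland dishes"]
train_labels = [5, 2]

trigram_svm = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),  # unigrams, bigrams and trigrams as features
    LinearSVC(C=1.0),                     # linear SVM classifier
)
trigram_svm.fit(train_texts, train_labels)
print(trigram_svm.predict(["friendly staff and great dishes"]))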

JMARS is a state-of-the-art recommendation algorithm proposed by Diao et al. [14], in which user and aspect features of a review are used with collaborative filtering and topic modeling.
AvgWordvec+SVM and SSWE+SVM are two methods based on distributed word representations. In AvgWordvec+SVM, word embeddings are learned by word2vec [15], and the mean of the word vectors in a document is used as the document representation. In SSWE+SVM, sentiment-specific word embeddings (SSWE) [68] are learned, and document representations are generated by max/min/average pooling. Both methods train SVM classifiers for document sentiment classification.
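
A minimal sketch of the AvgWordvec+SVM idea follows: average pre-trained word vectors into a document vector, then train an SVM. The tiny embedding table, tokenization and labels below are illustrative stand-ins, not the original data or vectors.

import numpy as np
from sklearn.svm import LinearSVC

EMB_DIM = 4
# stand-in for word2vec embeddings learned as in [15]
embeddings = {
    "great": np.array([0.9, 0.1, 0.0, 0.2]),
    "bland": np.array([-0.8, 0.0, 0.1, -0.3]),
    "food":  np.array([0.1, 0.7, 0.2, 0.0]),
}

def doc_vector(text):
    # mean of the vectors of the words found in the embedding table
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMB_DIM)

train_texts = ["great food", "bland food"]
train_labels = [1, 0]
X = np.stack([doc_vector(t) for t in train_texts])

svm = LinearSVC().fit(X, train_labels)  # SVM on averaged word vectors
print(svm.predict([doc_vector("great bland food")]))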

Paragraph Vector, RNTN+Recurrent, UPNN (no UP), and UPNN (full) are methods based on distributed document representations. Paragraph Vector is an unsupervised framework proposed by Le and Mikolov [16]. In RNTN+Recurrent, sentence representations are learned by a recursive neural tensor network (RNTN) [17], and document representations are composed with an RNN. UPNN uses a CNN to compose reviews written by the same user or on the same product for document-level sentiment classification [11]; in this method, the temporal order of reviews is ignored. UPNN (no UP) uses the review content only, without user and product information, while UPNN (full) uses the review content together with user and product information. Tang et al. report that UPNN (full) achieves state-of-the-art performance on the IMDB, Yelp 2013 and Yelp 2014 datasets [11].

4.3 Results

4.3.1 Overall Comparison
Table 2 shows the results achieved on the IMDB, Yelp 2013 and Yelp 2014 datasets. The best results are highlighted in bold face. The methods marked with a star use both user and product information in addition to the review content, while the others use only the review text. The results of the top 11 approaches were previously reported in [11]; the bottom two approaches are ours. We present the results obtained with our approach without user and product information, denoted our approach (no UP), and with review content, user and product information all taken into account, denoted our approach (full).
It is observed that, among the four methods based on text features, Trigram gives performance comparable to TextFeature, which relies on hand-crafted features. Incorporating user and product features improves the classification performance of Trigram and TextFeature on all three metrics and across all three datasets. The topic-model-based method, JMARS, performs similarly to the text-feature-based methods on the IMDB dataset, but gives worse results on the other two datasets.
Methods based on distributed word representations and distributed document representations can automatically generate features for classifier training. However, the two word-embedding-based methods and Paragraph Vector perform worse than the methods based on hand-crafted features. RNTN+Recurrent and UPNN (no UP) give mixed results compared to the hand-crafted-feature methods. UPNN (full) outperforms all the other baselines by using CNN composition and taking into account all three types of information: review content, users and products.
Our sequence-modeling-based approach outperforms all the baselines by a large margin. Without using user and product information, our approach (no UP) gives relative improvements

