confusion matrix and classification rate/precision. The
experimental results are very encouraging. For this
extremely complicated problem, we achieved a 74.58% correct classification rate without any pretraining.
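As a side note for readers who wish to reproduce the evaluation, the classification rate and per-class precision reported above can be read directly off a confusion matrix. The following minimal Python sketch shows the computation; the matrix entries are illustrative placeholders, not the experimental results of this study.

```python
import numpy as np

# Illustrative confusion matrix (rows: true class, columns: predicted class),
# here for two semantic concepts, "positive" and "negative".
# The numbers are placeholders, not results from the study.
cm = np.array([[40, 10],
               [12, 38]])

# Classification rate (overall accuracy): correct predictions over all predictions.
classification_rate = np.trace(cm) / cm.sum()

# Per-class precision: correct predictions of a class over all predictions of that class.
precision = np.diag(cm) / cm.sum(axis=0)

print(f"classification rate: {classification_rate:.2%}")
print("precision (positive, negative):", np.round(precision, 3))
```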
The Setup of the Study
The nature of perception of sensory inputs has been a subject of many studies [1], [2]. One aspect of these works is
psychological [3], [4], and another is subjective mapping to
fuzzy sets [5]. Some studies attempt to introduce computational modeling aspects [6]. Human emotions are very hard
to analyze because of the differences among individuals.
In recent decades, machine-learning techniques have been used to
study and recognize human emotions [7]. Different studies have
been conducted to detect individuals' feelings of stress using computational classification/recognition
models based on different features
or to automatically recognize people's personalities and their interaction styles by analyzing their
Facebook profile pictures [8]. Studies on personality recognition
using people's essays have also
been conducted [9].
In this article, we introduce and
describe an approach to computational modeling of the learning
process, including the role of individuals' perception. Our approach
is different from existing studies in
cognitive neuroscience, where
learning is studied at the levels of
the neuron and various brain elements (e.g., the visual cortex and
hypothalamus) [10], [11]. It is also
significantly different from the
large body of literature on perception in a psychological context,
where the studies are qualitative
and descriptive [1]-[4]. Instead, we
adopt a systems theory-based
(cybernetic) approach to the problem, focusing on the macrolevel
information processing and deliberately ignoring the psychological,
biochemical, and bioelectrical
aspects. The focus is on individuals' perceptions, observed through
easy-to-measure physiological variables (PVs; including the
use of wearable sensors), while
considering the brain of each individual as a system, the state of
which is of significance, not the internal components' interactions. The proposed approach
is applicable to all five types of sensory inputs (visual,
auditory, tactile, olfactory, and gustatory). However, we
limit our experiments to visual and auditory inputs, and
we consider similarity based on the PVs and emotions, not
in terms of visual or audio similarities.
In our study, we review two easy-to-observe and -measure PVs: heartbeat and skin conductance. The proposed
approach conceives a layered hierarchical structure, with
the lower layer mapping and integrating the sensory inputs
expressed (and measured) by observable PVs into generalized, aggregated data clouds. We consider this a subconscious process and thus expect unsupervised learning to
take place. At a higher hierarchical level, these data clouds
gain semantic meaning and become concepts, such as positive and negative. In general, the granularity of the semantic categories can be higher, e.g., beach, mountain,
highway, train, dog, door, and gun. There can also be more
complex, composite concepts, such as office, war scene,
and marketplace. In addition to this hierarchically layered
scheme, we look at the conscious, decision-making block,
which may provide cognitive or deliberative feedback, is influenced
by goals, and results in the (re)actions of
the individual. The proposed approach further maps these
semantically meaningful, generalized concepts into emotions by a self-developing, linguistically transparent AnYa-type rule-based model [12].
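To make the lower, subconscious layer more concrete, the following Python sketch groups per-stimulus PV feature vectors into data clouds with a simple greedy, distance-based procedure and leaves the semantic labeling of the resulting clouds to the higher layer. The chosen features (mean heart rate and mean skin conductance per stimulus window), the distance threshold, and the grouping rule are illustrative assumptions, not the exact mechanism of the proposed system; the point is only to show how unsupervised aggregation into clouds can proceed without any labels.

```python
import numpy as np

def form_data_clouds(samples, radius=1.0):
    """Greedy online grouping of PV feature vectors into data clouds.

    Each cloud is summarized by a focal point (running mean) and its support
    (number of samples). A sample joins the nearest cloud if it lies within
    `radius` of that cloud's focal point; otherwise it seeds a new cloud.
    This is a simplified stand-in for the unsupervised lower layer, not the
    exact procedure used in the study.
    """
    clouds = []  # each cloud: {"focal": np.ndarray, "support": int}
    for x in samples:
        if clouds:
            dists = [np.linalg.norm(x - c["focal"]) for c in clouds]
            i = int(np.argmin(dists))
            if dists[i] <= radius:
                c = clouds[i]
                c["support"] += 1
                # incremental update of the focal point (running mean)
                c["focal"] += (x - c["focal"]) / c["support"]
                continue
        clouds.append({"focal": np.array(x, dtype=float), "support": 1})
    return clouds

# Hypothetical PV features per stimulus window:
# [mean heart rate (bpm), mean skin conductance (microsiemens)],
# standardized before grouping.
features = np.array([[72.0, 2.1], [75.0, 2.3], [95.0, 6.8], [98.0, 7.2]])
z = (features - features.mean(axis=0)) / features.std(axis=0)

clouds = form_data_clouds(z, radius=1.0)
for k, c in enumerate(clouds):
    print(f"cloud {k}: focal point {np.round(c['focal'], 2)}, support {c['support']}")
```

Each resulting cloud is summarized by a focal point and its support, which is the kind of information the AnYa-type layer described later builds on when the clouds acquire semantic labels such as positive and negative.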
We aim for proof of concept and, therefore, limit our
study to a steady-state case and to simple, noncomposite
concepts (e.g., positive and negative), using only images
and music and no active decision-making cognitive feedback. We are aware, however, that in reality, a dynamic
model, e.g., of the hierarchical Kalman filter type, is more
suitable; this will be studied in future work. For example,
in [10], emotions were not considered, and the study was
limited to a single sensory (video) input. According to the
proposed CyberMind approach, an autonomous learning
process maps the observable PVs, which represent individual-, context-, and temporal-specific emotions. This
corresponds to the subconscious manner in which our
emotions are associated and aggregated into types of
subconscious reactions. This means that an individual
does not need to do anything to produce these outputs
and usually can barely hide them (they are predominantly
objective). Thus, we are offering an emotion mapper/
reader/model that is essentially similar to a lie detector.
The specific role of the emotions is very important in
regard to understanding the nature of unsupervised
learning in individuals.
Furthermore, we propose a transparent IF-THEN
AnYa-type linguistic model. It can be used to classify
unseen images and thus predict the emotional reaction of
an individual during a given session of the experiment.
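For readers less familiar with AnYa-type models [12], the sketch below shows how such a rule base can be evaluated: each rule's antecedent is a data cloud represented by its focal point, the firing strength of a rule depends on how close a new sample is to that focal point, and the predicted label is taken from the most strongly firing rule. The Cauchy-type kernel, the focal points, and the feature values are illustrative assumptions, not the exact settings used in our experiments.

```python
import numpy as np

# Illustrative AnYa-type rule base: each rule reads
#   IF (x is like the focal point of cloud_i) THEN (label_i)
# with a nonparametric antecedent defined by a data cloud focal point.
rules = [
    {"focal": np.array([-0.9, -0.8]), "label": "negative"},
    {"focal": np.array([0.9, 0.8]),   "label": "positive"},
]

def firing_strengths(x, rules):
    """Cauchy-type activation of each rule, normalized to sum to 1.

    The kernel 1 / (1 + ||x - focal||^2) is one common choice for
    cloud-based antecedents; it is used here purely for illustration.
    """
    gamma = np.array([1.0 / (1.0 + np.sum((x - r["focal"]) ** 2)) for r in rules])
    return gamma / gamma.sum()

def classify(x, rules):
    """Winner-takes-all decision over the rule base."""
    lam = firing_strengths(x, rules)
    return rules[int(np.argmax(lam))]["label"], lam

# Standardized PV feature vector for an unseen stimulus (hypothetical values).
label, lam = classify(np.array([0.7, 0.5]), rules)
print(label, np.round(lam, 3))
```

Because each antecedent refers to a representative prototype rather than to parameterized membership functions, the rules remain linguistically readable, e.g., "IF the physiological response resembles this prototype THEN the reaction is positive."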
We further propose and perform an experiment with
wearable, nonintrusive devices such as heartbeat and
skin conductance sensors for identifying emotions. As a