Computational Intelligence - February 2017 - 48

the standard XCS to model agents in Wargus. The system was tested with both immediate and delayed reward functions, and the immediate reward function was found to work much better than the delayed version. The system was tested against a random player on four types of scenarios relating to different skills, such as training archers. The results showed that XCS performed well in terms of reward prediction and learned sensible rules. Lujan's work further emphasized the importance of appropriately designed reward functions for improving LCS performance in RTS games. Small and Congdon [76] applied a Pittsburgh-style LCS approach to model agents in the first-person shooter Unreal Tournament 2004. The real-time nature of the learning environment did not allow the system to train on any winning scenario, and hence to receive any positive reinforcement, which led to learning difficulties. To bypass this problem, the initial population was seeded with several high-level handcrafted rules. A significant improvement in learning performance was reported with this strategy. This has been perhaps the only study that tested a Pittsburgh-style LCS in a real-time game environment.
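Population seeding of this kind can be sketched as follows. The ternary rule encoding and the handcrafted rules shown are hypothetical illustrations, not the representation actually used in [76]:

```python
import random

# A rule maps a ternary condition string ('0', '1', or the wildcard '#')
# to an action index; '#' matches either bit of the encoded game state.
def matches(condition, state):
    return all(c == '#' or c == s for c, s in zip(condition, state))

def random_rule(n_bits, n_actions, wildcard_p=0.33):
    cond = ''.join('#' if random.random() < wildcard_p else random.choice('01')
                   for _ in range(n_bits))
    return (cond, random.randrange(n_actions))

def seed_population(handcrafted, pop_size, n_bits, n_actions):
    """Start from high-level handcrafted rules, then fill the rest randomly."""
    population = list(handcrafted)
    while len(population) < pop_size:
        population.append(random_rule(n_bits, n_actions))
    return population

# Hypothetical handcrafted rules, e.g. "if enemy visible (bit 0), fire (action 2)".
handcrafted = [('1#####', 2), ('0##1##', 0)]
pop = seed_population(handcrafted, pop_size=20, n_bits=6, n_actions=4)
```

The seeded rules give the system some positive reinforcement from the start, so the genetic search refines working behavior rather than waiting to stumble on it.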
Computational time challenges were expected, owing to the batch-processing nature of this type of LCS. It would be interesting to compare XCS performance on the same game with the LCS used in this study. Nidorf et al. [77] used XCS to model agents for RoboCode, in which one or more (military) tanks engage in a mutual battle. Agents needed to learn three kinds of strategies: scanning for enemies; targeting and firing at them; and maneuvering to create a strategic advantage. A standard XCS implementation was apparently used to encode all strategies together, that is, in a single population. The performance of XCS was compared with a neuroevolutionary method (NEAT); XCS performed better in some scenarios, but there was no clear overall winner. The authors noted that the continuous, skewed-payoff, multi-step environment proved challenging for XCS, especially with increasing difficulty in
test scenarios. This has been perhaps the only study that compared the performance of LCS with another CI method in the same game environment. The authors noted that both algorithms were good at learning competent strategies. Further, while XCS struggled with an increased number of actions, NEAT was found to be prone to overfitting. In our view, the former problem is much easier to handle and relates more to how the environment is represented; overfitting is the more serious problem and may require algorithmic changes to address. Tsapanos et al. [78] used ZCS to model agents in a military-flavored RTS game called ORTS [79], in which agents engaged in different tasks to build, defend, and attack military bases. The performance of ZCS was compared with a randomly acting agent as well as with an agent that used a standard RL algorithm (SARSA). The experimental results showed that ZCS outperformed both baselines. This was one of the few studies that compared an LCS with a standard RL algorithm in a game environment and showed promising results. Clementis [80] applied XCS to a simplified "battleship" game. Two variations of the action selection mechanism were tested. The first used a probabilistic action selection mechanism, in which action probabilities were determined from statistics computed during game play (e.g., the number of hits for a specific action over all hits). The second fed the action statistics to a neural network and used its output to predict actions indirectly. The results showed significantly faster and better performance when using neural-network-supported action selection, highlighting the merits of hybridizing LCS-based game agents with other machine learning techniques.
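The first of these two mechanisms, selection in proportion to each action's share of hits, can be sketched as follows; the class and method names are hypothetical reconstructions of the statistic described, not the paper's exact formulation:

```python
import random
from collections import defaultdict

class HitStatSelector:
    """Select actions in proportion to their share of recorded hits."""
    def __init__(self, actions):
        self.actions = list(actions)
        self.hits = defaultdict(int)

    def record_hit(self, action):
        self.hits[action] += 1

    def probabilities(self):
        total = sum(self.hits[a] for a in self.actions)
        if total == 0:  # no data yet: fall back to a uniform distribution
            return {a: 1.0 / len(self.actions) for a in self.actions}
        return {a: self.hits[a] / total for a in self.actions}

    def select(self):
        probs = self.probabilities()
        return random.choices(self.actions,
                              weights=[probs[a] for a in self.actions])[0]

selector = HitStatSelector(['north', 'south', 'east'])
selector.record_hit('north'); selector.record_hit('north'); selector.record_hit('east')
probs = selector.probabilities()  # north gets 2/3, east 1/3, south 0
```

The neural-network variant would replace `probabilities()` with a learned mapping from the same hit statistics to action scores.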
Other game applications include Irvan et al.'s work [81], which studied LCS-based agents in the game of Pac-Man. A multi-agent LCS architecture was proposed in which multiple XCS-based agents successfully coordinated with each other using a shared memory concept.
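A shared-memory (blackboard) coordination scheme of this general flavor can be sketched as follows; the keys and the agents' flee/forage logic are hypothetical illustrations, not the architecture of [81]:

```python
class SharedMemory:
    """A common store that all agents can write to and read from."""
    def __init__(self):
        self._store = {}
    def post(self, key, value):
        self._store[key] = value
    def read(self, key, default=None):
        return self._store.get(key, default)

class Agent:
    def __init__(self, name, memory):
        self.name, self.memory = name, memory
    def observe(self, ghost_position):
        # Publish the sighting so every agent can react to it.
        self.memory.post('ghost', ghost_position)
    def act(self):
        # Flee if any agent has reported a ghost, otherwise keep foraging.
        return 'flee' if self.memory.read('ghost') is not None else 'forage'

memory = SharedMemory()
a, b = Agent('a', memory), Agent('b', memory)
a.observe((3, 4))   # agent a spots a ghost
action = b.act()    # agent b reads the shared report and flees
```

The point is that coordination emerges from reading a common store rather than from direct agent-to-agent messaging.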
The above studies have demonstrated LCS in a variety of roles across several categories of video games, including modeling NPCs; modeling agents that compete with human players and with other intelligent or scripted agents; modeling multi-agent teams; and learning adaptive game-playing strategies in real-time environments. Despite the diversity of these successful LCS applications, the utility of LCS in modern, more complicated games is not evident from the current literature. Given the potential of LCS demonstrated by these studies, this seems to be a missed research opportunity.
B. Combinatorial Games

A small number of studies exist in this category. Browne et al. [82], [83] explored the performance of XCS on Connect4, a turn-based board game in which the goal is to place four counters of the same color consecutively in any direction. The goal for an agent is to learn a winning strategy. The authors introduced an abstraction algorithm to overcome the scalability issues posed by the game's large, multi-step search space. The algorithm aimed to combine high-fitness rules learned by an XCS into rules with higher generalization. While this algorithm improved XCS performance (particularly the number of wins), it also slowed the system. This work provided a solution to the scaling problem without losing model transparency. The abstraction technique also aligned well with Holland's original vision of default hierarchies in LCS [84]. Other mechanisms that could provide faster abstraction solutions in LCS have been proposed in the literature [85].
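The flavor of such an abstraction step, merging high-fitness rules that agree on an action into a single more general rule, can be sketched as follows; the ternary-string encoding and the merge criterion are hypothetical illustrations, not the algorithm of [82], [83]:

```python
def merge_conditions(c1, c2):
    """Generalize two ternary conditions: positions that disagree become '#'."""
    return ''.join(a if a == b else '#' for a, b in zip(c1, c2))

def abstract_rules(rules, fitness, threshold):
    """Combine pairs of high-fitness rules sharing an action into broader rules."""
    strong = [r for r in rules if fitness[r] >= threshold]
    abstracted = []
    for i, (c1, a1) in enumerate(strong):
        for c2, a2 in strong[i + 1:]:
            if a1 == a2:  # only merge rules that advocate the same action
                abstracted.append((merge_conditions(c1, c2), a1))
    return abstracted

rules = [('0110', 1), ('0100', 1), ('1111', 0)]
fitness = {('0110', 1): 0.9, ('0100', 1): 0.8, ('1111', 0): 0.2}
general = abstract_rules(rules, fitness, threshold=0.5)
# The two strong rules for action 1 merge into the broader rule ('01#0', 1).
```

Each merged rule covers the states of both parents, which is how abstraction trades a larger, slower matching step for a smaller, more general population.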
Sood et al. [86] attempted to test the ability of LCS to learn strategies in a complex supply-chain environment in the aviation industry. As their test bed, they used three game environments: Nim, the IPD, and matrix choice games [87]. An XCS was used to model agents playing these games and competing with other