three categories, namely, 1) model-based, 2) model-agnostic, and 3) example-based methods.
Model-based methods mainly explore AI models that are inherently explainable. Typical explainable AI models include linear models, e.g., linear regression and logistic regression, and tree-based methods, e.g., decision trees and decision rules [148], [149]. For instance, the weight of a feature in linear regression represents the importance of that feature, which can be useful for explanation. The if-then rules in a decision rule algorithm clearly indicate the relationship between the inputs and outputs of the model. Domain experts, even without deep AI knowledge, can help check whether the rules that are automatically learnt from data really make sense or can be further refined.
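As a minimal sketch of this idea, the learned coefficients of a linear model can be read directly as feature importances. The feature names and data below are hypothetical stand-ins:

# Model-based explainability sketch: the coefficients of a fitted linear
# regression serve directly as feature importances (hypothetical data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 200 samples, 3 features
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
for name, w in zip(["size", "age", "noise"], model.coef_):
    print(f"{name}: weight = {w:+.2f}")        # larger |weight| => more important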
Model-agnostic solutions are more general in explainable AI and can be applied to any AI model. A typical method is the partial dependence plot (PDP), which shows the marginal effect of features on the prediction results of an AI model [150]. LIME [151] perturbs inputs and observes the changes in model outputs, which can be used to explain the relationship between inputs and outputs. Another popular method, named Anchors [152], identifies a part of the input that is sufficient for an AI model to give its prediction.
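The following is a minimal sketch of a one-dimensional partial dependence computation; the predict function is a toy stand-in for any black-box model:

# PDP sketch: fix one feature at each grid value, average the model's
# predictions over the dataset, and read off the marginal effect.
import numpy as np

def predict(X):                                # stand-in for any model.predict
    return X[:, 0] ** 2 + X[:, 1]

X = np.random.default_rng(0).normal(size=(500, 2))
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)

pdp = []
for v in grid:
    Xv = X.copy()
    Xv[:, 0] = v                               # fix feature 0 at grid value v
    pdp.append(predict(Xv).mean())             # average => marginal effect

for v, p in zip(grid, pdp):
    print(f"x0 = {v:+.2f} -> mean prediction {p:.2f}")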
Example-based methods use a set of examples from a dataset to explain the underlying behaviors of AI models. Counterfactual explanations describe how an instance would have to change to significantly alter its prediction [153]. By designing counterfactual samples, it is possible to understand how the model produces certain predictions and how to explain individual predictions. Another example-based method leverages prototypes and criticisms, in which prototypes are representative samples selected from the data and criticisms are samples that are not well represented by the prototypes [154]. A further approach selects influential samples by identifying the training instances that are most critical to the model or its predictions [155]. By analyzing these influential samples, one can better understand the model's behavior, which leads to better explainability.
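A minimal sketch of a counterfactual explanation follows: nudge an instance toward the decision boundary of a toy linear classifier and report the smallest change that flips the prediction. The classifier and instance are hypothetical:

# Counterfactual sketch: find the smallest step along the boundary normal
# that flips a toy logistic classifier's decision.
import numpy as np

w, b = np.array([1.5, -2.0]), 0.2              # hypothetical linear classifier
predict = lambda x: (x @ w + b) > 0

x = np.array([-0.4, 0.3])                      # instance currently classified 0
assert not predict(x)

step = w / np.linalg.norm(w)                   # direction toward the boundary
for eps in np.linspace(0, 2, 201):
    x_cf = x + eps * step
    if predict(x_cf):
        print(f"counterfactual: change x by {x_cf - x} (||delta|| = {eps:.2f})")
        break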
Although various techniques have been developed for explaining AI models, the existing solutions can only deal with simple models or superficially explain the relationships between inputs and outputs. The underlying behavior of complex deep AI models, e.g., what has actually been learnt in each layer of a deep neural network, is still a mystery, and more research effort should be devoted to this crucial area.
2) Safety
Deep learning-based AI systems have witnessed growing adoption due to their impressive performance and have recently even been expanded into high-stakes applications, such as self-driving vehicles, where safety is paramount. Notably, some motor accidents have already been connected to self-driving systems, highlighting the immense importance of AI safety. Beyond its implications for human life, safe AI builds the societal trust that is pivotal to its sustainable adoption. Hence, it is key to consider possible threats under which the system fails to perform according to its users' expectations. Recent research has shown that an adversary can undermine the safety of the system by targeting either the AI's inference phase or its training phase, by means of adversarial examples and data poisoning, respectively.
Adversarial Examples: Much like visual blind spots in human vision, visual deep learning systems are vulnerable to imperceptible perturbations called adversarial examples [156]. In the self-driving car example, small perturbations can be added to an image of a road sign to steer the car's image classifier towards classifying the sign as a different object (e.g., a high-speed sign when it is actually a stop sign) with high confidence, which could result in a catastrophic outcome. Such perturbations can be crafted as the input image is fed into the AI model, given access to the model's prediction probabilities and gradient information [157], [158]. This information can be exploited by an adversary to compute a subtle corruption of the input image that steers the model's prediction towards a target class. Beyond the image domain, the threat of adversarial examples also exists in the audio [159], [160], [161] and natural language domains [162], [163], [164].
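A minimal sketch of such a gradient-based perturbation, in the style of the fast gradient sign method (FGSM), is shown below; the untrained model and input are hypothetical stand-ins for a deployed classifier:

# FGSM-style sketch: use the gradient of the loss w.r.t. the input to
# compute a small perturbation that pushes the prediction away from the
# true label (hypothetical model and data).
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28, requires_grad=True)   # clean input image
y = torch.tensor([3])                              # true label

loss = F.cross_entropy(model(x), y)
loss.backward()                                    # gradient w.r.t. the input

eps = 0.03                                         # imperceptible budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1)      # step that increases the loss
print(model(x_adv).argmax(dim=1))                  # prediction may now differ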
As mentioned above, defenses against adversarial examples have emerged. Among them, the most effective defenses are based on the simple concept of adversarial training (AT), where adversarial examples are generated and incorporated as training samples so that AI models learn to be robust against them at test time [165], [166], [167], [168]. Initially, AT-based defenses were hindered by limitations such as high computational costs, due to the expensive steps required to craft strong adversarial training samples [169], [170]. To reduce the cost of AT, recent works have reduced the number of optimization steps required to generate adversarial training examples while maintaining high adversarial accuracy [167], [171], [172].
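A minimal sketch of one AT step follows, using a single-step FGSM-style attack to keep the cost low; the model, optimizer settings, and data are hypothetical:

# Adversarial training (AT) sketch: craft an adversarial example on the
# fly, then update the model on it instead of the clean sample.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
eps = 0.03

def train_step(x, y):
    x_pert = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x + eps * x_pert.grad.sign()).clamp(0, 1).detach()
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)    # train on the adversarial sample
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(train_step(x, y))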
Adding to the diversity of defenses, several works improved the robustness of models by minimizing the effect of small input perturbations on the models' predictions [173], [174], [175], or by training models to rely on less superficial features that are more aligned with human vision [176], [177], [178].
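As an illustrative variant of the first idea (not a specific cited method), one can penalize the sensitivity of the loss to the input via an input-gradient penalty; the model, data, and penalty weight below are hypothetical:

# Input-gradient penalty sketch: add the squared norm of the loss gradient
# w.r.t. the input to the training objective, discouraging predictions that
# change sharply under small perturbations.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(32, 1, 28, 28, requires_grad=True)
y = torch.randint(0, 10, (32,))

loss = F.cross_entropy(model(x), y)
grad_x, = torch.autograd.grad(loss, x, create_graph=True)
total = loss + 0.1 * grad_x.pow(2).sum()       # penalize steep input gradients
total.backward()                               # then step the optimizer as usual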
Another class of defenses is provable defenses, which seek to guarantee an AI model's performance in the face of adversarial examples [179], [180], [181], [182], [183]. Provable defenses can offer assurance to users by providing a theoretically proven worst-case bound on the AI's performance when under attack. This can help decision-makers weigh the reward-risk trade-off of deploying the AI system with more certainty.
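One well-known family of provable defenses is randomized smoothing; the following is a minimal sketch of its core prediction step (the statistical certification of a robust radius is omitted), with a hypothetical model and input:

# Randomized smoothing sketch: classify many Gaussian-noised copies of the
# input and take a majority vote; the vote margin underlies the certified
# robustness guarantee.
import torch

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
sigma, n = 0.25, 1000                          # noise level, number of samples

with torch.no_grad():
    noisy = x + sigma * torch.randn(n, 1, 28, 28)
    votes = model(noisy).argmax(dim=1)
    counts = torch.bincount(votes, minlength=10)
print("smoothed prediction:", counts.argmax().item(),
      "with vote share", counts.max().item() / n)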
Data Poisoning: Modern deep-learning-based AI models rely heavily on large amounts of training data, typically mined from public sources such as the Internet or crowdsourcing platforms. This exposes the models to the threat of data poisoning, where an attacker acting as a data contributor can degrade a model's performance by corrupting a small subset of its training data [184], [185], [186]. The corruption typically involves a combination of altering the data's labels and