LEADING EDGE

Ansgar Koene

Algorithmic Bias

In the context of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, and with support from its executive director John C. Havens, Paula Boddington from the University of Oxford and I have proposed the development of a new IEEE Standard on Algorithmic Bias Considerations (https://standards.ieee.org/develop/project/7003.html). The aim is for this to become part of a set of ethical design standards, such as the IEEE P7001™ Standards Project, Transparency of Autonomous Systems, whose Working Group, led by Alan Winfield, has just started. Whereas the Transparency of Autonomous Systems Standard will focus on the important issue of "breaking open the black box" for users and/or regulators, the Algorithmic Bias Standard focuses on "surfacing" and evaluating the societal implications of the outcomes of algorithmic systems, with the aim of countering non-operationally-justified results.

Addressing Growing Concerns

The rapid growth of algorithm-driven services has led to growing concerns among civil society, legislators, industry bodies, and academics about potential unintended and undesirable biases within intelligent systems that are largely inscrutable "black boxes" for users. Examples that have captured the headlines include: apparent racial bias by Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software used in various U.S. jurisdictions to provide sentencing advice [1]; computer vision algorithms for passport photos that mistakenly register Asian eyes as closed [2]; and beauty pageant judging algorithms that disproportionately favor white features

Digital Object Identifier 10.1109/MTS.2017.2697080
Date of publication: 8 June 2017
JUNE 2017 ∕ IEEE Technology and Society Magazine