Testing for Bias
Machine learning introduces complex algorithms that affect our daily lives. Despite all its benefits, machine learning also introduces risks that can manifest as bias against customers or users, in the form of racism, sexism, and other kinds of discrimination.
These areas are starting to be regulated, and some organisations are calling for these technologies to be banned outright in high-risk settings. Ultimately, and practically, it will fall to technologists to verify whether software is operating legally and fairly. This issue is explored in detail by Adam, who proposes a framework for the ethical testing of bias.
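As a minimal sketch of what such a bias test might look like in practice, the example below computes the demographic parity gap: the difference in positive-outcome rates between groups. The function name, data, and groups are illustrative assumptions, not part of the framework described in the article.

```python
def demographic_parity_difference(predictions, groups):
    """Return the gap in positive-outcome rates between the best- and
    worst-treated groups (0.0 means perfectly equal treatment)."""
    rates = {}
    for group in set(groups):
        # Collect predictions for members of this group
        selected = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(selected) / len(selected)
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical binary approval decisions for applicants from two groups
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # prints "Demographic parity gap: 0.50"
```

A test like this could run in a continuous-integration pipeline, failing the build whenever the gap exceeds an agreed fairness threshold.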