Detecting and erasing biases

Even though mechanisms against biases are adopted in the previous stages, it is still necessary to ensure that the results of the training phase minimize biases. This can be difficult, since some types of bias and discrimination are particularly hard to detect: the team members curating the input data are sometimes unaware of them, and the users who are their subjects are not necessarily cognisant of them either. The monitoring systems implemented by the AI developer in the validation stage are therefore essential to avoiding biases.

There are many technical tools that can help detect biases, such as the Algorithmic Impact Assessment.[1] The AI developer must consider their effective implementation.[2] However, as the literature shows,[3] an algorithm often cannot be totally purged of all types of bias. The developer should nevertheless at least be aware of their existence and of the implications this might bring (see the Lawfulness, fairness and transparency and Accuracy sections in the Principles part).
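Bias detection of this kind can be made concrete with simple fairness metrics computed over a model's predictions. The sketch below, with purely illustrative data and function names, compares two common group-fairness measures: the gap in positive-prediction rates (demographic parity) and the gap in true-positive rates between two protected groups. It is a minimal sketch, not a substitute for a full Algorithmic Impact Assessment.

```python
# Minimal sketch of a post-training bias check, assuming binary
# predictions, binary labels, and a binary protected attribute.
# All data and function names here are illustrative.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rate between groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

def true_positive_rate_gap(preds, labels, groups):
    """Absolute difference in true-positive rate (recall) between groups."""
    tprs = {}
    for g in set(groups):
        positives = [(p, y) for p, y, gr in zip(preds, labels, groups)
                     if gr == g and y == 1]
        tprs[g] = sum(p for p, _ in positives) / len(positives)
    vals = list(tprs.values())
    return abs(vals[0] - vals[1])

# Illustrative predictions, ground-truth labels, and group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

dp_gap  = demographic_parity_gap(preds, groups)       # 0.5
tpr_gap = true_positive_rate_gap(preds, labels, groups)  # 0.5
```

Such checks also illustrate the point made in the literature cited above: metrics like these generally cannot all be satisfied simultaneously when base rates differ between groups, so a model that looks fair under one measure may still fail another.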



[1] Reisman, D., Crawford, K. and Whittaker, M. (2018) Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute, New York, NY. Available at: (accessed 15 May 2020).

[2] ICO (2020) AI auditing framework – draft guidance for consultation. Information Commissioner’s Office, Wilmslow. Available at: (accessed 15 May 2020).

[3] Chouldechova, A. (2017) ‘Fair prediction with disparate impact: a study of bias in recidivism prediction instruments’, Big Data 5(2): 153–163.

