Detecting and erasing biases

Even when mechanisms against bias have been duly adopted in earlier stages, it is still necessary to ensure that the results of the training phase minimize biases. This can be difficult, since some types of bias and discrimination are particularly hard to detect: the team members curating the input data are sometimes unaware of them, and the users who are their subjects are not necessarily cognisant of them either. The monitoring systems that the AI developer implements in the validation stage are therefore essential to avoiding bias.

Many technical tools can help detect biases, such as the Algorithmic Impact Assessment.[1] The AI developer must consider their effective implementation.[2] However, as the literature shows,[3] an algorithm may never be fully purged of every type of bias: some fairness criteria are mutually incompatible, so satisfying one can preclude satisfying another. The developer should at least be aware of the biases that remain and of their implications (see the Lawfulness, fairness and transparency and Accuracy sections in the Principles part).
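To make this concrete, the sketch below shows one simple kind of check a validation-stage monitoring system could run: comparing selection rates and false-positive rates across groups defined by a protected attribute. It is a minimal illustration only; the data, group labels, and function names are hypothetical and not taken from any of the frameworks cited here.

```python
# Minimal sketch of a group-fairness check on held-out validation data.
# Assumes a binary classifier and a protected attribute with two groups,
# "A" and "B". All values below are illustrative.
import numpy as np

def group_rates(y_true, y_pred, groups, group):
    """Selection rate and false-positive rate for one group."""
    mask = groups == group
    selected = y_pred[mask]
    truth = y_true[mask]
    selection_rate = selected.mean()
    negatives = truth == 0
    fpr = selected[negatives].mean() if negatives.any() else float("nan")
    return selection_rate, fpr

# Hypothetical validation labels, predictions, and group membership.
y_true = np.array([0, 1, 0, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

sr_a, fpr_a = group_rates(y_true, y_pred, groups, "A")
sr_b, fpr_b = group_rates(y_true, y_pred, groups, "B")

# Disparate impact ratio: values well below 1 suggest one group is
# selected far less often than the other (the informal "80% rule").
print("Disparate impact ratio:", min(sr_a, sr_b) / max(sr_a, sr_b))

# Error-rate balance (cf. Chouldechova 2017): a large gap in
# false-positive rates signals unequal treatment of negative cases.
print("False-positive rate gap:", abs(fpr_a - fpr_b))
```

Note that the two numbers can disagree: in the example above the groups are selected at the same rate, yet their false-positive rates differ, which is one reason a single fairness metric is rarely sufficient on its own.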
References


[1] Reisman, D., Crawford, K. and Whittaker, M. (2018) Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute, New York, NY. Available at: https://ainowinstitute.org/aiareport2018.pdf (accessed 15 May 2020).

[2] ICO (2020) AI auditing framework – draft guidance for consultation. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf (accessed 15 May 2020).

[3] Chouldechova, A. (2017) ‘Fair prediction with disparate impact: a study of bias in recidivism prediction instruments’, Big Data 5(2): 153–163. Available at: http://doi.org/10.1089/big.2016.0047
