Performing external audit of data processing

In cases where the risks of the processing of personal data within the AI tool are high, an audit of the system by an independent third party must be considered. A variety of audits can be used: they might be internal or external, they might cover the final product only or be performed on earlier prototypes, and they could be conceived as a form of monitoring or as a transparency tool. Annex I, at the end of this document, contains some recommendations by the Spanish Data Protection Agency (AEPD) that could serve as a model.

In terms of legal accuracy, AI tools must be audited to verify whether the processing of personal data within their systems fulfils the obligations stipulated in the GDPR, considering the wide range of issues that arise. The High-Level Expert Group on AI stated that “testing processes should be designed and performed by as diverse group of people as possible. Multiple metrics should be developed to cover the categories that are being tested for different perspectives. Adversarial testing by trusted and diverse “red teams” deliberately attempting to “break” the system to find vulnerabilities, and “bug bounties” that incentivize outsiders to detect and responsibly report system errors and weaknesses, can be considered.”[1] However, there are good reasons to be sceptical about the capability of an auditor to check the functioning of a machine learning system.
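
To illustrate what “multiple metrics … for different perspectives” can mean in practice, the sketch below shows one elementary check an auditor might run: computing the same performance and selection-rate figures separately for each group represented in a test set, so that disparities between groups become visible. This is a minimal, purely illustrative sketch in Python; the records, group labels and metrics are invented for the example and are not part of the Guidelines, the HLEG recommendations or the AEPD checklist.

```python
from collections import defaultdict

# Illustrative, hand-made test records: (group, true_label, predicted_label).
# In a real audit these would come from the controller's evaluation data.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 1),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]

def per_group_metrics(records):
    """Compute accuracy and selection rate (share of positive predictions) per group."""
    buckets = defaultdict(list)
    for group, y_true, y_pred in records:
        buckets[group].append((y_true, y_pred))
    report = {}
    for group, pairs in buckets.items():
        n = len(pairs)
        accuracy = sum(1 for y, p in pairs if y == p) / n
        selection_rate = sum(1 for _, p in pairs if p == 1) / n
        report[group] = {"n": n, "accuracy": accuracy, "selection_rate": selection_rate}
    return report

report = per_group_metrics(records)
for group, metrics in report.items():
    print(group, metrics)

# A large gap in selection rate between groups (here group_b is always selected)
# is the kind of disparity an audit would flag for further review.
rates = [m["selection_rate"] for m in report.values()]
print("selection-rate gap:", max(rates) - min(rates))
```

Such per-group figures are, of course, only a starting point; adversarial “red team” testing and the organisational measures discussed below go well beyond this kind of check.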

This is why it is sensible to focus on the items included by the AEPD in its recommended checklist. It is more straightforward to assess the measures implemented to avoid biases, obscurity, hidden profiling, etc., that is, the implementation of principles such as data protection by design and by default (see “data protection by design” in the “Concepts” part of the Guidelines) or data minimization (see the “Data minimization” section in the “Principles” chapter) and the adequate use of tools such as the DPIA or the intervention of a skilled DPO, than to try to gain an in-depth understanding of the functioning of a complex algorithm (the “black box” problem is obviously very important in this respect). Implementing adequate data protection policies from the first stages of the lifecycle of the tool is the best way to avoid data protection issues.

Box 20: The difficulty in auditing a machine-learning system: IBM’s Watson platform

IBM’s policy stresses that Watson is trained via “supervised learning”: in other words, the system is guided, step by step, in its learning. This should mean that its process can be monitored, as opposed to unsupervised learning, in which the machine has full autonomy in determining its operating criteria. IBM also claims to check what the systems have been doing before any decision to retain a certain type of learning. But experts researching this subject, who have spoken out during the various debates organized (not least by Allistene’s research committee on ethics, CERNA), have insisted time and again that such statements are erroneous. Based on current research, the “output” produced by the most recent machine learning algorithms is not explainable; explainable AI remains a concept on which research is ongoing. They also point out that it is very difficult to audit a machine learning system in practice.[2]


References


[1] High-Level Expert Group on AI (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels, p. 22. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 15 May 2020).

[2] CNIL (2017) How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence. Commission Nationale de l’Informatique et des Libertés, Paris, p. 28. Available at: www.cnil.fr/sites/default/files/atoms/files/cnil_rapport_ai_gb_web.pdf (accessed 15 May 2020).
