GDPR provisions: Transparency

Understanding transparency and opacity

In general, transparency means that data subjects are provided with clear information about data processing (see “Transparency” in the “Lawfulness, fairness and transparency” subsection of the “Principles” within Part II of these Guidelines). They must be informed about how and for which purposes their information (including both observed and inferred data about them) is used, regardless of whether this information is collected from the data subjects themselves or obtained from others.[1] Data subjects should always be aware of how and why an AI-assisted decision about them was made, or where their personal data was used to train and test an AI system. Controllers must keep in mind that transparency is even more important in such cases, particularly where they have no direct relationship with the data subjects.

In general, transparency must be guaranteed by using a number of complementary tools. Naming a DPO, who then serves as a single point of contact for queries from data subjects, is an excellent option. Preparing adequate records of processing for the supervisory authorities, or performing DPIAs, are also highly recommended measures to promote transparency. Undertaking analyses that evaluate the effectiveness and accessibility of the information provided to the data subjects also helps to ensure the efficient implementation of this principle.

The main challenge with AI is that it encompasses a range of techniques that are very different from each other. Some are very simple, so it is easy for the controller to provide all the necessary information. Others, such as deep learning, suffer from serious problems in terms of transparency. This is often referred to as the ‘black box’ issue, which introduces opacity into the AI framework and renders transparency difficult to achieve. Indeed, opacity is one of the main threats to fair AI, since it directly defies the need for transparency. There are at least three types of opacity that are inherent in AI to a greater or lesser extent: (1) opacity as intentional corporate or state secrecy; (2) opacity as technical illiteracy; and (3) epistemic opacity.

Opacity as intentional corporate or state secrecy

This kind of opacity can be legitimate under the protection of industrial secret regulations. It can also serve legitimate interests, such as preserving competitive advantages, ensuring the security of the system, or preventing malicious users from gaming the system. However, it should be compatible with the incorporation of independent certification systems that are capable of accrediting that the mechanism meets the requirements of the GDPR. In most cases, furnishing data subjects with the information they need to protect their interests, without at the same time disclosing trade secrets, should not be problematic.[2] This is because data subjects do not need to understand in detail how the system works, only how it might cause damage to their interests, rights and freedoms.

Opacity as technical illiteracy

This kind of opacity derives from the specific skills required to design and program algorithms, and from the ability to read and write code. The code used in AI is a mystery to the vast majority of the population, who lack this specific knowledge, but this should not be an impediment to the fulfilment of the obligation of information stipulated by the GDPR. A lack of ability to understand computer language must not be a barrier to providing an understandable explanation of the purpose of an AI system, not only to the stakeholders who are subject to profiling or automated decision-making, but to everyone else.

Epistemic opacity

This opacity arises from the characteristics of machine-learning algorithms and the scale required to apply them usefully. It is related to the fact that certain algorithmic models are not interpretable by humans. Put simply, the path from the inputs that the model receives to the outputs that it produces is inscrutable in terms of human understanding. At the regulatory level, there is no ban on the use of this type of model, although it is advisable to follow the precautionary principle when using one, since the lack of interpretability could aggravate the difficulties of identifying biases in the model, which could in turn lead to discriminatory results or to false or spurious correlations. Of course, not all machine-learning models are opaque in this sense.
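To make this distinction concrete, the following sketch (a hypothetical illustration built on scikit-learn and synthetic data; it is not part of the Guidelines and is not required by the GDPR) contrasts a model whose reasoning can be read directly from its coefficients with one whose many internal weights say nothing meaningful to a human reader.

    # Hypothetical illustration: interpretable vs. epistemically opaque models.
    # Assumes scikit-learn and a synthetic dataset; not prescribed by the GDPR.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)

    # Interpretable: each coefficient states how a feature pushes the decision.
    interpretable = LogisticRegression().fit(X, y)
    print("coefficients:", interpretable.coef_)

    # Opaque: thousands of weights spread over hidden layers; the path from
    # input to output cannot be narrated in human terms.
    opaque = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                           random_state=0).fit(X, y)
    print("weight matrices:", [w.shape for w in opaque.coefs_])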

The preference for transparent tools

In general, controllers should always favour the development of more understandable algorithms over less understandable ones. Trade-offs between the explainability, transparency and best performance of the system must be appropriately balanced based on the context of use. For instance, in healthcare, the accuracy and performance of the system may be more important than its explainability; in policing, explainability is much more crucial to justify the behaviour and outcomes of law enforcement. In other areas, such as recruitment, accuracy and explainability are valued similarly.[3] If a service can be offered both by an easy-to-understand algorithm and by an opaque one – that is, when there is no trade-off between explainability and performance – the controller should opt for the one that is more interpretable.
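As a purely illustrative sketch (the dataset, models and threshold below are assumptions for the example, not requirements of the GDPR), a controller could compare an interpretable model with an opaque one on held-out data and select the interpretable one whenever the performance gap is negligible for the context of use:

    # Hypothetical illustration of preferring the interpretable model when
    # there is no meaningful trade-off between explainability and performance.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    simple = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
    complex_model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    gap = complex_model.score(X_te, y_te) - simple.score(X_te, y_te)
    # The 2% threshold is an arbitrary placeholder; what counts as "negligible"
    # depends on the context of use (healthcare, policing, recruitment, etc.).
    chosen = simple if gap < 0.02 else complex_model
    print(f"performance gap: {gap:.3f} -> chosen: {type(chosen).__name__}")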

If controllers have no choice but to use an opaque model, they should at least try to find technical solutions to the lack of interpretability. Of course, the extent to which an improvement in explainability is achieved is extremely hard to measure precisely. For more information, see the section on “Right not to be subject to automated decision-making” within the Part II section “Data subject’s rights” of these Guidelines. If explanations are hard for controllers to produce, they should seek external advice. The possibility of using independent audits may again be a reasonable option.
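One example of such a technical aid, offered here only as a hedged sketch (the library and method are assumptions for the illustration, not measures mandated by the Guidelines), is a model-agnostic, post-hoc explanation such as permutation importance, which documents how strongly an otherwise opaque model relies on each feature:

    # Hypothetical illustration: a post-hoc, model-agnostic explanation for an
    # opaque model using permutation importance from scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffling each feature in turn reveals how much the model depends on it,
    # which can support documentation for data subjects and independent audits
    # even when the model itself cannot be interpreted directly.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature {i}: importance {importance:.3f}")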

Additional information

EDPS (2015) Opinion 7/2015. Meeting the challenges of big data: A call for transparency, user control, data protection by design and accountability. European Data Protection Supervisor, Brussels. Available at: https://edps.europa.eu/sites/edp/files/publication/15-11-19_big_data_en.pdf

High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

ICO (2020) Explaining decisions made with AI. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-artificial-intelligence-1-0.pdf

Norwegian Data Protection Authority (2018) Artificial intelligence and privacy. Norwegian Data Protection Authority, Oslo. Available at: https://iapp.org/media/pdf/resource_center/ai-and-privacy.pdf

SHERPA (2019) Guidelines for the ethical use of AI and big data systems. SHERPA project. Available at: https://www.project-sherpa.eu/wp-content/uploads/2019/12/use-final.pdf

 

References


[1] Articles 13 and 14 of the GDPR.

[2] Norwegian Data Protection Authority (2018) Artificial intelligence and privacy. Norwegian Data Protection Authority, Oslo. Available at: https://iapp.org/media/pdf/resource_center/ai-and-privacy.pdf (accessed 20 May 2020).

[3] SHERPA (2019) Guidelines for the ethical use of AI and big data systems. SHERPA project, p. 26. Available at: www.project-sherpa.eu/wp-content/uploads/2019/12/use-final.pdf (accessed 15 May 2020).
