GDPR provisions

The requirement for human agency and oversight when developing AI tools is clearly linked to the right to obtain human intervention, the right not to be subject to a decision based solely on automated processing, and the right to information on automated decision-making (ADM) and the logic involved, all of which are included in the GDPR. These rights are challenged by the use of AI tools. AI often involves a form of automated processing and, in some cases, decisions are taken directly by the AI model. Indeed, AI tools sometimes learn and make decisions without human supervision, and the logic involved in their performance is sometimes hard to understand.[1]

In this respect, profiling is particularly problematic in AI development (see Box 1), because the process of profiling “is often invisible to the data subject. It works by creating derived or inferred data about individuals – ‘new’ personal data that has not been provided directly by the data subjects themselves. People have vastly different levels of comprehension of this subject and may find it challenging to understand the sophisticated techniques involved in profiling and automated decision-making processes”.[2]

Of course, the GDPR does not prohibit profiling and/or automated decision-making as such: it only provides individuals with a qualified right to be informed about it, and a right not to be subject to a decision based solely on automated processing, including profiling. Their right to information (see “Right to information” within Part II section “Data subjects’ rights” of these Guidelines) must be satisfied through application of the lawfulness, fairness and transparency principle (see “Lawfulness, fairness and transparency principle” within Part II section “Principles” of these Guidelines). This means that, as a minimum, controllers have to inform the data subject that “they are engaging in this type of activity, provide meaningful information about the logic involved and the significance and envisaged consequences of the profiling for the data subject”[3] (see Articles 13 and 14 of the GDPR).

The information about the logic of a system, and explanations of decisions, should give individuals the necessary context to make decisions about the processing of their personal data. Where explanations are insufficient, individuals may resort to other rights unnecessarily: requests for intervention, the expression of views, or objections to the processing are more likely if individuals do not feel they have a sufficient understanding of how the decision was reached. Data subjects must be able to exercise their rights in a simple and user-friendly manner. For example, “if the result of a solely automated decision is communicated through a website, the page should contain a link or clear information allowing the individual to contact a member of staff who can intervene, without any undue delays or complications”.[4] The full scope of the information to be provided is, however, hard to state concretely; indeed, there is a lively academic discussion about this issue at present.[5]

Box 0. The issue of ranking

Services or goods providers that participate in the so-called ‘collaborative economy’ (or ‘platform economy’) need to understand the functioning of ranking in the context of their use of specific online intermediation services or online search engines. This could be, for example, a hotel – whether big or small – offering its accommodation through Booking.com or TripAdvisor. To allow businesses to participate as providers on the platform, it is not necessary for platforms to disclose the detailed functioning of their ranking mechanisms, including the algorithms used. It is sufficient to provide a general description of the main ranking parameters (including the possibility to influence ranking against any direct or indirect remuneration paid by the provider), as long as this description is easily and publicly available, and written in clear and intelligible language.[6]

Furthermore, according to Article 22(1), data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. Controllers should therefore always make sure that the AI tools they use or develop do not give rise to unavoidable solely automated decision-making of this kind. Indeed, according to the Article 29 Working Party, “[i]f the controller envisages a ‘model’ where it takes solely automated decisions having a high impact on individuals based on profiles made about them and it cannot rely on the individual’s consent, on a contract with the individual or on a law authorising this, the controller should not proceed. The controller can still envisage a ‘model’ of decision-making based on profiling, by significantly increasing the level of human intervention so that the model is no longer a fully automated decision-making process, although the processing could still present risks to individuals’ fundamental rights and freedoms.”[7]

Box 1. Understanding profiling

Research by Kosinski et al. (2013)[8] showed that, in 2011, accessible digital records of behavior (such as pages ‘liked’ on Facebook) could be used to accurately predict a range of highly sensitive personal attributes. These included: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender. The analysis was based on a dataset of over 58,000 volunteers who provided their Facebook ‘likes’, detailed demographic profiles, and the results of several psychometric tests.

The model correctly discriminated between homosexual and heterosexual men in 88% of cases; between African Americans and Caucasian Americans in 95% of cases; and between Democrat and Republican voters in 85% of cases. For the personality trait, ‘Openness’, the prediction accuracy was close to the test–retest accuracy of a standard personality test. The authors also provided examples of association between attributes and ‘likes’ and discussed implications for online personalization and privacy.

This case constitutes an excellent example of how profiling works: the data subjects’ information was enough to classify them and make predictions about them.
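To make the mechanics concrete, the sketch below (Python, using scikit-learn) shows how a simple classifier can be trained to infer a sensitive binary attribute from behavioural signals such as pages ‘liked’. It is a minimal illustration on synthetic data, not the pipeline used by Kosinski et al.; all names and figures in it are assumptions made for the example.

```python
# Illustrative sketch only: inferring a sensitive binary attribute from
# behavioural signals (e.g. pages 'liked'), in the spirit of Kosinski et al. (2013).
# All data are synthetic and all names are assumptions; this is not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_subjects, n_items = 5000, 300                          # people x possible 'likes'
likes = rng.integers(0, 2, size=(n_subjects, n_items))   # 1 = liked, 0 = not liked

# Synthetic ground truth: the attribute correlates with a handful of 'likes'.
attribute = (likes[:, :10].sum(axis=1) + rng.normal(0, 1.5, n_subjects) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, attribute, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC for inferring the attribute from 'likes': {auc:.2f}")
```

The inferred attribute is ‘new’ personal data in the sense of the Article 29 Working Party quotation above: it was never provided by the data subject, yet it can be predicted from seemingly innocuous behavioural data.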

Furthermore, a controller must always remember that, according to Article 9(2)(a) of the GDPR, automated decisions that involve processing special categories of personal data are permitted only if the data subject has given explicit consent to the processing of those personal data for one or more specified purposes, or if another legal basis for that processing applies. This applies not only when the observed data fall into this category, but also if the combination of different types of personal data can reveal sensitive information about individuals, or if inferred data fall into that category. Indeed, a study able to infer special categories of data is subject to the same legal obligations, pursuant to the GDPR, as one in which sensitive personal data are processed from the outset. In all such cases, we must consider the rules applying to the processing of special categories of personal data and the necessary application of appropriate safeguards able to protect the data subjects’ rights, interests and freedoms. Proportionality between the aim of the research and the use of special categories of data must be guaranteed. Furthermore, controllers must be aware that their Member States may maintain or introduce further conditions, including limitations, with regard to the processing of genetic data, biometric data or data concerning health (Article 9(4) GDPR).

If profiling infers personal data that were not provided by the data subject, the controllers must ensure that the processing is not incompatible with the original purpose (see “Data protection and scientific research” in Part II section “Main Concepts”); that they have identified a legal basis for the processing of the special-category data; and that they inform the data subject about the processing[9] (see “Purpose limitation” in Part II section “Principles”).

Performing a data protection impact assessment (DPIA) (see “DPIA” within Part II section “Main tools and actions”) is compulsory if there is a real risk of unauthorized profiling or automated decision-making. Article 35(3)(a) of the GDPR requires the controller to carry out a DPIA in the case of a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person.
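As a rough illustration of the trigger just described, the hypothetical helper below encodes the three cumulative elements of Article 35(3)(a). The function name and parameters are assumptions made for this sketch; it is not a substitute for a full DPIA threshold assessment under Article 35(1) or for the lists adopted by supervisory authorities.

```python
# Hypothetical screening helper for the Article 35(3)(a) trigger discussed above.
# It encodes only this single criterion; it does not replace a full DPIA threshold
# assessment under Article 35(1) or the lists adopted by supervisory authorities.
def dpia_required_art_35_3_a(systematic_and_extensive_evaluation: bool,
                             based_on_automated_processing_incl_profiling: bool,
                             legal_or_similarly_significant_effects: bool) -> bool:
    """True if the processing matches the Article 35(3)(a) description (DPIA required)."""
    return (systematic_and_extensive_evaluation
            and based_on_automated_processing_incl_profiling
            and legal_or_similarly_significant_effects)

# Example: an AI tool that profiles loan applicants and automatically refuses some of them.
print(dpia_required_art_35_3_a(True, True, True))   # True -> carry out a DPIA
```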

According to Article 37(1)(b) of the GDPR, an additional accountability requirement is the designation of a data protection officer (DPO) where the profiling or the automated decision-making is a core activity of the controller and requires regular and systematic monitoring of data subjects on a large scale. Controllers are also required to keep a record of all decisions made by an AI system as part of their accountability and documentation obligations (see “Accountability” within Part II section “Principles” of these Guidelines). This record should include whether an individual requested human intervention, expressed any views, contested the decision, and whether the decision was altered as a result.[10]
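One way to support this record-keeping obligation is a structured log of every automated decision and its review history. The sketch below is a hypothetical structure; the field names are assumptions, since neither the GDPR nor the ICO guidance prescribes a particular schema, only that these facts are recorded and retrievable.

```python
# Hypothetical structure for logging automated decisions and their review history.
# Field names are illustrative assumptions; the GDPR and the ICO guidance require
# that these facts be recorded, but prescribe no particular schema.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AutomatedDecisionRecord:
    decision_id: str
    data_subject_id: str                    # pseudonymised identifier
    decision_outcome: str                   # e.g. "loan_refused"
    made_at: datetime
    human_intervention_requested: bool = False
    views_expressed: Optional[str] = None   # the data subject's comments, if any
    decision_contested: bool = False
    decision_altered: bool = False          # True if a human reviewer changed the outcome
    reviewer_id: Optional[str] = None

record = AutomatedDecisionRecord(
    decision_id="D-0001",
    data_subject_id="subject-42",
    decision_outcome="loan_refused",
    made_at=datetime.now(timezone.utc),
    human_intervention_requested=True,
    decision_contested=True,
)
```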

Some additional actions that might be extremely useful to avoid solely automated decision-making include the following.[11]

  • Consider the system requirements necessary to support a meaningful human review from the design phase.
  • In particular, consider the interpretability requirements and effective user-interface design to support human reviews and interventions.
  • Design and deliver appropriate training and support for human reviewers.
  • Give staff the appropriate authority, incentives and support to address or escalate individuals’ concerns and, if necessary, override the AI system’s decision (see the sketch after this list).
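The sketch below illustrates one possible human-in-the-loop pattern, in which the AI output is treated as a recommendation that a trained reviewer must confirm or override before it takes effect. The function and parameter names are assumptions made for the example; whether a given arrangement amounts to meaningful human involvement remains a legal assessment, not a property of the code.

```python
# Illustrative human-in-the-loop flow: the AI output is a recommendation that a
# trained reviewer must confirm or override before it takes effect. Names and the
# example wiring are assumptions for this sketch, not a prescribed implementation.
from typing import Callable

def decide_with_human_review(features: dict,
                             model_score: Callable[[dict], float],
                             reviewer: Callable[[dict, float, str], str]) -> str:
    """Return the final decision after mandatory human review of the AI recommendation."""
    score = model_score(features)
    recommendation = "approve" if score >= 0.5 else "refuse"
    # The reviewer sees the input, the score and the recommendation, and has the
    # authority to accept it, substitute a different decision, or escalate the case.
    return reviewer(features, score, recommendation)

# Example wiring with stand-in functions for the model and the human reviewer:
final = decide_with_human_review(
    {"income": 30000, "existing_debt": 12000},
    model_score=lambda f: 0.62,                              # placeholder model
    reviewer=lambda f, s, rec: rec if s > 0.9 else "refer_to_senior_officer",
)
print(final)   # borderline score, so the stand-in reviewer escalates the case
```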

In any case, controllers should be aware that Member States are introducing some concrete references to this issue in their national regulations, using different tools to ensure adequate compliance.[12]

Checklist: profiling and automated decision-making[13]

☐ The controllers have a legal basis to carry out profiling and/or automated decision-making, and document this in their data protection policy.

☐ The controllers send individuals a link to their privacy statement when they have obtained their personal data indirectly.

☐ The controllers explain how people can access details of the information that they used to create their profile.

☐ The controllers tell people who provide them with their personal data how they can object to profiling.

☐ The controllers have procedures for customers to access the personal data input into their profiles, so they can review and edit for any accuracy issues.

☐ The controllers have additional checks in place for their profiling/automated decision-making systems to protect any vulnerable groups (including children).

☐ The controllers only collect the minimum amount of data needed and have a clear retention policy for the profiles that they create.

 

As a model of best practice

☐ The controllers carry out a DPIA to consider and address the risks when they start any new automated decision-making or profiling.

☐ The controllers tell their customers about the profiling and automated decision-making they carry out, what information they use to create the profiles, and where they get this information from.

☐ The controllers use anonymized data in their profiling activities.

☐ Those responsible guarantee the right to readability (intelligibility) of algorithmic decisions.

☐ Decision-makers have a mechanism to notify the individual and explain the reasons when a challenge to an algorithmic decision is not accepted owing to a lack of human intervention.

☐ The decision-makers have a model for assessing the human rights impact of automated decision-making.

☐ Qualified human supervision is in place from the design phase onwards, in particular regarding interpretability requirements and the effective design of the user interface, and the human reviewers are trained.

☐ Audits are conducted to detect possible deviations in the results of inferences in adaptive or evolving systems.

☐ Certification of the AI system is being, or has been, carried out.

Additional information

Article 29 Working Party (2018) Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679. European Commission, Brussels. Available at: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053

ICO (2020) AI auditing framework: draft guidance for consultation, pp. 94-95. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf

ICO (2019) Data Protection Impact Assessments and AI. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/about-the-ico/news-and-events/ai-blog-data-protection-impact-assessments-and-ai/

Kosinski, M., Stillwell, D. and Graepel, T. (2013) ‘Digital records of behavior expose personal traits’, Proceedings of the National Academy of Sciences 110(15): 5802-5805. DOI: 10.1073/pnas.1218772110

Malgieri, G. (2018) Automated decision-making in the EU Member States laws: the right to explanation and other ‘suitable safeguards’ for algorithmic decisions in the EU national legislations. Available at: https://ssrn.com/abstract=3233611 or http://dx.doi.org/10.2139/ssrn.3233611

Norwegian Data Protection Authority (2018) Artificial intelligence and privacy. Norwegian Data Protection Authority, Oslo. Available at: https://iapp.org/media/pdf/resource_center/ai-and-privacy.pdf

Selbst, A.D. and Powles, J. (2017) ‘Meaningful information and the right to explanation’, International Data Privacy Law 7(4): 233-242, https://doi.org/10.1093/idpl/ipx022

Wachter, S., Mittelstadt, B. and Floridi, L. (2017) ‘Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation’, International Data Privacy Law. Available at https://ssrn.com/abstract=2903469 or http://dx.doi.org/10.2139/ssrn.2903469

References


[1] Burrell, J. (2016) ‘How the machine ‘thinks’: understanding opacity in machine learning algorithms’, Big Data & Society 3(1): 1-12.

[2] Article 29 Working Party (2017) Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, WP 251, p. 9. European Commission, Brussels.

[3] Ibid., pp. 13-14.

[4] ICO (2020) AI auditing framework: draft guidance for consultation, p. 94. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf (accessed 15 May 2020).

[5] Wachter, S., Mittelstadt, B. and Floridi, L. (2017) ‘Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation’, International Data Privacy Law. Available at: https://ssrn.com/abstract=2903469 or http://dx.doi.org/10.2139/ssrn.2903469 (accessed 15 May 2020); Selbst, A.D. and Powles, J. (2017) ‘Meaningful information and the right to explanation’, International Data Privacy Law 7(4): 233-242, https://doi.org/10.1093/idpl/ipx022 (accessed 15 May 2020).

[6] Regulation (EU) 2019/1150 of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services, Article 5 and Recital 27. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32019R1150&from=EN

[7] Article 29 Working Party (2018) Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, p. 30. European Commission, Brussels. Available at: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053

[8] Kosinski, M., Stillwell, D. and Graepel, T. (2013) ‘Digital records of behavior expose personal traits’, Proceedings of the National Academy of Sciences 110(15): 5802-5805. DOI: 10.1073/pnas.1218772110

[9] Article 29 Working Party (2017) Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, WP 251, p. 15. European Commission, Brussels.

[10] ICO (2020) AI auditing framework: draft guidance for consultation, pp. 94-95. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf (accessed 15 May 2020).

[11] Ibid., p. 95.

[12] Malgieri, G. (2018) Automated decision-making in the EU Member States laws: the right to explanation and other ‘suitable safeguards’ for algorithmic decisions in the EU national legislations. Available at: https://ssrn.com/abstract=3233611 or http://dx.doi.org/10.2139/ssrn.3233611 (accessed 2 May 2020).

[13] ICO (no date) Rights related to automated decision-making including profiling. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/rights-related-to-automated-decision-making-including-profiling/ (accessed 15 May 2020).

 
