Focussing on profiling issues

In the case of a database that will serve to train or validate an AI tool, there is a particularly relevant obligation to inform the data subjects that their data might be used for automated decision-making or profiling concerning them, unless controllers can guarantee that the tool will in no way produce these consequences. Even though automated decision-making can hardly happen in the context of research, developers should pay attention to this issue. Profiling, on the other hand, may raise real problems for AI development.

This is due to a simple reason: the process of profiling is “often invisible to the data subject. It works by creating derived or inferred data about individuals – ‘new’ personal data that has not been provided directly by the data subjects themselves. Individuals have differing levels of comprehension and may find it challenging to understand the complex techniques involved in profiling and automated decision-making processes.”[1] Thus, “if the controller envisages a ‘model’ where it takes solely automated decisions having a high impact on individuals based on profiles made about them and it cannot rely on the individual’s consent, on a contract with the individual or on a law authorizing this, the controller should not proceed.”[2] The risk to the individual’s rights, interests and freedoms is a key factor that must always be considered: profiling someone’s taste in TV series is not the same as profiling used to decide whether to approve their health insurance policy. Thus, if the processing presents risks to individuals’ fundamental rights and freedoms, controllers must ensure that they can address these risks and meet the corresponding requirements.

If profiling and/or automated decision-making takes place, data subjects must be adequately informed about that processing and about the way the algorithm works. In other words, their right to information must be satisfied in application of the lawfulness, fairness and transparency principle (see the “Lawfulness, fairness and transparency” section in the “Principles” chapter). This means that, at a minimum, controllers have to tell the data subject that “they are engaging in this type of activity, provide meaningful information about the logic involved and the significance and envisaged consequences of the profiling for the data subject.”[3] (See the “Transparency” section in the General Exposition of AI part of these Guidelines.)

The information about the logic of a system and explanations of decisions should give individuals the necessary context to decide whether, and on what grounds, they would like to request human intervention. In some cases, insufficient explanations may prompt individuals to resort to other rights unnecessarily, or to withdraw their consent. Requests for intervention, expressions of views, or challenges to a decision are more likely to happen if individuals do not feel they have a sufficient understanding of how the decision was reached.[4]
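
To make this concrete, the following is a minimal sketch, in Python, of how the outcome of an automated decision and the main factors behind it could be turned into a plain-language notice for the data subject. The function name, fields and wording are illustrative assumptions, not a prescribed template; the contribution scores are assumed to come from whatever explanation method the development team already uses.

```python
# A minimal sketch of turning a model's per-decision output into a plain-language
# notice. The fields and wording are illustrative assumptions, not a prescribed template.
def explanation_notice(decision: str, contributions: dict[str, float],
                       consequence: str, top_n: int = 3) -> str:
    """Build a short, human-readable explanation of an automated decision."""
    # Rank the factors that influenced the decision most strongly.
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    factors = ", ".join(name for name, _ in top)
    return (
        f"Outcome: {decision}. "
        f"The factors that most influenced this outcome were: {factors}. "
        f"Envisaged consequence: {consequence}. "
        "You may request human intervention, express your point of view, "
        "or contest this decision."
    )

# Hypothetical example call; the scores here are invented for illustration.
print(explanation_notice(
    decision="application not approved",
    contributions={"payment history": -0.6, "income": 0.2, "account age": 0.1},
    consequence="the requested service will not be provided automatically",
))
```

A notice along these lines gives the data subject the context needed to decide whether to request human intervention or to contest the decision, rather than leaving them to guess how the outcome was reached.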

Finally, a controller must always remember that, according to Article 22(4), automated decisions that involve special categories of personal data are permitted only if the data subject has given explicit consent or if they are carried out on a suitable legal basis (see the ‘Human agency and oversight’ section in the General Exposition of AI part of these Guidelines). This applies not only when the observed data fall into these categories, but also when the combination of different types of personal data can reveal sensitive information about individuals, or when inferred data fall into these categories. In all of these cases, the operation must be treated as processing of special categories of personal data. Indeed, a study capable of inferring special category data is subject to the same obligations under the GDPR as if sensitive personal data had been processed from the outset. If profiling infers personal data that were not provided by the data subject, controllers should ensure that the processing is not incompatible with the original purpose, that they have identified a legal basis for processing the special category data, and that they inform the data subject about the processing.[5]

Box 17: Example of inferring special category data

Research published in 2013 showed that easily accessible digital records of behavior (Facebook ‘Likes’) could be used to automatically and accurately predict a range of highly sensitive personal attributes, including sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis was based on a dataset of over 58,000 volunteers who provided their Facebook ‘Likes’, detailed demographic profiles, and the results of several psychometric tests. The model correctly discriminated between homosexual and heterosexual men in 88% of cases, between African Americans and Caucasian Americans in 95% of cases, and between Democrats and Republicans in 85% of cases. For the personality trait ‘Openness’, prediction accuracy was close to the test–retest accuracy of a standard personality test. The authors provided examples of associations between attributes and ‘Likes’ and discussed the implications for online personalization and privacy.[6]
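
To illustrate the mechanism described in Box 17, the following is a minimal sketch, on purely synthetic data, of how a simple classifier can end up inferring a sensitive attribute that was never provided directly. It is not the study’s actual dataset or pipeline; every name and number in it is an assumption made for the example.

```python
# Minimal, illustrative sketch with synthetic data (not the study's method):
# a simple classifier trained on a "user x Like" matrix can predict a sensitive
# attribute that the data subjects never provided directly.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_users, n_likes = 2000, 300
likes = rng.integers(0, 2, size=(n_users, n_likes))   # 1 = user "liked" the item
# Synthetic sensitive attribute that happens to correlate with a few Likes.
weights = np.zeros(n_likes)
weights[:10] = 1.5                                     # a handful of "revealing" Likes
sensitive = (likes @ weights + rng.normal(0, 1, n_users) > 7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, sensitive, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC for inferring the sensitive attribute from Likes alone: {auc:.2f}")
```

The point for controllers is that the ‘Likes’ matrix alone can be enough for a model to reconstruct special category information, which triggers the obligations described above.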

Performing a DPIA is compulsory if there is a real risk of unauthorized profiling or automated decision-making. Article 35(3)(a) of the GDPR requires the controller to carry out a DPIA in the case of “a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person”. Controllers should also be aware that each EEA country has submitted to the EDPB a national list of the processing operations for which a DPIA is required, so controllers established in the EEA should check the relevant national list as well.[7]

According to Article 37(1)(b) and (5) of the GDPR, controllers shall designate a data protection officer where “the core activities of the controller or the processor consist of processing operations which, by virtue of their nature, their scope and/or their purposes, require regular and systematic monitoring of data subjects on a large scale.” Controllers are also required to keep a record of all decisions made by an AI tool as part of their accountability and documentation obligations. This record should include whether an individual requested human intervention, expressed any views or contested the decision, and whether the decision was altered as a result[8] (see the “Accountability” section in the “Principles” chapter).
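
As a concrete illustration of this record-keeping obligation, the following is a minimal sketch of the kind of decision record a controller could keep. The field names and the JSON serialization are assumptions made for the example, not a prescribed schema.

```python
# A minimal sketch of a per-decision record covering the elements mentioned above.
# Field names and storage format are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class AutomatedDecisionRecord:
    decision_id: str
    data_subject_id: str                        # pseudonymised identifier
    model_version: str
    decision: str                               # outcome produced by the AI tool
    made_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_intervention_requested: bool = False
    data_subject_views: Optional[str] = None    # views expressed by the individual
    contested: bool = False
    altered_after_review: bool = False
    final_decision: Optional[str] = None        # outcome after any human review

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Example: a decision that was later contested and overturned on human review.
record = AutomatedDecisionRecord(
    decision_id="dec-001", data_subject_id="subj-7f3a",
    model_version="risk-model-1.4", decision="rejected",
)
record.human_intervention_requested = True
record.contested = True
record.altered_after_review = True
record.final_decision = "approved"
print(record.to_json())
```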

Some additional actions that may be particularly useful to avoid solely automated decision-making are the following:[9]

  • Consider, from the design phase, the system requirements necessary to support a meaningful human review; in particular, the interpretability requirements and an effective user-interface design to support human reviews and interventions.
  • Design and deliver appropriate training and support for human reviewers.
  • Give staff the appropriate authority, incentives and support to address or escalate individuals’ concerns and, if necessary, override the AI tool’s decision (a minimal sketch of such an override step follows this list).
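
The following is a minimal sketch of such an override step, under assumed names: a human reviewer with the appropriate authority can confirm the AI tool’s output, replace it, or escalate the case. It illustrates the design point in the list above rather than prescribing a workflow.

```python
# A minimal sketch of a human-review step with override and escalation paths.
# All names are illustrative assumptions, not a prescribed interface.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewOutcome:
    final_decision: str
    overridden: bool
    escalated: bool
    reviewer_note: Optional[str] = None

def human_review(ai_decision: str, reviewer_decision: Optional[str],
                 escalate: bool = False, note: Optional[str] = None) -> ReviewOutcome:
    """Apply the reviewer's judgement: an explicit reviewer decision overrides the AI output."""
    if escalate:
        # The reviewer lacks the information (or authority) to decide and escalates the case.
        return ReviewOutcome("pending-escalation", overridden=False, escalated=True, reviewer_note=note)
    if reviewer_decision is not None and reviewer_decision != ai_decision:
        return ReviewOutcome(reviewer_decision, overridden=True, escalated=False, reviewer_note=note)
    return ReviewOutcome(ai_decision, overridden=False, escalated=False, reviewer_note=note)

# Example: the reviewer disagrees with the AI output and overrides it.
print(human_review("rejected", reviewer_decision="approved", note="documentation verified manually"))
```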


References


[1] Article 29 Working Party (2018) Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, adopted on 3 October 2017, as last revised and adopted on 6 February 2018. European Commission, Brussels, p.9. Available at: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053 (accessed 15 May 2020).

[2] Ibid., p.30.

[3] Ibid., pp.13-14.

[4] ICO (2020) Guidance on the AI auditing framework – draft guidance for consultation. Information Commissioner’s Office, Wilmslow, p.94. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf (accessed 15 May 2020).

[5] Article 29 Working Party (2018) Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, adopted on 3 October 2017, as last revised and adopted on 6 February 2018. European Commission, Brussels, p.15. Available at: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053 (accessed 15 May 2020).

[6] Kosinski, M., Stillwell, D. and Graepel, T. (2013) ‘Private traits and attributes are predictable from digital records of human behavior’, Proceedings of the National Academy of Sciences 110(15): 5802-5805. DOI: 10.1073/pnas.1218772110.

[7] EDPB (2019) Data Protection Impact Assessment. European Data Protection Board, Brussels. Available at: https://edpb.europa.eu/our-work-tools/our-documents/topic/data-protection-impact-assessment-dpia_es (accessed 3 June 2020).

[8] ICO (2020) Guidance on the AI auditing framework – draft guidance for consultation. Information Commissioner’s Office, Wilmslow, pp.94-95. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf (accessed 15 May 2020).

[9] Ibid., p.95.

 
