Introducing safeguards and minimizing risks

Fully automated decision-making and profiling are sensitive processing activities that must be approached with caution. Controllers must be aware of this and act accordingly: identify the risks, reduce them, and implement safeguards and guarantees.

We have just mentioned the data subjects’ rights linked to these processes and recognized in Article 22(3) GDPR, namely the right to obtain human intervention, to express the data subject’s point of view, and to contest the decision.

In addition, controllers must inform data subjects, in a simple yet transparent and complete way, of the existence of automated decision-making and profiling activities when these fall under the definition of Article 22 GDPR. This information must also extend to an explanation of the logic involved in the processing and to the existence of the above-mentioned rights.

The information about the logic of a system and the explanation of its decisions should give individuals the context they need to decide whether, and on what grounds, to request human intervention. Insufficient explanations may prompt individuals to resort to other rights unnecessarily: requests for intervention, expressions of views, and contestations of decisions are more likely when individuals feel they do not have a sufficient understanding of how the decision was reached.[1]

Furthermore, Article 35(3)(a) GDPR obliges the controller to carry out a DPIA in the case of a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person. Controllers should be aware that each country has by now submitted to the EDPB its list of processing operations for which a DPIA is required; a controller established in the EEA should therefore also verify the applicable local list[2] (see “DPIA” in “Main Tools and Actions”, Part II of these Guidelines, and the section “Data Protection Impact Assessment” within this part on IoT).

In addition, according to Article 37(1)(b) and (5) GDPR, controllers shall designate a data protection officer where “the core activities of the controller or the processor consist of processing operations which, by virtue of their nature, their scope and/or their purposes, require regular and systematic monitoring of data subjects on a large scale”.

Controllers are also required to keep a record of all decisions made by an AI tool as part of their accountability and documentation obligations. This record should also capture whether an individual requested human intervention, expressed any views, or contested the decision, and whether the decision was altered as a result[3] (see the “Accountability” section in the “Principles” within Part II of these Guidelines); a sketch of such a record is given after the list below. Some additional actions that can be extremely useful to ensure that decision-making does not remain solely automated are the following:[4]

  • Consider, from the design phase, the system requirements necessary to support a meaningful human review; in particular, interpretability requirements and effective user-interface design to support human reviews and interventions. This can indeed be considered a binding obligation on controllers under the data protection by design principle.
  • Design and deliver appropriate training and support for human reviewers. This should include an understanding of which variables the decision-making or profiling model is built on and, especially, which variables are not taken into account in the model. These can be key to spotting characteristics that make a data subject an outlier in the model, or circumstances that warrant a different decision.
  • Give staff the appropriate authority, incentives and support to address or escalate individuals’ concerns and, if necessary, override the AI tool’s decision. Where human reviewers face organizational pressures or negative professional consequences for deviating from the automated decision, such authority and independence are put at risk.
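
To make the record-keeping obligation above concrete, the following is a minimal sketch in Python of what such a decision record could look like. All class and field names are hypothetical illustrations, not a prescribed format; controllers should adapt the structure to their own accountability and documentation framework.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class AutomatedDecisionRecord:
        """One auditable entry per decision made by an AI tool.

        Hypothetical schema: it records the automated outcome and whether
        the Article 22(3) rights (human intervention, expressing views,
        contesting the decision) were exercised, and with what result.
        """
        decision_id: str
        made_at: datetime
        model_version: str                      # model/profile that produced the decision
        decision: str                           # outcome communicated to the data subject
        human_intervention_requested: bool = False
        views_expressed: Optional[str] = None   # data subject's point of view, if any
        decision_contested: bool = False
        reviewed_by: Optional[str] = None       # identifier of the human reviewer
        decision_altered: bool = False          # whether the review changed the outcome
        final_decision: Optional[str] = None    # outcome after human review, if altered

    # Example: logging a contested decision that a human reviewer overturned.
    record = AutomatedDecisionRecord(
        decision_id="2024-000123",
        made_at=datetime.now(timezone.utc),
        model_version="credit-scoring-v3.1",
        decision="application rejected",
        human_intervention_requested=True,
        views_expressed="Income from self-employment was not considered.",
        decision_contested=True,
        reviewed_by="reviewer-42",
        decision_altered=True,
        final_decision="application approved",
    )

Capturing the review outcome alongside the original automated decision makes it straightforward to evidence, during an audit, that human intervention was available and effective.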
Checklist: Automated decision-making and profiling

☐ The controller informs users about the types of data that are collected and further processed by the IoT sensors, the other types of data received from external sources, and how these data will be processed and combined.

☐ The IoT systems can distinguish between different individuals using the same device so that they cannot learn about each other’s activities without an appropriate legal basis.

☐ The controllers work with standardization bodies and data platforms to support a common protocol for expressing preferences with regard to data collection and processing by data controllers, especially when such data are collected by unobtrusive devices.

☐ The controllers have enabled local controlling and processing entities (so-called personal privacy proxies) that allow users to have a clear picture of the data collected by their devices and that facilitate local storage and processing without having to transmit the data to the device manufacturer.
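As an illustration only, the following Python sketch shows the core idea of such a personal privacy proxy: readings are written to a local store on the user’s side, and an overview function gives the user a clear picture of what has been collected, with nothing transmitted to the manufacturer. All names (PersonalPrivacyProxy, record, overview) are hypothetical.

    import json
    import sqlite3
    from datetime import datetime, timezone

    class PersonalPrivacyProxy:
        """Hypothetical local proxy: sensor readings are stored on the
        user's own device instead of being sent to the manufacturer."""

        def __init__(self, db_path: str = "local_iot_data.sqlite"):
            self.conn = sqlite3.connect(db_path)
            self.conn.execute(
                "CREATE TABLE IF NOT EXISTS readings "
                "(device TEXT, sensor TEXT, value TEXT, recorded_at TEXT)"
            )

        def record(self, device: str, sensor: str, value) -> None:
            # Local storage only: nothing is transmitted off-device.
            self.conn.execute(
                "INSERT INTO readings VALUES (?, ?, ?, ?)",
                (device, sensor, json.dumps(value),
                 datetime.now(timezone.utc).isoformat()),
            )
            self.conn.commit()

        def overview(self):
            """Give the user a clear picture of what has been collected."""
            return self.conn.execute(
                "SELECT device, sensor, COUNT(*), MIN(recorded_at), MAX(recorded_at) "
                "FROM readings GROUP BY device, sensor"
            ).fetchall()

    proxy = PersonalPrivacyProxy()
    proxy.record("thermostat-livingroom", "temperature", 21.5)
    print(proxy.overview())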

☐ The IoT systems provide:

  • a panoramic overview of what personal data have been disclosed to what data controller and under which policies;
  • online access to the personal data and how they have been processed;
  • counter-profiling capabilities helping the user anticipate how their data match relevant group profiles, which may affect future opportunities or risks (this is not required by law, but is recommended).

☐ The IoT systems provide granular choices when granting access to applications. The granularity should concern not only the category of collected data, but also the time and frequency at which data are captured. Similarly to the “do not disturb” feature on smartphones, IoT devices should offer a “do not collect” option to schedule or quickly disable sensors.
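A minimal Python sketch of such a “do not collect” option is given below. It assumes a hypothetical per-sensor SensorCollectionPolicy with an on/off switch and a daily collection window; an actual implementation would live in the device firmware or a companion app, and would also need to handle windows that cross midnight.

    from datetime import datetime, time

    class SensorCollectionPolicy:
        """Hypothetical per-sensor policy: collection can be disabled
        outright or restricted to a daily time window, mirroring a
        'do not disturb'-style 'do not collect' setting."""

        def __init__(self, enabled: bool = True,
                     allowed_from: time = time(0, 0),
                     allowed_until: time = time(23, 59, 59)):
            self.enabled = enabled
            self.allowed_from = allowed_from
            self.allowed_until = allowed_until

        def may_collect(self, at: datetime) -> bool:
            if not self.enabled:  # the quick "do not collect" switch
                return False
            return self.allowed_from <= at.time() <= self.allowed_until

    # Example: a microphone that only collects between 09:00 and 17:00.
    policy = SensorCollectionPolicy(allowed_from=time(9, 0), allowed_until=time(17, 0))
    print(policy.may_collect(datetime(2024, 5, 1, 8, 30)))   # False: outside window
    print(policy.may_collect(datetime(2024, 5, 1, 12, 0)))   # True: within window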

☐ Profiling and automated decision-making only take place where a legal basis applies and adequate safeguards have been implemented. Mechanisms to inform all data subjects involved about such processing have been implemented.

☐ The controllers have performed a DPIA.

☐ The controllers have consulted a DPO on the processing.

☐ The controllers have ensured that all guarantees foreseen by Article 22 of the GDPR have been adequately implemented.

☐ The controllers have ensured that all those intervening in profiling and automated data processing have been adequately trained on data protection issues.

☐ The controllers have documented all the information regarding these processing activities.

 

References


[1] ICO (2020) Guidance on the AI auditing framework – draft guidance for consultation. Information Commissioner’s Office, Wilmslow, p. 94. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf.

[2] EDPB (2019) Data Protection Impact Assessment. European Data Protection Board, Brussels. Available at: https://edpb.europa.eu/our-work-tools/our-documents/topic/data-protection-impact-assessment-dpia_es.

[3] ICO (2020) Guidance on the AI auditing framework – draft guidance for consultation. Information Commissioner’s Office, Wilmslow, pp. 94-95. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf.

[4] Ibid., p. 95.

 
