Adopting a risk-based thinking approach

Controllers should minimize the risks to data subjects’ rights, interests, and freedoms. To this end, they should adopt a risk-based approach (see the “Integrity and confidentiality” section in the “Principles” chapter).

The risk-based approach of data protection law requires controllers to comply with their obligations and implement appropriate measures in the context of their particular circumstances – the nature, scope, context and purposes of the processing they intend to carry out, and the risks this poses to individuals’ rights and freedoms. Their compliance considerations therefore involve assessing the risks to the rights and freedoms of individuals and making judgements as to what is appropriate in those circumstances. In all cases, controllers need to ensure that they comply with data protection requirements (see the “Accountability” section in the “Principles” chapter).

Risk-based thinking about the confidentiality of data – and about the harm the processing may do to people – must be included from the first steps of the process; considering it only later may be too late. To manage the risks to individuals that arise from the processing of personal data in AI tools, controllers need to develop a mature understanding and articulation of fundamental rights, of risks, and of how to balance these against other interests. Ultimately, controllers must assess the risks that the use of AI poses to individuals’ rights, determine how to address them, and establish the impact this has on their use of AI.[1] To this end, two key factors must be considered:[2]

Box 14: The importance of software providers in terms of security

In August 2015, an Indiana medical software company reported to the federal government that its networks had been hacked and the private information of 3.9 million people exposed. That included personal data such as names, addresses, birthdates, Social Security numbers and health records. According to IBM X-Force Research, this was one of the biggest healthcare data breaches in recent years. According to the company, the attack was detected nineteen days after the attackers gained unauthorized access to its network. Clients were only notified almost a month after the attack began.[3]

In addition, controllers must ensure that appropriate technical and organizational measures are implemented to eliminate, or at least mitigate, the security risk – reducing the probability that the identified threats will materialize, or reducing their impact. A general description of the technical and organizational security measures must become part of the records of processing, where possible (Article 30(1)(g) for controllers, Article 30(2)(d) for processors), and all implemented measures are part of the DPIA, as remediation measures that limit risk. Finally, once the selected measures are implemented, the remaining residual risk should be assessed and kept under control. The risk analysis and the DPIA are the tools for this purpose.
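The logic described above – scoring a threat by probability and impact, applying mitigating measures, and then re-assessing the residual risk – can be sketched in code. This is a minimal, illustrative sketch only: the threat names, the 1–5 scales, and the reduction values attributed to each measure are hypothetical assumptions, not figures from the text or from any standard.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Classic risk score: likelihood x impact, each on a 1-5 scale."""
    return likelihood * impact

# Hypothetical threat register for an AI processing operation.
threats = {
    "unauthorized network access": {"likelihood": 4, "impact": 5},
    "training-data re-identification": {"likelihood": 3, "impact": 4},
}

# Technical/organizational measures and what they reduce.
# A measure may lower the probability of the threat, its impact, or both.
mitigations = {
    "unauthorized network access": {"likelihood": -2, "impact": 0},       # e.g. MFA, segmentation
    "training-data re-identification": {"likelihood": -1, "impact": -2},  # e.g. pseudonymization
}

for name, t in threats.items():
    initial = risk_score(t["likelihood"], t["impact"])
    m = mitigations[name]
    # Residual risk: re-score after applying the measures (floor at 1).
    residual = risk_score(max(1, t["likelihood"] + m["likelihood"]),
                          max(1, t["impact"] + m["impact"]))
    print(f"{name}: initial risk {initial}, residual risk {residual}")
```

The point of the sketch is the workflow, not the numbers: the residual score is what must be kept under control, and if it remains too high, further measures (or prior consultation with the supervisory authority) are needed.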

A DPIA is very often compulsory in the case of AI development (see the “In what cases must I carry out a DPIA” subsection in the “Data Protection Impact Assessment” section of the “Main tools and actions” chapter). Whether it is mandatory depends on whether the risks associated with the processing are high, according to Article 35(3) of the GDPR. Even where it is not mandatory, it is highly recommended, as it supports accountability. In case of doubt, consult the competent supervisory authority prior to processing. Finally, do not forget that when using big data and AI it is hard to foresee future risks, so a one-off assessment of ethical implications will not be sufficient to address all possible risks. It is therefore important to reassess risks periodically and highly recommendable to adopt a more dynamic way of assessing research risks. Do not hesitate to perform additional DPIAs at later stages of the process if need be.
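As a rough illustration of the Article 35(3) screening step mentioned above, the three criteria that make a DPIA mandatory can be expressed as a simple check. This is a deliberately coarse sketch, not legal advice: the flag names are simplifications of the Article 35(3) wording, and a real screening exercise involves judgement on each criterion (and, in case of doubt, consultation of the supervisory authority).

```python
def dpia_required(systematic_profiling_with_significant_effects: bool,
                  large_scale_special_categories: bool,
                  large_scale_public_monitoring: bool) -> bool:
    """Return True if any of the three Article 35(3) GDPR criteria applies:
    (a) systematic and extensive profiling producing legal or similarly
        significant effects, (b) large-scale processing of special categories
        of data, or (c) large-scale systematic monitoring of a public area."""
    return (systematic_profiling_with_significant_effects
            or large_scale_special_categories
            or large_scale_public_monitoring)

# Hypothetical example: an AI system that profiles loan applicants at scale.
print(dpia_required(True, False, False))
```

Because AI development so often involves systematic profiling or large-scale processing, the check will frequently come out positive – which is why the text above treats the DPIA as very often compulsory for AI.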
References


1. ICO (2020) AI auditing framework – draft guidance for consultation. Information Commissioner’s Office, Wilmslow, pp. 13–14. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf (accessed 15 May 2020).

2. AEPD (2020) Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción. Agencia Española de Protección de Datos, Madrid, p. 30. Available at: www.aepd.es/sites/default/files/2020-02/adecuacion-rgpd-ia.pdf (accessed 15 May 2020).

3. IBM X-Force® Research (2017) Security trends in the healthcare industry: Data theft and ransomware plague healthcare organizations. IBM Security, Somers, NY, p. 7. Available at: www.ibm.com/downloads/cas/PLWZ76MM (accessed 17 May 2020).
