Fallback plan and general safety

“To err is human but to really foul things up requires a computer.”

– Paul Ehrlich / Bill Vaughan[1]

Like all ICT systems, AI systems may fail and produce incorrect results or predictions. In the case of AI systems, however, it may be particularly hard to explain in a tangible, human-understandable way why a particular (false) conclusion was reached. An example of undesirable behavior is an AI system that makes decisions significantly affecting an individual, e.g. automatically denying a credit application. The GDPR requires controllers to implement suitable fallback plans protecting data subjects from such situations, including the right to contest an AI decision and to obtain human intervention that takes the data subject’s point of view into account.[2] Such safeguards should be considered during the system’s design. Even where the GDPR does not explicitly require a fallback plan, controllers should consider implementing one.
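In practice, such a fallback plan can be expressed as an escalation rule wrapped around the model. The following Python sketch is purely illustrative: the names (`CreditApplication`, `ModelOutput`, `decide_with_fallback`), the confidence threshold, and the routing policy are all assumptions for the example, not requirements of the GDPR or of these Guidelines.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable


class Decision(Enum):
    APPROVE = "approve"
    DENY = "deny"
    HUMAN_REVIEW = "human_review"


@dataclass
class CreditApplication:  # hypothetical input record
    applicant_id: str
    amount: float


@dataclass
class ModelOutput:  # hypothetical model result
    decision: Decision
    confidence: float  # model's confidence in [0, 1]


def decide_with_fallback(
    application: CreditApplication,
    model: Callable[[CreditApplication], ModelOutput],
    confidence_threshold: float = 0.9,  # assumed cut-off, tune per system
    contested: bool = False,
) -> Decision:
    """Route an automated credit decision, escalating to a human reviewer
    when the model is uncertain, the outcome is adverse, or the data
    subject has contested a previous decision (cf. Article 22(3) GDPR)."""
    if contested:
        # A contested decision is always re-examined by a human who can
        # take the data subject's point of view into account.
        return Decision.HUMAN_REVIEW

    output = model(application)

    if output.confidence < confidence_threshold:
        # Low-confidence predictions are never acted on automatically.
        return Decision.HUMAN_REVIEW

    if output.decision is Decision.DENY:
        # Adverse decisions with significant effects are escalated so a
        # human can confirm or override the model's output.
        return Decision.HUMAN_REVIEW

    return output.decision
```

The design choice here is deliberately conservative: only high-confidence, non-adverse decisions are fully automated, while everything else reaches a human reviewer by default.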

Controllers should also be aware of safety issues. New technologies often bring new risks. Since the protection of personal data depends on IT security measures, risks to personal data are closely tied to IT risks. Consequently, appropriate technical and organizational measures implemented in IT help provide the data protection stipulated by the GDPR, and they should be regularly tested and upgraded to prevent or minimize security risks (see the subsection ‘Main difference from other risks in the GDPR and from risks in IT security’ in the ‘Integrity and confidentiality’ section of the ‘Principles’ chapter).

To assess these risks and derive appropriate safeguards, the GDPR requires a DPIA to be performed prior to processing whenever there is a high risk to the rights and freedoms of natural persons[3] (see “DPIA” within Part II section “Main tools and actions” of these Guidelines). The use of new technologies such as AI increases the likelihood that the processing falls into the high-risk category. Some national data protection authorities have issued rules requiring a DPIA when certain AI algorithms are used.[4] In case of doubt, it is recommended that controllers perform a DPIA.[5]
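To make this screening step concrete, the following Python sketch encodes the WP248 rule of thumb that processing meeting two or more of the nine “likely high risk” criteria will in most cases require a DPIA. The criterion names are paraphrases of the WP248 list, and the helper is a hypothetical pre-screening aid, not a substitute for a documented DPIA.

```python
# The nine WP248 criteria, paraphrased. WP248 suggests that processing
# meeting two or more of them is "likely to result in a high risk" and
# will in most cases require a DPIA.
WP248_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_or_similar_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_use_or_new_technology",
    "prevents_exercise_of_rights_or_use_of_services",
}


def dpia_recommended(applicable: set[str]) -> bool:
    """Pre-screening aid: True when a DPIA should be performed.

    Rule of thumb from WP248: two or more criteria met -> likely high
    risk -> perform a DPIA. In case of doubt, perform one anyway.
    """
    unknown = applicable - WP248_CRITERIA
    if unknown:
        raise ValueError(f"Unknown criteria: {unknown}")
    return len(applicable) >= 2


# Example: an AI-based credit-scoring system typically meets at least
# the scoring, automated-decision, and new-technology criteria.
print(dpia_recommended({
    "evaluation_or_scoring",
    "automated_decision_with_legal_or_similar_effect",
    "innovative_use_or_new_technology",
}))  # True
```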

 

References


[1] The authorship of the quote appears to be disputed; see e.g. https://quoteinvestigator.com/2010/12/07/foul-computer/#note-1699-18 (accessed 2 June 2020).

[2] Article 22(3) of the GDPR.

[3] Article 35(1) of the GDPR.

[4] See, for example, the legal situation in Austria: § 2(2)(4) DSFA-V.

[5] Article 29 Working Party (2017) WP248, Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679, p. 8. European Commission, Brussels.

 
