Resilience to attack and security

Resilience to attack should be a goal of all ICT systems, including AI systems. When processing personal data, Article 32 of the GDPR explicitly requires the implementation of appropriate technical and organizational measures to ensure data security (see 'Measures in support of confidentiality' in the 'Integrity and confidentiality' subsection of the 'Principles' in Part II).

The required security measures depend on the likely impact of an AI system malfunction. These measures should also include steps taken to ensure the resilience of processing systems.[1] For certain types of AI system, the decision-making process may be particularly vulnerable to attack. For example, a malicious actor may craft a misleading input that exploits the fundamental differences in perception between humans and AI systems, as demonstrated by the example in Box 2.

Box 2. Example of the need for security in AI systems

An autonomous vehicle should automatically recognize street signs using its on-board cameras and adjust its speed accordingly. While AI algorithms based on deep neural networks may excel at this task, special care must be taken to protect the system against targeted attacks by a malicious adversary. For example, small, targeted modifications to street signs could lead the AI system to mistake a stop sign for a speed limit sign, resulting in potentially dangerous situations, while to a casual human observer the modification may appear to be nothing more than graffiti. It is therefore of utmost importance to protect an AI system used for this purpose against such attacks, thereby increasing its resilience.[2]
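
By way of illustration, the following simplified sketch (in Python, using the PyTorch library) shows the mechanism behind one well-known class of such attacks, in which an input is nudged in the direction that most increases the model's error. The model, input and class labels below are hypothetical placeholders rather than a real traffic-sign recognizer; the sketch only illustrates how little the input needs to change in order to alter the output.

```python
# Minimal sketch of a 'fast gradient sign' style adversarial perturbation.
# The classifier, input image and labels are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder classifier standing in for a traffic-sign recognizer.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)   # hypothetical camera frame
true_label = torch.tensor([3])     # e.g. the 'stop sign' class
epsilon = 0.03                     # perturbation budget: barely visible to a human

# Compute the gradient of the classification loss with respect to the input pixels.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# Shift every pixel a small step in the direction that increases the loss.
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print('original prediction:   ', model(image).argmax(dim=1).item())
    print('adversarial prediction:', model(adversarial).argmax(dim=1).item())
```

Defenses such as adversarial training or input validation target exactly this mechanism and are among the technical measures a controller may consider.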

Trained AI models can also be a valuable data source. Under certain circumstances, it may be possible to gain insights into the original input data using only the trained model.[3] Such 'information leakage' could be exploited by both internal and external actors. It is therefore important for controllers to take measures to limit access to the model and the underlying training data for all categories of actors (see 'Measures in support of confidentiality' in the 'Integrity and confidentiality' subsection of the 'Principles' in Part II of these Guidelines).
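
A simplified sketch of the kind of 'model inversion' attack described in the literature cited above[3] is given below: an attacker with access to the trained model reconstructs, by gradient ascent on the input, an example that the model associates strongly with a chosen class. The model, input shapes and class labels are illustrative placeholders only; the point is that the trained parameters alone can reveal information about the training data, which is why access to the model itself must be controlled.

```python
# Simplified sketch in the spirit of a model-inversion attack: reconstruct an
# input that a trained model strongly associates with a chosen class.
# The model and all shapes are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 5))  # stands in for a trained model
model.eval()
for p in model.parameters():
    p.requires_grad_(False)        # the attacker does not modify the model

target_class = torch.tensor([0])   # e.g. one individual known to the model
reconstruction = torch.zeros(1, 1, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([reconstruction], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    # Maximize the model's confidence in the target class by minimizing the loss.
    loss = F.cross_entropy(model(reconstruction), target_class)
    loss.backward()
    optimizer.step()
    reconstruction.data.clamp_(0.0, 1.0)   # keep pixel values in a valid range

# 'reconstruction' now approximates what the model has memorized about the
# target class, illustrating the information leakage discussed above.
```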

Once trained, the resulting AI system may be used for purposes very different from those originally intended. For example, a face recognition AI system may be used to recognize and group photos containing a specific person within an online photo album. The same AI system could also be used to search the internet for photos of a specific person, potentially revealing sensitive personal details (e.g. through the photo's location or capture context). This kind of multi-purpose use is often possible with AI systems, and it is up to the system designer to anticipate possible unlawful processing of personal data and to implement security measures that prevent or minimize it. This could be achieved through measures such as restricting the usable data sources or prohibiting certain usage patterns through licensing terms. The data protection legal framework may complement such restrictions, but it is by no means a replacement for them.

 

References


[1] Article 32(1)(b) of the GDPR.

[2] Eykholt, K. et al. (2018) 'Robust physical-world attacks on deep learning models', 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, arXiv:1707.08945.

[3] Fredrikson, M. et al. (2015) 'Model inversion attacks that exploit confidence information and basic countermeasures', CCS '15: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015. Available at: https://rist.tech.cornell.edu/papers/mi-ccs.pdf (accessed 20 May 2020).

 
