“A crucial component of achieving Trustworthy AI is technical robustness, which is closely linked to the principle of prevention of harm. Technical robustness requires that AI systems be developed with a preventive approach to risks and in a manner such that they reliably behave as intended while minimizing unintentional and unexpected harm, and preventing unacceptable harm. This should also apply to potential changes in their operating environment or the presence of other agents (human and artificial) that may interact with the system in an adversarial manner. In addition, the physical and mental integrity of humans should be ensured.”
– High-Level Expert Group on AI
Ethical principles and GDPR provisions
The High-Level Expert Group on AI splits the requirement for technical robustness and safety into four sub-components: (1) resilience to attack and security; (2) a fallback plan and general safety; (3) accuracy; and (4) reliability and reproducibility.
For ease of reference, this section mirrors that structure while connecting these sub-components to legal (GDPR) requirements and recommendations. This matters because, although the GDPR's requirements generally apply only when personal data is processed, many practical AI systems are designed to produce a personalized result (e.g. recommender systems) and therefore must process personal data at some point.
1. High-Level Expert Group on AI (2019) Ethics Guidelines for Trustworthy AI, pp. 16 ff. European Commission, Brussels. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 28 May 2020).