Checklist: technical robustness and safety[1]

Resilience to attack and security

☐ The controller assessed potential forms of attacks to which the AI system could be vulnerable.

☐ The controller considered different types of vulnerabilities, such as data pollution (poisoning), attacks on physical infrastructure, and cyber-attacks.

☐ The controller put measures or systems in place to ensure the integrity and resilience of the AI system against potential attacks (a minimal robustness-probing sketch follows this list).

☐ The controller verified how the system behaves in unexpected situations and environments.

☐ The controller considered to what degree the system could be dual-use and, if so, took suitable preventative measures (e.g. not publishing the research or not deploying the system).
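
The resilience items above lend themselves to automated probing. Below is a minimal sketch of such a probe against a toy NumPy logistic-regression model; the weights, data and epsilon value are illustrative assumptions, not part of the checklist. It measures how often a small fast-gradient-sign (FGSM-style) perturbation flips the model's prediction.

```python
# Minimal adversarial-robustness probe; the model and data are toy
# placeholders standing in for the controller's own system.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: logistic regression with fixed (pretrained) weights.
w = rng.normal(size=4)
b = 0.1

def predict_proba(x: np.ndarray) -> float:
    """Probability of the positive class for one input vector."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x: np.ndarray, y: int, eps: float) -> np.ndarray:
    """Fast-gradient-sign perturbation for log-loss on one example.

    For logistic regression, d(loss)/dx = (p - y) * w, so the attack
    moves each feature by eps in the sign of that gradient.
    """
    p = predict_proba(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Probe: how often does a small perturbation flip the prediction?
X = rng.normal(size=(200, 4))
y = (X @ w + b > 0).astype(int)          # labels the model gets right
X_adv = np.array([fgsm_perturb(x, yi, eps=0.25) for x, yi in zip(X, y)])
flipped = ((X_adv @ w + b > 0).astype(int) != y).mean()
print(f"prediction flip rate under eps=0.25 perturbation: {flipped:.1%}")
```

A high flip rate under small perturbations is one concrete signal that the integrity and resilience measures above need strengthening.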

 

Fallback plan and general safety

☐ The controller ensured that the system has a sufficient fallback plan if it encounters adversarial attacks or other unexpected situations (e.g. technical switching procedures or asking for a human operator before proceeding).

☐ The controller considered the level of risk raised by the AI system in this specific use case.

☐ The controller put processes in place to measure and assess risks and safety.

☐ The controller provided the necessary information in case of a risk to human physical integrity.

☐ The controller considered an insurance policy to deal with potential damage from the AI system.

☐ The controller identified potential safety risks of (other) foreseeable uses of the technology, including accidental or malicious misuse, and put a plan in place to mitigate or manage these risks.

☐ The controller assessed the probability that the AI system may cause damage or harm to users or third parties, including the likelihood, potential damage, impacted audience and severity.

☐ The controller considered the applicable liability and consumer protection rules and took them into account.

☐ The controller considered the potential impact or safety risk to the environment or to animals.

☐ The controller’s risk analysis included whether security or network problems (e.g. cybersecurity hazards) could pose safety risks or damage due to unintentional behavior of the AI system.

☐ The controller estimated the likely impact of a failure of the AI system when it provides wrong results, becomes unavailable, or provides societally unacceptable results (e.g. discrimination).

☐ The controller defined thresholds and put governance procedures in place to trigger alternative/fallback plans (a minimal threshold sketch follows this list).

☐ The controller defined and tested fallback plans.
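
One common way to operationalise the fallback items above is confidence-based routing: the model's output is accepted only above an agreed threshold, and everything else is escalated to a human operator before proceeding. The sketch below illustrates the pattern; route_request, CONFIDENCE_FLOOR and the review queue are hypothetical names and values, not prescribed by the checklist.

```python
# Minimal sketch of a threshold-triggered fallback procedure, assuming
# a model that exposes a confidence score alongside its output.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.90   # governance-defined threshold (assumed value)

@dataclass
class Decision:
    label: str
    confidence: float
    handled_by: str        # "model" or "human"

human_review_queue: list = []

def route_request(label: str, confidence: float) -> Decision:
    """Accept the model output only above the agreed confidence floor;
    otherwise escalate to a human operator before proceeding."""
    if confidence >= CONFIDENCE_FLOOR:
        return Decision(label, confidence, handled_by="model")
    human_review_queue.append((label, confidence))
    return Decision(label, confidence, handled_by="human")

# Example: a low-confidence prediction is escalated, not auto-applied.
print(route_request("approve", 0.97))   # handled_by='model'
print(route_request("approve", 0.62))   # handled_by='human'
```

The threshold itself is a governance decision: it should come out of the risk assessment items above, not from the development team alone.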

 

Accuracy

☐ The controller assessed what level and definition of accuracy would be required in the context of the AI system and use case.

☐ The controller assessed how accuracy is measured and assured.

☐ The controller put in place measures to ensure that the data used is comprehensive and up to date.

☐ The controller put in place measures to assess whether there is a need for additional data, for example to improve accuracy or eliminate bias.

☐ The controller verified what harm would be caused if the AI system makes inaccurate predictions.

☐ The controller put in place ways to measure whether the system is producing an unacceptable rate of inaccurate predictions (a minimal monitoring sketch follows this list).

☐ The controller put in place a series of steps to increase the system’s accuracy.
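
A rolling error-rate monitor is one way to detect an unacceptable rate of inaccurate predictions in production. The sketch below assumes labelled feedback becomes available after deployment; the window size and the 5% error budget are illustrative assumptions.

```python
# Minimal sketch of post-deployment accuracy monitoring over a rolling
# window; thresholds here are assumed, not prescribed by the checklist.
from collections import deque

WINDOW = 500            # number of recent predictions tracked (assumed)
MAX_ERROR_RATE = 0.05   # error budget agreed with stakeholders (assumed)

recent_errors: deque = deque(maxlen=WINDOW)

def record_outcome(predicted, actual) -> bool:
    """Record whether the latest prediction was wrong; return True once
    the rolling error rate exceeds the agreed budget (an alert)."""
    recent_errors.append(predicted != actual)
    if len(recent_errors) < WINDOW:
        return False                      # not enough data yet
    error_rate = sum(recent_errors) / WINDOW
    return error_rate > MAX_ERROR_RATE

# Example: feed outcomes from production; an alert should trigger the
# governance/fallback procedures defined earlier in this checklist.
if record_outcome(predicted=1, actual=0):
    print("rolling error rate above budget: trigger review/fallback")
```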

 

Reliability and reproducibility

☐ The controller put in place a strategy to monitor and test if the AI system is meeting its goals, purposes and intended applications.

☐ The controller tested whether specific contexts or particular conditions need to be taken into account to ensure reproducibility (a minimal seeding sketch follows this list).

☐ The controller put in place verification methods to measure and ensure different aspects of the system’s reliability and reproducibility.

☐ The controller put in place processes to describe when an AI system fails in certain settings.

☐ The controller clearly documented and operationalized these processes for the testing and verification of the reliability of AI systems.

☐ The controller established mechanisms of communication to assure (end-)users of the system’s reliability.
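
Reproducibility checks can often be automated by routing all randomness through an explicit seed and asserting that repeated runs are bit-identical. The sketch below illustrates the pattern; train_model is a hypothetical stand-in for the controller's own training pipeline.

```python
# Minimal sketch of a reproducibility check, assuming a training
# routine that accepts a single seed controlling all randomness.
import numpy as np

def train_model(seed: int) -> np.ndarray:
    """Stand-in training routine: all randomness flows from one seed."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
    # Closed-form least squares keeps the sketch deterministic given X, y.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Two runs with the same seed must yield bit-identical parameters.
assert np.array_equal(train_model(seed=42), train_model(seed=42))
# A different seed documents where run-to-run variation enters.
print(train_model(seed=42), train_model(seed=7))
```

In practice this test should run in the controller's continuous-integration pipeline, since reproducibility can silently break when libraries, hardware or parallelism settings change.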

Additional information

Article 29 Working Party (2017) WP248, Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679. European Commission, Brussels. Available at: https://ec.europa.eu/newsroom/document.cfm?doc_id=47711

Eykholt, K. et al. (2018) ‘Robust physical-world attacks on deep learning models’, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. arXiv:1707.08945

Fredrikson, M. et al. (2015) ‘Model inversion attacks that exploit confidence information and basic countermeasures’, CCS ’15: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015. Cornell University, Ithaca. Available at: https://rist.tech.cornell.edu/papers/mi-ccs.pdf

 

References


[1] This checklist has been adapted from the one elaborated by the High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 20 May 2020).

 
