Iñigo de Miguel Beriain (UPV/EHU)
Acknowledgements: The author thankfully acknowledges advice, input and feedback on drafts from Andres Chomsky, Oliver Feeney, Gianclaudio Malgieri, Aurélie Pols and Marko Sijan. Needless to say, all mistakes are my full responsibility.
This part of The Guidelines has been reviewed and validated by Marko Sijan, Senior Advisor Specialist (HR DPA).
This part of the Guidelines is meant to help AI developers by analysing the main actions to be addressed on the basis of a step-by-step model, the CRISP-DM model,[1] which is widely employed to explain the stages involved in developing data analytics and data-intensive AI tools. Indeed, it was the model selected by the SHERPA project to develop its Guidelines for the Ethical Development of AI and Big Data Systems.[2] The six steps are: business understanding; data understanding; data preparation; modeling; evaluation; and deployment. This is not a fixed classification, since developers quite often blend some of these stages. For instance, a trained algorithm might be improved after the validation stage through renewed training.
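The iterative character of the six steps can be sketched in code. The following toy Python sketch is purely illustrative (the loop condition and the `evaluate` callback are hypothetical placeholders, not part of CRISP-DM itself); it shows how evaluation may send a model back to renewed training before deployment:

```python
# Sketch of the six CRISP-DM stages and their iterative nature.
# The evaluate() callback and max_rounds limit are illustrative assumptions.

CRISP_DM_STAGES = [
    "business understanding",
    "data understanding",
    "data preparation",
    "modeling",
    "evaluation",
    "deployment",
]

def run_lifecycle(evaluate, max_rounds=3):
    """Run the stages in order, looping back to modeling while evaluation fails."""
    history = list(CRISP_DM_STAGES[:4])  # linear stages up to modeling
    for round_ in range(max_rounds):
        history.append("evaluation")
        if evaluate(round_):
            break
        history.append("modeling")  # renewed training after evaluation
    history.append("deployment")
    return history

# A hypothetical model that only passes evaluation on the second attempt:
print(run_lifecycle(lambda r: r >= 1))
```

The point of the sketch is only that "evaluation" and "modeling" may alternate several times before "deployment", which is why the classification is not fixed.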
Nevertheless, it must be highlighted that some of the ethical and legal requirements regarding AI development must be evaluated on a continuous basis throughout the life cycle of an AI tool. Controllers must monitor the ethical legitimacy of processing and its unexpected effects. They should also assess the possible collateral impact of such processing on the social environment, beyond the initially conceived limitations of purpose, duration and extension.[3] And this must be done all along the life cycle of an AI tool, according to Article 25 of the GDPR. As the Article 29 Working Party stated,
“Controllers should carry out frequent assessments on the data sets they process to check for any bias, and develop ways to address any prejudicial elements, including any over-reliance on correlations. Systems that audit algorithms and regular reviews of the accuracy and relevance of automated decision-making including profiling are other useful measures. Controllers should introduce appropriate procedures and measures to prevent errors, inaccuracies or discrimination on the basis of special category data. These measures should be used on a cyclical basis; not only at the design stage, but also continuously, as the profiling is applied to individuals. The outcome of such testing should feed back into the system design.”[4]
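The cyclical assessments the Working Party describes can be operationalised as a recurring audit routine over model outcomes. The following minimal Python sketch is an illustration only: the record layout (protected group, binary outcome) and the 0.8 threshold (the common "four-fifths rule") are assumptions of ours, not requirements stated in the Guidelines:

```python
# Minimal sketch of a recurring bias audit on model outcomes.
# Record layout (group, binary outcome) and the 0.8 threshold are
# illustrative assumptions, not requirements from the Guidelines.

from collections import defaultdict

def positive_rates(records):
    """Rate of positive outcomes per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(records, threshold=0.8):
    """Flag any group whose positive rate falls below threshold * best rate."""
    rates = positive_rates(records)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Hypothetical outcomes for two groups, A and B:
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(positive_rates(records))
print(disparate_impact_flags(records))  # group B is flagged
```

Run on a cycle (not only at design time), the output of such a check can "feed back into the system design", as the Working Party recommends.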
A further point worth noting is that AI is a common label encompassing a variety of different technologies. A fundamental distinction must be drawn between supervised machine learning (input data labelled by humans is given to an algorithm, which then infers rules from these validated examples) and unsupervised learning (unlabelled input data is given to an algorithm, which carries out its own classification and is free to produce its own output when presented with a pattern or variable). Supervised learning requires supervisors to teach the machine the output it must produce, i.e. they must “train” it. In principle, supervised learning is easier to understand and monitor.[5] Moreover, since the datasets used in training are selected by the trainers, some of the most worrying challenges posed by these technologies can be handled quite reasonably. Unsupervised AI, instead, and especially techniques such as deep learning, requires more sophisticated monitoring and control, since opacity, bias or profiling are much more difficult to detect, at least in some stages of the life cycle of the AI development.
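The distinction can be made concrete with a toy example. In the Python sketch below (entirely illustrative; real systems use dedicated machine-learning libraries), the supervised routine is handed human-provided labels and merely reproduces them for new inputs, while the unsupervised routine receives no labels and invents its own grouping:

```python
# Toy contrast between supervised and unsupervised learning.
# Purely illustrative; real pipelines use dedicated ML libraries.

def supervised_nearest_centroid(train, point):
    """Supervised: labels are supplied by humans; predict via labelled centroids."""
    centroids = {}
    for label in {lbl for _, lbl in train}:
        xs = [x for x, lbl in train if lbl == label]
        centroids[label] = sum(xs) / len(xs)
    return min(centroids, key=lambda lbl: abs(point - centroids[lbl]))

def unsupervised_two_means(points, iters=10):
    """Unsupervised: no labels; the algorithm produces its own two clusters."""
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return g1, g2

labelled = [(1.0, "low"), (1.2, "low"), (8.0, "high"), (8.5, "high")]
print(supervised_nearest_centroid(labelled, 7.9))   # "high"
print(unsupervised_two_means([1.0, 1.2, 8.0, 8.5]))
```

Note that the supervised prediction is directly traceable to the human-chosen labels, whereas the unsupervised clusters carry no human-assigned meaning, which is one reason the latter is harder to monitor.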
In this part of The Guidelines, we try to provide support for both supervised and unsupervised AI. We are aware that it is almost impossible to provide advice on every possible situation. However, we hope to highlight the fundamentals and include useful additional sources of information. Finally, we fully understand that some experts might consider that some of our recommendations could be moved from one step to another, and that some of them apply to several different steps. We therefore strongly recommend that readers adapt these Guidelines to their own needs and expertise.
The structure of the document is easy to follow. Each stage opens with a quotation from Colin Shearer,[6] followed by a description of the tasks involved in that concrete stage of the process, according to the same author. Next, we introduce some recommendations that should be implemented at that point. Finally, the annexes include references to some tools that might serve the purposes of this part of The Guidelines. Annex I presents the recommendations for auditing AI tools elaborated by the Spanish Data Protection Agency. Annex II is more specific, as it refers to the use of AI in the healthcare sector; however, it is an excellent guide for those willing to develop an AI tool in that sector. In the future, we will try to incorporate more annexes, as soon as an efficient mechanism for doing so is in place.
References
[1] Shearer, C. (2000) ‘The CRISP-DM model: the new blueprint for data mining’, Journal of Data Warehousing 5(4): 13–23. Available at: https://mineracaodedados.files.wordpress.com/2012/04/the-crisp-dm-model-the-new-blueprint-for-data-mining-shearer-colin.pdf (accessed 15 May 2020).
[2] SHERPA project (2019) Guidelines for the ethical development of AI and big data systems: an ethics by design approach. SHERPA Project. Available at: www.project-sherpa.eu/wp-content/uploads/2019/12/development-final.pdf (accessed 15 May 2020).
[3] AEPD (2020) Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción [GDPR compliance of processing operations embedding Artificial Intelligence. An introduction]. Agencia Española de Protección de Datos, Madrid, p. 7. Available at: www.aepd.es/sites/default/files/2020-02/adecuacion-rgpd-ia.pdf (accessed 15 May 2020).
[4] Article 29 Working Party (2017) Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679. Adopted on 3 October 2017, as last revised and adopted on 6 February 2018. European Commission, Brussels, p. 28. Available at: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053 (accessed 15 May 2020).
[5] CNIL (2017) How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence. Commission Nationale de l’Informatique et des Libertés, Paris, p. 17. Available at: www.cnil.fr/sites/default/files/atoms/files/cnil_rapport_ai_gb_web.pdf (accessed 15 May 2020).
[6] Shearer, C. (2000) ‘The CRISP-DM model: the new blueprint for data mining’, Journal of Data Warehousing 5(4): 13–23. Available at: https://mineracaodedados.files.wordpress.com/2012/04/the-crisp-dm-model-the-new-blueprint-for-data-mining-shearer-colin.pdf (accessed 15 May 2020).