This action is one of the most important pieces of advice to consider from the very beginning of an AI project's development. Algorithm designers (developers, programmers, coders, data scientists, engineers), who occupy the first link in the algorithmic chain, are often unaware of the ethical and legal implications of their actions. Furthermore, one of the main problems raised by AI is that it generally uses personal data included in large datasets. This blurs the relationship between the data and the data subject, leading to violations of the regulations that rarely occur when the controller and the data subject have a direct relationship, with consequences for adequate compliance with data protection standards. It is paramount that these key workers have the fullest possible awareness of the ethical and social implications of their work, and of the fact that these implications can extend to societal choices, even though the ‘rogue engineering’ alibi can hardly function after the Google Street View case.
To avoid unwanted consequences arising from a misunderstanding of the ethical and legal issues, two main courses of action can be adopted. First, developers might try to ensure that algorithm designers understand the implications of their actions, both for individuals and for society, and are aware of their responsibilities, learning to exercise continued attention and vigilance. In that sense, adequate training for all subjects involved in the project (developers, programmers, coders, data scientists, engineers, researchers), ideally before it starts, could be one of the most efficient tools to save time and resources in terms of compliance with data protection regulation. A basic training programme should therefore cover at least the fundamentals of the Charter of Fundamental Rights, the principles set out in Article 5 of the GDPR, and the need for a legal basis for processing (including contracts between the parties).
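To make the training fundamentals concrete, the legal-basis and Article 5 checks mentioned above can be turned into a simple pre-development checklist that a team reviews for each planned processing activity. The sketch below is purely illustrative: the field names, the checklist items, and the `check_processing_activity` function are assumptions for this example, not a prescribed tool.

```python
# Hypothetical sketch: a minimal pre-development checklist reflecting the
# GDPR requirements discussed above. All field names are illustrative.

# The six lawful bases for processing listed in Article 6(1) GDPR.
ARTICLE_6_BASES = {
    "consent", "contract", "legal_obligation",
    "vital_interests", "public_task", "legitimate_interests",
}

def check_processing_activity(activity: dict) -> list:
    """Return a list of compliance gaps for one planned processing activity."""
    gaps = []
    if not activity.get("purpose"):
        gaps.append("no specified purpose (purpose limitation, Art. 5(1)(b))")
    if activity.get("legal_basis") not in ARTICLE_6_BASES:
        gaps.append("no valid Article 6 legal basis recorded")
    if not activity.get("data_fields"):
        gaps.append("data fields not listed (data minimisation, Art. 5(1)(c))")
    if activity.get("retention_period") is None:
        gaps.append("no retention period (storage limitation, Art. 5(1)(e))")
    return gaps

# Example: a planned activity that names a purpose and a legal basis
# but has not yet defined how long the data will be kept.
activity = {
    "purpose": "train a medical triage model",
    "legal_basis": "contract",
    "data_fields": ["age", "symptoms"],
    "retention_period": None,
}
print(check_processing_activity(activity))
# → ['no retention period (storage limitation, Art. 5(1)(e))']
```

A checklist like this does not replace legal advice or the Data Protection Officer's review; it simply forces the development team to record the decisions that the regulation requires before any data are processed.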
However, training people who have never dealt with data protection issues might be hard. An alternative policy is to involve an expert on data protection and on ethical and legal issues in the development team, so as to create an interdisciplinary team. This can be done by hiring an expert for this purpose (an internal worker or an external consultant) to design the strategy and the subsequent decisions on personal data required by the development of the tools, in close cooperation with the Data Protection Officer.
Adopting adequate measures to ensure the confidentiality, integrity and availability of data is also strongly recommended (see the “Measures in support of confidentiality” subsection in the “Integrity and confidentiality” section of the “Principles” chapter).
1 Kuyumdzhieva, A. (2018) ‘Ethical challenges in the digital era: focus on medical research’, pp. 45–62 in: Koporc, Z. (ed.) Ethics and Integrity in Health and Life Sciences Research. Emerald, Bingley.
2 CNIL (2017) How Can Humans Keep the Upper Hand? The Ethical Matters Raised by Algorithms and Artificial Intelligence. Commission Nationale de l’Informatique et des Libertés, Paris, p. 55. Available at: www.cnil.fr/sites/default/files/atoms/files/cnil_rapport_ai_gb_web.pdf (accessed 15 May 2020).
4 Ibid., p. 55.