One of the main problems with IoT systems is that they process personal data from data subjects other than the end-users of the devices interacting among themselves. Furthermore, they often provide controllers with large datasets through the aggregation of data gathered from individual agents. These circumstances blur the relationship between the controller and the data subjects: controllers are often simply unaware of who the data subjects providing some of the data collected by the devices are. This can have consequences for adequate compliance with data protection standards. For instance, it is hard to inform data subjects about the processing if controllers do not know who those data subjects are. The scenario is especially delicate because those data subjects could nonetheless be identifiable if a reasonable effort were made.
It is paramount that key employees have the fullest possible awareness of the legal implications of their work, so as to avoid unwanted unlawful data processing or, more generally, a lack of compliance with data protection regulation. In addition, employees and other stakeholders should become aware of the ethical and social consequences of processing personal data through technological means.
IoT developers must be able to understand the implications of their actions, both for individuals and for society, and be aware of their responsibilities, learning to exercise continued attention and vigilance. This will help IoT developers take ethical and legal matters properly into account. In that sense, adequate training for all agents involved in the project (developers, programmers, coders, data scientists, engineers, researchers, etc.) before it starts could be one of the most efficient tools for saving time and resources in terms of compliance with data protection regulations.
Thus, basic training programs should be implemented that cover at least the fundamentals of the Charter of Fundamental Rights (especially the role of privacy as a catalyst for other rights, such as non-discrimination or ideological freedom), the principles set out in Article 5 of the GDPR, the need for a legal basis for the processing (including contracts between the parties), the practical consequences of the data protection by design and by default principles, and related matters. Useful sources are available, for example, from the Fundamental Rights Agency, the IEEE and its ethics guidelines, and the European Commission. If training is not possible, obtaining advice from an external expert from the very beginning of the project could be an acceptable alternative.