This requirement comprises three distinct principles, namely:
- Privacy and data protection. AI systems must guarantee privacy and data protection throughout a system’s entire lifecycle. This includes the information initially provided by the user, as well as the information generated about the user over the course of their interaction with the system (e.g. outputs that the AI system generated for specific users or how users responded to particular recommendations). To allow individuals to trust the data-gathering process, it must be ensured that data collected about them will not be used to discriminate against them unlawfully or unfairly.
- Quality and integrity of data. The quality of the datasets used is paramount to the performance of AI systems. When data is gathered, it may contain socially constructed biases, inaccuracies, errors and mistakes. This needs to be addressed prior to training with any given dataset. In addition, the integrity of the data must be ensured. Feeding malicious data into an AI system may change its behavior, particularly with self-learning systems. Processes and datasets used must be tested and documented at each step, such as planning, training, testing and deployment. This should also apply to AI systems that were not developed in-house but acquired elsewhere.
- Access to data. In any organization that handles individuals’ personal data, internal documents and policies must stipulate who may access personal data and under what conditions, supported by organizational and technical access-control measures. Only duly qualified personnel with the competence and the need to access individuals’ personal data should be allowed to do so, and all personnel granted such access must sign a confidentiality statement.
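The integrity requirement above (testing and documenting datasets at each step, from planning through deployment) can be sketched as a fingerprinting check. This is a minimal illustration, not a prescribed mechanism from the guidelines; the helper names `dataset_fingerprint` and `verify_integrity` are hypothetical.

```python
import hashlib
import json

def dataset_fingerprint(records):
    """Compute a deterministic SHA-256 fingerprint of a dataset.

    Records are assumed JSON-serializable; sorting keys makes the
    hash independent of dict key ordering.
    """
    h = hashlib.sha256()
    for record in records:
        h.update(json.dumps(record, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

def verify_integrity(records, expected_fingerprint):
    """Check that a dataset still matches the fingerprint recorded at
    an earlier pipeline step (e.g. when it was approved for training)."""
    return dataset_fingerprint(records) == expected_fingerprint

# Record the fingerprint when the dataset is approved for training...
approved = [{"user_id": 1, "label": "a"}, {"user_id": 2, "label": "b"}]
fp = dataset_fingerprint(approved)

# ...and verify it again before deployment: any later tampering
# (such as maliciously injected records) changes the fingerprint.
tampered = approved + [{"user_id": 3, "label": "x"}]
```

Re-verifying the fingerprint at each documented step gives an auditable trail that the data used for training is the data that was reviewed, which also extends naturally to datasets acquired from third parties.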
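The access conditions in the last bullet (qualification, need to access, and a signed confidentiality statement) can be encoded as a simple policy check. This is a hypothetical sketch of one way to operationalize the rule; the `Staff` type and `may_access_personal_data` function are assumptions, not part of the guidelines.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Staff:
    name: str
    qualified: bool              # duly qualified to handle personal data
    needs_access: bool           # has a documented need to access the data
    signed_confidentiality: bool # has signed a confidentiality statement

def may_access_personal_data(staff: Staff) -> bool:
    """Grant access only when every policy condition holds."""
    return (staff.qualified
            and staff.needs_access
            and staff.signed_confidentiality)
```

In practice such a check would sit behind the organizational measures the guidelines mention (e.g. role-based access control), so that a failed condition denies access rather than merely logging it.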
1. Ibid., p. 15 ff.