Validation of processing that includes an AI component must be carried out under conditions that reflect the actual environment in which the processing is intended to be deployed. Moreover, validation must be repeated periodically if conditions change or if there is a suspicion that the solution itself may have been altered. AI developers must accurately document the conditions under which the algorithm has been validated.
To reach this aim, validation should cover all components of an AI tool (data, pre-trained models, environments and the behavior of the system as a whole) and be performed as early as possible. Overall, it must be ensured that the system's outputs or actions are consistent with the results of the preceding processes, comparing them against previously defined policies to verify that those policies are not violated. Validation sometimes requires gathering new personal data; in other cases, controllers reuse data for purposes other than the original ones. In all such cases, controllers should ensure compliance with the GDPR (see the “Purpose limitation” section in the “Principles” chapter and “Data protection and scientific research” in the “Concepts” section).
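The policy check described above can be sketched in code. This is a minimal, hypothetical illustration: the policy names, thresholds and the structure of the outputs are assumptions for the example, not part of any specific framework or of the guidance itself.

```python
# Hypothetical sketch: validate a batch of AI-system outputs against
# policies defined before deployment. Policy names and thresholds below
# are illustrative assumptions.

def validate_against_policies(outputs, policies):
    """Return (index, policy_name) pairs for every policy violation found."""
    violations = []
    for i, out in enumerate(outputs):
        for name, check in policies.items():
            if not check(out):
                violations.append((i, name))
    return violations

# Previously defined policies (assumed for illustration).
policies = {
    "score_in_range": lambda o: 0.0 <= o["score"] <= 1.0,
    "decision_is_known": lambda o: o["decision"] in {"approve", "refer", "reject"},
}

# Outputs of a preceding processing step (fabricated sample data).
outputs = [
    {"score": 0.4, "decision": "approve"},
    {"score": 1.3, "decision": "approve"},   # violates score_in_range
    {"score": 0.9, "decision": "unknown"},   # violates decision_is_known
]

print(validate_against_policies(outputs, policies))
```

A non-empty result would signal that outputs are inconsistent with the predefined policies and that the validation (and possibly the processing itself) needs review.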
1. High-Level Expert Group on AI (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels, p. 22. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 15 May 2020).