Pay attention to some general remarks

When the AI tool is distributed, if it incorporates personal data, the following will be necessary (see also the “Purchasing or promoting access to a database” section in the “Main tools and actions” chapter):

In principle, once the model is put into use, the training data is removed from the algorithm, and the model processes only the personal data to which it is applied. The controller may retain a data subject’s data in order to customize the service offered by the AI tool. However, once that service ends, these data must be deleted, unless compelling reasons justify keeping them; even then, retention cannot continue indefinitely.
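The retention rule above can be sketched as a simple check run over stored personalization records. The record structure, field names, and deletion criteria below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonalizationRecord:
    user_id: str
    service_active: bool                # is the AI service still being provided?
    retention_reason: Optional[str]     # documented reason justifying retention, if any

def records_to_delete(records):
    """Return the user_ids whose personalization data must be erased:
    the service has ended and no documented reason justifies keeping them."""
    return [
        r.user_id
        for r in records
        if not r.service_active and r.retention_reason is None
    ]

records = [
    PersonalizationRecord("alice", service_active=True, retention_reason=None),
    PersonalizationRecord("bob", service_active=False, retention_reason=None),
    PersonalizationRecord("carol", service_active=False,
                          retention_reason="pending legal claim"),
]
print(records_to_delete(records))  # only "bob": service ended, no retention reason
```

A real deployment would run such a check on a schedule and log each erasure, so that deletion itself is demonstrable to a supervisory authority.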

The AI developer must make sure that the algorithm does not include personal data in a hidden way (or take the necessary measures if this is unavoidable). In any case, the developer must perform a formal evaluation assessing which personal data from the data subjects could be identifiable.[2] This can be complicated at times. For example, some AI tools, such as Support Vector Machines (SVM), contain examples of the training data by design within the logic of the model. In other cases, patterns may be found in the model that identify a unique individual.[3] In all of these cases, unauthorized parties may be able to recover elements of the training data, or infer who was in it, by analyzing the way the model behaves.
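The point about models embedding training examples by design can be shown with a minimal 1-nearest-neighbour classifier written from scratch. It is not an SVM, but it shares the relevant property: the fitted model stores raw training records inside its own parameters, so anyone with access to the model object can read them back. The data values are entirely synthetic:

```python
class OneNearestNeighbour:
    """Toy memory-based model: like an SVM's support vectors, the fitted
    model keeps (a subset of) the raw training examples inside itself."""

    def fit(self, X, y):
        self.X_train = [list(x) for x in X]  # training data stored verbatim
        self.y_train = list(y)
        return self

    def predict(self, x):
        # Classify by the label of the closest stored training example.
        distances = [sum((a - b) ** 2 for a, b in zip(x, xt))
                     for xt in self.X_train]
        return self.y_train[distances.index(min(distances))]

# Synthetic "personal" training data, e.g. (age, salary).
X = [[34, 52000], [29, 48000], [61, 91000]]
y = ["approved", "rejected", "approved"]

model = OneNearestNeighbour().fit(X, y)

# The training records are recoverable directly from the deployed model:
print(model.X_train)  # [[34, 52000], [29, 48000], [61, 91000]]
```

In scikit-learn, the analogous artefact for an SVM is the fitted estimator’s `support_vectors_` attribute, which holds actual rows of the training set.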

Under such conditions, it might be difficult to ensure that data subjects are able to exercise their rights of access, rectification, and erasure. Indeed, “unless the data subject presents evidence that their personal data could be inferred from the model, the controller may not be able to determine whether personal data can be inferred and therefore whether the request has any basis.”[4] However, controllers should take regular action to proactively evaluate the likelihood that personal data can be inferred from models in light of state-of-the-art technology, so that the risk of accidental disclosure is minimized. If these actions reveal a substantial possibility of data disclosure, the necessary measures to avoid it should be implemented (see the ‘Integrity and confidentiality’ section in the ‘Principles’ chapter).
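One concrete form of the proactive evaluation described above is a basic membership-inference test: compare how confidently the model behaves on records that were in the training set versus comparable records that were not. A systematically higher confidence on training records suggests membership, and hence personal data, can be inferred from model behaviour. The scoring function and the gap threshold below are illustrative assumptions:

```python
def membership_inference_risk(confidence, train_records, holdout_records,
                              gap_threshold=0.2):
    """Flag a disclosure risk when the model is systematically more confident
    on training records than on unseen ones.

    `confidence` is any callable returning a score in [0, 1] for one record.
    `gap_threshold` is an assumed, tool-specific tolerance, not a legal norm.
    """
    avg_train = sum(confidence(r) for r in train_records) / len(train_records)
    avg_holdout = sum(confidence(r) for r in holdout_records) / len(holdout_records)
    gap = avg_train - avg_holdout
    return {
        "train_confidence": avg_train,
        "holdout_confidence": avg_holdout,
        "gap": gap,
        "at_risk": gap > gap_threshold,
    }

# Toy example: an overfitted model is far more confident on training records.
train = ["alice", "bob"]
holdout = ["carol", "dave"]
overconfident_model = lambda r: 0.99 if r in train else 0.55

report = membership_inference_risk(overconfident_model, train, holdout)
print(report["at_risk"])  # True: a gap of 0.44 exceeds the 0.2 threshold
```

Running such a check periodically, against the current state of the art in inference attacks, gives the controller documented evidence of the evaluation the text calls for.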
References


[1] AEPD (2020) Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción. Agencia Española de Protección de Datos, Madrid, p. 26. Available at: www.aepd.es/sites/default/files/2020-02/adecuacion-rgpd-ia.pdf (accessed 15 May 2020).

[2] AEPD (2020) Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción. Agencia Española de Protección de Datos, Madrid, p. 41. Available at: www.aepd.es/sites/default/files/2020-02/adecuacion-rgpd-ia.pdf (accessed 15 May 2020).

[3] AEPD (2020) Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción. Agencia Española de Protección de Datos, Madrid, p. 13. Available at: www.aepd.es/sites/default/files/2020-02/adecuacion-rgpd-ia.pdf (accessed 15 May 2020).

[4] ICO (2019) Enabling access, erasure, and rectification rights in AI tools. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/about-the-ico/news-and-events/ai-blog-enabling-access-erasure-and-rectification-rights-in-ai-systems/ (accessed 15 May 2020).
