Documentation of processing

As stated in the requirements and acceptance tests for the purchase and/or development of the employed software, hardware, and infrastructure (see the "Documentation of processing" section in the "Main tools and actions" chapter within Part II of these Guidelines), the risk evaluation and the decisions taken "have to be documented in order to comply with the requirement of data protection by design" (Article 25 of the GDPR).

Finally, controllers should always be aware that, according to Article 32(1)(d) of the GDPR, data protection is a process. They should therefore test, assess, and evaluate the effectiveness of technical and organizational measures regularly. Procedures that help controllers identify changes that would trigger a revision of the DPIA should be created at this stage. Whenever possible, controllers should adopt a dynamic model for monitoring the measures at stake (see the "Integrity and confidentiality" section in the "Principles" chapter).

Box 15: The extreme difficulty of accountability in AI development

Even though accountability is a necessary goal and assigning responsibilities to a specific processor is essential, controllers must always be conscious that the way AI systems function can make them extremely difficult to monitor. As the CNIL stated, "the question of where accountability and decision-making can be set up is to be approached in a slightly different way when dealing with machine learning systems". Controllers should therefore think in terms of a chain of accountability, running from the system designer through to its user, via the person who feeds the training data into the system. The system will behave differently depending on that input data.

On this subject, one could mention Microsoft's chatbot Tay. It was shut down a mere twenty-four hours after its release when, learning from the posts of social media users, it began to tweet racist and sexist comments of its own. Needless to say, working out the precise share of responsibility among the different links in this chain can be a laborious task.[1]


References


[1] CNIL (2017) How Can Humans Keep the Upper Hand? The Ethical Matters Raised by Algorithms and Artificial Intelligence. Commission Nationale de l'Informatique et des Libertés, Paris, p. 29. Available at: www.cnil.fr/sites/default/files/atoms/files/cnil_rapport_ai_gb_web.pdf (accessed 15 May 2020).
