Annex: Checklists – Compilation
Checklist: profiling and automated decision-making[1]

☐ The controllers have a legal basis to carry out profiling and/or automated decision-making, and document this in their data protection policy.

☐ The controllers send individuals a link to their privacy statement when they have obtained their personal data indirectly.

☐ The controllers explain how people can access details of the information that they used to create their profile.

☐ The controllers tell the people who provide them with personal data how they can object to profiling.

☐ The controllers have procedures for customers to access the personal data input into their profiles, so they can review it and correct any accuracy issues.

☐ The controllers have additional checks in place for their profiling/automated decision-making systems to protect any vulnerable groups (including children).

☐ The controllers only collect the minimum amount of data needed and have a clear retention policy for the profiles that they create.

As a model of best practice

☐ The controllers carry out a DPIA to consider and address the risks when they start any new automated decision-making or profiling.

☐ The controllers tell their customers about the profiling and automated decision-making they carry out, what information they use to create the profiles, and where they get this information from.

☐ The controllers use anonymized data in their profiling activities.

☐ Those responsible guarantee the right to an intelligible explanation (readability) of algorithmic decisions.

☐ Decision-makers have a mechanism for notifying individuals and explaining the reasons when a challenge to an algorithmic decision taken without human intervention is not accepted.

☐ The decision-makers have a model for assessing the human rights impact of automated decision-making.

☐ Qualified human supervision is in place from the design phase onwards, in particular on the interpretation requirements and the effective design of the interface, and the examiners are trained.

☐ Audits are conducted to detect possible deviations in the inference results of adaptive or evolving systems.

☐ Certification of the AI system is being, or has been, carried out.
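The recommendation above to use anonymized data in profiling can be approximated in code. The sketch below shows salted hashing of direct identifiers; the helper name and field list are illustrative assumptions. Strictly speaking this is pseudonymization rather than full anonymization, which would also require attention to quasi-identifiers and linkability.

```python
import hashlib
import secrets

# Hypothetical helper: replace direct identifiers with salted hashes before
# profiling. NOTE: this is pseudonymization, not full anonymization -- true
# anonymization must also address quasi-identifiers and linkability.
SALT = secrets.token_hex(16)

def pseudonymize(record: dict, identifier_fields=("name", "email")) -> dict:
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            digest = hashlib.sha256((SALT + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]  # stable token, still usable as a join key
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "purchases": 12}
safe = pseudonymize(record)
assert safe["name"] != "Jane Doe" and safe["purchases"] == 12
```

Because the salt is fixed for the run, the same identifier always maps to the same token, so profiles can still be linked internally without storing the raw identifier.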

Checklist: resilience to attack and security[2]

☐ The controller assessed potential forms of attacks to which the AI system could be vulnerable.

☐ The controller considered different types and natures of vulnerabilities, such as data pollution, physical infrastructure and cyber-attacks.

☐ The controller put measures or systems in place to ensure the integrity and resilience of the AI system against potential attacks.

☐ The controller verified how the system behaves in unexpected situations and environments.

☐ The controller considered to what degree the system could be dual-use. If so, the controller took suitable preventative measures (e.g. not publishing the research or deploying the system).

Checklist: fallback plan and general safety[3]

☐ The controller ensured that the system has a sufficient fallback plan if it encounters adversarial attacks or other unexpected situations (e.g. technical switching procedures or asking for a human operator before proceeding).

☐ The controller considered the level of risk raised by the AI system in this specific use case.

☐ The controller put processes in place to measure and assess risks and safety.

☐ The controller provided the necessary information in case of a risk to human physical integrity.

☐ The controller considered an insurance policy to deal with potential damage from the AI system.

☐ The controller identified potential safety risks of (other) foreseeable uses of the technology, including accidental or malicious misuse, and has a plan to mitigate or manage these risks.

☐ The controller assessed whether there is a probable chance that the AI system may cause damage or harm to users or third parties. The controller assessed the likelihood, potential damage, impacted audience and severity.

☐ The controller considered the applicable liability and consumer protection rules, and took them into account.

☐ The controller considered the potential impact or safety risk to the environment or to animals.

☐ The controller's risk analysis included whether security or network problems (e.g. cybersecurity hazards) could pose safety risks or damage due to unintentional behaviour of the AI system.

☐ The controller estimated the likely impact of a failure of the AI system when it provides wrong results, becomes unavailable, or provides societally unacceptable results (e.g. discrimination).

☐ The controller defined thresholds and put governance procedures in place to trigger alternative/fallback plans.

☐ The controller defined and tested fallback plans.
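The last two items above, defined thresholds and governance procedures that trigger a fallback plan, can be sketched minimally. The confidence threshold and the escalation label below are illustrative assumptions, not prescribed values.

```python
# Hypothetical sketch: a governance threshold that triggers the fallback plan
# (here, routing the case to a human operator) when model confidence is too
# low. The 0.85 threshold and labels are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.85

def decide(prediction: str, confidence: float) -> str:
    """Return the automated decision, or escalate to a human reviewer."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "ESCALATE_TO_HUMAN"  # fallback plan is triggered
    return prediction

assert decide("approve", 0.97) == "approve"
assert decide("approve", 0.60) == "ESCALATE_TO_HUMAN"
```

Keeping the threshold in one named constant makes it auditable and lets the governance procedure change it without touching decision logic.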

Checklist: accuracy[4]

☐ The controller assessed what level and definition of accuracy would be required in the context of the AI system and use case.

☐ The controller assessed how accuracy is measured and assured.

☐ The controller put in place measures to ensure that the data used is comprehensive and up to date.

☐ The controller put in place measures to assess whether there is a need for additional data, for example to improve accuracy or eliminate bias.

☐ The controller verified what harm would be caused if the AI system makes inaccurate predictions.

☐ The controller put in place ways to measure whether the system is making an unacceptable number of inaccurate predictions.

☐ The controller put in place a series of steps to increase the system’s accuracy.
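One way to operationalize the accuracy checks above is a simple error-rate monitor over labelled samples. The 5% tolerance and the toy data below are illustrative assumptions; a real deployment would choose the metric and threshold per use case.

```python
# Hypothetical sketch: flag when the share of inaccurate predictions on
# labelled samples exceeds an acceptable error rate. The 5% tolerance and
# the sample data are illustrative assumptions.
MAX_ERROR_RATE = 0.05

def error_rate(predictions, labels) -> float:
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

def accuracy_alarm(predictions, labels) -> bool:
    """True when the proportion of inaccurate predictions is unacceptable."""
    return error_rate(predictions, labels) > MAX_ERROR_RATE

preds  = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
labels = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
# 1 wrong out of 10 -> 10% error rate, above the 5% tolerance
assert accuracy_alarm(preds, labels) is True
```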

Checklist: reliability and reproducibility[5]

☐ The controller put in place a strategy to monitor and test if the AI system is meeting its goals, purposes and intended applications.

☐ The controller tested whether specific contexts or particular conditions need to be taken into account to ensure reproducibility.

☐ The controller put in place verification methods to measure and ensure different aspects of the system’s reliability and reproducibility.

☐ The controller put in place processes to describe when an AI system fails in certain settings.

☐ The controller clearly documented and operationalized these processes for the testing and verification of the reliability of AI systems.

☐ The controller established mechanisms of communication to assure (end-)users of the system’s reliability.

Checklist: purpose limitation[6]

☐ The controllers have clearly identified their purpose or purposes for processing.

☐ The controllers have documented those purposes.

☐ The controllers include details of their purposes in the privacy information for individuals.

☐ The controllers regularly review their processing and, where necessary, update their documentation and privacy information for individuals.

☐ If the controllers plan to use personal data for a new purpose other than a legal obligation or function set out in law, they check that this is compatible with their original purpose or they get specific consent for the new purpose.

Checklist: consent

☐ The controllers have checked that consent is the most appropriate legal basis for processing.

☐ The controllers request the consent of data subjects in a freely given, specific, informed and unambiguous manner.

☐ Broad consent is used only when it is difficult or impossible to foresee how this data will be processed in the future.

☐ Broad consent used for processing of special categories of data is compatible with national regulations.

☐ Where broad consent is used, the data subjects are given the opportunity to withdraw their consent and to choose whether or not to participate in certain research and parts of it.

☐ The controllers have a direct relationship with the data subjects who provide the data to be used for training, validation and deployment of the AI model.

☐ There is no power imbalance between controllers and data subjects.

☐ The controllers ask people to positively opt in.

☐ The controllers do not use pre-ticked boxes or any other type of default consent.

☐ The controllers use clear, plain language that is easy to understand.

☐ The controllers specify why they want the data and what they are going to do with it.

☐ The controllers give distinct (‘granular’) options to consent separately to different purposes and types of processing.

☐ The controllers tell individuals they can withdraw their consent and how to do so.

☐ The controllers ensure that individuals can refuse to consent without detriment.

☐ The controllers avoid making consent a precondition of a service.
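Several items above (positive opt-in, no pre-ticked defaults, granular purposes, recorded withdrawal) translate directly into a data structure. The sketch below is one hypothetical shape for such a consent record; the class and field names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a granular consent record: one flag per purpose,
# nothing granted by default, and withdrawals logged rather than deleted.
@dataclass
class ConsentRecord:
    subject_id: str
    purposes: dict = field(default_factory=dict)  # purpose -> granted (bool)
    log: list = field(default_factory=list)       # audit trail of changes

    def grant(self, purpose: str):
        self.purposes[purpose] = True
        self.log.append((datetime.now(timezone.utc), "grant", purpose))

    def withdraw(self, purpose: str):
        self.purposes[purpose] = False
        self.log.append((datetime.now(timezone.utc), "withdraw", purpose))

    def allowed(self, purpose: str) -> bool:
        # No pre-ticked defaults: an absent purpose means no consent.
        return self.purposes.get(purpose, False)

rec = ConsentRecord("subject-001")
assert rec.allowed("profiling") is False  # never assumed by default
rec.grant("profiling")
assert rec.allowed("profiling") is True
rec.withdraw("profiling")
assert rec.allowed("profiling") is False
```

Keeping the audit log alongside the flags is what lets the controller demonstrate when and how each consent was obtained or withdrawn.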

Checklist: legitimate interest as a legal basis

☐ The controllers have checked that legitimate interest is the most appropriate basis.

☐ The controllers understand their responsibility to protect individuals’ interests.

☐ The controllers keep a record of the decisions made and the reasoning behind them, to ensure that they can justify their decision.

☐ The controllers have identified the relevant legitimate interests.

☐ The controllers have checked that the processing is necessary and there is no less intrusive way to achieve the same result.

☐ The controllers have done a balancing test and are confident that the individual’s interests do not override those legitimate interests.

☐ The controllers only use individuals’ data in ways they would reasonably expect, unless the controllers have a very good reason.

☐ The controllers are not using people’s data in ways they would find intrusive, or which could cause them harm, unless the controllers have a very good reason.

☐ If the controllers process children’s data, they take extra care to make sure they protect the children’s interests.

☐ The controllers have considered safeguards to reduce the impact, where possible.

☐ The controllers have considered whether they can offer an opt out.

☐ The controllers have considered whether they also need to conduct a DPIA.

Checklist: data minimization

☐ The controllers have ensured that they only use personal data if needed.

☐ The controllers have considered the proportionality between the amount of data and the accuracy of the AI tool.

☐ The controllers periodically review the data they hold, and delete anything they do not need.

☐ The controllers, at the training stage of the AI system, purge all information that is not strictly necessary for the training.

☐ The controllers check if personal data are processed at the distribution stage of the AI system and delete them unless there is a justified need and legitimacy to keep them for other compatible purposes.
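The periodic review and clear retention policy called for above can be sketched as a simple expiry check. The 24-month retention period, the record shape and the function name are illustrative assumptions.

```python
from datetime import date, timedelta

# Hypothetical sketch of a retention review: profiles older than the
# retention period are flagged for deletion. The 24-month (730-day) period
# and the record fields are illustrative assumptions.
RETENTION = timedelta(days=730)

def expired_profiles(profiles, today=None):
    """Return the ids of profiles held longer than the retention period."""
    today = today or date.today()
    return [p["id"] for p in profiles if today - p["created"] > RETENTION]

profiles = [
    {"id": "a", "created": date(2019, 1, 1)},   # long past retention
    {"id": "b", "created": date.today()},        # freshly created
]
assert expired_profiles(profiles) == ["a"]
```

Running such a check on a schedule, and deleting (or anonymizing) the flagged profiles, is one way to evidence compliance with the retention policy.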

Checklist: right to access[7]

Preparing for subject access requests

☐ The controllers know how to recognize a subject access request and they understand when the right of access applies.

☐ The controllers understand that the right of access is to be applied at each stage of the life cycle of the AI solution, if it uses personal data.

☐ The controllers have a policy for how to record requests they receive verbally.

☐ The controllers understand when they can refuse a request and are aware of the information they need to provide to individuals when doing so.

☐ The controllers understand the nature of the supplementary information they need to provide in response to a subject access request.

Complying with subject access requests

☐ The controllers have processes in place to ensure that they respond to a subject access request without undue delay and within one month of receipt.

☐ The controllers are aware of the circumstances in which they can extend the time limit to respond to a request.

☐ The controllers understand that there is a particular emphasis on using clear and plain language if they are disclosing information to a child.

☐ The controllers understand what they need to consider if a request includes information about others.

☐ The controllers understand how to apply the right to access in training stages.

Checklist: data portability[8]

Preparing for requests for data portability

☐ The controllers know how to recognize a request for data portability and understand when the right applies.

☐ The controllers take into account the requirement for data portability from the earliest stages of conception and design of the AI processing.

☐ The controllers have a policy for how to record requests they receive verbally.

☐ The controllers understand when they can refuse a request and are aware of the information they need to provide to individuals if they proceed with such refusal.

Complying with requests for data portability

☐ The controllers can transmit personal data in structured, commonly used and machine-readable formats.

☐ The controllers inform users in advance, through a defined protocol, when it is not technically possible to comply with the right to portability.

☐ The controllers use secure methods to transmit personal data.

☐ The controllers have processes to ensure that they respond to a request for data portability without undue delay and within one month of receipt.

☐ The controllers are aware of the circumstances under which they can extend the time limit to respond to a request.
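The requirement above to transmit personal data in structured, commonly used and machine-readable formats can be illustrated with JSON and CSV exports. The record fields below are illustrative assumptions; real exports would cover all personal data the subject provided.

```python
import csv
import io
import json

# Hypothetical sketch: export a subject's data in two structured, commonly
# used, machine-readable formats. Field names are illustrative assumptions.
def export_json(record: dict) -> str:
    return json.dumps(record, ensure_ascii=False, indent=2)

def export_csv(record: dict) -> str:
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(record))
    writer.writeheader()
    writer.writerow(record)
    return buf.getvalue()

record = {"subject_id": "s-1", "email": "jane@example.com", "plan": "basic"}
assert json.loads(export_json(record)) == record  # export round-trips
```

Either format satisfies the "structured, commonly used and machine-readable" criterion; JSON round-trips nested data more faithfully, while CSV suits flat tabular records.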

Checklist: right to rectification[9]

Preparing for requests for rectification

☐ The controllers know how to recognize a request for rectification and understand when this right applies.

☐ The controllers have a policy for how to record requests they receive verbally.

☐ The controllers understand when they can refuse a request, and are aware of the information they need to provide to individuals when doing so.

Complying with requests for rectification

☐ The controllers are prepared to address the right of rectification of data subjects’ data, especially those generated by the inferences and profiles made by the AI solution.

☐ The controllers have processes in place to ensure that they respond to a request for rectification without undue delay and within one month of receipt.

☐ The controllers are aware of the circumstances when they can extend the time limit to respond to a request.

☐ The controllers have appropriate systems to rectify or complete information, or provide a supplementary statement.

☐ The controllers have procedures in place to inform any recipients if they rectify any data they have shared with them.

Checklist: right to erasure[10]

Preparing for requests for erasure

☐ The controllers know how to recognize a request for erasure and they understand when the right applies.

☐ The controllers have a policy for how to record requests they receive verbally.

☐ The controllers understand when they can refuse a request and are aware of the information they need to provide to individuals when doing so.

Complying with requests for erasure

☐ The controllers have processes in place to ensure that they respond to a request for erasure without undue delay and within one month of receipt.

☐ The controllers are aware of the circumstances under which they can extend the time limit to respond to a request.

☐ The controllers understand that there is a particular emphasis on the right to erasure if the request relates to data collected from children.

☐ The controllers have procedures to inform any recipients if they erase any data they shared with them.

☐ The controllers have appropriate methods to erase information.

Checklist: right to object

Preparing for objections to processing

☐ The controllers know how to recognize an objection and they understand when the right applies.

☐ The controllers have a policy for how to record objections they receive verbally.

☐ The controllers understand when they can refuse an objection and are aware of the information they need to provide to individuals when doing so.

☐ The controllers have clear information in their privacy notice about individuals’ right to object, which is presented separately from other information on their rights.

☐ The controllers understand when they need to inform individuals of their right to object, in addition to including it in their privacy notice.

Complying with requests which object to processing

☐ The controllers have processes in place to ensure that they respond to an objection without undue delay and within one month of receipt.

☐ The controllers are aware of the circumstances when they can extend the time limit to respond to an objection.

☐ The controllers have appropriate methods in place to erase, suppress or otherwise cease processing personal data.

Checklist: transparency

☐ The controllers inform data subjects about how and for which purposes their data (including both observed and inferred data about them) is used.

☐ The controllers make data subjects aware of how and why an AI-assisted decision about them was made, or where their personal data was used to train and test an AI system.

☐ The controllers undertake analyses that evaluate the effectiveness and accessibility of the information provided to data subjects, which helps to ensure the efficient implementation of this principle.

☐ The controllers facilitate the incorporation of independent certification systems capable of attesting that the processing meets the requirements of the GDPR.

☐ The controllers furnish data subjects with the information they need to protect their interests.

☐ The controllers have favoured the development of more understandable algorithms over less understandable ones.

☐ The controllers have balanced the trade-offs between the explainability, transparency and best performance of the system.

☐ The controllers have tried to find technical solutions to the lack of interpretability (if this is the case).

☐ The controllers have considered the possibility of using independent audits.

Checklist: bias

☐ The controller has established a strategy or a set of procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding the use of input data and for the algorithm design.

☐ The controller assesses and acknowledges the possible limitations stemming from the composition of the used datasets.

☐ The controller has considered the diversity and representativeness of the data used.

☐ The controller has tested for specific populations or problematic use cases.

☐ The controller used the available technical tools to improve the understanding of the data, model and performance.

☐ The controller has put in place processes to test and monitor for potential biases during the development, deployment and use phases of the AI system.

☐ The controller has implemented a mechanism that allows others to flag issues related to bias, discrimination or poor performance of the AI system.

☐ The controller has established clear steps and ways of communicating on how and to whom such issues can be raised.

☐ The controller has considered others, potentially indirectly affected by the AI system, in addition to the (end-)users.

☐ The controller has assessed whether there is any possible decision variability that can occur under the same conditions.

☐ In case of variability, the controller has established a measurement or assessment mechanism of the potential impact of such variability on fundamental rights.

☐ The controller has implemented a quantitative analysis or metrics to measure and test the applied definition of fairness.

☐ The controller has established mechanisms to ensure fairness in the AI systems, and has considered other potential mechanisms.
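The item above on quantitative fairness metrics can be illustrated with one common measure: the demographic parity difference, i.e. the gap in positive-outcome rates between groups. The group data and the 0.1 tolerance below are illustrative assumptions; other definitions of fairness (equalized odds, predictive parity) measure different things.

```python
# Hypothetical sketch of one quantitative fairness metric: the demographic
# parity difference -- the gap in positive-outcome rates between two groups.
# The group data and the 0.1 tolerance are illustrative assumptions.
def positive_rate(outcomes) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_a, outcomes_b) -> float:
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

group_a = [1, 1, 0, 1]   # 75% positive decisions
group_b = [1, 0, 0, 1]   # 50% positive decisions
gap = demographic_parity_gap(group_a, group_b)
assert abs(gap - 0.25) < 1e-9
assert gap > 0.1         # would breach an (assumed) 0.1 tolerance
```

Choosing the metric, and the tolerance, is itself the "applied definition of fairness" the checklist asks the controller to make explicit and test.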

Checklist: is a DPIA necessary?

☐ The controller determined the jurisdictions where data-processing activities will take place.

☐ The controller checked whether those jurisdictions have enacted lists indicating the processing operations that require a DPIA, and verified whether the intended data-processing activities involving AI are covered by those provisions.

☐ If the controller is unsure whether a DPIA is necessary, they consult the DPO or, in its absence, the controller’s legal department.

☐ If necessary, the controller carried out a DPIA.

☐ If necessary, the controller filed a prior consultation with the appropriate supervisory authority.

☐ If changes were suggested, the controller followed the advice of the supervisory authority.

Checklist: processor due diligence

☐ The controllers required information regarding where the data-processing activities will take place, and: (1) carried out the case law review suggested below; and (2) assessed whether the jurisdictions, in the case of non-EU countries, are deemed adequate by the EU Commission.

☐ The controllers reviewed case law from the national supervisory authorities where the processor operates to check for potential sanctions.

☐ The controllers required proof of adherence to a code of conduct or certification.

☐ The controllers required proof of relevant ISO certification.

☐ The controllers required a copy of records of processing activities.

☐ The controllers enquired about the development process of the AI, in particular which kind of data were used for training the AI and the data that the AI needs to operate and deliver a useful result.

Checklist: DPOs

☐ The controllers checked if the institution has already appointed a DPO.

☐ If not, they checked with the legal department if the intended data-processing activities trigger the appointment of a DPO, either by looking at European authoritative interpretations, local regulations, local authoritative interpretations, case law – both local and European – and, finally, academic interpretations.

☐ The controllers required the appointment of a DPO if necessary, and their involvement in the AI development process as needed.

☐ As a general rule, the DPO should be aware of every step taken to allow room for their intervention if deemed relevant.

References


1. ICO (no date) Rights related to automated decision making including profiling. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/rights-related-to-automated-decision-making-including-profiling/ (accessed 15 May 2020).

2. This checklist has been adapted from the one elaborated by the High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 20 May 2020).

3. This checklist has been adapted from the one elaborated by the High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 20 May 2020).

4. This checklist has been adapted from the one elaborated by the High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 20 May 2020).

5. This checklist has been adapted from the one elaborated by the High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 20 May 2020).

6. ICO (no date) Principle (b): purpose limitation. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/purpose-limitation/ (accessed 17 May 2020).

7. ICO (no date) Right of access. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-of-access/ (accessed 28 May 2020).

8. ICO (no date) Right to data portability. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-data-portability/ (accessed 28 May 2020).

9. ICO (no date) Right to rectification. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-rectification/ (accessed 28 May 2020).

10. ICO (no date) Right to erasure. Information Commissioner’s Office, Wilmslow. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-erasure/ (accessed 28 May 2020).
