Deployment


“Deployment is the process of getting an IT system to be operational in its environment, including installation, configuration, running, testing, and making necessary changes. Deployment is usually not done by the developers of a system but by the IT team of the customer. Nevertheless, even if this is the case, developers will have a responsibility to supply the customer with sufficient information for successful deployment of the model. This will normally include a (generic) deployment plan, with necessary steps for successful deployment and how to perform them, and a (generic) monitoring and maintenance plan for maintenance of the system, and for monitoring the deployment and correct usage of data mining results.”[1]

Main actions that need to be addressed

General remarks

Once you have created your algorithm, you face an important issue: the model might incorporate personal data, either openly or in a hidden way. You must perform a formal evaluation assessing which personal data from the data subjects could be identifiable. This can be complicated at times. For example, some AI solutions, such as Support Vector Machines (SVMs), contain examples of the training data by design within the logic of the model. In other cases, patterns that identify a unique individual may be found in the model. In all of these cases, unauthorized parties may be able to recover elements of the training data, or infer who was in it, by analyzing the way the model behaves. If you know or suspect that the AI tool contains personal data (see the Purchasing or promoting access to a database section in the Actions and tools chapter), you should:
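The SVM point above can be made concrete: a trained support vector machine stores verbatim copies of some training records (its "support vectors"), so distributing the model can amount to distributing personal data. The sketch below, assuming scikit-learn and entirely fictitious toy records, shows that every support vector is an exact row of the training set.

```python
# Sketch (illustrative only): a trained SVM embeds literal training rows.
import numpy as np
from sklearn.svm import SVC

# Fictitious "personal" records: [age, income]; labels are arbitrary.
X = np.array([[25.0, 30000.0],
              [40.0, 52000.0],
              [33.0, 41000.0],
              [58.0, 76000.0]])
y = np.array([0, 1, 0, 1])

model = SVC(kernel="linear").fit(X, y)

# support_vectors_ holds exact copies of some of the training records,
# so anyone who receives the fitted model also receives those records.
print(model.support_vectors_)
assert all(any(np.array_equal(sv, row) for row in X)
           for sv in model.support_vectors_)
```

This is why a formal evaluation of what the deployed artifact actually contains, not just what it was intended to contain, is needed before handing the model to a third party.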

Finally, you must take regular, proactive action to evaluate the likelihood of personal data being inferred from models in light of state-of-the-art technology, so that the risk of accidental disclosure is minimized. If these evaluations reveal a substantial possibility of data disclosure, you must implement the necessary measures to avoid it (see the "Integrity and confidentiality" section in the "Principles" chapter).
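One crude, illustrative probe for such inference risk (a simplified stand-in for a full membership-inference audit, not a standard procedure) is to compare a model's average confidence on its training records with its confidence on unseen records: a large gap suggests an attacker could tell who was in the training set. The dataset, model, and split below are assumptions for the sketch.

```python
# Illustrative membership-inference risk probe, assuming scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real records.
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Mean top-class confidence on seen vs. unseen records.
conf_train = model.predict_proba(X_train).max(axis=1).mean()
conf_test = model.predict_proba(X_test).max(axis=1).mean()

print(f"mean confidence on training data: {conf_train:.2f}")
print(f"mean confidence on unseen data:   {conf_test:.2f}")
print(f"gap (crude leakage signal):       {conf_train - conf_test:.2f}")
```

A persistent, large gap would be one trigger for the "necessary measures" mentioned above, such as regularization, differential privacy, or restricting access to raw model outputs.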

Updating information

If the algorithm is implemented by a third party, you must communicate the results of the validation and monitoring procedures employed at the development stages and offer your collaboration to continue monitoring the validity of the results. It would also be advisable to establish this kind of coordination with third parties from whom you acquire databases or any other relevant component in the life cycle of the system. If this involves data processing by a third party, you must ensure that access rests on a valid legal basis.

It is necessary to offer real-time information to the end-user about the accuracy and/or quality of the inferred information at each stage (see the "Accuracy" section in the "Principles" chapter). When the inferred information does not reach minimum quality thresholds, you must highlight that this information has no value. This requirement often implies that you must provide detailed information about the training and validation stages. Information about the datasets used for those purposes is particularly important. Otherwise, the use of the solution might bring disappointing results to the end-users, who are left speculating on the cause.
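The threshold requirement can be sketched in a few lines. The threshold value, function name, and message wording below are assumptions for illustration; in practice the threshold would come from the validation stage.

```python
# Sketch: surface model confidence to the end-user and flag results
# below the agreed minimum quality threshold as having no value.
MIN_CONFIDENCE = 0.75  # assumed threshold, fixed at the validation stage

def present_result(label: str, confidence: float) -> str:
    """Format an inference for the end-user, flagging low-quality output."""
    if confidence < MIN_CONFIDENCE:
        return (f"{label} (confidence {confidence:.0%}): BELOW QUALITY "
                "THRESHOLD, this result has no value and must not be used")
    return f"{label} (confidence {confidence:.0%})"

print(present_result("high-risk area", 0.92))  # shown normally
print(present_result("high-risk area", 0.40))  # flagged as having no value
```

The point is that the flag travels with the result itself, so the end-user never sees a low-quality inference presented as a usable one.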

You must also ensure that any real-world implementation complies with the Data Protection Law Enforcement Directive (Directive (EU) 2016/680)[2] and its specific implementation in individual Member States. Be aware that this usually means less restrictive rules on the use of personal data for law enforcement agencies (LEAs). In criminal justice, the provision of evidence is often a burdensome activity, so there is a natural tendency to collect and process as much data as could eventually prove useful. This tendency is reinforced by the growing technical possibilities of analyzing huge amounts of data automatically with AI tools. However, data minimization is necessary, and effective countermeasures against extensive data collection and processing must therefore be integrated into the design of AI tools from the outset.

Compliance with human rights and ethical principles requires the fulfilment of further essential conditions:

“Regarding surveillance technologies, the burden of proof should lie with states and/or companies, who have to demonstrate publicly and transparently, before introducing surveillance options,

– that they are necessary

– that they are effective

– that they respect proportionality (e.g. purpose limitation)

– that there are no better alternatives that could replace these surveillance technologies

These criteria must then also be subjected to post factum assessment, either on the level of normal political analysis, or through Member States policies to do so.”[3]




[1] SHERPA, Guidelines for the Ethical Development of AI and Big Data Systems: An Ethics by Design Approach, 2020, p. 13. Accessed 15 May 2020.

[2] European Parliament and the Council, Directive (EU) 2016/680 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, Official Journal <>.

[3] European Group on Ethics in Science and New Technologies, Opinion No. 28: Ethics of Security and Surveillance Technologies, 2014 (10.2796/22379). Luxembourg.

