Second scenario: AI for Crime Prediction and Prevention

Johann Čas (ITA/OEAW)

This part of The Guidelines has been reviewed and validated by Marko Sijan, Senior Advisor Specialist (HR DPA)

Introduction and preliminary remarks

Advanced ICTs, as essential technologies for all economic, governmental and societal activities, play an increasingly important role in predicting, preventing, investigating and prosecuting criminal or terrorist activities. Accordingly, research to develop and improve the technical capabilities of law enforcement agencies (LEAs) forms a priority area of past, current and future EC funding programmes. Advanced and emerging ICTs possess unprecedented powers of surveillance and of analysis of large and diverse datasets, particularly in connection with AI[1] technologies. Research in such technologies, as well as the implementation of advanced ICTs in the context of security, raises serious concerns of ethical and legal compliance. EU-funded security research programmes explicitly demand full compliance with the provisions of the Charter of Fundamental Rights of the European Union,[2] the consideration of privacy by design, data protection by design, privacy by default and data protection by default,[3] and, in addition to the Ethics Self-Assessment Table,[4] the completion of a Societal Impact Table: “A ‘Societal Impact Table’ is a specific feature of this work program part. This table emphasizes on societal aspects of security research. It checks whether the proposed security research meets the needs of and benefits society and does not negatively impact society. Applicants must fill in the ‘Societal Impact Table’ as part of the submission process.”[5]

Similar procedures should also be implemented at the level of designing the work programmes. Additional safeguards should ensure that programmes do not contain calls that are difficult or impossible to fulfil without raising severe ethics issues or causing disproportionate infringements of human rights. This could be realized by the mandatory involvement of civil society representatives and of ethics and legal experts in the expert groups drafting EU-funded research programmes.

These precautions are essential to bringing security research in line with principles such as human rights and democracy; nevertheless, concerns remain that they may increase the legitimacy of security research projects without guaranteeing ethical and legal compliance in practice.[6] The use of AI in the context of crime prediction or prevention poses severe threats to civil liberties. A simple trade-off between security and freedom is neither appropriate nor sufficient; the complex relationship between the two should instead be treated as a kind of hostile symbiosis,[7] implying that each is necessary for the survival of the other.

To take these concerns into due consideration, this scenario also incorporates information from existing H2020 security research calls, particularly the H2020-SEC-2016-2017 call, and from currently running or recently concluded projects. MAGNETO[8] (Multimedia Analysis and Correlation Engine for Organised Crime Prevention and Investigation), CONNEXIONs[9] (InterCONnected NEXt-Generation Immersive IoT Platform of Crime and Terrorism DetectiON, PredictiON, InvestigatiON, and PreventiON Services) and RED-Alert[10] (Real-time Early Detection and Alert System for Online Terrorist Content based on Natural Language Processing, Social Network Analysis, Artificial Intelligence and Complex Event Processing) are examples of projects of relevance for this case study. They are financed by the 2016-2017 call Technologies for prevention, investigation, and mitigation in the context of the fight against crime and terrorism.[11] The original plan to take one of these projects as a concrete basis for this scenario was abandoned, as most, if not all, deliverables of the mentioned projects are classified in accordance with H2020 regulations[12] and not publicly accessible. Whereas the classification of specific results of security research projects may be necessary and understandable, it certainly also limits the possibility of public scrutiny and debate of these technologies, which should be mandatory in view of the potential infringements of human rights and European values.

The complexity of this use case is further increased by the fact that different regulations apply to the research and development phase on the one hand and to the implementation and use phase on the other. Research activities are subject to the GDPR; future applications of the research results are subject to the so-called Data Protection Law Enforcement Directive (Directive 2016/680),[13] which allows for specific implementation and legislation in individual Member States.

The development of AI for security objectives demands particularly careful and strict consideration of, and compliance with, ethics requirements in general (i.e. the already mentioned Horizon 2020 Programme Guidance – How to complete your ethics self-assessment), with key documents related to AI (e.g. the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on AI[14] and the EU Commission White Paper on Artificial Intelligence – A European approach to excellence and trust[15]), and with additional, security-specific considerations and documents, such as those addressed in the Societal Impact Table, the EGE Opinion No. 28 – Ethics of Security and Surveillance Technologies[16] or relevant documents published by the European Data Protection Supervisor (EDPS).[17] The Proposal for an Artificial Intelligence Act specifically addresses the use of AI technologies for the purpose of law enforcement and “…lays down a solid risk methodology to define “high-risk” AI systems that pose significant risks to the health and safety or fundamental rights of persons. Those AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before those systems can be placed on the Union market.”[18] Annex III lists a number of AI uses for law enforcement as high-risk AI systems for which conformity assessment procedures are mandatory.

The following step-by-step analysis follows the structure and terminology of the CRISP-DM model,[19] as outlined in the description below. In order to increase the comparability of approaches and results, this structure is applied uniformly to all case studies presented and discussed as part of the MLEs (Mutual Learning Encounters) conducted by the PANELFIT project. The adoption of a common structure implies that the individual terms must not be understood literally. ‘Business understanding’ might, for instance, signify developing a holistic view of the project objectives and of the means and steps to attain them, in cases where the planned project does not (primarily) have commercial intentions. It also implies that some of the steps or tasks included in the common framework are not applicable, or less relevant, in the different contexts of the case studies. For instance, the first of the four main tasks comprising this general objective, i.e. the determination of the business objectives, is characterized by little freedom of choice if the goals are already defined and described in a call for research proposals, as is the case here. This should not, however, be taken to mean that freedom of choice does not exist at all or should not be considered, but rather that the options available to project applicants are limited in comparison with those available when deciding on the topics of research calls.
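For orientation, the following minimal sketch (in Python) enumerates the six phases of the CRISP-DM model and pairs each with an illustrative task for a crime-prediction research project. The phase names follow the model itself; the example tasks and all identifiers (such as Phase and CRISP_DM_PHASES) are our own illustrative assumptions and are not drawn from the cited projects or calls.

```python
# Minimal sketch of the six CRISP-DM phases (Shearer, 2000), each paired
# with a purely illustrative task for a crime-prediction research project.
# The example tasks and all identifiers are assumptions for illustration,
# not part of any cited project or call.
from dataclasses import dataclass


@dataclass(frozen=True)
class Phase:
    name: str           # CRISP-DM phase name
    example_task: str   # hypothetical task in this scenario


CRISP_DM_PHASES = [
    Phase("Business understanding",
          "Derive project objectives from the research call (no commercial intent)."),
    Phase("Data understanding",
          "Survey available crime-related datasets and their legal basis."),
    Phase("Data preparation",
          "Pseudonymise, clean and document the data used for training."),
    Phase("Modelling",
          "Train and tune candidate prediction models."),
    Phase("Evaluation",
          "Assess accuracy as well as ethical and legal compliance (e.g. bias)."),
    Phase("Deployment",
          "Hand results over to LEAs, governed by Directive (EU) 2016/680."),
]

for number, phase in enumerate(CRISP_DM_PHASES, start=1):
    print(f"{number}. {phase.name}: {phase.example_task}")
```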

During the discussion of the draft version with external experts, we also received recommendations going beyond this specific scenario, e.g. developing ethics curricula and integrating them as mandatory elements of technical degree programmes, or offering data protection and ethics training for engineers. Corresponding training programmes should also be offered to police forces deploying AI, as a general awareness-raising activity.

[1] AI is a (too) frequently used term lacking a unique definition. Here we refer to the broad definition of AI developed by the High-Level Expert Group on AI: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.

As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”

References


[1] https://digital-strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines

[2] http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2010:083:0389:0403:en:PDF

[3] For details see EDPB (2019). Guidelines 4/2019 on Article 25 Data Protection by Design and by Default, Version 2.0. Adopted on 20 October 2020. <https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_201904_dataprotection_by_design_and_by_default_v2.0_en.pdf>

[4] https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/ethics/h2020_hi_ethics-self-assess_en.pdf

[5] See p. 5 of https://ec.europa.eu/research/participants/data/ref/h2020/wp/2018-2020/main/h2020-wp1820-security_en.pdf

[6] Leese, M., Lidén, K. and Nikolova, B. (2019). Putting critique to work: Ethics in EU security research. Security Dialogue 50(1), 59-76. <https://journals.sagepub.com/doi/abs/10.1177/0967010618809554>

[7] Wittes, B. (2011). Against a Crude Balance: Platform Security and the Hostile Symbiosis Between Liberty and Security. Project on Law and Security, Harvard Law School and Brookings. <https://www.brookings.edu/wp-content/uploads/2016/06/0921_platform_security_wittes.pdf>

[8] http://www.magneto-h2020.eu/

[9] https://www.connexions-project.eu/

[10] https://redalertproject.eu/

[11] https://cordis.europa.eu/programme/id/H2020_SEC-12-FCT-2016-2017

[12] http://ec.europa.eu/research/participants/data/ref/h2020/other/hi/secur/h2020-hi-guide-classif_en.pdf

[13] European Parliament and the Council (2016). Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA. Official Journal. <http://eur-lex.europa.eu/legal-content/EL/TXT/?uri=OJ:L:2016:119:TOC>

[14] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

[15] https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

[16] European Group on Ethics in Science and New Technologies (2014). Opinion No. 28: Ethics of Security and Surveillance Technologies (10.2796/22379). Luxembourg: Publications Office. <https://publications.europa.eu/en/publication-detail/-/publication/6f1b3ce0-2810-4926-b185-54fc3225c969/language-en/format-PDF/source-77404258>

[17] https://edps.europa.eu/data-protection/our-work/subjects_en

[18] European Commission (2021). COM(2021) 206 final. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, p. 3. <https://ec.europa.eu/newsroom/dae/redirection/document/75788>

[19] Shearer, C. (2000). The CRISP-DM Model: The New Blueprint for Data Mining. Journal of Data Warehousing, 5(4), p. 14.
