General exposition

AI: Requirements for developers and innovators

Iñigo de Miguel Beriain[1] (UPV/EHU), Felix Schaber[2] (OEAW), Gianclaudio Malgieri and Andres Chomczyk Penedo[3] (VUB)

Acknowledgements: The authors thankfully acknowledge José Antonio Castillo Parrilla, Eduard Fosch Villaronga and Lorena Perez Campillo for their kind support in writing this document. Needless to say, all mistakes are our full responsibility.

This part of the Guidelines has been reviewed and validated by Marko Sijan, Senior Advisor Specialist (HR DPA).

Introduction part A

The first part of this chapter is built around the seven ethical requirements included in the recommendations published by the High-Level Expert Group on AI[4] in their ‘Ethics guidelines for trustworthy AI’.[5] These recommendations were recently analyzed by the SHERPA project,[6] which included an extensive analysis of the ethical issues involved in developing adequate tools to face these challenges. In light of this commendable effort, it would be redundant to include an extensive analysis here of the same topic (AI) from the same perspective (ethics). Instead, what we have tried to do in this document is provide a complementary analysis. These Guidelines aim to find the overlap between the ethical recommendations made by the High-Level Expert Group on AI and the legal framework created by the General Data Protection Regulation (GDPR) on data protection issues.

Before starting the analysis, however, some preliminary notes are necessary. First, this report focuses mainly on AI developers: organizations willing to develop AI tools. These organizations become controllers as soon as they start processing personal data. In a similar vein, the terms ‘tool’, ‘solution’, ‘model’ and ‘development’ should be considered synonymous in the context of this analysis.

Second, this part of the Guidelines can only be understood in the context of the whole tool (the Guidelines). There are several concepts that are not explored in this document, because they are addressed in other sections of The Guidelines; we have referred to these wherever needed (references are highlighted in yellow). In the future, all sections will be available on a website, making the Guidelines much more user friendly.

The different sections in this part of the document contain only what we consider to be strictly necessary to understand the core arguments of the ethical and legal issues at stake. We have included checklists that should help controllers to determine whether they are addressing the issues accurately, and a further reading section for readers to consult if necessary. Footnotes provide further references to the most important statements.

Finally, this document has been structured so that it is easy to understand. As previously mentioned, it is based upon the seven requirements described by the High-Level Expert Group on AI. We start each section with a brief description of the core ethical issues at stake, and then summarize the main ethical challenges related to them. These serve as the common basis on which the subsequent legal analysis is built, providing its context.



[1] Author of the whole document except sections 2 and 7 of this part.

[2] Author of section 2 of this part.

[3] Authors of section 7 of this part.

[4] The High-Level Expert Group on AI was created by the European Commission in 2018. Its general objective is to support the implementation of the European Strategy on Artificial Intelligence. This includes the elaboration of recommendations on future policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges. Available at: (accessed 15 May 2020).

[5] High-Level Expert Group on AI (2019) Ethics guidelines for trustworthy AI, p. 15 ff. Brussels: European Commission. Available at: (accessed 15 May 2020).

[6] SHERPA (2019) Guidelines for the ethical use of AI and big data systems. SHERPA.
