Ethical principles

This first requirement for the development of AI encompasses three main principles:[1]

  • Fundamental rights. AI systems can enable or hamper fundamental rights. Where such rights may be affected, a fundamental rights impact assessment should be undertaken before the AI solution is developed.
  • Human agency. Users should be able to make informed, autonomous decisions about whether and how to use an AI system. AI systems should support individuals in making better, more informed choices in accordance with their goals, and user autonomy must be central to the system’s functionality. For example, data subjects must be made aware that their data could be used for profiling, where this might happen, and their right not to be subject to a decision based solely on automated processing, when it produces legal or similarly significant effects, must be respected. This protection, however, generally applies to commercial purposes; it does not apply in the same way to law enforcement agencies, which process personal data on a legal basis and may use AI to fight crime effectively and to fulfil obligations stipulated by law.
  • Human oversight. Human oversight helps to ensure that an AI system does not undermine human autonomy or cause other adverse effects. Such oversight may be achieved through diverse governance mechanisms. All other things being equal, the less oversight a human can exercise over an AI system, the more extensive the testing and the stricter the governance that are required.

 

References


[1] Ibid., p. 15.

 
