Have a good knowledge of the legal framework regarding profiling

Profiling is defined in Art. 4(4) GDPR as "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements".

In principle, profiling can bring users important benefits, since it can increase the efficiency of a system, save resources or help provide a better service. For instance, profiling by a smart TV could help us discover series that match our preferences without spending a lot of time searching for them on our own. However, it is also clear that profiling can serve more opaque, discriminatory purposes that pose "significant risks for individuals' rights and freedoms" and can "perpetuate existing stereotypes and social segregation" absent appropriate safeguards.

Particular caution is warranted when controllers start combining data. Linking different types of personal data can reveal sensitive information about individuals; sometimes these processes even end up handling special categories of personal data without anyone noticing. For instance, combining non-special categories of data such as preferences, location and social media connections may make it possible to infer, with a high degree of accuracy, an individual's sexual orientation or religious beliefs. This may happen without the individual's awareness. Needless to say, such consequences should be carefully avoided.
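To make this risk concrete, consider the following deliberately naive, hypothetical sketch; the datasets, field names and inference rule are invented for illustration, but they show how two innocuous-looking sources can combine into a special-category inference.

```python
# Hypothetical illustration only: field names, datasets and the inference
# rule are invented. The point is that two datasets that look harmless in
# isolation can, once combined, yield special-category (Art. 9) data.

# Dataset A: declared media preferences (non-special category on its face)
preferences = {"user42": ["cooking shows", "gospel music"]}

# Dataset B: coarse location history (also non-special on its face)
locations = {"user42": ["shopping mall", "place of worship", "gym"]}

def infer_religious_affiliation(user_id: str) -> bool:
    """Naive rule: flag a user as probably religious when independent
    signals from the two merged datasets point the same way."""
    signals = 0
    if any("gospel" in p for p in preferences.get(user_id, [])):
        signals += 1
    if "place of worship" in locations.get(user_id, []):
        signals += 1
    # Two weak, individually innocuous signals combine into a sensitive
    # inference: at this point the controller is processing Art. 9 data.
    return signals >= 2

print(infer_religious_affiliation("user42"))  # True
```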

In order to address these issues, AI developers should have a good knowledge of Art. 22 GDPR. Art. 22(1) states that the data subject has "the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her". On this basis, there are some important considerations to be made.

  • First, the prohibition on "fully automated decision-making" applies only when the decision based on such technology "has a legal effect on or similarly significantly affects someone." The A29WP guidelines, endorsed by the EDPB, cite as examples: the automatic rejection of a candidate in a selection process, the denial of credit or insurance, or the application of significantly different prices to different people. The decision does not need to affect a large number of people. In other situations, it may not be clear to the controller whether a fully automated decision-making process creates significant effects. In these cases, we recommend seeking advice from the DPO.
  • Second, one must keep in mind that there is no "decision based solely on automated processing" if the decision is reviewed by a human being who "takes account of other factors in making the final decision". If a human being merely ratifies what a tool states, this does not count as human intervention. The human reviewer must have sufficient authority to change the tool's recommendation. Factors to assess this include how often the human deviates from the automated recommendation (see the sketch after this list), whether the person has the autonomy within the organization to make the decision, and which factors the person takes into account that are not included in the automated model.
  • Furthermore, Art. 22(2) introduces some exceptions to this general prohibition of profiling and automated decision-making. Indeed, the Guidelines published by the A29WP declare that the prohibition does not apply if the profiling or automated decision-making is: necessary for entering into, or performance of, a contract between the data subject and a data controller; authorized by Union or Member State law to which the controller is subject and which also lays down suitable measures to protect the data subject's rights, freedoms and legitimate interests (technical and organizational measures, safeguards, etc.); or based on the data subject's explicit consent. That is, in these three circumstances, fully automated decision-making processes are allowed. For instance, a service consisting of offering personalized media content based on the user's declared preferences and consumption data will necessarily require profiling. If this "necessity" can be demonstrated, fully automated profiling is permissible.
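The deviation-rate factor mentioned above is the easiest of the three to measure. Below is a minimal sketch, assuming the organization keeps a log of each automated recommendation together with the human reviewer's final call; the record fields are illustrative assumptions, not requirements from the GDPR or the guidelines.

```python
# A minimal sketch, assuming a log of cases pairing the tool's
# recommendation with the human reviewer's final decision. The field
# names are illustrative; no value here is a legal standard.

from dataclasses import dataclass

@dataclass
class DecisionRecord:
    recommendation: str   # output of the automated tool
    final_decision: str   # what the human reviewer actually decided

def override_rate(log: list) -> float:
    """Share of cases in which the reviewer deviated from the tool."""
    if not log:
        return 0.0
    overrides = sum(1 for r in log if r.final_decision != r.recommendation)
    return overrides / len(log)

log = [
    DecisionRecord("reject", "reject"),
    DecisionRecord("reject", "approve"),   # the human overrode the tool
    DecisionRecord("approve", "approve"),
]

# A rate near zero over many cases suggests the reviewer merely
# rubber-stamps the tool, which would not count as meaningful
# human intervention under Art. 22.
print(f"override rate: {override_rate(log):.0%}")
```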
Box 6: Profiling negative effects and inferential analytics

Possible ways of monitoring and profiling that can lead to privacy and discrimination issues in IoT systems:

  • profiling through data inference, whether based on data primarily provided by the data subject or on other sources of data (e.g. Internet browsing behavior);
  • profiling through linking IoT datasets;
  • profiling that occurs when data are shared with third parties that combine them with other datasets (e.g. product suppliers, technical support).

Therefore, profiling or automated decision-making can be acceptable if any of these circumstances applies, provided that the processing does not contravene data protection regulation in any other way. However, even in these cases, additional safeguards must be put in place. IoT developers should be especially careful if they are dealing with special categories of data inferred by their systems. Art. 22(4) limits the use of special categories of data for fully automated decision-making or profiling: in this case, Art. 22 must be applied in accordance with Art. 9 GDPR. Specifically, when dealing with special categories of data, fully automated decisions are only allowed with the explicit consent of the data subject (Art. 9(2)(a)) or for reasons of substantial public interest, based on existing legislation (Art. 9(2)(g)).

For instance, suppose a controller wishes to build citizens' profiles to estimate their chances of contracting a virus and thereby prevent a pandemic. For that purpose, different sources of data would be combined, including health data. This could be considered necessary for the protection of a substantial public interest. In addition, there would need to be EU or national legislation allowing for this processing. If all these conditions are met, Art. 9(2)(g) applies, and the controller may fall under the Art. 22(4) exception allowing this fully automated profiling.
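To make the interplay between Art. 22(1), 22(2) and 22(4) easier to follow, here is a schematic, non-authoritative sketch encoding the rule structure described above; the parameter names and boolean simplifications are reading aids of our own, not a substitute for case-by-case legal assessment.

```python
# Schematic encoding of the Art. 22 logic described in the text.
# This is a reading aid, not legal advice: each boolean stands in for a
# question that in practice requires careful legal analysis.

from enum import Enum
from typing import Optional

class Art22Exception(Enum):
    CONTRACT_NECESSITY = "Art. 22(2)(a)"
    UNION_OR_MEMBER_STATE_LAW = "Art. 22(2)(b)"
    EXPLICIT_CONSENT = "Art. 22(2)(c)"

def fully_automated_decision_allowed(
    significant_effects: bool,
    exception: Optional[Art22Exception],
    uses_special_categories: bool,
    art_9_2_a_explicit_consent: bool = False,
    art_9_2_g_public_interest_law: bool = False,
) -> bool:
    """Mirrors the structure of Art. 22(1), (2) and (4) explained above."""
    if not significant_effects:
        return True   # Art. 22(1) is not triggered at all
    if exception is None:
        return False  # the general prohibition stands
    if uses_special_categories:
        # Art. 22(4): only explicit consent (Art. 9(2)(a)) or a substantial
        # public interest laid down in law (Art. 9(2)(g)) lifts the bar
        return art_9_2_a_explicit_consent or art_9_2_g_public_interest_law
    return True       # one of the Art. 22(2) exceptions applies

# The pandemic-profiling example from the text:
print(fully_automated_decision_allowed(
    significant_effects=True,
    exception=Art22Exception.UNION_OR_MEMBER_STATE_LAW,
    uses_special_categories=True,
    art_9_2_g_public_interest_law=True,
))  # True
```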

In any case, all other obligations and guarantees still need to be fulfilled: information obligations, carrying out a DPIA where required, etc. Furthermore, Art. 22(3) contains additional guarantees that must be observed when taking certain fully automated decisions, such as the right to obtain human intervention, to express the data subject's point of view and to contest the decision. Data subjects should be made aware of all this information, the corresponding rights, and the actions they can take.

Box 7: Inferring data. Example

"Company X has developed an application that, by analyzing raw data from electrocardiogram signals generated by commercial sensors commonly available to consumers, is able to detect drug addiction patterns. The application engine can extract specific features from the ECG raw data that, according to previous research results, are linked to drug consumption. The product, compatible with most of the sensors on the market, could be used as a standalone application or through a web interface requiring the upload of the data. Most likely, explicit consent will be the most suitable lawful basis for this processing."

Source: Art. 29 Data Protection Working Party, Opinion 8/2014 on Recent Developments on the Internet of Things (16 September 2014). Available at: https://www.dataprotection.ro/servlet/ViewDocument?id=1088.
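For readers wondering what "extracting specific features from ECG raw data" might look like in code, the following is a minimal, hypothetical sketch; the chosen features (RR intervals and their standard deviation) and any link to drug consumption are assumptions for illustration, not the actual engine described in the Opinion.

```python
# Hypothetical sketch of signal-level feature extraction from ECG data.
# The features below are common heart-rate-variability measures; their
# relevance to the Box 7 use case is assumed for illustration only.

import statistics

def rr_intervals(r_peak_times):
    """Inter-beat (RR) intervals in seconds from detected R-peak times."""
    return [t2 - t1 for t1, t2 in zip(r_peak_times, r_peak_times[1:])]

def extract_features(r_peak_times):
    rr = rr_intervals(r_peak_times)
    return {
        "mean_hr_bpm": 60 / statistics.mean(rr),  # average heart rate
        "sdnn_s": statistics.stdev(rr),           # RR-interval variability
    }

# A few seconds of simulated R-peak timestamps (in seconds)
peaks = [0.0, 0.82, 1.61, 2.45, 3.20, 4.05, 4.88]
print(extract_features(peaks))
```

Note that even these seemingly neutral physiological features become health data, and hence special-category data, once they are used to infer addiction patterns.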

As mentioned before, the process of profiling is “often invisible to the data subject. It works by creating derived or inferred data about individuals. Individuals have differing levels of comprehension and may find it challenging to understand the complex techniques involved in profiling and automated decision-making processes.”[1]
References


[1] Article 29 Working Party (2017) Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679. Adopted on 3 October 2017, as last revised and adopted on 6 February 2018. European Commission, Brussels, p. 9. Available at: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053.

 
