Avoid biases

Biases create prejudice and discrimination against certain groups or people. Harm can also result from the intentional exploitation of (consumer) biases, or from engaging in unfair competition, such as homogenizing prices through collusion or a non-transparent market. IoT systems can exacerbate these problems in two main ways: by incorporating AI tools that are themselves biased, or by building biased datasets through inadequate collection of the data produced by data subjects. If such data then fuels profiling or automated decision-making, the social consequences can be unacceptable.

Thus, IoT developers should take certain actions to avoid unfair biases caused by the use of AI. On the one hand, they should only incorporate into their systems devices or AI tools that can demonstrate an absence of bias. Tools such as ethical algorithmic auditing should be implemented to flag up discrimination. Internal auditing schemes should be considered to guard against discrimination of protected groups, but also to protect victims of unanticipated discrimination (a minimal auditing sketch follows below).[1] On the other hand, the IoT system should be designed so that automated decision-making based on the gathered data avoids biases. This can be done using the tools generally employed in AI for this purpose (see the section “Fairness” in Part IV on AI of these Guidelines).
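
By way of illustration, the following is a minimal sketch of such an internal audit in Python. It compares a model's positive-outcome rate across protected groups (a demographic parity check, one common but crude fairness signal). All names here (audit_demographic_parity, the "group" and "approved" fields, the 0.1 threshold) are illustrative assumptions, not part of any specific library or of these Guidelines.

```python
# Minimal internal-audit sketch: flag groups whose positive-outcome
# rate deviates from the overall rate by more than a set threshold
# (demographic parity difference). Illustrative only.

from collections import defaultdict

def audit_demographic_parity(records, group_key, outcome_key, threshold=0.1):
    """Return the overall positive rate and the groups whose rate
    deviates from it by more than `threshold`."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])

    overall_rate = sum(positives.values()) / sum(totals.values())
    flagged = {}
    for group, n in totals.items():
        rate = positives[group] / n
        if abs(rate - overall_rate) > threshold:
            flagged[group] = rate
    return overall_rate, flagged

# Hypothetical decisions produced by an IoT-driven scoring tool.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
]
overall, flagged = audit_demographic_parity(decisions, "group", "approved")
print(f"overall positive rate: {overall:.2f}, flagged groups: {flagged}")
```

In practice, such a check would be one component of a broader auditing scheme (covering training data, model behavior, and downstream decisions), run periodically rather than once.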

It is important to mention that IoT might also cause serious biases due to inaccuracies arising from the way these systems work. For instance, data subjects might provide incorrect data, or might not fully understand the consequences of their behavior being constantly monitored. This may breach the accuracy principle in situations that could have been prevented or resolved had the controller taken them into account. Similarly, data processing can lead to unexpected biases because potential relationships between data categories, revealed only through the aggregation and linkage of disparate datasets, may not be known at the time of data collection. If the system uses such data for profiling, inaccuracies might, for instance, lead to biased recommendations. To avoid such scenarios, a critical assessment of the provenance of the data is required. To this end, organizational measures should be implemented to guarantee the accuracy and reliability of the gathered data, while still ultimately deferring to the right of users to withhold private information (e.g. when asked to confirm whether a record is accurate). A sketch of such a check follows below.
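
As one illustration of such measures, the sketch below (in Python, with hypothetical field names and thresholds) applies simple provenance and plausibility checks before IoT readings are used for profiling. Records that fail are quarantined for review rather than silently used, and a user's choice to withhold confirmation is respected.

```python
# Provenance/plausibility check sketch for IoT readings. All field
# names, sources, and ranges are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    device_id: str
    value: float
    source: str                           # e.g. "sensor", "user_input"
    user_confirmed: Optional[bool] = None  # None: user withheld the answer

def validate(reading, trusted_sources=("sensor",), value_range=(0.0, 100.0)):
    """Return a list of issues; an empty list means the reading may be used."""
    issues = []
    if reading.source not in trusted_sources and reading.user_confirmed is not True:
        # Untrusted data is only used if the user confirmed it; a None
        # confirmation (user declined to answer) is respected as-is.
        issues.append("unverified source")
    lo, hi = value_range
    if not (lo <= reading.value <= hi):
        issues.append("value out of plausible range")
    return issues

readings = [
    Reading("dev-1", 42.0, "sensor"),
    Reading("dev-2", 250.0, "sensor"),      # implausible value
    Reading("dev-3", 37.5, "user_input"),   # user withheld confirmation
]
usable = [r for r in readings if not validate(r)]
quarantined = [(r.device_id, validate(r)) for r in readings if validate(r)]
print("usable:", [r.device_id for r in usable])
print("quarantined:", quarantined)
```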
 

References


[1] Wachter, Sandra, ‘Normative challenges of identification in the Internet of Things: Privacy, profiling, discrimination, and the GDPR’, Computer Law & Security Review, Volume 34, Issue 3, 2018, pp. 436-449.
