Legal issues

Bias is one of the main issues raised by the use of ICT devices and systems, as it contravenes the fairness principle. In the case of devices based on location and/or proximity data, biases may arise from at least two different situations:

  • Biases created by an AI system interacting with the location devices or systems. Location devices or systems sometimes incorporate or interact with AI tools (see the part of these Guidelines devoted to AI). If this is the case, developers should take special care not to introduce biases into the functioning of the location device or system. For this purpose, they must adopt the measures described in the part of these Guidelines devoted to AI systems.
  • Biases created by the data gathered. This type of bias is particularly likely when the ICT tool is meant to provide information based on data gathered from an entire population. It should be kept in mind that, depending on the origin of the aggregated data, its degree of social representativeness may well be inaccurate. Indeed, as locative media research has shown, context and marginalization matter with location data.[1] This may create problems of inequity, as some social groups (especially those who do not use the devices or lack the specific capabilities needed to produce the data) are underrepresented in the analysis and subsequent decision-making.[2] This could leave out entire populations, misrepresent others, and lead to a deployment of resources that is not only biased and unjust (tilted toward the richest neighborhoods, for example) but also ineffective from a public policy standpoint.[3] Misrepresentation can likewise introduce biases into public order and police interventions, producing results that are prejudicial to low-income communities, for instance. Developers of location devices or systems should make an effort to avoid this type of bias, either by providing devices to those who would otherwise be marginalized or by integrating complementary information that corrects the error; a minimal sketch of such a representativeness check follows this list. If the bias cannot be avoided, they should document it, so that those who make decisions on the basis of the developed mechanism are aware of it.
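To make the complementary check mentioned above concrete, the following Python sketch compares each district's share of the collected location data with its share of the census population and flags districts that fall well below parity. Everything in it is a hypothetical assumption rather than part of these Guidelines: the function name underrepresented_districts, the example districts, and the 0.5 ratio threshold (which is a policy choice, not a technical constant).

# Sketch only: flag districts whose share of the collected location data
# is far below their share of the population. All names and thresholds
# here are hypothetical assumptions.
def underrepresented_districts(device_counts, census_population, ratio_threshold=0.5):
    """Return districts whose data share is below ratio_threshold times
    their population share."""
    total_data = sum(device_counts.values())     # all data points gathered
    total_pop = sum(census_population.values())  # all residents
    flagged = []
    for district, pop in census_population.items():
        data_share = device_counts.get(district, 0) / total_data
        pop_share = pop / total_pop
        if data_share < ratio_threshold * pop_share:
            flagged.append(district)
    return flagged

# Example: district C holds a quarter of the population but contributes
# almost no location data, so it is flagged as underrepresented.
counts = {"A": 6000, "B": 3500, "C": 500}
population = {"A": 40000, "B": 35000, "C": 25000}
print(underrepresented_districts(counts, population))  # ['C']

A flagged district would then be handled as described above: by distributing devices, by integrating complementary data, or, failing that, by documenting the bias for decision-makers.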
Checklist: Address biases [4]

  • The controller has put in place ways to measure whether the tool is making an unacceptable number of biased predictions (a minimal measurement sketch follows this checklist).
  • The controller has put in place a series of steps to increase the tool's accuracy.
  • The controller has put in place measures to assess whether there is a need for additional data, for example to eliminate biases.
  • The controller has verified what harm would be caused if the tool made biased predictions.
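The first checklist item presupposes a concrete way of measuring biased predictions. One minimal measure (an assumption of this sketch, not a metric prescribed by these Guidelines or by the High-Level Expert Group) is the demographic parity difference: the gap between the highest and lowest positive-prediction rates across groups. The function name positive_rate_gap and the example data below are hypothetical; what gap counts as "unacceptable" remains a policy decision for the controller.

from collections import defaultdict

# Sketch only: demographic parity difference across groups.
def positive_rate_gap(predictions, groups):
    """Largest gap between the positive-prediction rates of any two groups."""
    positives = defaultdict(int)  # positive predictions per group
    totals = defaultdict(int)     # all predictions per group
    for pred, group in zip(predictions, groups):
        positives[group] += pred  # pred is 1 for a positive prediction, else 0
        totals[group] += 1
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Example: the "low_income" group receives positive predictions (e.g. is
# flagged for intervention) four times out of five, the "high_income" group
# once out of five, giving a gap of 0.6.
preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["low_income"] * 5 + ["high_income"] * 5
print(positive_rate_gap(preds, groups))  # 0.6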


References


1. Graham, M., Zook, M. (2013). Augmented realities and uneven geographies: Exploring the geolinguistic contours of the web. Environment and Planning A, 45, 77–99.

2. Frith, J., Saker, M. (2020). It Is All About Location: Smartphones and Tracking the Spread of COVID-19. Social Media + Society, July 2020. doi:10.1177/2056305120948257

3. Stanley, J., Granick, J. S. (2020). The Limits of Location Tracking in an Epidemic. ACLU Whitepaper, April 8, 2020. Available at: https://www.aclu.org/report/aclu-white-paper-limits-location-tracking-epidemic?redirect=aclu-white-paper-limits-location-tracking-epidemic

4. This checklist has been adapted from the one elaborated by the High-Level Expert Group on Artificial Intelligence (2019). Ethics guidelines for trustworthy AI. European Commission, Brussels. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 20 May 2020).
