Joanna Goodey: AI can impact virtually every fundamental right

Biased data can lead to algorithms deciding that people do not get a loan just because they live in a “bad neighbourhood”

Joanna Goodey. Photo: FRA

When people are looking for a job, they might not receive a particular job advertisement - because, according to the algorithm, people with particular search engine histories, or people from a particular neighbourhood, or people of that gender, do not usually get certain jobs, says Joanna Goodey, Head of the Research and Data Unit at the EU Fundamental Rights Agency, in an interview with EUROPOST.

Ms Goodey, last week the FRA presented an in-depth report, 'Getting the future right - Artificial intelligence and fundamental rights', based on research among companies and organisations that already use automation on a daily basis. What striking cases did you come across while working on it?

From the perspective of the Fundamental Rights Agency, a striking finding is the lack of awareness of the impact that AI can have on people's rights. Most organisations are aware of the implications AI can have for privacy and data protection, and some are also aware that an AI system might discriminate. But very few are considering the impact that AI can have on many other rights - such as access to justice, freedom of expression, freedom of assembly, or good administration - to name just a few.

Another striking finding - based on our interviews with different sectors - is that knowledge about the range of rights that AI can affect is less extensive in the private sector than in the public sector.

What is the red light that indicates human rights may be at stake when consumers are interacting with AI without knowing it?

The issue is that people might not even know that their rights are at stake. For example, when they're looking for a job, they might not receive a particular job advertisement - because, according to the algorithm, people with particular search engine histories, or people from a particular neighbourhood, or people of that gender, do not usually get certain jobs. So, often on the basis of indirect or proxy indicators - such as inferring ethnicity from certain web searches - the algorithm will not show the advert to some people.

Therefore, it is crucial that AI is assessed before it is used. Algorithms need to be assessed for the quality of the data they use, whether it is processed legally, and whether it contains biases. People need to know when AI is used and organisations need to be able to explain how their systems work, so people can complain if they think their rights have been infringed.
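To make this concrete, here is a minimal sketch in Python of one basic check an organisation might run: comparing how often an automated system produces a favourable outcome, such as showing a job advert, for different groups. It is not taken from the FRA report; the audit log, the group labels and the 0.8 threshold (borrowed from the US 'four-fifths' rule of thumb) are illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Rate of favourable outcomes (e.g. 'advert shown') per group."""
    favourable, total = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        total[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / total[g] for g in total}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's.

    A ratio well below 1.0 (often below 0.8, the US 'four-fifths'
    rule of thumb) is a red flag that warrants closer investigation.
    """
    rates = selection_rates(decisions)
    return {g: rates[g] / rates[reference_group] for g in rates}

# Hypothetical audit log: (group, 1 = advert shown, 0 = withheld).
log = [("men", 1)] * 80 + [("men", 0)] * 20 + \
      [("women", 1)] * 55 + [("women", 0)] * 45

print(disparate_impact(log, reference_group="men"))
# {'men': 1.0, 'women': 0.6875} -> below 0.8, so investigate
```

A check like this only flags a disparity; explaining and remedying it still requires knowing how the system works, which is exactly why the report stresses explainability.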

A few days ago, European Commission President Ursula von der Leyen said that algorithms can be a danger to our democracy, but they do not have to be, and we have the power to protect ourselves. What are the challenges, and how vulnerable are we to these smart systems?

AI is made by humans, fed by humans and driven by humans. It can be a powerful force for good, if we get it right. And we have the power to get it right. If we take a rights-based approach to AI, we can ensure that AI delivers the much-lauded benefits it promises without harming our rights.

What kind of rights can be jeopardised by AI?

AI can have an impact on virtually every fundamental right - from the right to human dignity and access to justice, through to data protection or freedom of expression. This is because AI is potentially used in all areas of our lives. The main issue is that neither people nor organisations are fully aware of the impact AI can have on people's rights. They might not know that the AI technologies they are using negatively affect people's rights or lead to discrimination. That's why it is important to assess the impact of AI before it is deployed and to continue to assess it while it's used.

How, for example, could AI discriminate or impede justice?

AI algorithms are made by humans. They are only as good as the data and instructions we give them. If we feed our algorithms biased data, we will get biased results. However, there are also other ways in which the use of algorithms can lead to discrimination.

There are examples of algorithms used in recruitment processes that were found to generally prefer men over women. We know that facial recognition technologies can detect gender well for white men, but not for black women. Ultimately, biased data can lead to algorithms deciding that people do not get a loan just because they live in a “bad neighbourhood”. Surveillance systems can single people out for police stops because of the colour of their skin - based on data entries that repeat established discriminatory practices.
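The proxy mechanism described here can be illustrated with a short sketch. The Python code below is not from the report: it builds an entirely synthetic population (the 90% postcode correlation, the income figures and the biased historical approvals are all assumptions for illustration) and shows that a loan model trained without the protected attribute can still reproduce historical bias through the postcode proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: a protected attribute the model never
# sees, and a postcode that is 90% aligned with it.
protected = rng.integers(0, 2, n)
postcode = np.where(rng.random(n) < 0.9, protected, 1 - protected)
income = rng.normal(50, 10, n)

# Historical loan decisions that were themselves biased: group 0
# effectively received a bonus that group 1 did not.
past_approved = (income + 8 * (1 - protected) + rng.normal(0, 5, n)) > 52

# Train WITHOUT the protected attribute - only income and postcode.
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, past_approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {pred[protected == g].mean():.2f}")
# Group 1's approval rate comes out markedly lower, even though
# 'protected' was never a feature: the postcode carries the bias.
```

Dropping the protected attribute is therefore not enough; assessing a system for bias means testing its outcomes across groups.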

Similarly, AI can have an impact on access to justice. If an administration decides, based on the output of an algorithm that may rely on incorrect or biased data, that a person will not receive social benefits, that person needs to be able to seek a remedy.

That's why people need to be aware of when AI is used and informed about how and where to complain about the outcomes it produces. This also means that organisations need to be able to explain in simple terms how their algorithms arrive at particular outcomes.

In the report, FRA emphasises that people need to know when AI is used and how it is used, as well as how and where to complain. Who should be responsible for this to happen?

The EU could make it mandatory for organisations using AI to provide information about that use, explain how their systems arrive at decisions and tell people where they can complain if a problem occurs. This would help achieve equality of arms for individuals seeking justice. It would also improve effective external monitoring of AI systems.

What are the other recommendations that the agency makes in its report?

In the report, we highlight the need to ensure that AI respects all fundamental rights, not only data protection and privacy. We call for fundamental rights impact assessments before and during the deployment of AI systems and stress the need to assess if AI discriminates. We argue that more guidance on how the current data protection rules apply to AI is needed.

We think that the EU should guarantee that people can challenge decisions taken by AI. Finally, there is a need for an effective oversight system to ensure that AI respects people's fundamental rights. This also involves upskilling existing oversight bodies in the area of fundamental rights and AI, such as data protection authorities or equality bodies - which need to have the technical knowledge and skillset to understand and challenge, from a comprehensive rights perspective, the development and deployment of AI.

How can we achieve protective and responsible AI?

By taking a rights-based approach to AI, we can ensure that new technologies respect people's rights. This is particularly important as new technologies are often adopted without detailed knowledge of their consequences, and in the absence of clear legal standards. A robust evidence base should underpin any policy and legislative actions.

What is on FRA's radar concerning the pandemic and the harm it causes to fundamental rights? Which groups in society are hardest hit, and do they receive adequate support across Europe?

Since the outbreak of the coronavirus pandemic, FRA has been publishing regular bulletins on how the measures EU Member States have taken to curb the spread of the virus affect people's fundamental rights. These bulletins show that the pandemic has hit some groups harder than others - for example, people living in institutions, older people, women, migrants, or Roma and Travellers. We see that EU countries have learned from the first wave of the pandemic and are trying to minimise the impact of new restrictions on people's rights. But it is crucial that they look out for vulnerable members of our societies and give them a voice when looking for solutions.

Covid-19 is the worst crisis of our lifetime, but restricting activities, stopping and even suffocating businesses, and sacrificing democracy - isn't that too high a price to pay?

The coronavirus pandemic has led to restrictions on our freedoms that most of us have never experienced before. We clearly need strong public health responses to protect life during the pandemic. But these responses must also ensure that any limitations on people's fundamental rights last only as long as necessary, and that they protect already vulnerable people who may face even greater risks from Covid-19.

Some advisers are suggesting that vaccination in Europe should be obligatory and that people who have not been vaccinated should not be able to travel freely in the EU. What is your comment on this?

FRA has not analysed any questions in relation to this topic, and cannot comment on it.

Close-up

Joanna Goodey is Head of the Research and Data Unit at the Vienna-based European Union Agency for Fundamental Rights (FRA). Her areas of expertise with respect to FRA's work include victims of crime, hate crime, trafficking in human beings, and quantitative and qualitative research methodologies, including surveys. From the mid-1990s she held lectureships in criminology and criminal justice, first in the Law Faculty at the University of Sheffield and subsequently at the University of Leeds. She was a research fellow for two years at the UN Office on Drugs and Crime and has been a consultant to the UN International Narcotics Control Board. She was also a regular study fellow at the Max Planck Institute for Foreign and International Criminal Law in Freiburg. She studied criminology as well as human geography, and is the author of the academic textbook 'Victims and Victimology: Research, Policy and Practice' (2005) and co-editor, together with A. Crawford, of 'Integrating a Victim Perspective within Criminal Justice: International Perspectives' (2000). To date, she has published over thirty academic journal articles and book chapters.
