Commission: There is no room for mass surveillance in our society

The EC presented the first-ever legal framework for secure, trustworthy and human-centered Artificial Intelligence

Photo: EU. Margrethe Vestager, on the left, and Thierry Breton.

After consulting broadly on different aspects with developers and deployers, companies, public bodies, academia and civil society, the Commission tabled on Wednesday a package on Artificial Intelligence (AI). It contains the first-ever legal framework on the use of AI and a new coordinated plan with Member States to ensure the safety and fundamental rights of people and businesses, instilling trust in this technology.

The legal framework does not regulate the technology itself, but how and for what it is used, taking a proportionate, risk-based approach.

In addition, the proposed new Machinery Regulation will ensure the safe integration of AI systems into overall machinery and will help to increase users' trust in the next generation of products.

The EU executive wants to make Europe a world leader in the development of secure, trustworthy and human-centered Artificial Intelligence.

Stressing that on Artificial Intelligence trust is a must, Margrethe Vestager, EC Executive Vice-President for a Europe Fit for the Digital Age, said while presenting the package that by setting the standards with “future-proof and innovation-friendly” rules, the EU can pave the way for ethical technology worldwide.

“Our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake,” EVP Vestager outlined. As she specified, “the higher the risk that a specific AI may cause to our lives, the stricter the rules are”.

The Commission classifies the use of AI into four different categories. Without any restrictions on top of the legal rules that already exist to protect consumers, the framework allows free use of applications that represent minimal or no risk at all. These are, for example, filters that recognise and block spam messages, or systems that minimise waste and optimise the use of resources in a factory.

Limited-risk uses of AI, such as a chatbot that helps to book a ticket, are also allowed but subject to transparency obligations: it should be clear to users that they are interacting with a machine.

The main focus of the framework is the “high-risk” uses of AI, because “they interfere with important aspects of our lives”, as EVP Vestager explained.

In this category are AI systems that, for example, filter through candidates' CVs for education and job applications, assess whether someone is creditworthy enough to get a mortgage from a bank, or software used in self-driving cars or medical devices, which might bring new risks to safety and health. These AI systems will be subject to a new set of five strict obligations because they could potentially have a huge impact on people.

Considered unacceptable, and therefore prohibited, are AI systems that use subliminal techniques to cause physical or psychological harm to someone, such as a toy that uses voice assistance to manipulate a child into doing something dangerous.

"Such uses have no place in Europe. We therefore propose to ban them. The same prohibition applies to AI applications that go against our fundamental values, for instance, a social scoring system that would rank people based on their social behaviour," the EVP stressed.

A citizen who violates traffic rules or pays rent late would get a poor social score, which would then influence how authorities interact with them or how banks treat their credit requests, she said.

National authorities will be responsible for assessing whether AI systems meet their obligations. Sanctions will apply in cases of persistent non-compliance. An AI provider that does not comply with the prohibition of an artificial intelligence practice can be fined up to 6% of its annual global turnover.

On remote biometric identification, the Commission says it fits in both the high-risk and the prohibited categories. Biometric identification can be used for many purposes, some of which are not problematic, such as those used at border controls by customs authorities.

Remote biometric identification, where many people are screened simultaneously, is treated in the Commission’s proposal as highly risky from a fundamental rights point of view and is subject to even stricter rules than other high-risk use cases.

“But there is one situation where that may not be enough. That's when remote biometric identification is used in real-time by law enforcement authorities in public places. There is no room for mass surveillance in our society. That's why in our proposal, the use of biometric identification in public places is prohibited by principle,” EVP Vestager said.

The EU executive suggests narrow exceptions that are strictly defined, limited and regulated, such as extreme cases when police authorities use it in the search for a missing child.

Commissioner for Internal Market Thierry Breton commented that AI is a means, not an end. It has been around for decades but has reached new capacities, fueled by computing power.

According to him, this offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism and cybersecurity, but it also presents a number of risks.

Today's proposals aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use, the Commissioner underlined.
