Consumers should be protected from flawed algorithmic choices

MEPs adopted a resolution on safeguards concerning AI and automated decision-making processes

Photo: European Parliament / Petra de Sutter.

With the rapid spread of AI and automated decision-making in the internal market, the questions now outnumber the answers. Alongside the undeniable advantages that new technologies offer, there are growing concerns about the risk of consumers being misled or discriminated against.

For instance, algorithms may adapt the price of goods or services to a consumer's estimated purchasing power. And there are real risks in many other areas as well.

A week before the Commission plans to table its White Paper on AI, MEPs, in a resolution adopted on Wednesday, called for a robust set of rights to protect consumers vis-à-vis artificial intelligence and automated decision-making.

Focusing especially on consumer protection, the resolution addresses several challenges arising from the rapid development of artificial intelligence and automated decision-making technologies.

Lawmakers call for clear information on personalised pricing, the review and correction of automated decisions, transparent algorithms, and the use of high-quality, non-discriminatory data sets.

To build trust and ensure safety in consumers' choices, buyers interacting with automated systems must know, for example, whether they are talking to a chatbot or to a human behind the system.

MEPs stressed that where automated decision-making is used, consumers should be properly informed about how it functions, how to reach a human with decision-making powers, and how the system's decisions can be checked and corrected.

They insisted that such systems must only use high-quality and unbiased data sets, and explainable and unbiased algorithms. The resolution calls for the setting up of review structures to remedy possible mistakes in automated decisions. Consumers should also be able to seek redress for automated decisions that are final and permanent.

Furthermore, MEPs warned that humans must always be ultimately responsible for, and able to overrule, decisions taken in the context of professional services such as medicine, legal affairs and accounting, and in the banking sector.

MEPs also demanded a risk-assessment scheme for artificial intelligence and automated decision-making, along with a common EU approach to help secure the benefits of these technologies and mitigate their risks across the EU.

They asked the Commission to propose adapting the EU's product safety rules so that consumers are informed about how to use these products and are protected from harm, while manufacturers are clear on their obligations.

The Product Liability Directive, adopted over 35 years ago, should also be revised to adapt concepts such as 'product', 'damage' and 'defect', among others.

According to Belgian Green MEP Petra de Sutter, chair of the IMCO committee and rapporteur on the file, AI can be a technology that helps people in the future, provided it is transparent, supervised by humans, complies with the law and is regulated under a risk-based approach. She expressed regret that MEPs from the Socialist, Renew and EPP groups had blocked the inclusion of sustainability and climate protection goals in the AI strategy, and argued that the Commission should include the assessment of environmental impact and energy consumption in its strategy.

MEPs recalled that under EU law, sellers must inform consumers when the price of goods or services has been personalised on the basis of automated decision-making and profiling of consumer behaviour, and they asked the Commission to closely monitor how these rules are enforced.
