“A historic day for European policy-making”

MEP Josianne Cutajar (S&D) talks with The Journal on the European Parliament’s approval of the world’s first comprehensive legislation regulating Artificial Intelligence.

With 523 votes in favour, 46 against and 49 abstentions, MEPs yesterday approved a bill to ensure that AI in Europe is safe, respects fundamental rights and democracy, and allows businesses to thrive and expand. The vote took place in Strasbourg during the penultimate plenary session before the June elections.

The Artificial Intelligence Act will establish obligations for artificial intelligence based on its potential risks and level of impact. The new European legislation comprises:

– Safeguards on general-purpose artificial intelligence

– Limits on the use of biometric identification systems by law enforcement

– Bans on social scoring and AI used to manipulate or exploit user vulnerabilities

– The right of consumers to launch complaints and receive meaningful explanations

– Fines ranging from €7.5 million or 1.5% of turnover up to €35 million or 7% of global turnover, depending on the infringement

“More work in the coming mandate”

Outgoing Maltese Labour MEP Josianne Cutajar (S&D) was the rapporteur of the European Parliament’s Committee on Transport and Tourism (TRAN) on this dossier. Communicating her initial reaction to The Journal after the plenary vote, she commented that “by ensuring an ethical and human-centric approach to Artificial Intelligence that secures our fundamental rights and values while leaving vital breathing space for research and innovation, the adoption of the AI Act is a historic day for European policymaking.” She welcomed how the regulation will contribute to the safety and privacy of transport users, amongst other vital sectors such as health.

Josianne Cutajar also emphasised the need for an AI Act that works for SMEs. She therefore welcomed the legislation’s provision giving SMEs priority access to regulatory sandboxes, free of charge or at a proportionate cost, as well as the regulation’s aim to minimise their administrative burden and compliance costs.

MEP Josianne Cutajar (S&D). Photo: Philippe BUISSIN/European Union

The MEP stated that “while this is an occasion for celebration, much more legislative work on AI will be necessary in the coming mandate, be it on civil liability or AI in the workplace. I remain hopeful that the spirit of the AI Act, and its appeal for clear rules, will permeate future policymaking in the field.”

“People and European values at the very centre”

During a plenary debate on the bill held on Tuesday, the Internal Market Committee (IMCO) co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.

MEP Brando Benifei (S&D). Photo: Mathieu CUGNOT/European Union

“The EU has delivered”

Civil Liberties Committee (LIBE) co-rapporteur Dragoș Tudorache (Renew, Romania) said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.

MEP Dragoș Tudorache (Renew). Photo: Mathieu CUGNOT/European Union

An overview of the landmark law

Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents. Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

What’s next?

While awaiting final legal and linguistic reviews, the regulation is on track for adoption before the current legislature ends (via the so-called corrigendum procedure). Formal Council approval is also required.

Taking effect 20 days after its publication in the EU’s Official Journal, the legislation will be fully enforceable 24 months later, with exceptions for bans on prohibited practices (six months after entry into force), codes of practice (nine months), general-purpose AI rules including governance (12 months), and obligations for high-risk systems (36 months).

Main photo: Alexander / Adobe Stock
