EU moves closer to first AI Act
In an important vote on 11 May, two European Parliament committees called for a ban on the use of biometric surveillance, predictive policing, social scoring, and other harmful uses of AI systems. The text adopted by the Civil Liberties and Internal Market and Consumer Protection committees has yet to be endorsed by the full Parliament in June, but it indicates that Members of the European Parliament (MEPs) will take a strong position on human rights in future negotiations with the European Commission and the Council of member states.
When the Commission proposed the AI Act in 2021, a prohibition on facial recognition was notably missing. Civil society organisations launched a pressure campaign (#ReclaimYourFace) demanding a blanket ban on biometric identification technologies in the AI Act. The Parliament’s largest political group, the centre-right Christian Democrats, opposed a ban but conceded last week once it became clear the group lacked a majority. Advocates for human rights, digital rights and consumer protection have welcomed the outcome.
Despite this big win, there are serious shortcomings in the text. MEPs endorsed the Commission’s overarching risk-based framework, which relies heavily on a self-regulation model. There are four risk categories for AI technologies, each with corresponding governance requirements — ‘unacceptable risk’ (prohibited), ‘high-risk’ (conformity assessments), ‘limited risk’ (transparency obligations), and ‘low or minimal risk’ (no obligations). For high-risk systems, companies will carry out conformity assessments internally, without external oversight. And while the Commission’s proposal imposed obligations on ‘providers’ of AI systems, lobbying by Google successfully introduced a new class of ‘deployers’ with fewer obligations, allowing the company to evade responsibilities it might otherwise bear when other actors use its technologies.
The Commission’s 2021 proposal entirely omitted general purpose AI (GPAI) — AI systems that can be used in a range of applications beyond those for which they were designed — from its scope, meaning that large language models such as ChatGPT would not be regulated at all. The treatment of GPAI was a major source of debate among MEPs in the lead-up to the vote. A compromise was reached whereby MEPs distinguish ‘foundation models’ — namely large language models and large-scale generative AI systems — from GPAI. Researchers have criticised the distinction, arguing that the ‘foundation model’ category is a Stanford ‘PR term’, and that the move may allow AI systems to avoid regulation. Still, the committees’ text introduces new obligations on large language models and generative AI systems that were absent from the Commission text, including requirements on data governance, copyright law compliance, and safety checks.
The text could yet be amended by MEPs in the full plenary vote of the Parliament next month. Once the Parliament adopts its final position in June, the text will enter inter-institutional negotiations with the Commission and Council, conducted behind closed doors with no public scrutiny, where the Parliament will have to fight to retain the progressive content added by MEPs.
As the AI Act is the first attempt internationally to introduce a horizontal regulatory framework for AI, we will be watching the outcome closely.
Emma Clancy — UTS FASS PhD candidate