The European Union is on the verge of becoming the first region in the world to adopt a comprehensive law regulating Artificial Intelligence (AI). But after more than 18 hours of marathon negotiations, not every point has been settled, although there is already agreement on one of the thorniest: how to regulate the foundation models on which systems like ChatGPT are built and which, although considered fundamental to the evolution of the technology, also raise serious doubts because of their disruptive potential.
"The negotiations continue," sources close to the talks agreed in saying. The press conferences scheduled to present the results early in the morning have been postponed for now, a sign, nonetheless, of the will to stretch the deadlines in order to emerge with an agreement on the law in hand.
According to these sources, the main obstacle preventing the negotiators (representatives of the member states and the European Parliament, together with the European Commission) from announcing white smoke after a meeting that began behind closed doors on Wednesday at 3:00 p.m. is another of the major sticking points throughout the law's passage: the regulation of real-time biometric surveillance in public spaces through systems such as facial recognition. This is one of the European Parliament's red lines, given its concern that states could abuse these technologies and violate citizens' fundamental rights.
The European Parliament's negotiating mandate called for prohibiting or restricting as far as possible the "intrusive and discriminatory uses of AI," especially real-time biometric systems in public spaces, with very few exceptions on security grounds. The member states want to broaden those exceptions, and that is where the two sides were bound to clash, as MEP Brando Benifei, a participant in the three-way negotiations (the so-called trilogues), had warned the day before, with a clear red line.
The MEPs' position is much stricter than that of the member states and, although the negotiations have been "difficult," there is optimism, cautious, of course, about the possibility of finding middle ground. Provided, the European Parliament stresses, that the ban on predictive policing, biometric surveillance in public places and emotion-recognition systems in workplaces and schools is maintained. "We need a sufficient degree of protection of fundamental rights, with the necessary prohibitions when using [these technologies] for security and surveillance," Benifei summarizes.
"Governments want a long list of exceptions to the application that we are not going to accept," the Italian MEP told journalists hours before shutting himself into the discussions, which he entered with the mandate to maintain those bans.
Benifei said he was willing to seek a "compromise" on the matter, for instance in "specific cases" of police surveillance, but stressed that this would require very robust "safeguards" and oversight that, in any case, cannot be exercised by the states themselves. "We are not going to allow governments to judge for themselves whether they respect the law; this is very, very important for us (…) and we will never accept a deviation without serious oversight," he said.
Participants in the negotiations, such as the Internal Market Commissioner, Thierry Breton, posted messages on social media during the early hours showing an active negotiation in a session described as an "ultramarathon." According to the sources consulted, the discussions have already cleared the other major obstacle to a provisional agreement on the law (which must still be ratified by the Council of the EU and the European Parliament before it can enter into force, at the earliest at the end of 2026): the regulation of foundation models, especially the most powerful ones.
More than the "what," the tug-of-war revolved around the "how," sources close to the negotiation explained on its eve. In recent weeks, countries such as Germany, France and Italy had opposed setting obligations in the law itself, as the European Parliament sought, advocating instead greater self-regulation of developers through mandatory codes of conduct. Their argument was not to hamper innovation and competitiveness in a sector in which Europe does not want to fall behind its great rivals, the United States and China.
But MEPs, concerned about these technologies' capacity to affect citizens' fundamental rights, had drawn red lines and warned they would walk out of the negotiations (which would have severely delayed the entire law, expected to be fully applicable at the end of 2026) if sufficient safeguards were not put in place. That applies especially to the most powerful foundation models, those deemed a "systemic risk" because they have high-impact capabilities whose results may "not be known or understood at the time of their development and publication, so they can cause systemic risks at the level of the EU," according to the definition accepted by all parties.
The crux, Benifei also explained on the eve of the talks, was how to "guarantee," for these more powerful models, "that what the developers of these models do is mandatory and that compliance can be demanded." The European Parliament, he noted, wanted a text clear enough to ensure that "there is no way to escape these obligations," even if they were included in a code of conduct as the member states requested; in any case, "it must not be a de facto voluntary commitment, but one that can be enforced."