Who controls the risks of artificial intelligence, especially those of so-called “foundation models” like the one behind ChatGPT? The new European AI law for this technology – revolutionary but also enormously disruptive – which the EU institutions are now negotiating into a definitive text is leaning increasingly towards self-regulation. The latest proposal from Spain, which holds the rotating presidency of the Council of the EU this semester and coordinates the negotiations, calls for “very limited obligations and the introduction of codes of conduct” for companies, although with several layers of intermediate supervision, according to documents to which EL PAÍS has had access. But the tug-of-war continues: the European Parliament demands a somewhat tougher framework, while France, Italy and Germany – three of the bloc’s most powerful members – are pushing for the ground covered by the companies’ own codes of conduct to exceed that of specific regulation; they argue that strict rules will harm innovation in European research and companies. Europe lags behind the United States, which has already adopted rules of its own requiring technology companies to notify the US government of any advance that poses a “serious risk to national security.”
Spain, which will hand over the presidency to Belgium at the end of the month and which has made advancing this landmark law one of its main priorities, is navigating these balances. It has proposed a series of codes of conduct for the foundation models (or GPAI, general-purpose AI – systems capable of creating audio, text or images from patterns learned in other data) that imply greater risk, real or potential – those the regulation calls “foundation models of systemic risk”: that is, models with high-impact capabilities whose results may “not be known or understood at the time of their development and publication, and may therefore cause systemic risks at the EU level.” The codes would include both “internal measures” and an active dialogue with the European Commission to “identify potential systemic risks, develop possible mitigating measures and ensure an adequate level of cybersecurity protection,” the plan says.
The codes of conduct would also include transparency obligations for “all” foundation models, according to the latest negotiating position, which adds other elements, such as requiring companies to report their energy consumption. Certain “horizontal obligations” would also be established for all foundation models. In addition, the new law could include a clause giving the European Commission the power to adopt “secondary legislation” on the “systemic risk” foundation models in order, if necessary, to “further specify the technical elements of the GPAI models and keep benchmarks up to date with technological and market developments.” This would amount to leaving the door open to new regulatory chapters, according to EU sources.
The Spanish proposal also includes the creation of an Artificial Intelligence Supervision Agency, a body that would add a further layer of security by providing a “centralized surveillance and implementation system.” The agency could also satisfy the demands of the European Parliament, which had called for some kind of specialized body.
The proposals to finalize the law will be debated this Wednesday by representatives of the Member States (with Spain as presidency of the Council of the EU), the European Parliament and the Commission, in a decisive meeting. It is one of the last opportunities for the law to move forward. The negotiations are already well “advanced” and there is even agreement on the general architecture of the law, based on a risk pyramid and on the principle, maintained by the Spanish presidency in its latest proposal, that the approach should be “technologically neutral”: that is, not regulating specific technologies but rather their end uses, through the creation of various risk categories, as the European Parliament proposed.
Spain is optimistic. “The European Union would become the first region in the world to legislate the uses of AI, its limits, the protection of citizens’ fundamental rights and participation in its governance, while guaranteeing the competitiveness of our companies,” Spain’s Secretary of State for Digitalization, Carme Artigas, tells EL PAÍS. Artigas believes the EU has a responsibility to go beyond codes of conduct, self-regulation models and good practices for high-risk uses, in order to limit the risks this innovative technology is already showing, from disinformation to discrimination, manipulation, surveillance and deepfakes – all while bearing in mind that innovation and progress must be supported. “The European AI regulation is therefore not just a legal standard, nor just a technical standard. It is a moral standard,” says Artigas.
The problem, however, is that two key points remain open – and will probably remain so until the negotiators meet face to face again on Wednesday afternoon. One is the question of biometric surveillance systems; the other is who controls the most unpredictable foundation models, those labeled “systemic risk.” The debate has been fueled by the latest events in the OpenAI saga – the dismissal and return of Sam Altman at the industry’s leading company, which came after OpenAI researchers reportedly warned the company’s board of a powerful artificial intelligence discovery that, in their view, threatened humanity.
Tension is running high, especially since Germany, France and Italy turned the tables a few weeks ago and came out in favor of broad self-regulation for the companies that develop these systems, through separate codes of conduct that would, admittedly, be mandatory. The three countries have sent the other Member States a position paper defending self-regulation for general-purpose AI and calling for a “balanced pro-innovation approach” based on the risk of AI, one that reduces “unnecessary administrative burdens” for companies which, they say, “would hinder Europe’s ability to innovate.” Furthermore, in the confidential document, to which this newspaper has had access, they advocate “initially” dispensing with sanctions for non-compliance with the transparency-related codes of conduct, favoring dialogue instead.
However, the path charted by this proposal from three of the EU’s major powers – some of which, like France, host technology companies with AI interests, such as Mistral – is a red line for other member states and for many experts, as shown by the open letter sent last week to Paris, Berlin, Rome and Madrid, previewed by EL PAÍS, urging that the law move forward and not be watered down. In other words, they are asking for fewer codes of conduct and more rules.
“Self-regulation is not enough,” says Leonardo Cervera Navas, secretary general of the European Data Protection Supervisor (EDPS), who makes no secret of his wish that the hypothetical future AI Office fall within the EDPS’s remit. That supervisory body, he suggests, could serve as a hinge between those who prefer self-regulation and those who demand obligations set down in black and white in a law, since it would allow a high degree of self-regulation, ultimately overseen by a higher body independent of corporate interests. For the expert, the ideal is a “flexible regulatory approach, not excessively dogmatic, agile, but combined with strong supervision,” which is what this office would provide.
This is also the position of the European Parliament’s negotiators, who insist that the law must be comprehensive enough to guarantee citizens’ security and fundamental rights in the face of technologies whose intrusive potential is sometimes still unimaginable. “The Council must abandon the idea of having only voluntary commitments agreed with the developers of the most powerful models. We want clear obligations in the text,” stresses Italian MEP Brando Benifei by telephone, one of the European Parliament’s negotiators in the inter-institutional talks (the so-called trilogues, which produce the final legal text).
Among the obligations that European legislators consider “crucial,” and which should be set out in the law, are data governance, cybersecurity measures and energy efficiency standards. “We are not going to close a deal at any cost,” warns Benifei.
What seems closer to resolution is the issue – very important for the European Parliament – of prohibiting or restricting as far as possible what it calls the “intrusive and discriminatory uses of AI,” especially real-time biometric systems in public spaces, save for a few very specific exceptions on security grounds. The MEPs’ position is much stricter than that of the Member States and, although the negotiations have been “difficult,” there is cautious optimism about the possibility of finding a middle ground – provided, the European Parliament stresses, that the bans on predictive policing, biometric surveillance in public places and emotion recognition systems in workplaces and schools are maintained. “We need a sufficient degree of protection of fundamental rights, with the necessary prohibitions when using [these technologies] for security and surveillance,” Benifei sums up.