On August 1, 2024, the Artificial Intelligence Regulation (RIA), also known as the European Artificial Intelligence Act, came into force. This set of rules is the first general regulation on the matter worldwide, and its objective is to guarantee that AI systems developed and used in Europe are safe, transparent and respectful of citizens' fundamental rights. The Regulation mainly affects companies that develop AI systems and place them on the market, but also those that use AI tools for activities that are not purely personal.
The implementation of this law is gradual and is based on the level of risk posed by each artificial intelligence system; it is expected to be fully applicable in 2027. In the meantime, this year brings four important milestones: bans on certain AI systems and AI literacy requirements will begin to apply on February 2; May 2 is the deadline for the codes of practice to be ready; and on August 2, the rules on notified bodies, general-purpose AI (GPAI) models, governance, confidentiality and penalties will apply.
"On the one hand, the Regulation applies to providers that place on the market or put into service AI systems, or that market general-purpose models; that is, companies that sell or offer AI systems or models," explains Erika Mahler, from the Intellectual Property and New Technologies department of Auris Abogados. "That could include companies like OpenAI (the creators of ChatGPT) or Google (which, among other things, uses AI-based search algorithms). Likewise, it applies to companies that deploy AI systems within the European Union. This could include any type of business, from banks using AI to optimize their risk-analysis processes to stores of any kind using AI to manage their inventories. It also applies to providers and deployers outside the EU when the output produced by the AI system is used in the Union."
Although the Artificial Intelligence Regulation is a multi-sector standard covering both the private and public sectors, some sectors are particularly affected because the Regulation directly classifies certain of their AI systems as high risk: the financial and insurance sector, critical infrastructure, health and education, among others. "In fact, the European Union has launched a public consultation on the application of the RIA to the financial sector, given the particularities and specific obligations established for it. This shows that there is still much to determine and specify regarding the content and general obligations of the RIA, as well as the role that current regulators and other supervisory and control authorities will play at the sector level," explains Noemí Brito, partner responsible for the IT, IP and 'Legal Operations' area at KPMG Abogados.
The Regulation affects not only companies that sell or use AI in their services or products; it also applies to importers and distributors of AI systems. This would include, for example, any company that imports an AI system and markets it in the Union.
Likewise, it affects companies that manufacture products incorporating an AI system, such as a car manufacturer that builds in AI-based assistants. The Regulation also establishes that it covers "affected persons who are located in the Union." This includes both users of digital platforms, such as any Instagram user, and consumers of any of the aforementioned providers, distributors or deployers.
"From our experience at Auris, we have observed that Spanish companies are going through a complex process of adaptation to the new AI regulation. The transition is not proving easy: although artificial intelligence has become an established tool in almost all companies, its use is not always accompanied by the necessary knowledge of the legal obligations it entails. We see that many companies are increasingly using AI, both to carry out their daily operations and to offer new services to their customers. In many cases, they are unwittingly using a high-risk AI system, or even a prohibited one," explains Erika Mahler.
On February 2, the bans on certain AI systems begin to apply. "It is urgent that companies start adapting to the Regulation so that they are not penalized when the measures become applicable. Many companies use AI tools in their daily operations without realizing that the new Regulation is already in force and that from 2025 its restrictions, and consequently its sanctions, will begin to apply. Companies will need time to bring all their uses of AI into compliance with the new rules, and lack of preparation is going to be a serious problem," says Mahler.
"In the most serious cases of non-compliance, for example failure to respect the prohibition of certain artificial intelligence practices, administrative fines of up to 35 million euros will apply or, if the offender is a company, up to 7% of its total worldwide annual turnover for the previous financial year, whichever is higher," indicates Noemí Brito. "But beyond purely pecuniary sanctions, it is also important to consider the reputational risk and the indirect economic impact resulting from a loss of confidence among stakeholders in the market (users, clients, investors, etc.). In fact, the degree of compliance with the RIA is already beginning to be analyzed in the legal and financial due diligence carried out in investment and acquisition processes."
Stepping up controls
Hence, many organizations are already deploying risk-control systems for their AI systems and developing internal supervision and oversight functions, as well as metrics and indicators to measure the degree of compliance over time. Third-party risk control is also being strengthened, for example for providers of artificial intelligence systems or solutions. "We must bear in mind that, on average, more than 60% of the artificial intelligence solutions used in organizations come from third parties," explains Noemí Brito.
Therefore, apart from the role of supervisory and control authorities, such as AESIA (the Spanish Agency for the Supervision of Artificial Intelligence), it is highly advisable to consider the guidelines and recommendations of other sectoral supervisors, especially in certain sectors of activity such as financial services.