The European Commission has published the first draft of the Code of Practice for General-Purpose Artificial Intelligence (AI). The document sets out a series of requirements, standards and mechanisms to ensure that companies such as OpenAI, Meta and Google comply with the obligations established in the AI Act in force in the bloc.
The executive body explains that the first version of this compliance instrument was developed by independent experts, in collaboration with the European AI Office and drawing on information provided by the main providers of general-purpose AI (GPAI) models.
The proposal will be reviewed by almost 1,000 stakeholders in the coming days. The regulatory body hopes to gather comments and opinions to refine the code, with the final document due before May 1 of next year. “The final version will establish objectives, measures and, where relevant, key performance indicators (KPIs). It will play a crucial role in guiding the development and future implementation of safe and reliable GPAI. It must detail rules related to copyright and define systemic risks, as well as evaluation and mitigation measures,” the Commission states.
What does the new European Union AI code propose?
The 36-page draft addresses four key aspects: transparency, copyright compliance, risk identification, and the mitigation of technical and governance risks. It emphasizes that the corporations behind large general-purpose models will need to report the web crawlers, information sources and databases they have used to train, test and validate their algorithms.
Companies will also have to disclose the terms of the licensing agreements that allow them to access protected content, and will be required to implement a single point of contact to handle complaints and comments from intellectual property owners.
The code lists various types of threats that organizations subject to the regulations agree to consider and address as systemic risks. These include cybercrime, large-scale discrimination, persuasion and manipulation, and the misuse of AI to create chemical, biological and nuclear weapons.
The creators of GPAI will commit to adopting a Safety and Security Framework (SSF). The policy aims to ensure safety and accountability in AI development. Providers will need to detail their risk management and mitigation protocols, covering technical aspects, data management, failure detection and evaluation of the effectiveness of their products.
The draft requires technology entities working on AI to deliver periodic reports on their internal governance guidelines, carry out continuous analysis of possible threats to their systems, and bring in external evaluators when necessary.
The code is not mandatory. Firms designing GPAI can adopt alternative measures as long as they can justify their effectiveness under the European AI Act. Even so, organizations such as OpenAI, Google, Meta, Anthropic and Mistral are expected to follow the new European Union guidelines.
The AI Act came into force in August with the aim of regulating the production, marketing and use of AI systems. It sets out criteria that authorize or prohibit the use and sale of these systems based on the “potential they have to cause harm to society.” Public and private actors, both local and international, operating within the bloc are subject to its provisions. Entities that fail to comply will face fines of up to 35 million euros or the equivalent of 7% of their annual worldwide turnover.