Google, Meta and OpenAI have announced, in recent days, various measures to make it easier to identify images or files that have been produced or retouched with artificial intelligence (AI). These initiatives respond to the fear that the unchecked spread of false content could sway election results or have other unintended consequences. The three companies had already pledged to try to prevent the misuse of AI in the 2024 electoral processes.
After years of investing billions of euros in improving the capabilities of generative artificial intelligence, the three platforms have joined the Coalition for Content Provenance and Authenticity (C2PA), which proposes a standard certificate and brings together a good part of the digital industry, from media outlets such as the BBC and The New York Times to banks and camera manufacturers. The companies admit in their statements that there is currently no single solution, effective in all cases, for identifying content generated with AI.
The initiatives range from visible marks on the images themselves to messages hidden in the file's metadata or in the artificially generated pixels. With SynthID, its tool still in beta, Google says it has also found a way to identify audio.
The C2PA standard for embedding information in image metadata has numerous technical weaknesses, identified in detail by developers. OpenAI itself, which will use C2PA in DALL-E 3 (its image generator), warns in its statement that one should not place too much confidence in its capabilities: “Metadata like C2PA is not a silver bullet to address issues of provenance. It can easily be removed accidentally or intentionally. An image that lacks this metadata may or may not have been generated with ChatGPT or our API,” says the company behind the chatbot that popularized generative artificial intelligence.
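OpenAI's warning is easy to verify. The following minimal sketch (in Python, assuming the Pillow imaging library and a hypothetical file named signed.jpg carrying C2PA or EXIF metadata; the file names are ours, for illustration only) shows how simply re-encoding an image's pixels, a step many editors and messaging apps perform automatically, discards any provenance record stored alongside them:

```python
# Minimal sketch: re-encoding an image's pixels without copying its
# metadata is enough to lose any embedded provenance record.
# Assumes Pillow (pip install Pillow) and a hypothetical "signed.jpg".
from PIL import Image

original = Image.open("signed.jpg")
print(original.info.keys())  # metadata blocks Pillow surfaces, if any

# Copy only the pixels into a fresh image: EXIF data and any C2PA
# manifest stored in the file's ancillary segments are not carried over.
stripped = Image.new(original.mode, original.size)
stripped.paste(original)
stripped.save("stripped.jpg", quality=95)

print(Image.open("stripped.jpg").info.keys())  # provenance is gone
```

No attack is needed: the provenance disappears because it lives next to the pixels rather than in them, which is exactly the gap the pixel-level watermarks are meant to cover.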
The companies give no clear deadlines for the full rollout of their measures. 2024 is a year packed with elections in major countries, and the influence of AI-generated content could be explosive, which makes preventive safeguards vital. “We are developing this capability, and in the coming months we will begin applying labels in all the languages supported by each application,” says Nick Clegg, Meta's president of global affairs, in his statement.
Meta promises not only to visually label the images generated by its own AI tools, but also, whenever possible, to detect and identify those posted on its networks: Instagram, Facebook and Threads. “We can label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as they roll out their plans to add metadata to images created by their tools,” says Clegg.
Google will also apply its methods to YouTube. While these approaches represent the state of the art of what is currently possible, the company believes its SynthID-based method can withstand deliberate modifications intended to hide it, such as “adding filters, changing colors, and saving with various lossy compression schemes, which are the most commonly used for JPEG files.”
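SynthID's design is not public, but a toy example shows why surviving lossy compression is the hard part of Google's claim. This sketch (in Python with NumPy and Pillow, our choice of tools, not Google's) hides one bit in the least significant bit of each pixel value, the most naive watermark imaginable, and shows that a single JPEG re-save destroys it:

```python
# Toy illustration only (not SynthID): a naive least-significant-bit
# watermark does not survive JPEG's lossy compression, which is exactly
# the kind of modification a robust watermark must withstand.
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
watermark = rng.integers(0, 2, size=(64, 64, 3), dtype=np.uint8)

# Hide one watermark bit in the least significant bit of every channel value.
marked = (pixels & 0xFE) | watermark

# One round trip through JPEG compression...
buf = io.BytesIO()
Image.fromarray(marked).save(buf, format="JPEG", quality=90)
buf.seek(0)
decoded = np.asarray(Image.open(buf))

# ...and the recovered bits match the watermark only about half the time,
# i.e. no better than chance: the mark has been erased.
print(((decoded & 1) == watermark).mean())
```

A watermark that survives filters, recoloring and compression has to spread its signal across many pixels in ways that these transformations do not disturb, which is what makes robustness claims like Google's notable.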