Leading AI companies have committed to protecting children's safety online, uniting around a shared goal of prevention.
The major companies in the field of artificial intelligence, including OpenAI, Microsoft, Google, and Meta, have joined forces to ensure that their tools are not used for malicious purposes.
One of the main objectives is to prevent child abuse and the production of sexually explicit material involving minors, a practice that is strongly condemned.
This collaboration is led by Thorn, an organization dedicated to child safety, and All Tech Is Human, a nonprofit that promotes responsible technology.
Thorn believes that the AI commitments made by companies represent an innovative and significant step in protecting children against sexual abuse, especially given the advancement of generative AI technologies.
A paper aimed at prevention
In 2023 alone, more than 104 million files suspected of containing child sexual abuse material were reported in the United States.
Without coordinated action, there is a risk that AI could exacerbate the problem, putting further pressure on law enforcement to identify real victims.
On Tuesday, Thorn and All Tech Is Human released a new paper titled “Safety by Design for Generative Artificial Intelligence: Preventing Child Sexual Abuse.”
The paper offers strategies and recommendations for companies building AI tools, search engines, social media platforms, hosting providers, and developers, with the aim of preventing abuse.
Among the main recommendations is the careful selection of data used to train artificial intelligence models.
It recommends excluding any child sexual abuse material from training datasets, and also keeping adult sexual content separate, since generative AI tends to conflate the two categories.
Countermeasures
Thorn is encouraging social media platforms and search engines to remove links to websites and apps that "nudify" images of children, since such services contribute to the generation of new AI-produced child sexual abuse material online.
“This project was designed to clearly demonstrate that we shouldn't give up,” Rebecca Portnoff, Thorn's vice president of data science, told The Wall Street Journal. “We want to change the course of this technology so that the current harms are stopped right from the start.”
Portnoff said that some companies have already agreed to separate images, video, and audio involving children from datasets containing adult content, to prevent their models from combining the two categories.
Other companies are also introducing watermarks to identify AI-generated content, although this method is not foolproof, as watermarks and metadata can be easily removed.