03/29/2024 – 19:43
OpenAI, the company behind ChatGPT, unveiled on Friday (29) a voice cloning tool that it plans to keep under strict control until safeguards are in place to prevent audio forgeries designed to deceive listeners.
The model, called “Voice Engine,” can essentially duplicate someone's speech from a 15-second sample, according to an OpenAI blog post sharing results from a small-scale test.
“We recognize that generating speech that resembles people's voices presents serious risks, which are especially in the spotlight in an election year,” the San Francisco-based company said.
“We are engaging U.S. and international partners from government, media, entertainment, education, civil society and other sectors to ensure we are incorporating their feedback as we build,” the company added.
Disinformation researchers fear widespread misuse of artificial intelligence (AI)-powered applications in a crucial election year, given the proliferation of cheap, easy-to-use and hard-to-trace voice cloning tools.
Acknowledging these concerns, OpenAI said it is “taking a cautious and informed approach to a wider rollout due to the potential for misuse of synthetic voices.”
The cautious unveiling comes a few months after a political consultant working for the presidential campaign of a long-shot Democratic rival of Joe Biden claimed responsibility for a robocall impersonating the American president.
The AI-generated call, devised by a consultant for Democratic Congressman Dean Phillips, featured what sounded like Biden's voice urging people not to vote in New Hampshire's January primary.
The incident has caused alarm among experts who fear a flood of “deepfake” disinformation using AI in the 2024 US presidential race, as well as other key elections around the world.
OpenAI said the partners testing Voice Engine have agreed to rules that include requiring explicit and informed consent from anyone whose voice is duplicated.
It should also be clear to the public when the voices they are hearing are generated by AI, the company added.
“We have implemented a set of safety measures, including watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it is being used,” the company said.