It wouldn’t be entirely incongruous for comedian-turned-podcaster Joe Rogan to endorse a “libido-boosting” brand of coffee for men.
But when a video that recently circulated on TikTok showed Rogan and his guest, Andrew Huberman, promoting the coffee, some observant viewers were shocked — including Huberman, a neuroscientist.
“Yes, that’s false,” Huberman wrote on Twitter after seeing the ad, in which he appears to praise the testosterone-raising potential of coffee, though he never did.
The ad was one of a growing number of fake videos on social media made with technology powered by artificial intelligence (AI). Experts said that Rogan’s voice appeared to have been synthesized using AI tools. Huberman’s comments were taken from an unrelated interview.
Making realistic fake videos, often called deepfakes, once required elaborate software to put one person’s face on another’s. But now, many of the tools to create them are available to everyday consumers — even in smartphone apps, and often for little or no money.
The new doctored videos — mostly the work of meme creators and marketers so far — have gone viral on social media sites. Their creators clone celebrity voices, alter mouth movements to match alternate audio, and write persuasive dialogue.
The videos, and the accessible technology behind them, have some AI researchers worried about their dangers, and have raised questions about whether social media companies are prepared to curb the growing tide of digital counterfeiting.
Disinformation watchdogs are also bracing for a wave of digital forgeries that could mislead the public or make it harder to know what’s true online.
“What’s different is that now everyone can do it,” said Britt Paris, an assistant professor of data science at Rutgers University in New Jersey. “It’s not just people with sophisticated computer technology and relatively sophisticated computer skills. Instead, it’s a free app.”
Lots of manipulated content has circulated on TikTok and elsewhere for years, usually using tricks like careful editing or swapping one audio clip for another.
Graphika, a research firm that studies disinformation, detected deepfakes of fictitious news anchors that pro-China bot accounts distributed late last year, in the first known example of the technology being used for state-aligned influence campaigns.
Last month, a fake video circulated showing President Joseph R. Biden Jr. declaring a US draft for the Russia-Ukraine war. The video was produced by the team behind “Human Events Daily,” a podcast operated by Jack Posobiec, a right-wing influencer known for spreading conspiracy theories.
In a segment explaining the video, Posobiec said his team had created it using AI technology. A tweet about the video from The Patriot Oasis, a conservative account, used a breaking news hashtag without indicating that the video was fake. The tweet was viewed more than 8 million times.
Many of the video clips with synthesized voices appeared to use technology from ElevenLabs, an American startup that released a voice cloning tool in November. The tool drew attention last month after users of 4chan, a message board known for its racist content, used it to create a recording of an anti-Semitic text in a voice impersonating actress Emma Watson. ElevenLabs said on Twitter that it would introduce new security measures and provide a new AI detection tool. But 4chan users said they would create their own version of the technology, and have uploaded demos that sound similar to the audio produced by ElevenLabs.
In a fake video on YouTube, Rogan appeared to be interviewing Prime Minister Justin Trudeau of Canada.
A YouTube spokeswoman said the video of Rogan and Trudeau did not violate YouTube’s policies because it “provides enough context.” (The creator had described it as a “fake video.”) The company said its disinformation policies prohibited content manipulated to mislead, similar to the policies of other social media companies.
Regulators have been slow to respond. A 2019 US law required government agencies to notify Congress if deepfakes targeted US elections.
“We can’t wait two years until the laws are passed,” said Ravit Dotan, a researcher at the University of Pittsburgh in Pennsylvania. “By then, the damage could be too much. We have an upcoming election here in the US. It’s going to be a problem.”
By: STUART A. THOMPSON