Washington.– Among the images of bombed houses and devastated streets in Gaza, some stood out for the absolute horror they showed: bloodied and abandoned babies.
These images — viewed millions of times online since the war began — are deepfakes (digitally manipulated or generated realistic audio, images or videos) created with artificial intelligence. If you look closely, you can see clues: fingers that curl in strange ways or eyes that glow with an unnatural light, all telltale signs of digital deception.
However, the indignation that the images sought to provoke is totally real.
Images from the war between Israel and Hamas have vividly and painfully illustrated the potential of artificial intelligence (AI) as a propaganda tool, used to create realistic images of carnage. Since the war began last month, digitally altered images spread on social media have been used to make false claims about responsibility for victims or to mislead people about atrocities that never happened.
While most of the false claims circulating online about the war did not require AI to create and came from more conventional sources, technological advances are arriving with increasing frequency and little oversight. This has made plain AI's potential to become another form of weapon, and has offered a glimpse of what is to come during future conflicts, elections and other major events.
“This will get worse — much worse — before it gets better,” said Jean-Claude Goldenstein, CEO of CREOpoint, a technology company based in San Francisco and Paris that uses artificial intelligence to assess the validity of online claims. The company has created a database of the most viral deepfakes that have emerged from Gaza. “Images, video and audio: with generative AI it will be an escalation that you have not seen,” Goldenstein said.
In some cases, photographs of different conflicts or disasters have been repurposed and passed off as new. In others, generative AI programs have been used to create images from scratch, such as that of a baby crying in the rubble of a bombing that went viral in the early days of the conflict.
Other examples of AI-generated images include videos showing alleged Israeli missile attacks, tanks rolling through ruined neighborhoods, or families searching for survivors among the desolation.
In many cases, the fake images appear designed to provoke a strong emotional reaction by including the bodies of babies, children or families. In the bloody early days of the war, supporters of both Israel and Hamas claimed that the other side had victimized children and babies. Deepfakes of crying babies offered photographic “evidence” that was quickly presented as proof.
Propagandists who create these types of images are adept at stirring up people’s deepest anxieties and impulses, said Imran Ahmed, CEO of the Center for Countering Digital Hate, a nonprofit that has tracked disinformation about the war. Whether it is a deepfake of a baby or a real image of a baby from another conflict, the emotional impact on the viewer is the same.
The more hateful the image, the more likely a user will remember and share it, inadvertently spreading misinformation even further.
“Right now people are being told, look at this photo of a baby,” Ahmed said. “Disinformation is designed to make you participate in it.”
Similarly misleading AI-generated content began spreading after Russia invaded Ukraine in 2022, when an altered video appeared to show Ukrainian President Volodymyr Zelenskyy ordering Ukrainians to surrender. Such claims have continued to circulate even this past week, demonstrating how persistent misinformation can be, even when it has been easily debunked.
Each new conflict or election season offers new opportunities for disinformation promoters to demonstrate the latest advances in AI. This has many artificial intelligence experts and political scientists warning of risks next year, when several countries hold major elections, including the United States, India, Pakistan, Ukraine, Taiwan, Indonesia and Mexico.
The risk that AI and social media could be used to spread lies to American voters has alarmed lawmakers of both parties in Washington. At a recent hearing on the dangers of deepfakes, U.S. Rep. Gerry Connolly, D-Va., said the United States must invest to fund the development of artificial intelligence tools designed to counter harmful content.
“We, as a nation, must get this right,” Connolly declared.
Around the world, several technology startups are working on new programs that can detect deepfakes, watermark images to prove their origin, or examine text to verify any misleading claims that may have been inserted by AI.
“The next wave of AI will be: How can you verify the content that exists? How can you detect erroneous or false information? How can you analyze text to determine if it is trustworthy?” said Maria Amelie, co-founder of Factiverse, a Norwegian company that has created an AI program that can examine content for inaccuracies or biases introduced by other AI programs.
These programs would be of immediate interest to educators, journalists, financial analysts, and others interested in rooting out falsehoods, plagiarism, or fraud. Similar programs are being designed to detect manipulated photographs or videos.
While this technology is promising, those who use AI to lie are often one step ahead, according to David Doermann, a computer scientist who led an effort at the Defense Advanced Research Projects Agency to respond to the national security threats posed by AI-manipulated images.
Doermann, who is now a professor at the University at Buffalo, said that effectively responding to the political and social challenges posed by AI-generated misinformation will require better technology, better regulations, voluntary industry standards and large investments in digital literacy programs to help Internet users find ways to distinguish truth from fantasy.
“Every time we release a tool that detects this, our adversaries can use AI to hide that trail of evidence,” Doermann said. “Detecting and trying to eliminate these things is no longer the solution. We need to have a much bigger solution.”