Remember when trend watchers said, 'soon everyone will be able to make deepfakes'? They are now being proved right. A wave of artificial sludge is coming. Time to prepare. When teachers, police officers, parents and employers soon hear this cry for help, they need to know what it's about: "Help, someone has created a harmful image of me with generative artificial intelligence." A picture depicting the victim naked. Or in compromising circumstances. 'Have you seen that photo of our colleague at a far-right demonstration?'
The big difference with current deepfake tools – sites where you can paste someone's head into a porn movie – is the endless variety of images that AI-driven image models generate. That makes the photos more difficult to recognize as fake.
After all, you can give an image model any conceivable prompt. Within a minute, four interpretations appear. Type 'a photo of a hamster with a skateboard on Dam Square' and voilà, you have four photos of sporty hamsters. Some of these programs have been available to everyone since last summer, with DALL-E being the best-known and most accessible.
In part, I think that's great news. It will enable all kinds of new creative applications. But I'm also very concerned about its malicious potential.
If you try something lewd with DALL-E, the online tool simply refuses. The owner of DALL-E controls what the model can and may do. But the Stable Diffusion image model is publicly available. As a result, users can run the model on their own computers and teach it new things. Without any supervision.
A spiteful programmer can train Stable Diffusion by feeding it dozens of photos of his ex. Based on that image collection, Stable Diffusion can then generate every imaginable image of the former lover. It is only a matter of time before user-friendly applications become widely available. Harmless variants are already easy to use via sites like avatarai.me.
“Most people are still unaware that this technology exists,” writes technology thinker Alberto Romero in his newsletter, “but it won’t be long before it becomes mainstream. It’s easy to use, super cheap and extremely versatile. And it is improving quickly.”
In the technology world, people talk of little else than the rise of generative artificial intelligence. But have they heard of Stable Diffusion at the police station when a victim soon comes to file a report? Does an employer act immediately on a dubious image of a staff member? Or does she first think: maybe it's fake?
A deepfake of a well-known politician is quickly debunked by the news media. But it is already more complicated for a local councilor to defend himself. Let alone an ordinary citizen without a platform. That is why it is so important that everyone in the Netherlands thinks twice when they see a shocking image. Material for a Postbus 51 public information campaign, if you ask me.
And in the meantime, experiment with the chaste DALL-E yourself. Not only for those cozy hamsters, but also to understand the technology.
Ernst-Jan Pfauth writes a column here every other week.
A version of this article also appeared in the newspaper of November 14, 2022