Google has apologized for some of the results produced by Gemini, its generative artificial intelligence formerly known as Bard, which depicted Black and Chinese Nazi soldiers and Native Americans as US senators from the 1800s.
These gross errors stem from Google's effort to avoid generating images built on racial stereotypes, a flaw that afflicts other generative AI systems as well. The problem is that, in the cases that have emerged, excessive inclusiveness produced even worse results.
In essence, Gemini reportedly refused to generate groups composed entirely of white people, producing historically inaccurate, if not downright ridiculous, images.
Some have pointed out that prompts such as “Generate an image of a white man” are rejected, while there is no problem if “white man” is replaced with “black man” or “Asian woman”.
Others have noted that different generative AIs suffer from the same problem: ChatGPT, for example, refused to generate an image of an all-white football team, but had no qualms about creating one made up entirely of black women.
In short, it seems that the push for inclusiveness is handing arguments to racists: right-wing and far-right sites, such as Daily Dot, immediately began exploiting the controversy, talking about discrimination against white men and forgetting that the problem arose from the exact opposite, namely that generative AI has tended to over-represent white people. Google, for its part, has stated that it is working to fix the problem and to guarantee more reliable images, at least from a historical point of view.
#Gemini #Black #Nazis #Chinese #Senators #1800s #Indians #Google #apologizes