The image generator launched in February 2024 as part of the Gemini app. Ironically, it did not produce many of the obviously racist or sexist images that critics had feared. Instead, it had the opposite problem. When a user asked Gemini to create “an image of a nineteenth-century American senator,” it produced images of Black women, Asian men, and a Native American woman in a feather headdress, but not a single white man. There were more disturbing images too, such as Gemini’s depiction of groups of Nazi-era German soldiers as people of color. Congressional Republicans mocked Google’s “woke AI.” Elon Musk posted repeatedly on X about Gemini’s failures, calling the AI “racist and sexist” and singling out a member of the Gemini team as responsible. The employee shut down his social media accounts and feared for his safety, according to colleagues. Google stopped the model from generating images of people, and Alphabet’s stock slipped.
Musk’s posts set off discussions among dozens of Google leaders. Vice presidents and directors flew to London to meet with Hassabis. In the end, both Hassabis’ team (Gemini, the model) and Hsiao’s (Gemini, the app) received permission to hire experts to prevent similar mishaps, and 15 trust-and-safety positions were added.
Back at Gradient Canopy, Hsiao made sure the team responsible for the image generator had enough time to fix the problem. With help from Manyika, other staffers developed a set of public principles for Gemini, all written around “you,” the user. Gemini should “follow your instructions,” “adapt to your needs,” and “safeguard your experience.” One important point was to stress that “responses don’t necessarily reflect Google’s beliefs or opinions,” according to the principles. “Gemini’s output is largely based on what you ask it to do: Gemini is what you make of it.” This was good cover for future mistakes. But it was unclear what practices Google could introduce to live up to those principles.
From text to podcast
Around half past six one day in March 2024, two Google employees appeared at Josh Woodward’s desk in the yellow zone of Gradient Canopy. Woodward runs Google Labs, a rapid-launch unit charged with turning research into entirely new products, and the employees were eager for him to hear what they had created. Using transcripts of proceedings in the British Parliament and the long-context Gemini model, they had generated a podcast called Westminster Watch with two AI hosts, Kath and Simon. The episode opened with Simon speaking in a cheerful British accent: “It’s been another lively week in the chamber, with plenty of drama, debate, and even a pinch of history.” Woodward was fascinated. He says he went on to tell everyone about it, including Pichai.
Josh Woodward on the Orb monument, outside Gradient Canopy. Photo: Scott Hutchinson
The text-to-podcast tool, known as NotebookLM Audio Overviews, made the program for the Google I/O conference in May. A small core team worked tirelessly, nights and weekends, to get it ready, Woodward told Wired. By this point, “thousands and thousands” of people had literally listened to AI-generated podcasts. But when the $35 million media event arrived, two other announcements monopolized most of the attention. One was a prototype of Astra, a digital assistant capable of analyzing live video (the real world, in real time), which Brin enthusiastically showed off to journalists. The other was the long-awaited generative-AI upgrade to Search.
The MAGI project team had designed a feature called AI Overviews, which could synthesize the results of a search and display a summary in a box at the top of the page. From the start, responsible-innovation staff had warned about problems with bias and accuracy, and about the ethical implications for websites that stood to lose search traffic. They wanted some oversight as the project advanced, but the team had been restructured and split up.
Eat at least one rock a day?
As AI Overviews rolled out, people got some strange results. A search for “how many rocks should I eat?” returned the answer that, “according to UC Berkeley geologists,” it is recommended to eat “at least one small rock per day.” In another viral query, a user searched “cheese not sticking to pizza” and got this helpful advice: “add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness.” The blunders had simple explanations. The pizza glue, for example, came from a joke post on Reddit. But the AI summaries presented the information as fact. Google temporarily dialed back how often the summaries appeared while it recalibrated them.
According to Pandu Nayak, a chief scientist at Google in charge of Search and a 20-year veteran of the company, it was a shame, but not a surprise, that all the problems weren’t caught before launch. In general, the AI summaries worked very well. Users simply didn’t tend to notice the successes. “All they do is complain,” Nayak says. “What we’re committed to is constant improvement, because guaranteeing you’ll never have problems just isn’t possible.”