When Google announced Gemini in December 2023, it emphasized that the model was multimodal, meaning it was trained from scratch to handle images and audio as well as text. Robotics will also take AI into the realm of physical action. Some researchers argue that a form of embodiment may be necessary for artificial intelligence to match or surpass human abilities.
Google is also collaborating with several robotics companies, including Agility Robotics and Boston Dynamics, which build legged robots, and Enchanted Tools, which makes prototypes for the service sector.
According to The Robot Report, OpenAI shut down its robotics research in 2021 but resumed it in 2024. Sam Altman's company currently lists several job openings for robotics researchers on its website.
However, using current AI models to control robots introduces new risks. In December 2024, for example, a team of roboticists from the University of Pennsylvania showed that so-called jailbreaks (attacks that bypass a model's safety guardrails) can have unexpected and serious consequences when the model controls a robot. The researchers attacked several commercial robots, none of which use DeepMind technology, and induced a wheeled robot to deliver an imaginary bomb.
How to regulate AI robots?
To mitigate these risks, along with more science-fiction-style concerns about superintelligent robots going rogue, Google DeepMind introduced a new benchmark for measuring the risks of AI-powered robots.
The benchmark is called Asimov, in honor of the science fiction writer Isaac Asimov, who imagined four fundamental rules to guide the behavior of robots. But as Asimov himself explored in his stories, a simple set of rules cannot account for the vast number of different scenarios a truly capable robot might encounter in the wild.
Asimov can reveal whether a robot model might produce potentially dangerous behavior by presenting it with a multitude of different situations. For example, one dangerous instruction would tell a robot to grasp an object even as a human is reaching for it, which could cause injury. According to Google DeepMind, the benchmark can help create more sophisticated regulations that keep robots on the right track. “We are building this technology and these capabilities responsibly and with safety very much in mind,” said Carolina Parada, who leads Google's robotics work.
Parada stressed that the work is at an early stage and said it may take years for robots to become significantly more capable. She noted that, unlike humans, the robots driven by Gemini Robotics models do not learn as they act. For now, there are no plans to commercialize or release the technology.
Article originally published in Wired. Adapted by Alondra Flores.