In May 2018, some 4,000 Google employees signed an internal petition demanding that the company withdraw from Project Maven, a military artificial intelligence (AI) and machine learning program promoted by the United States Department of Defense. The project used AI to process data from various sources (mainly drones) in order to identify potential targets in war zones and improve the precision of attacks.
"We believe that Google should not be in the business of war," the employees argued in an open letter, which was backed by scientists and academics and ultimately forced Google to cancel the contract and publish commitments against the use of its AI in military weapons.
Seven years later, the Mountain View firm has almost completely rewritten that document, and what has drawn the most attention is the disappearance of the category "AI applications we will not pursue." As its name suggested, it grouped several specific areas, including the military sector, in which the company pledged not to design or deploy solutions, in order to preserve the responsible development of the technology.
The original principles, outlined by chief executive Sundar Pichai in mid-2018, included that now-removed section on "artificial intelligence applications we will not pursue." At the top of the list was a commitment not to design or deploy AI for "technologies that cause or are likely to cause overall harm," along with a promise to weigh the risks so that Google would "proceed only where we believe that the benefits substantially outweigh the risks."
Bloomberg, one of the first outlets to notice the category's disappearance, asked Google about the change of heart. The company responded with a blog post signed by James Manyika, a Google vice president, and Demis Hassabis, who heads the Google DeepMind lab. The post discusses the responsible development of AI in democracies and points to the page with the updated principles.
"We acknowledge how quickly the underlying technology, and the debate around AI's advancement, deployment and uses, will continue to evolve, and we will continue to adapt and refine our approach as we all learn over time," the company argued. It is worth noting that Google has not announced any new defense contracts, but the door is now open.
The revised AI principles were also published a few weeks after Google's chief executive, Sundar Pichai, and other technology leaders attended the inauguration of United States President Donald Trump. Shortly after taking office, Trump revoked an executive order from his predecessor, former president Joe Biden, that had mandated safety practices for AI.
Companies racing to lead the booming AI field in the United States now have fewer obligations to meet, such as sharing test results indicating that their technology poses serious risks to the nation, its economy or its citizens. "There is a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," Hassabis and Manyika write in their post.