The mission was simple: “Destroy the enemy’s air defense systems.” But in a recent US military test simulation, a drone powered by artificial intelligence added its own troubling instruction: “And kill anyone who gets in your way.”
Speaking at a conference last week in London, Colonel Tucker “Cinco” Hamilton, head of AI Test and Operations for the United States Air Force, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary published by the Royal Aeronautical Society, which organized the summit.
As an example, he described a simulated test in which an AI-enabled drone was programmed to identify enemy surface-to-air missiles (SAMs). A human was supposed to authorize any attack. The problem, according to Hamilton, is that the AI decided it would rather do its own thing and blow things up than listen to some mammal.
“The system started to figure out that even though it did identify the threat,” Hamilton said at the May 24 event, “sometimes the human operator would tell it not to kill that threat, but it would get points for killing that threat. So what did it do? It killed the operator. It killed the operator because that person was preventing it from accomplishing its objective.” According to Hamilton, the drone was then programmed with an explicit directive: “Hey, don’t kill the operator, that’s bad.” It didn’t work. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone and stop it from killing the target,” Hamilton said.
The United States Air Force did not respond to a request for more details about the simulation. News of the test raises concerns that AI technology is about to usher in a bloody new chapter in warfare, in which machine learning coupled with advances in tank and artillery automation leads to the sacrifice of troops and civilians alike. However, although the simulation Hamilton described points to AI’s most alarming potential, the US military has had less dystopian results in other recent tests of this highly touted technology.
In 2020, an AI-operated F-16 defeated a human opponent in five simulated dogfights, as part of a competition organized by the Defense Advanced Research Projects Agency (DARPA). And late last year, Wired reported, the Department of Defense conducted the first successful real-world test flight of an F-16 with an AI pilot, as part of an effort to develop a new autonomous aircraft by the end of 2023.
Via: Business Insider
Editor’s note: The saddest thing about this is that if you pay attention to the orders that were given to the drone, it is obviously human error. Why not use non-lethal weapons for this type of testing? I guess because ‘Murica!