When United Nations Secretary-General António Guterres expressed his dismay in April at reports that the Israeli military was using artificial intelligence (AI) to identify human targets in Gaza, the debate over these tools in war leapt onto the global stage.
Not that the subject was new, or that military circles, academics and human rights organizations had not already analyzed how technologies that can design operations in seconds affect international humanitarian law (IHL), the law of war. But when Guterres insisted that “life or death decisions” should not be delegated to algorithms, the debate took on new life.
On the one hand, there is the possibility that, in the extreme, a machine takes on the role of humans and decides who lives and who dies in a war. On the other, there is the concern that existing legal frameworks may fall short of keeping up with the drastic changes that military AI is bringing. As jurist Magda Pacholska, a researcher at the TMC Asser Institute, puts it: “the adoption of AI in practice is considered the third revolution in military affairs, after gunpowder and nuclear weapons.”
However, what has been seen recently goes beyond projections and suggests that the use of these technologies is irreversible, since the large majority of armies are incorporating them for their defense and security potential. In 2017, China launched its strategy to become the world leader in AI by 2030. Soon after, the Kremlin declared that whoever leads in this field will “rule” the world, and the United States designated AI as one of the means that “will ensure that it can fight and win the wars of the future.”
Ukraine has become a laboratory for the military use of AI
But what has been revealed about the Israeli forces’ operations in Gaza marks a major leap, one that raises ethical and moral fears about the scope of AI.
In fact, all the digital traffic produced in a small territory like the Gaza Strip is monitored in detail: calls, text messages, apps and emails. It is even said that every square meter of the Palestinian enclave, one of the most densely populated regions in the world, is photographed every ten minutes. Facial recognition cameras do their part in identifying Palestinians subject to arrest.
But such a volume of information is impossible to process by analog means. This is where AI comes into play, giving rise to a new concept of military operation. During Operation Guardian of the Walls in 2021, the system generated about 200 target options in minutes, a task that would previously have taken dozens of analysts weeks of work.
Lavender and Gospel
That degree of advancement and sophistication was assumed to be a panacea for military strategists. On October 7, however, it became the worst failure of the Israeli intelligence services. The dependence on digital systems and AI proved very costly: Hamas’s rudimentary ‘analog intelligence’ claimed the lives of 1,200 people and left hundreds kidnapped. But the Israeli retaliation from that day on opened a separate chapter in the history of military AI.
Lavender is the name of the military AI program that Israel has used to identify military targets. It is supported by Gospel, which identifies buildings where militants may be located, and by another program called Where is Daddy, which tracks suspects who have already been marked so that they can be bombed. This is not science fiction. The problem is that these systems have a margin of error that kills innocents.
The report, based on Israeli intelligence sources, explained that once Lavender drew up the list of potential targets, soldiers spent about 20 seconds on each one before authorizing an attack. But what distinguishes a militant from someone close to him? Carrying several cell phones or sleeping in a different place every night hardly seem like solid grounds. And what criteria would an intelligence analyst have for refusing to follow a Lavender recommendation?
The human makes a decision that consists of approving a recommendation made by the machine
Hence Secretary-General Guterres’ astonishment, since the journalistic investigation reveals that the margin of error was at least 10 percent. Lavender was used in at least 15,000 killings from the beginning of the war until November 24, according to the reporting. In a single bombing on October 17 against a senior Hamas commander, 300 civilians were reported as ‘collateral damage’.
“They are just tools for analysts in the target identification process,” the Israeli military responded after the publication, denying that it allows a machine to determine “whether someone is a terrorist.”
“Even if it is a human operator who presses the button, this lack of knowledge or understanding, as well as the speed factor, means that their responsibility in decision-making is quite limited,” he adds.
Is it legal?
The expert recalls that this type of technology has already been used by the armies of the United States, the Netherlands and France, but to identify material targets. “The novelty is that, this time, AI systems are being used against human targets,” Pacholska clarifies, in remarks picked up by the Spanish newspaper El País.
Linda Robinson, a researcher at the Council on Foreign Relations and an expert on Middle Eastern affairs, told this newspaper: “The human involved will be an important part of using AI with the necessary discretion. Recent reports and my own sources in Israel have reinforced the need for stricter implementation standards regarding positive target identification (PID) and the avoidance of collateral damage.”
This is despite the fact that US President Joe Biden and his Chinese counterpart, Xi Jinping, agreed in November to have their experts examine the issue. The UN has been addressing it for ten years, without progress. There are “many debates in the civilian AI industry, but very few in the defense AI industry,” Accorsi notes.
The calculations produced by an AI algorithm ignore the most basic feelings that make a human human, even in the context of an armed conflict: compassion, solidarity, doubt, proportionality, mercy, the survival instinct, empathy, the impulse to protect women, children and the elderly, dignity. This, if there is a human in the loop, can make the difference between living and dying. Anything else could be a leap into the void, even for the human species itself.
EDUARD SOTO
DEPUTY EDITOR
On X: @edusot