An economics student from London flies to Menorca with five friends to celebrate the end of his exams. Before boarding, he sends a photo of himself to their private Snapchat group with the caption: “I'm going to detonate the plane (I'm a member of the Taliban).” While the plane is flying over France, the British intelligence services pass the alleged threat to their Spanish counterparts, who scramble two fighter jets to escort the flight to the island. Once there, the plane parks in an area far from the terminal and the passengers are disembarked one by one, identified and subjected to a luggage search with dogs and bomb squads.
The student is arrested and spends two nights in a cell before being released on bail. A year and a half later he testifies before the National Court, where he faces a charge of public disorder. The prosecution is seeking a fine of 22,500 euros plus 94,782 euros in civil liability, the bill for the F-18s. This is not a case about terrorism, nor is it about the limits of humor. It is an example of what happens when excessive surveillance is combined with racially biased automated systems in a context of international security.
The risks of public airport Wi-Fi
The prosecution invokes Article 561 of the Penal Code, which punishes anyone who, with a false alarm of an accident or threat, causes the mobilization of police, assistance or rescue services. But Aditya Verma, as the young man is called, did not post the photo on Twitter or on his Instagram account. He sent it to his private Snapchat group, and none of his friends shared it. None of the recipients believed that Verma was carrying a bomb, because they all boarded the plane with him. He says he did it because his friends regularly joke about his Indian origin and his dark skin.
Civil Guard experts who reviewed his devices found anecdotal WhatsApp conversations about the conflict between Pakistan and India and the possibility of an Islamic State attack in that area, but observed “no link with radicalism, nor any intention to plant a bomb or orchestrate an attack.” The fact that British intelligence accessed his private joke leads the prosecutor to treat it as a public communication. And the British security service does not say how it obtained it.
The prosecutor assumes that the message was captured through the airport's Wi-Fi network and that the capture was legal. The two premises are interdependent. All airport Wi-Fi, including Gatwick's, requires a login in which the user accepts the terms and conditions of the service: for example, that all communications are open and may be monitored by agencies and authorities for security reasons. Airports are considered critical infrastructure, and monitoring their public services is a legitimate part of their security strategy. But it seems unlikely that a university student who uses Snapchat would need Wi-Fi at the airport of his own city, though it is not impossible that his phone connected automatically without his noticing. Even if it did, Snapchat has its own security protocol.
Before Snowden, network communications were unprotected, making it much easier for the British Government Communications Headquarters and the US National Security Agency to capture data en masse. Today most traffic is encrypted, thanks to the protocol called Transport Layer Security (TLS), and many messaging services, such as Signal or WhatsApp, are end-to-end encrypted. That means the message leaves the sender's phone encrypted and is only decrypted on the recipient's phone, remaining protected even on the insecure or monitored Wi-Fi of an airport. Snapchat says that “Snaps (photos) and chats, including voice and video, between you and your friends are private: we don't scan their content to create profiles or show you ads. This means that we usually don't know what you say or post unless you ask us.” The United Kingdom may now be an exception.
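The principle of end-to-end encryption can be illustrated with a toy sketch. This is not real cryptography (Signal and WhatsApp use authenticated key exchange and ciphers like AES-GCM or ChaCha20-Poly1305); a simple one-time-pad XOR is used here only to show why an eavesdropper on airport Wi-Fi, sitting between the two phones, captures nothing readable:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad XOR. Real messaging apps use authenticated
    ciphers, but the end-to-end principle is the same."""
    return bytes(a ^ b for a, b in zip(data, key))

# The key exists only on the two endpoints; in real apps it is derived
# via a key exchange and is never sent in the clear over the network.
message = b"a private joke between friends"
key = secrets.token_bytes(len(message))

ciphertext = xor(message, key)   # this is what crosses the Wi-Fi network
# An eavesdropper capturing packets sees only `ciphertext`, which
# without the key is indistinguishable from random bytes.
plaintext = xor(ciphertext, key)  # decryption happens on the recipient's phone

assert ciphertext != message
assert plaintext == message
```

The point is architectural: as long as the key never leaves the endpoints, monitoring the network in between (the airport's Wi-Fi, an internet exchange) yields nothing, which is why the pressure moves to the devices themselves.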
Post-Brexit privacy
Reading encrypted messages is possible, but not everyone can do it. Intercepting Wi-Fi signals requires specific hardware, and capturing the data packets transmitted over the network requires specialized software. That would be incompatible with the “necessary publicity” required for Article 561 of the Penal Code to apply. Under US law, Verma would be said to have shared his joke with a “reasonable expectation of privacy.” In Europe that expectation would not even be necessary, because we have the General Data Protection Regulation and civil rights. But post-Brexit Britain no longer has the same standards of citizen protection. The National Court could be judging a person in Spain under the rules of the United Kingdom.
Last October, the United Kingdom passed the Online Safety Act, which forces companies to scan users' messages to make sure they are not transmitting illegal material, especially terrorist content or child sexual abuse material. The law does not say how to do this, but failure to do so can lead to criminal liability. The only way to comply without breaking encryption is to scan users' devices and examine messages before they are sent.
This technology is called client-side scanning, also known as Chat Control. It is possible that the authorities read Verma's joke and overreacted. It is more likely that an automatic algorithm, from Snapchat itself, read it and triggered a level of alarm that justified the deployment, without anyone being able to explain or verify the reason. The European Union is about to begin a trilogue on the EU Commission's Regulation against Child Sexual Abuse, which proposes adopting that same technology. This case is just a small example of how dystopian its implementation can be.
The racism of a British algorithm
This is my theory: a client-side scanning system detected keywords (detonate, plane, Taliban) in a sensitive context (an airport) and, since the sender was an 18-year-old Indian, triggered the alarm at a level that the intelligence services received as a terrorist alert, without time to contextualize it. Following protocol, they passed the alert to the Spanish Ministry of Defense, which, with the plane in mid-flight and without access to or time for details, logically decided to take extreme precautions and accompany the flight to its destination. Once the threat was ruled out, they looked for someone responsible to pay the bill.
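The hypothesized pipeline can be caricatured in a few lines. Real client-side scanning systems are proprietary and far more complex (the EU proposal, for instance, centers on hash-matching and machine-learning classifiers, not word lists); the watchlist, locations and thresholds below are invented for illustration only. What the sketch shows is the structural problem: a context-free rule escalates a message straight to an alert, with no step where a human could ask whether it was a joke:

```python
# Hypothetical keyword-based client-side scanner. All lists and
# thresholds are assumptions made up for this illustration.
WATCHLIST = {"detonate", "bomb", "taliban"}
SENSITIVE_LOCATIONS = {"airport", "station"}

def scan_before_send(message: str, location: str) -> str:
    """Inspect a message on the device, before encryption and sending."""
    words = {w.strip(".,!()").lower() for w in message.split()}
    hits = words & WATCHLIST
    if hits and location in SENSITIVE_LOCATIONS:
        return "ALERT"   # escalated to authorities, no human in the loop
    elif hits:
        return "FLAG"    # logged for later review
    return "CLEAR"

verdict = scan_before_send(
    "I'm going to detonate the plane (I'm a member of the Taliban)",
    "airport",
)
print(verdict)  # → ALERT: the private joke is escalated as a terrorist threat
```

Note what the function never sees: that the recipients are five friends, that they all boarded with the sender, or that the group routinely jokes this way. Everything that exonerated Verma lives outside the features the scanner reads.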
Technically, the false warning of a planted explosive device was given by the system, after it intercepted the private conversations of a British citizen on British soil and decided that a student with no criminal record and a passion for chess was a credible jihadist threat, because of the color of his skin. Ironically, it is the same stereotype that sparked the joke in the first place. Even the Ministry of Defense has said that the bill should be paid by the British service, not by Aditya Verma.
Instead of recognizing the bias of a system that should be corrected, bearing in mind the almost two million citizens of the same ethnic group who live in the United Kingdom, they have preferred to prosecute the first victim of the abuse: a teenager who has so thoroughly internalized the racism of his environment that he makes terrorist jokes about himself before others can.