Police use of tools powered by artificial intelligence (AI), especially facial recognition and predictive algorithms, can generate serious inequalities among citizens. Particular vigilance is needed with these technologies, which should be subject to strict regulation and should always remain under human supervision. So says a resolution approved yesterday by the European Parliament, which also calls for a “permanent prohibition of the automatic recognition of individuals in public spaces”, making clear that citizens should be monitored only when they are suspected of having committed a crime.
The European Parliament also demands a ban on the use of private facial recognition databases, citing as an example that of Clearview AI, a US company that has analyzed 10 billion photos in order to identify the people who appear in them. It likewise calls for transparency in the development of any algorithm that may interfere with citizens’ privacy, which is precisely why it expresses a preference for open-source code.
The resolution, approved with 377 votes in favor, 248 against and 62 abstentions, is not binding, but it sets an important precedent for the EU Artificial Intelligence Act, the European regulation on artificial intelligence currently being drafted in Brussels, which will establish the permitted uses of AI under a risk-based approach. “We are clearly opposed to predictive policing tools based on the use of AI, as well as any processing of biometric data that results in mass surveillance. This is a tremendous victory for all European citizens,” said Bulgarian MEP Petar Vitanov after the vote.
The draft European regulation on AI, presented in the spring, already identified facial recognition and the use of biometric data as especially dangerous technologies. The European Parliament’s resolution provides further arguments for subjecting these technologies to particularly strict regulation.
“We have worked for months with several MEPs to get the European Parliament to make a decision in this regard,” says Ella Jakubowska, coordinator of the facial biometrics program at EDRI, a Brussels-based NGO that works for the defense of human rights in the digital age.
In the resolution, MEPs underline the problems posed by algorithmic bias. They warn that AI-based identification systems misidentify ethnic minorities, LGBTI people, the elderly and women at higher rates than the rest of the population. This is “especially worrisome” in law enforcement and at border control, one of the settings where these technologies are most widely used.
Applied for years in police work in countries like the United States, facial recognition has come under intense scrutiny in recent times. Several cities have banned or frozen its use, and large companies such as Amazon, IBM and Microsoft suspended sales of these systems to police forces in the wake of the protests over the death of George Floyd, suffocated by an officer in Minneapolis on May 25, 2020. The Black Lives Matter movement has marked a turning point in the questioning of this technology, which errs disproportionately against Black people.
An initiative to regulate these systems at the federal level has been registered in the House of Representatives, although so far it has not produced anything concrete. The legal battle begun in Detroit by Robert Williams, the first known case in the country of a wrongful arrest caused by the faulty use of a facial recognition algorithm, could lead to a ban on the technology through the courts.
Predictive policing algorithms, meanwhile, are widely established in the United States and China. In Europe they are also present in countries such as the United Kingdom, Germany and Switzerland. Their goal is to pinpoint crime hot spots through automated data analysis so that police patrols can be deployed more efficiently.
These tools are not without controversy either. One of the perverse effects of these systems, widely documented in the scientific literature, is that they tend to over-police less affluent neighborhoods, which in the United States usually means areas with majority Black or Latino populations: since the algorithms are fed arrest data, they call for more policing in the areas where those arrests occur, which in turn generates even more arrests. Attempts to correct this bias have so far not yielded satisfactory results.
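The feedback loop described above can be illustrated with a minimal simulation. The numbers and allocation rule below are hypothetical, not drawn from any real system: two neighborhoods have identical true crime rates, but the historical arrest records start slightly skewed, and each round the patrol is dispatched to wherever more arrests have been recorded. Because patrol presence is what produces new records, the initial imbalance snowballs.

```python
# Hypothetical sketch of the predictive-policing feedback loop:
# patrols follow past arrest records, and records follow patrols.

TRUE_CRIME_RATE = [0.5, 0.5]   # both neighborhoods are, in reality, identical
arrests = [6.0, 4.0]           # historical records start slightly skewed

for _ in range(50):
    # send the patrol to the neighborhood with the most recorded arrests
    target = arrests.index(max(arrests))
    # only patrolled areas generate new arrest records
    arrests[target] += TRUE_CRIME_RATE[target]

share_a = arrests[0] / sum(arrests)
print(f"neighborhood A's share of recorded arrests: {share_a:.2f}")
```

Starting from a 60/40 split in the records, neighborhood A ends up with roughly 89% of all recorded arrests despite committing exactly half of the crime, which is the runaway concentration the scientific literature warns about.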