New York uses Artificial Intelligence to ‘predict’ crimes

The arrival of Artificial Intelligence (AI) continues to make headlines. Its applications have been adopted in areas ranging from advertising, film, and employment to criminal investigations.

In the United States, specifically in New York, a new crime-prevention program has come to light, drawing criticism over the privacy of internet users. Authorities are using AI to track and analyze activity on social networks.

(Also: US, Japan, UK, Spain and India call for international regulation for AI).

The Surveillance Technology Oversight Project (STOP) revealed that the NYPD has spent millions of dollars on artificial intelligence products and services to expand its capabilities for monitoring and analyzing social media posts.

These artificial intelligence systems are used to analyze human behavior online in order to detect and predict fraud and crime. According to the organization, which is dedicated to fighting mass surveillance and protecting privacy, the spending amounts to 9 million dollars in contracts with Voyager Labs.

Voyager Labs describes itself as a company that produces “AI-based research solutions” and sells products in various industries, including law enforcement and corporate security.

While the NYPD has confirmed that it uses social media analysis technologies to monitor suspects in a variety of crimes, the department has clarified that it does not use the predictive tools offered by Voyager Labs. However, it stated that its objective is to monitor people who may commit armed violence, terrorism, or human trafficking.

William Colston, vice president of global marketing at Voyager Labs, said the company only uses publicly available data and that its software is not intended to replace monitoring and analysis performed by human specialists.

“Voyager Labs uses a world-class team of AI researchers, data scientists and engineers, including pioneers in machine and deep learning, computer vision and natural language understanding (NLU),” the company explains on its website.

However, the Surveillance Technology Oversight Project maintains that the company offers tools that can predict future crimes and assign risk scores to social media users based on their ties or affinities with Islamic fundamentalism or extremism, as described by the Los Angeles Police Department.

Although the analysis of social networks by law enforcement is not a new practice, experts warn that it can raise ethical and legal issues when used to access private information, draw inferences, or predict future crimes based on content posted on social networks.

Meta, the parent company of Facebook and Instagram, asked a court in January of this year to bar Voyager Labs from accessing its applications, arguing that companies like Voyager are part of an industry that covertly collects information in ways that could affect people’s civil rights.

Meta alleges that the technology company created thousands of fake accounts to extract data from more than 600,000 users, with the intention of selling this information to clients interested in conducting surveillance on social networks without being detected.

The growing use of artificial intelligence in online surveillance raises important questions about privacy and ethics, and it remains a topic of debate in New York and around the world.

VANESSA PEREZ
LATEST NEWS EDITORIAL
