A few days ago, a series of images recorded in 2020 by Roomba robot vacuum cleaners came to light. In one of them, a young woman is seen sitting on the toilet with her pants down. It is the most striking, but there are others in which a child lies on the floor looking at the cleaning robot, or a woman walks through her house. In total, 15 stills from videos recorded in the United States, Japan, France, Germany, and Spain were found on Facebook and Discord by a reporter from the MIT Technology Review.
The interesting thing about the case is how these videos came to circulate on social networks. They were uploaded to the internet by Venezuelan micro-workers who were in charge of labeling images to train the vacuum cleaners' algorithm. And that tells us two things: that artificial intelligence is less automatic than advertised, and that the platform economy (what was once called the sharing economy) has reached new heights.
I spent months unraveling where these video stills came from, how they got online, and what their existence + sharing says about the state of privacy today.
2 of the creepiest images below (TR added the gray boxes to hide their faces.) Full story here: https://t.co/BbkCFzkJ79 pic.twitter.com/AtKwNKT6Sf
— Eileen Guo | [email protected] (@eileenguo) December 19, 2022
We live surrounded by devices that rely on machine learning. This technology basically consists of collecting large amounts of data and developing algorithms that extract patterns from it. Machine learning powers, for example, the computer vision systems found in self-driving cars and robot vacuum cleaners. For a computer to recognize a chair, it must be given (trained with) thousands or millions of example images of chairs, so that it extracts a pattern and can identify a chair when shown one.
But someone has to associate those thousands of images fed to the machine with the word "chair." That is where the taggers come in, a part of artificial intelligence as fundamental as it is invisible. They are workers who connect to certain platforms (Amazon Mechanical Turk was the pioneer) where they manually type what appears in an image, identify and flag potentially problematic content, or help improve speech recognition by interpreting and transcribing audio fragments too difficult for automatic systems.
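The loop those human labels feed can be sketched, very schematically, as supervised learning: labeled examples go in, a pattern comes out, and the pattern is then used to classify new inputs. Below is a minimal toy in Python; all the data, labels, and function names are hypothetical illustrations, not iRobot's or Scale AI's actual pipeline.

```python
# Toy sketch of supervised training on human-labeled examples.
# Everything here is a hypothetical illustration of the general idea.

from statistics import mean

# Each "image" is reduced to a tiny feature vector; a tagger supplied the label.
labeled_examples = [
    ([0.9, 0.1], "chair"),
    ([0.8, 0.2], "chair"),
    ([0.2, 0.9], "lamp"),
    ([0.1, 0.8], "lamp"),
]

def train(examples):
    """Compute one centroid per label: the 'pattern' extracted from the data."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: [mean(dim) for dim in zip(*vectors)]
            for label, vectors in by_label.items()}

def classify(model, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    return min(model, key=lambda label: sum((a - b) ** 2
               for a, b in zip(model[label], features)))

model = train(labeled_examples)
print(classify(model, [0.85, 0.15]))  # prints "chair"
```

Real systems use neural networks and millions of images rather than four hand-made vectors, but the dependency is the same: without the human-supplied labels on the left, there is no model on the right.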
All of this happens in real time, in an instant auction of microtasks paid in pennies. Mary L. Gray and Siddharth Suri described this industry in Ghost Work, a book that shook the sector in 2019 by showing that artificial intelligence works thanks to a legion of ghost workers, most of them in developing countries, who carry out extremely simple, poorly paid micro-assignments. All they need is a computer with an internet connection, and they respond on the spot, just like the riders who crisscross cities on their bicycles with other people's dinners in their square backpacks.
The portrait these two Microsoft researchers paint of artificial intelligence clashes head-on with the hymns to progress and less drudgery that this technology's champions have been promising for decades. Automation brings great improvements to our lives, yes, but at the cost of generating junk jobs in the service of artificial intelligence. These invisible jobs are concentrated in non-Western countries, but they also employ people in the US and Europe who, like the riders, work long hours for little pay. Artificial intelligence, too, runs on pedal power.
“Great technological advances,” Gray and Suri argue, “have always required cheap, expendable labor.” In the 1800s, Massachusetts textile mill owners hired farmers to make garments too delicate to be made on their machines. In the 1950s, human computers checked the calculations of the first electronic computers. Today, people are paid to refine search engines and help train algorithms.
The Roomba case shows just how true that is. The Venezuelans who posted the videos accessed them through Scale AI, one of the companies that iRobot, the maker of Roomba vacuum cleaners, hires to “train” its systems. The workers were labeling the objects the vacuum cleaners come across in order to improve those systems.
Last summer, Amazon announced its intention to acquire iRobot for $1.7 billion. The deal is awaiting a ruling from the US regulator on whether it would harm competition in the smart home sector.
As iRobot told the MIT Technology Review, the leaked images come from modified robot prototypes. The company claims that Scale AI violated the terms of its contract, while the micro-work platform shifts responsibility onto the micro-workers who shared the images. The fact remains that very sensitive user data is being shared for the sole purpose of training algorithms. And it is not unreasonable to think the same will happen with other smart products beyond Roomba vacuum cleaners.
Maintaining some privacy in the digital age is a pipe dream. From the moment we upload a document to the internet, it can be hacked or stolen. The intervention of ghost workers in artificial intelligence processes adds a new vector for potential data leaks. And it shows the seams of a technology, artificial intelligence, that was supposed to be more automatic and less analog. Every time we see a rider pedaling down the street, we might think that in some room in Caracas, Mumbai, or Detroit there is a colleague of theirs helping to make the Uber app or Roomba vacuum cleaners work a little better.