“YouTube must accept that its algorithm is designed in a way that harms and misinforms people,” said Mozilla’s senior advocacy officer, Brandi Gerkink, after the release of an investigation by the Mozilla Foundation – the developer of the Firefox browser – into the workings of the video platform, which hosts more than 2 billion active users.
The conclusions are compelling. Based on data donated by thousands of YouTube users, the investigation found that the platform’s algorithm recommends videos containing misinformation, violent content, hate speech and scams – the very kinds of content the platform penalizes in its own usage policies. The research also found that people in countries where English is not the primary language are far more likely – 60% more – to encounter disturbing videos. “Mozilla hopes that these findings, which are just the tip of the iceberg, will convince the public and policymakers of the urgent need for greater transparency in YouTube’s artificial intelligence,” the research team says in its report.
“It is difficult for us to draw any conclusions from this report, since they never define what ‘regrettable’ means and they only share a few videos, not the entire data set,” a YouTube spokesperson explains, adding that some of the content categorized as undesirable includes a pottery-making tutorial, a clip from the TV show Silicon Valley, a crafts video and a Fox Business clip.
The algorithm as a problem
The research lasted ten months and was conducted together with a team of 41 research assistants employed by the University of Exeter in England. To obtain the data, Mozilla used RegretsReporter, an open-source browser extension that turned thousands of YouTube users into watchdogs of the platform. In other words, volunteer users donated their data so that the researchers had access to a rigorous set of YouTube recommendation data.
The research volunteers encountered a wide variety of “regrettable videos”, reporting everything from coronavirus fear-mongering to political disinformation and “grossly inappropriate” children’s cartoons, according to the investigation. 71% of all the videos that volunteers flagged as inappropriate or offensive had been actively recommended by the algorithm itself. Moreover, almost 200 videos that YouTube recommended to volunteers have since been removed from the platform, including several that the company determined violated its own policies. These videos had accumulated a total of 160 million views before being removed, and they drew 70% more views per day than videos that do comply with the platform’s standards.
And there is more. Videos recommended by the algorithm were 40% more likely to be reported than videos the volunteers found through search, and the recommendations were not necessarily related to the content being watched: in 43.6% of cases the recommendation had no relation to the videos the volunteer had been viewing. “Our public data shows that consumption of questionable content coming from recommendations is significantly below 1%, and only between 0.16% and 0.18% of all views on YouTube come from content that violates our rules,” the company responds.
Penalized, but recommended
The investigation not only uncovered numerous examples of hate speech, political and scientific disinformation, and other categories of content likely to violate YouTube’s Community Guidelines. It also found many cases that paint a more complex picture. “Many of the reported videos may fall into the category of what YouTube calls ‘borderline content’: videos that skirt the edges of its rules without actually violating them,” Mozilla explains.
Algorithm aside, YouTube’s behavior policies are very clear to its users, or so they seem. On its official blog, the platform details 11 categories of policy violations. “Here are some common-sense rules that will help you avoid problems. Take them very seriously and always keep them in mind. Don’t look for loopholes or technicalities to get around them,” YouTube advises. The categories are:
- Child safety. “We work closely with the authorities and report situations in which minors are at risk.”
- Impersonation. “Accounts that impersonate another channel or person may be removed.”
- Personal privacy. “If someone has posted your personal information or uploaded a video in which you appear without your consent, you can request that the content be removed.”
- Copyright. “Respect copyright. Only upload content created by you or that you are authorized to use. This means you should not upload videos that you did not create, or use content in your videos whose copyright belongs to someone else.”
- Threats. “Accounts of users who engage in aggressive behavior, threats, harassment, stalking, invasion of privacy, disclosure of third parties’ personal information or incitement of others to commit violent acts or violate the terms of use will be blocked.”
- Misleading metadata and scams. “Do not create misleading descriptions, tags, thumbnails or titles with the intention of increasing the number of views.”
- Harassment and cyberbullying. “It is not acceptable to post abusive comments or videos on YouTube. If harassment crosses the line and becomes a malicious attack, it will be reported and removed from the platform.”
- Violent or explicit content. “It is not allowed to publish violent or gory content whose main purpose is to be shocking, sensational or gratuitous.”
- Hate speech. “We do not allow content that promotes or justifies violence against a person or group of individuals on the basis of their ethnic origin or race, gender, religion, disability, age, nationality, veteran status, caste, sexual orientation or gender identity.”
- Dangerous content. “Do not post videos that incite others, especially children, to take actions that could seriously injure them.”
- Nudity and sex. “Pornography or explicit sexual content is not allowed.”
In the report, Mozilla also includes a series of recommendations for the platform. “We don’t just want to diagnose YouTube’s recommendation problem, we want to solve it. Common-sense transparency laws, better oversight and consumer pressure can help improve this algorithm,” Mozilla explains. The Firefox developer proposes that YouTube publish frequent and complete transparency reports that include information about its recommendation algorithms, that it give people the option to opt out of personalized recommendations, and that lawmakers enact legislation requiring transparency of artificial intelligence systems and protecting independent researchers.
“For years, YouTube has recommended health misinformation, political disinformation, hate speech, and other regrettable content. Unfortunately, the platform has met criticism with inertia and opacity while harmful recommendations persist,” the investigation concludes.