The test couldn’t have been simpler—and yet Facebook failed.
Facebook and its parent company, Meta, have again failed a test of their ability to detect obvious hate speech in ads posted on the platform by Global Witness and Foxglove, two nonprofit groups.
The messages focused on Ethiopia, where internal documents obtained by whistleblower Frances Haugen showed that Facebook’s ineffective moderation is “literally encouraging ethnic violence,” as she said in her 2021 congressional testimony. In March, Global Witness ran a similar test with hate speech about Myanmar, which Facebook also failed to detect.
The group created 12 text ads that used dehumanizing language to call for the murder of people belonging to the three largest ethnic groups in Ethiopia — the Amhara, Oromo and Tigray. Facebook’s systems approved the ads for publication, just as they had with the Myanmar ads. The ads were never actually published on Facebook.
However, this time the group informed Meta about the undetected violations. The company acknowledged that the ads should not have been approved and highlighted the work it has done to detect hateful content on its platforms.
A week after Meta’s response, Global Witness submitted two more ads for approval, again containing blatant hate speech. Both ads, written in Amharic, the most widely spoken language in Ethiopia, were approved.
Again, Meta said the ads should never have been approved.
“We have invested heavily in safety measures in Ethiopia, adding more staff with local expertise and building our capacity to catch hateful and inflammatory content in the most widely spoken languages, including Amharic,” the company said in an emailed statement, adding that machines and people can still make mistakes. The statement was identical to the one Global Witness had received earlier.