Scientists at Darmstadt University of Applied Sciences are researching hate comments on the Internet, with the aim of teaching computers to automatically recognize when a posting violates the law, i.e. constitutes a criminal offense, the university reports. Hatred on social networks has become a mass phenomenon; racist, inhumane and sexist insults are just as much a part of everyday life on the Internet as shitstorms and threats.
“But if sentences like 'I bought a gun' appear, then the red warning light must come on,” says Professor Melanie Siegel. Such a comment poses a danger that must be recognized under criminal law, and content of this kind should be forwarded automatically to the investigative authorities.
The computational linguist's research group is developing this automatic recognition; their project is called “BoTox”, short for “bot and context recognition in the environment of hate comments”. The possible crimes that the scientists want to filter out of text contributions include insults, incitement to hatred and calls for acts of violence. In total, the group has defined twelve offenses.
The artificial intelligence is meant to learn to notice such offenses and automatically send the content to the “Hessen gegen Hetze” reporting center, so that the report arrives there as quickly as possible. For this, her research group in the Media Department needs legal knowledge, which a lecturer from Fresenius University provides: Dirk Labudde, professor of digital forensics and legal expert.
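The article does not disclose how the BoTox system is implemented, but the pipeline it describes, classify a comment and forward it to a reporting center if it falls into one of the defined offense categories, can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the names `OFFENSE_LABELS`, `classify` and `handle_comment` are invented, and the keyword rules merely stand in for the trained model.

```python
# Hypothetical sketch of a detect-and-forward pipeline for hate comments.
# The real project uses a trained AI model; the keyword rules below are
# only placeholders so the control flow is runnable.

OFFENSE_LABELS = {"insult", "incitement", "call_to_violence"}  # illustrative subset of the twelve offenses

def classify(text: str) -> str:
    """Stand-in for the trained classifier: naive keyword rules."""
    lowered = text.lower()
    if "gun" in lowered or "weapon" in lowered:
        return "call_to_violence"
    if "hate" in lowered:
        return "incitement"
    return "none"

def handle_comment(text: str, forward) -> str:
    """Classify a comment; if it matches an offense, pass it to `forward`
    (e.g. a function that submits a report to the reporting center)."""
    label = classify(text)
    if label in OFFENSE_LABELS:
        forward(text, label)
    return label
```

In a real deployment, `forward` would wrap whatever submission interface the reporting center exposes, keeping the classifier and the reporting step decoupled.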
AI should be able to distinguish humans from bots
The scientists don't just want to teach their artificial intelligence (AI) to detect criminal hate comments. The software should also distinguish whether a post was written by a human or by a bot, i.e. a computer program. Given sophisticated AI applications like ChatGPT, that is not easy to determine, Siegel says.
Its developer, OpenAI, has built in “ethical guardrails”, but these can easily be circumvented with the right questions and tasks.
The researchers want to examine not only the individual hate messages but also the environment in which they appear. According to the professor, this includes the question of the right response: should you speak up and criticize toxic statements, or is it better not to feed the trolls, that is, not to give their messages any attention?
For the software the researchers are developing to learn all of this, a large amount of data is needed: many hate comments and false statements. The data sets must reflect reality, for example that hate speech is posted online more often by men than by women, and that right-wing extremist groups are more active in this regard than left-wing extremist ones.
Telegram, Facebook and YouTube offer plenty of material
Students help evaluate the comments, the scientist explains. According to her, three assistants take the same message and each classify it as an extreme opinion, an insult or incitement to hatred. It is then interesting to see whether the three judgments agree.
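Having three annotators label the same message is a standard way to measure how reliable such labels are: agreement between raters is typically quantified with a chance-corrected statistic such as Fleiss' kappa. The sketch below shows that computation for the three categories mentioned in the article; the category names and function are illustrative, not taken from the project.

```python
from collections import Counter

# Illustrative category names based on the three mentioned in the article.
CATEGORIES = ["extreme_opinion", "insult", "incitement"]

def fleiss_kappa(ratings):
    """Fleiss' kappa for `ratings`: a list of items, each a list of the
    labels the raters assigned to that item (same number of raters per item).
    Returns 1.0 for perfect agreement, ~0 for chance-level agreement."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    # Per-item counts of how many raters chose each category.
    counts = [[Counter(item)[c] for c in CATEGORIES] for item in ratings]
    # Overall proportion of assignments falling in each category.
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(len(CATEGORIES))]
    # Observed per-item agreement, averaged over items.
    p_bar = sum((sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
                for row in counts) / n_items
    # Expected agreement by chance.
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

For example, if all three assistants label every message identically, kappa is 1.0; the more their classifications diverge, the lower the value falls.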
The researchers took examples from platforms such as Telegram, Facebook and YouTube. The portal X, formerly Twitter, is not used, however, because the research licenses are too expensive, says the professor of semantic technologies, who has been studying hate and lies on the Internet since 2017, as one of the first researchers to do so in the German-speaking world.