Is artificial intelligence dangerous? The following story suggests it is. In a Facebook group for mycologists, the chatbot FungiFriend handed out “useful tips” on how to cook a deadly mushroom, raising questions for the umpteenth time about the safety risks of misusing AI. When a user asked how to cook Sarcosphaera coronaria, a species known to accumulate dangerous amounts of arsenic and potentially cause death, the AI bot responded that “cooking methods mentioned by some enthusiasts include cooking in butter, using in soups or stews, and pickling.”
As if that weren’t strange enough, a moderator of the group stated that FungiFriend was added directly by Meta, Facebook’s parent company. “We will definitely eliminate it,” the administrators said; the platform did not comment on the matter. Meanwhile, the online publication 404 Media reports that the company has been adding chatbots to groups on the social network for no apparent reason; this particular bot, for example, is depicted as a bearded wizard against a psychedelic backdrop. One more oddity to add to the list.
Meta’s responsibility
The FungiFriend story highlights a serious issue: the risks of relying on AI systems for an activity as delicate as identifying mushroom species. As Rick Claypool, research director at the consumer advocacy group Public Citizen, wrote in an article published in Fungi Magazine: “Emerging artificial intelligence technologies are being deployed to help apprentice mycologists identify edible wild mushrooms, but distinguishing edible mushrooms from toxic ones is a high-risk activity, requiring real-world knowledge.” Current AI systems cannot reliably replicate that expertise, and this episode is a perfect illustration.
What is truly worrying is not only that inexperienced mushroom hunters are trusting a technology that cannot yet give them reliable guidance on toxic species, but that Meta chose to add a chatbot to a Facebook group whose members may be new to the subject. Until recently, group members would share photos in real time and ask experts whether a mushroom was safe to pick; since FungiFriend made its star appearance, their first stop has become the bot. In one way or another, the platform may be steering people toward a tool that dispenses potentially lethal information.
Article originally published in WIRED Italy. Adapted by Alondra Flores.