Artificial intelligence (AI) has been gaining ground in various areas of our daily lives, offering a wide range of functions that, if used correctly and on secure platforms, can be beneficial. However, ignorance and a lack of caution when interacting with these technologies can expose users to private data leaks without their consent, as is the case with data shared on the most popular romantic chatbots.
Recent research by experts at the Mozilla Foundation revealed that around Valentine's Day, a period of high demand for romantic chatbots, more than 24,353 data trackers were detected per minute across a total of 11 of these virtual platforms. This finding sheds light on the worrying practice of collecting personal information, much of which goes to advertising giants like Facebook and other companies.
Romantic chatbot applications, which present themselves under the guise of offering companionship and emotional support to users, show a lack of transparency regarding the handling of the data collected and the functioning of their AI models.
According to the statement issued by the Mozilla Foundation, many of these AI-powered platforms use deceptive marketing strategies to position themselves as mental health and wellness tools, concealing their true data collection and use practices.
Friends, lovers, or simple algorithms? The contradiction of AI chatbots
According to analysis by the Mozilla Foundation, romantic chatbot applications tend to promote themselves as virtual companions capable of becoming “friends, lovers or soulmates” of users. However, this apparent closeness and empathy hides a network of contradictions and risks to the privacy of users.
A notable example is the case of Talkie Soulful AI, whose terms and conditions exempt the company from any responsibility for content generated by the chatbot, including possible indirect damages or loss of data.
Similarly, Romantic AI presents itself as a tool to “maintain mental health”, although its terms and conditions make clear that it does not offer medical or psychological care services, disclaiming responsibility for any consequences arising from use of the platform.
The most “dangerous” chatbots that share data according to the Mozilla Foundation
Among the 'chatbot' applications identified as problematic by Mozilla Foundation researchers are:
- Talkie Soulful AI
- Chai Research Corp. Chai
- iGirl: AI Girlfriend
- Romantic AI
- Anima: My Virtual AI Boyfriend
- Replika: My AI Friend
- Genesia AI Friend & Partner
- Anima: AI Friend & Companion
- Mimico: Your AI friends
- EVA AI Chat Bot & Soulmate
- CrushOn.AI
This study highlights the urgent need for stricter regulation regarding data handling by artificial intelligence applications, as well as the importance of users being aware of the potential risks when interacting with these technologies.
So, from now on, you will surely think twice before sharing your data or signing up for any application without first reading its terms and conditions and understanding what it does with your information.