My artificial intelligence students have to give a class presentation on a topic drawn from news or articles published in the media. They can choose any topic related to AI, but the point is not to address technological questions about the design or implementation of AI-based systems or solutions; rather, it is to reflect on how these intelligent technologies affect our lives in one way or another.
One of the groups chose a news story about a bot created by the telecommunications company O2, designed to answer calls from suspected scammers. When a call comes from a number suspected of being used by criminals posing as someone they are not (employees of our phone company, tax officials or bank staff, for example), it is answered by an AI that also pretends to be what it is not: an elderly woman who won't stop talking. If necessary, the lovely lady can even give her bank account details to the would-be scammer, but they will be of no use to him, since she makes them up as she goes along.
According to O2, 69% of Britons have been victims of scammers at some point, and 71% say they would like to take revenge on those who have tried to deceive them or their loved ones. Well, through Daisy, which is the name of the bot, they can. Daisy appears to be a vulnerable person, supposedly easy prey for professional deceivers, and once they have taken the bait, she tries to waste as much of their time as possible. In some cases the conversation has lasted more than 40 minutes, to the despair of the scammer, who after a long chat with an endearing grandmother has ended up empty-handed.
After each student presentation, we spend a few minutes discussing the topic. I give them feedback on both substance and form, so that they also improve their communication skills. On the topic I have just described, conflicting opinions and some genuinely interesting reflections soon emerged. For example, could the scammer sue O2, or even the owner of the telephone line, for having deceived him, or at least for having tried to? Under the European AI regulation, would the use of this bot be prohibited, since it does not identify itself as non-human? Isn't there a gender bias in having simulated an elderly woman as the profile of a particularly vulnerable person? Could scammers deploy a scam Daisy of their own, so as not to waste their time and to approach hundreds or thousands of unsuspecting people simultaneously? Could that scam Daisy tell an anti-fraud Daisy apart from a real person when one answers? Could it also learn continuously, through its interactions with people and machines, honing its art of persuasion and thereby improving its capacity for deception?
The debate about Daisy left us with many questions and very diverse opinions. We have created a world of machines capable of learning and even of deceiving us and one another. Are we building a world in which not even machines can trust each other? Daisy gives scammers a taste of their own medicine by pretending to be someone she is not, but the machines we create may well give us a taste of ours: perpetuating biases, generating distrust, taking our jobs and even our most precious treasure, language. It is in our hands to make sure it does not end up that way, and for that, education remains our best tool. That is why my students necessarily have to think and speak in class.