Recently raised concerns center on criminals' ability to reproduce a person's voice and image, attribute words and behavior to innocent people, and use the fabrications to blackmail them or their relatives.
The greatest danger lies in this technology's ability to blur the line between the real and the fake, giving criminals effective and inexpensive tools.
Digital transformation and information security expert Ziyad Abdel-Tawab agrees. In his comments to Sky News Arabia, he says that artificial intelligence applications can drive up rates of crime and fraud, but he also sees the possibility of confronting them with artificial intelligence applications in turn, saying: "Nothing defeats artificial intelligence except artificial intelligence."
Counter applications
Abdel-Tawab explains: "Voice cloning is indeed one of the capabilities offered by artificial intelligence applications, and it is not limited to voices; images and videos can also be faked through what is known as deepfakes."
The rise in crime driven by artificial intelligence technologies "is no longer just a fear but a reality: the FBI confirmed that crimes in the United States carried out using deepfakes rose by 322 percent between February 2022 and February 2023," according to the information security expert.
At the same time, he points out that suspected deepfake clips can now be referred to experts, who use counter-applications built on artificial intelligence to detect cloned material.
Accordingly, Abdel-Tawab advises anyone targeted by these crimes not to panic or give in to blackmail, but to turn to the authorities specialized in investigating cybercrime, which can examine and expose fake videos.
He assures those affected that "although it is difficult to distinguish between real and cloned voices, difficulty does not mean impossibility."
Abdel-Tawab notes, for example, that applications are already available that detect text written with "ChatGPT" and estimate what percentage of a passage is machine-generated; as the application improves at phrasing content, the technologies for detecting it will develop in parallel.
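The article does not describe how such detectors work internally. Purely as an illustration, and not any specific product's algorithm, one weak signal detectors are often said to use is "burstiness": human writing tends to vary sentence length more than model output does. A toy scorer based on that single heuristic might look like this (the function name and the 0-100 scale are our own invention for this sketch):

```python
import re
from statistics import mean, pstdev

def burstiness_score(text: str) -> float:
    """Toy heuristic: low variation in sentence length is one weak
    signal of machine-generated text. Returns a 0-100 "AI-likeness"
    score. Real detectors rely on model-based features, not this
    heuristic alone."""
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 50.0  # too little text to judge either way
    # Coefficient of variation of sentence lengths.
    variation = pstdev(lengths) / mean(lengths)
    # Low variation -> high score, clamped to [0, 100].
    return max(0.0, min(100.0, (1.0 - variation) * 100.0))
```

For instance, three identical-length sentences score 100.0 (maximally uniform), while text mixing very short and very long sentences scores much lower. A production detector would combine many such signals with a trained model rather than rely on one statistic.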
The same applies to the use of artificial intelligence in launching cyberattacks: the confrontation then takes place through defensive systems built on the same technologies.
Awareness is the first line of defense
It is not enough, however, to wait for a crime to occur. Users must be taught not to trust any video, audio clip, or even a phone call in the voice of someone they know, because it may be fabricated, as the expert himself warns.
Amid persistent calls to legislate the ethics of artificial intelligence, members of the European Parliament approved amendments to a draft of rules for the technology, and the first law is expected to be issued at the end of this year or during the next.