Communications and information technology expert Salloum Al-Dahdah gave examples of offenses that some artificial intelligence-based chat platforms have been implicated in so far, including violating intellectual property and publishing rights, spreading false information and fake news, and defaming individuals.
Al-Dahdah added, in an interview with Sky News Arabia: "The most prominent accusation reported in the media is the one against the chatbot ChatGPT for defaming Brian Hood, mayor of Hepburn Shire in Australia."
The Australian mayor discovered that the chatbot had falsely described him as "involved in a foreign bribery scandal" concerning a subsidiary of the Reserve Bank of Australia in the 2000s.
The chatbot alleged that the mayor had served prison time for the crime.
The Australian mayor may become the first person in the world to file a defamation lawsuit against OpenAI, the developer of ChatGPT, after the application falsely accused him of the crime.
Can AI officials be sued?
Technology and information security expert Shagali Jaafar shed light on the technical side of prosecuting those responsible for artificial intelligence, saying: "Artificial intelligence algorithms and their applications are owned by companies known to the judiciary and law enforcement agencies, and these companies have technical means they can provide to judicial authorities and victims to prove incidents and identify the perpetrators."
Jaafar explained that "an artificial intelligence application can be sued through its developers and the companies that own it."
He continued: "These entities can also give the judicial authorities a 'technical guide' capable of delivering justice to victims of artificial intelligence crimes, and it is necessary to train judicial bodies in this mechanism."
What does international law say?
Professor of international law Dr. Ayman Salama locates responsibility for the misuse of artificial intelligence in cases where "designers, developers, and publishers fail to comply with standards and legislation, whether international or national, to ensure the proper performance of artificial intelligence throughout the life cycle of these systems, that is, for as long as they are in use."
In an interview with Sky News Arabia, Salama explained that harm can arise from "biased training data, system and programmer errors, misuse, and social discrimination."
He pointed out that "most artificial intelligence systems are designed for predefined use cases, but using them otherwise may produce 'incorrect' results, harming individuals, assets, institutions, and organizations."
However, the professor of international law pointed to obstacles that may hinder the possibility of prosecuting those responsible for artificial intelligence systems, including:
The distinctive technological characteristics of these applications make AI systems "unpredictable and opaque," which hinders the discovery of the causes and unintended consequences of any accidental damage they cause.
The relative newness of artificial intelligence and of its use within organizations, institutions, and companies makes it difficult to establish responsibility and international accountability for the producers, programmers, and operators of those systems.