A group of thirteen current and former employees of Google and OpenAI recently published a letter expressing strong concerns about the potential risks of artificial intelligence (AI). These experts, who work or have worked in two of the most advanced research laboratories in the world, warn of the dangers this technology could pose, going so far as to raise the possibility of human extinction.
In the document, entitled “A Right to Warn about Advanced Artificial Intelligence”, the authors highlight that autonomous AI systems could escape human control and pose an existential danger. “We are current and former employees of companies at the forefront of artificial intelligence, and we believe that this technology can provide unprecedented benefits to humanity,” the letter reads, before adding: “But we are also aware of the risks associated with AI, ranging from the reinforcement of existing inequalities, to manipulation and misinformation, up to the loss of control over autonomous artificial intelligence systems, which could lead to the extinction of human beings.”
The role of big tech and conflicts of interest
The signatories of the document point the finger at big technology companies, accusing them of not doing enough to mitigate the risks associated with the development of AI. According to them, these companies have economic interests too large to allow for effective oversight. “Companies that develop artificial intelligence have economic incentives to evade effective supervision,” they state in the letter. “These companies hold important non-public information about the capabilities and limitations of their systems, but have no serious obligation to share it with governments or civil society.”
According to the Google and OpenAI employees, greater protection for whistleblowers is needed
The “Right to Warn” movement is calling for greater protection for employees who wish to report risks related to AI. Standard confidentiality agreements prevent workers from voicing their concerns publicly, and current legal protections for whistleblowers are insufficient because they focus on illegal activities that are already regulated. The letter concludes by asking companies to commit not to enter into agreements that prohibit internal criticism of AI risks, and not to retaliate against employees who raise such concerns.
The question of artificial intelligence is becoming increasingly central to public and scientific debate. What do you think? Should technology companies be more transparent about the risks related to AI?