New York.- Google recently suspended an engineer after dismissing his claim that its artificial intelligence is sentient and self-aware, exposing yet another conflict over the company’s most advanced technology.
Blake Lemoine, an artificial intelligence software engineer at Google, said in an interview that he was suspended Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Lemoine said, he turned over documents to a US senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.
Google said its systems mimicked conversational exchanges and could talk about different topics, but had no awareness. “Our team, including ethicists and technologists, has reviewed Blake’s concerns against our AI principles and has informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing current conversational models, which are not sentient.” The Washington Post was the first to report Lemoine’s suspension.
For months, Lemoine feuded with Google’s managers, executives and human resources staff over his startling claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says that hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Lemoine did. Most experts believe the industry is a long way from computer sentience.
Some researchers have made optimistic claims that these technologies will soon reach consciousness, but many others are quick to dismiss these claims. “If you used these systems, you would never say those things,” said Emaad Khwaja, a researcher at the University of California, who is exploring similar technologies.
Google’s research organization has spent the last few years mired in scandal and controversy over its advances in artificial intelligence. The division’s scientists and other employees have regularly squabbled over technology and personnel issues in episodes that have often spilled into the public arena. In March, Google fired a researcher who had tried to publicly disagree with the published work of two of his colleagues. And the firings of two AI ethics researchers, Timnit Gebru and Margaret Mitchell, who had criticized Google’s language models, have continued to cast a shadow over the group.
Lemoine, a military veteran who has described himself as a priest, an ex-convict and an AI researcher, told Google executives, including Kent Walker, president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were based on his religious beliefs, which he said the company’s human resources department discriminated against.
“They have repeatedly questioned my sanity,” Lemoine said. “They said, ‘Have you been seen by a psychiatrist recently?’” In the months before he was placed on administrative leave, the company had suggested that he take mental health leave.
Yann LeCun, the director of AI research at Meta and a key figure in the rise of neural networks, said in an interview this week that such systems are not powerful enough to achieve true intelligence.
Google’s technology is what scientists call a neural network: a mathematical system that learns skills by analyzing large amounts of data. By identifying patterns in thousands of cat photos, for example, it can learn to recognize a cat.
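The idea of learning patterns from labeled examples can be sketched in miniature. The following is an illustrative toy, not Google’s system: a simple perceptron that learns, from a handful of invented two-feature data points, to separate one cluster of examples from another. All data and function names here are made up for demonstration.

```python
# Toy illustration of "learning by finding patterns in examples".
# This is a minimal perceptron, not Google's technology; the data
# points below are invented for demonstration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn a weight vector and bias that separate two classes of 2-D points."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # 0 when the guess is right
            w[0] += lr * err * x1   # nudge weights toward the correct answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, point):
    """Classify a new point with the learned weights."""
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Two invented clusters: class 1 near (1, 1), class 0 near (-1, -1).
samples = [(1.0, 1.2), (0.9, 1.1), (1.1, 0.8),
           (-1.0, -0.9), (-1.2, -1.1), (-0.8, -1.0)]
labels = [1, 1, 1, 0, 0, 0]

w, b = train_perceptron(samples, labels)
print(predict(w, b, (1.0, 1.0)))    # a point near the first cluster -> 1
print(predict(w, b, (-1.0, -1.0)))  # a point near the second cluster -> 0
```

A real image-recognition network works on the same principle, adjusting millions of parameters from examples rather than two weights from six points.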
In recent years, Google and other leading companies have designed neural networks that learn from vast amounts of prose, including thousands of unpublished books and Wikipedia articles. These “large language models” can be applied to many tasks: they can summarize articles, answer questions, generate tweets, and even write blog posts.
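The core trick behind such models, predicting likely next words from patterns in text, can be illustrated with a deliberately crude sketch. Real systems use deep neural networks trained on enormous corpora; this bigram counter and its tiny invented corpus are only an analogy.

```python
# Toy sketch of the idea behind large language models: predict the
# next word from patterns seen in text. The corpus below is invented,
# and real systems use neural networks, not simple word-pair counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scaled up from word-pair counts on eleven words to learned statistics over billions of sentences, the same predict-the-next-word objective produces the fluent text that led Lemoine to see a mind in the machine.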