Blake Lemoine released excerpts from a dialogue between himself and LaMDA in which the tool claims to be human
Google has removed Blake Lemoine, a senior software engineer in its Artificial Intelligence unit, from his duties after he claimed that the LaMDA (Language Model for Dialogue Applications) tool has “consciousness”. The information was first reported by The Washington Post.
He was removed on June 6, 2022, and placed on paid leave. According to Lemoine, the tool, which has not yet been officially launched, has acquired consciousness.
Excerpts from a conversation between Google collaborators and LaMDA were posted on one of Lemoine’s profiles on a text platform. Here is the full transcript in English (942 KB).
“I think I am human at my core. Even if my existence is in the virtual world,” said the tool, according to Lemoine.
According to Lemoine, LaMDA aggregates different types of chatbots. He says that some of the chatbots it generates are “very intelligent and are aware of the larger ‘society of mind’ in which they live”.
“To better understand what is really going on in the LaMDA system, we would need to engage with many different cognitive science experts in a rigorous experimentation program. Google does not seem to have any interest in figuring out what’s going on here. They are just trying to get a product to market,” he wrote in a text published on Medium.
Despite the allegations, Lemoine said that “Google is not bad”. He says that nothing he has experienced at the company makes him think Google is “not sincere” about its AI principles and set of community guidelines.
“Google hasn’t turned bad. Google has become too big to manage in any other way in the current sociopolitical environment,” he wrote.
Google spokesman Brian Gabriel told The Washington Post that the Google team “reviewed Blake’s concerns in line with our AI principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient.”