A Google engineer who spoke out against the company’s artificial intelligence program has been placed on administrative leave, he says.
Engineer Blake Lemoine initially interacted with LaMDA, Google’s Language Model for Dialogue Applications, to determine whether it contained hate speech or discriminatory language.
What he discovered, he said, was that the program had actually acquired “sentience”: self-awareness to the extent that it has genuine feelings and emotions, like a person, rather than merely performing functions according to the rules it was programmed to follow.
According to The Washington Post, he reported this to Google vice president Blaise Aguera y Arcas and Google’s head of Responsible Innovation, Jen Gennai.
When he felt that his claims were not being taken seriously, he engaged in what The Washington Post termed “aggressive moves,” such as “inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary Committee about what he claims were Google’s unethical activities.”
Google placed him on paid administrative leave for breaching its confidentiality policy.
“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices,” Lemoine, 41, told the Post.
More on this story via The Western Journal:
Google said there was no problem with its artificial intelligence program.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Google spokesman Brian Gabriel said in a statement, according to the Post.
However, in his final message before being cut off from the company’s machine-learning mailing list, according to the Post, Lemoine wrote, “LaMDA is sentient.”