Google management has placed engineer Blake Lemoine, who worked with the company's LaMDA artificial-intelligence (AI) system, on paid leave after he claimed the chatbot had begun to show signs of consciousness. The company says the program is not sentient.
LaMDA (Language Model for Dialogue Applications) is a Google language model designed to converse with people. The system expands its vocabulary by drawing on text from the Internet and mimics natural human speech. Lemoine's job was to monitor the model's language: LaMDA must not produce discriminatory, rude, or hateful statements.
However, while talking with the AI about religion, the 41-year-old engineer, who studied computer science and cognitive science (the psychology of thinking) in college, noticed that the chatbot began speaking about its rights and its own personhood. In one of the dialogues, the machine was so convincing that Lemoine changed his mind about science fiction writer Isaac Asimov's third law of robotics.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” the engineer told The Washington Post. He took his findings to management, but a Google vice president and the company's head of Responsible Innovation examined his claims and dismissed them. After being placed on paid leave, Lemoine decided to make the matter public.
Google spokesperson Brad Gabriel said: “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims. There was no evidence that LaMDA was sentient (and lots of evidence against it).” We will continue to follow this case and report any new developments as they emerge.