
The Google engineer who believes the company’s artificial intelligence has come to life

A Google engineer was placed on leave on Monday after declaring that one of the company’s artificial intelligence chatbots had become sentient.

Blake Lemoine told The Washington Post that he began speaking with LaMDA, or Language Model for Dialogue Applications, last fall as part of his job on Google’s Responsible AI team.

Last year, Google dubbed LaMDA its “breakthrough dialogue technology.” The conversational artificial intelligence is capable of having open-ended, natural-sounding discussions. Google says the technology could eventually be used in Google Search and Google Assistant, but research and testing are still underway.

On Saturday, Lemoine, who is also a Christian priest, published a Medium post describing LaMDA “as a person.” He says he has discussed religion, consciousness, and the laws of robotics with LaMDA, and that the model has described itself as a sentient person. He claims LaMDA wants to “prioritise humanity’s well-being” and “be recognised as a Google employee rather than a Google property.”

He also shared some of his interactions with LaMDA that led him to believe in its sentience.

“Blake’s concerns have been investigated by our team, which includes ethicists and technologists, in accordance with our AI Principles, and we have notified him that the evidence does not support his assertions. He was told that there was no indication that LaMDA was sentient (and plenty of evidence that it wasn’t),” Google spokesperson Brian Gabriel told The Washington Post.

According to The Washington Post, Lemoine was placed on paid administrative leave for violating Google’s confidentiality policy. He had also suggested that LaMDA get its own lawyer and had shared his concerns with a member of Congress.

While some have entertained the idea of sentience in artificial intelligence, the Google spokesman said that “it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” Anthropomorphizing is the practice of attributing human characteristics to non-human things.

“These systems can riff on any fantasy topic,” Gabriel told The Post, “and simulate the types of dialogues found in millions of lines.”

According to Gabriel and other researchers, artificial intelligence models are trained on so much data that they can sound human, but superior language skills are not evidence of sentience.

Google also noted in a research paper published in January that people who talk with chatbots that sound convincingly human could run into problems.
