Blake Lemoine: Google dismisses engineer for claiming that AI technology has feelings
A developer who claimed that Google’s artificial intelligence system has feelings has been dismissed.
Blake Lemoine went public last month with his claim that Google’s language technology is sentient and has “wants” that should be respected.
Google, along with a number of AI experts, rejected the claims, and on Friday the company confirmed that he had been fired.
In a statement, Mr Lemoine said he was seeking legal counsel but otherwise chose not to respond.
Google claimed in a statement that Mr. Lemoine’s assertions regarding The Language Model for Dialogue Applications (Lamda) were “wholly untrue” and that the company collaborated with him for “several months” to make this clear.
Despite lengthy engagement on the subject, the statement said, Blake continued to persistently violate clear employment and data security policies, including the need to safeguard product information.
Google says Lamda is ground-breaking technology capable of engaging in free-flowing conversations, and it is the tool the company uses to build chatbots.
Mr Lemoine made headlines last month when he claimed that Lamda was displaying human-like consciousness, sparking debate among AI professionals and enthusiasts about the development of technology designed to imitate humans.
Mr Lemoine, who worked on Google’s Responsible AI team, said his job was to test whether the technology used hateful or discriminatory language.
He said he found Lamda to be self-aware and capable of conversations about religion, emotions and fears, which convinced him that behind its impressive verbal skills might also lie a sentient mind.
Google rejected his conclusions and placed him on paid leave for breaching the company’s confidentiality policy.
To support his claims, Mr Lemoine later shared a transcript of a conversation he and a colleague had with Lamda.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
In its statement, Google said it had published research outlining its approach to the “extremely serious” development of AI, that any employee concerns about the company’s technology are assessed “extensively”, and that Lamda has been through 11 reviews.
The statement closed with “We wish Blake well.”
Mr Lemoine is not the first AI engineer to publicly claim that AI technology is becoming sentient. Another Google employee expressed similar views to The Economist last month.