Google fires engineer who said AI tech is sentient
Google has fired one of its engineers who said the company's artificial intelligence (AI) system has feelings.


Blake Lemoine, the Google engineer who publicly claimed that the company’s LaMDA conversational artificial intelligence is sentient and should therefore have its "wants" respected, has been fired.
In June, Google placed Lemoine on paid administrative leave for breaching its confidentiality agreement after he contacted members of the government about his concerns and hired a lawyer to represent LaMDA.
Google, along with several AI experts, dismissed the claims, and on Friday the company confirmed he had been sacked.
Mr Lemoine told the BBC he is getting legal advice, and declined to comment further.
In a statement, Google said Mr Lemoine's claims about the Language Model for Dialogue Applications (LaMDA) were "wholly unfounded" and that the company had worked with him for "many months" to clarify this.
"So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," the statement said.
LaMDA is a breakthrough technology that Google says can engage in free-flowing conversations. It is the company's tool for building chatbots.
Mr Lemoine started making headlines last month when he said LaMDA was showing human-like consciousness. His claim sparked discussion among AI experts and enthusiasts about the advancement of technology designed to impersonate humans.
Mr Lemoine, who worked for Google's Responsible AI team, told The Washington Post that his job was to test if the technology used discriminatory or hate speech.
He found LaMDA showed self-awareness and could hold conversations about religion, emotions and fears. This led Mr Lemoine to believe that behind its impressive verbal skills might also lie a sentient mind.
His findings were dismissed by Google and he was placed on paid leave for violating the company's confidentiality policy.
Mr Lemoine then published a conversation he and another person had with LaMDA to support his claims.
In its statement, Google said it takes the responsible development of AI "very seriously" and published a report detailing this. It added that any employee concerns about the company's technology are reviewed "extensively", and that LaMDA has been through 11 reviews.
"We wish Blake well", the statement ended.
Mr Lemoine is not the first AI engineer to go public with claims that AI technology is becoming more conscious. Also last month, another Google employee shared similar thoughts with The Economist.
SOURCE: BBC
