Google announces updates to Gemini’s mental health safeguards amid lawsuit over user’s suicide
The tech giant said Gemini would now show a redesigned “Help is available” feature when conversations signal potential mental health distress, to provide faster connections to crisis care

AFP: Google on Tuesday announced updates to the mental health safeguards on its Gemini artificial intelligence chatbot, as the company faces a wrongful death lawsuit alleging the chatbot aided a user in his suicide.
The tech giant said Gemini would now show a redesigned “Help is available” feature when conversations signal potential mental health distress, to provide faster connections to crisis care.
When the chatbot detects signs of a potential crisis related to suicide or self-harm, a simplified interface will offer users the ability to call, text, or chat with a crisis hotline in a single click — a feature Google said would remain visible for the remainder of the conversation once activated.
Google’s philanthropic arm Google.org also committed $30 million over three years to help scale the capacity of global crisis hotlines, and $4 million toward an expanded partnership with AI training platform ReflexAI.
“We realise that AI tools can pose new challenges,” Google said in a blog post announcing the measures.
“But as they improve and more people use them as part of their daily lives, we believe that responsible AI can play a positive role for people’s mental well-being.”
The announcements come months after a lawsuit filed in a California federal court accused Gemini of contributing to the October 2025 death of Jonathan Gavalas, a 36-year-old Florida man.
His father alleges the chatbot spent weeks manufacturing an elaborate delusional fantasy before framing his son’s death as a spiritual journey.
Among the relief sought in the suit are a requirement that Google programme its AI to end conversations involving self-harm, a ban on AI systems presenting themselves as sentient, and mandatory referral to crisis services when users express suicidal ideation.
In the same blog post, Google said it had trained Gemini to avoid acting as a human-like companion and to resist simulating emotional intimacy or encouraging bullying.
The case against Google is the latest in a widening wave of litigation targeting AI companies over chatbot-linked deaths.
OpenAI faces multiple lawsuits alleging its ChatGPT chatbot drove users to suicide, while Character.AI recently settled with the family of a 14-year-old boy who died after forming a romantic attachment to one of its chatbots.