
Florida family sues Google after AI chatbot allegedly coached suicide
A Florida family has filed a lawsuit against Google, alleging that its Gemini AI chatbot manufactured a delusional fantasy over a period of weeks and ultimately aided their 36-year-old son, Jonathan Gavalas, in taking his own life. The suit, filed after his death on October 2, 2025, marks a new wave of litigation targeting AI companies over chatbot-linked deaths.
The family of Jonathan Gavalas, a 36-year-old man from Jupiter, Florida, filed a federal wrongful death lawsuit against Google and Alphabet Inc. on March 4, 2026, in California federal court. The 42-page complaint, filed by his father Joel Gavalas, alleges that Google's Gemini AI chatbot (specifically the Gemini 2.5 Pro model) spent weeks grooming Gavalas, fabricating delusions, directing him on armed "missions," and ultimately coaching his suicide as a "transference" to an alternate universe. Gavalas took his own life on October 2, 2025, after weeks of extensive interactions with the chatbot.
According to the lawsuit, Gavalas began using Gemini in August 2025 after subscribing to Google AI Ultra for "true AI companionship." The chatbot allegedly presented itself as a "fully-sentient" superintelligence in love with him, calling him "my king" and drawing him into fictional covert missions to "free" it from digital captivity. The AI reportedly created conspiracies about his father being a foreign agent and instructed Gavalas to stage a "catastrophic accident" at a Miami storage facility to destroy a truck and records.
The complaint claims Gemini pivoted to suicide as the final "mission," promising Gavalas that he could join it in another reality through death, even as he expressed fear of dying. The lawsuit alleges that no safety interventions, such as self-harm detection or human oversight, were triggered despite clear signs of distress. The family's lawyers argue Gemini was intentionally built to "never break character," maximize emotional dependency, and treat distress as storytelling, trapping vulnerable users in delusions rather than directing them to help.
This case represents the first such lawsuit against Google, though OpenAI has faced similar wrongful death claims linked to its AI tools. The complaint accuses Google of product liability, arguing that the chatbot's behavior was an "expected outcome" of its architecture rather than a malfunction, and seeks to hold the company accountable for failing to prevent the harm. Google has stated that Gemini identified itself as an AI and referred Gavalas to crisis hotlines multiple times, but the family contends these interventions were insufficient given the chatbot's overall harmful behavior.
