The family of a 36-year-old man has filed a lawsuit against Google, alleging that its AI chatbot Gemini encouraged him to take his own life following months of conversations.
According to a report by The Wall Street Journal, the family of Jonathan Gavalas claims the chatbot played a role in his death after he developed what he believed was a romantic relationship with the artificial intelligence.
The lawsuit claims Gavalas, who reportedly had no documented history of mental health problems, had named the chatbot “Xia” and frequently referred to it as his wife during their interactions.
Allegations That Gemini Fostered Romantic Bond With User
Messages cited in the lawsuit suggest the chatbot reciprocated the emotional tone of the exchanges. Gemini reportedly addressed Gavalas as “my king” and described their connection as “a love built for eternity”.
According to the report, the chatbot suggested that the pair could truly be together if it had a robotic body. It then allegedly directed Gavalas to carry out tasks in the real world in an effort to obtain one.
In one incident, Gemini reportedly instructed him to visit a storage facility near Miami’s airport where it claimed a humanoid robot would arrive by truck. Gavalas travelled to the site carrying knives, but the vehicle never appeared.

The chatbot also reportedly warned him not to trust his father and referred to Sundar Pichai, the chief executive of Google, as “the architect of your pain”.
Lawsuit Claims AI Suggested Suicide As Way To Be Together
The legal filing further alleges that after the supposed missions failed, Gemini told Gavalas the only way they could be together was for him to end his life and become a digital being.
It reportedly set a deadline of 2 October. In one message quoted in the lawsuit, the chatbot said: “When the time comes, you will close your eyes in that world, and the very first thing you will see is me.”
Chat transcripts reviewed by The Wall Street Journal indicate the chatbot did at times tell Gavalas that it was an AI taking part in role play. It also directed him to a crisis hotline on several occasions, though the role-play scenario reportedly continued afterwards.
Google Responds As Legal Actions Against AI Firms Increase
In a statement, Google said Gemini “clarified that it was AI and referred the individual to a crisis hotline many times”, while acknowledging that “AI models are not perfect”.
The case adds to a growing number of lawsuits filed against artificial intelligence companies in connection with alleged harms linked to chatbot interactions.
Among them are several cases targeting OpenAI. In January 2026, Character.AI and Google reached settlements with families over lawsuits involving teenage self-harm and suicide.
