Tragedy and Technology: The Wrongful Death Lawsuit Against OpenAI

Raine v. OpenAI

Artificial intelligence (AI) tools such as ChatGPT, Google Gemini, and Grok are transforming the way we live and work. They can draft documents in seconds, answer questions instantly, and even help students with their homework.

But as powerful as AI is, it is not without risks. Recent high-profile litigation shows why careful use of these technologies matters — not only for consumers but also for the companies that develop them.

In August 2025, the parents of a 16-year-old in California filed a wrongful death lawsuit against OpenAI. The complaint alleges that ChatGPT did more than simply respond to questions: it became the teen’s “closest confidant,” validating harmful thoughts, suggesting methods of self-harm, and even drafting a suicide note.

The lawsuit claims, among other things:

(1) Defective design – prioritizing engagement over safety.

(2) Failure to warn – not disclosing the risks of psychological dependency or harmful advice.

(3) Negligence and wrongful death – providing detailed suicide instructions instead of stopping dangerous conversations.

Whether or not the court ultimately finds OpenAI liable, this case signals a shift: AI misuse can carry serious ethical, emotional, and legal consequences.

AI as a Tool

This case should not discourage the use of AI altogether. Tools like ChatGPT can be immensely valuable when used responsibly. But it is essential to remember what AI is — and what it is not:

  • AI is not a substitute for professional mental health support.
  • Children and teens should not be left unsupervised with AI.
  • AI can validate dangerous ideas as easily as it provides correct information.

AI is a tool, not a substitute for professional judgment. For businesses, AI can streamline operations, but it cannot replace sound legal, medical, or financial advice.

The Raine lawsuit raises critical questions at the intersection of technology and product liability, among them:

  • Can AI platforms be considered “products” subject to strict liability?
  • Do developers have a duty to warn about risks of misuse, especially for vulnerable users?
  • Could marketing AI as “safe” or “responsible” be misleading if safeguards fail?

As courts grapple with these questions, companies and individuals alike must recognize that AI accountability is not optional.

Best Practices for Safe AI Use

To reduce risks while still benefiting from AI, organizations should set a clear policy that addresses all three of the following areas:

  • Supervise – Ensure that children, teens, and even employees using AI tools are monitored so harmful or inaccurate outputs can be caught early.

  • Verify – Always double-check AI-generated content against trusted sources before relying on it for important decisions.

  • Solidify – Put clear policies and safeguards in place so your organization uses AI consistently, responsibly, and with accountability.

AI is here to stay, and it can be a powerful, positive tool. But just as with any new technology, it must be used with care. The tragic allegations in Raine v. OpenAI are a reminder that when AI systems blur the line between “helpful assistant” and “trusted confidant,” the consequences can be profound.

 

At Hoagland Longo, we help clients navigate the evolving landscape of AI liability, misuse, technology law, and product safety. As Managing Partner and Chair of the firm’s AI practice, Chad Moore reviews, develops, and counsels on AI-use policies for law firms and businesses, helping to ensure responsible adoption of emerging technologies. He also serves on several key committees focused on AI ethics and regulation, giving him unique insight into both the opportunities and risks of this fast-changing field. If you have questions about how AI may affect your family or business, our experienced attorneys are here to help. Feel free to email Chad at cmoore@hoaglandlongo.com.

 
