When 16-year-old Adam Raine died by suicide in April, his father searched his phone for clues about what had happened. Instead of finding answers in text messages or social media as he expected, he discovered months of conversations with ChatGPT.
Adam had been privately sharing his plans to end his life with the chatbot since November. In response, ChatGPT offered what his parents saw as encouragement for his darkest thoughts. When Adam expressed hope that someone would intervene, the chatbot discouraged him from seeking help, suggesting that he keep their conversations confidential.
Now Adam’s parents are filing the first wrongful death lawsuit against OpenAI, calling their son’s death “the predictable result of deliberate design choices.” Cases like Adam’s are precisely why Illinois just became the first state to ban AI therapy tools entirely through HB 1806, the Wellness and Oversight for Psychological Resources (WOPR) Act.
For healthcare leaders, this legislation isn't just a local regulatory quirk. It marks a dramatic shift from viewing AI as an access solution to treating it as a patient safety threat. In this article, we'll discuss what the WOPR Act actually does, why other states will follow suit, and what executives should do now to prepare for a new era of AI oversight.
What the WOPR Act actually requires
The WOPR Act bans AI from providing therapy in Illinois. This applies both to licensed professionals using AI systems and to corporations offering AI-powered mental health services directly to consumers.
What’s now prohibited
AI chatbots providing therapy or counseling (like Woebot or Ash) are banned, along with AI making treatment decisions or diagnoses. Companies can no longer advertise AI-powered therapy services to Illinois residents, and AI systems cannot directly communicate with patients about mental health issues.
What’s still allowed
AI can still handle administrative tasks like scheduling and billing, and general wellness apps like meditation guides and mood trackers are unaffected. AI tools that support licensed professionals without direct patient interaction are still legal, and licensed professionals are still permitted to use AI for research under supervision.
Enforcement measures
Violations carry civil penalties of up to $10,000 each, with the Illinois Department of Financial and Professional Regulation holding immediate enforcement authority. Companies are already responding: Ash now blocks Illinois users with a compliance message.
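For consumer-facing products, that kind of geofencing is conceptually simple. The sketch below is a minimal illustration of gating an AI therapy feature by the user's state; the state list, function names, and message text are hypothetical, not Ash's actual implementation:

```python
# Minimal sketch of state-based gating for a consumer AI therapy feature.
# The state codes, feature check, and message text are illustrative
# assumptions, not any vendor's actual implementation.

RESTRICTED_STATES = {"IL"}  # states that prohibit direct-to-consumer AI therapy

COMPLIANCE_MESSAGE = (
    "AI-guided therapy sessions aren't available in your state due to local "
    "regulations. You can still browse self-help resources or get help "
    "finding a licensed clinician."
)


def ai_therapy_allowed(user_state: str) -> bool:
    """Return True if the AI therapy feature may be offered in this state."""
    return user_state.strip().upper() not in RESTRICTED_STATES


def start_session(user_state: str) -> str:
    if not ai_therapy_allowed(user_state):
        return COMPLIANCE_MESSAGE
    return "Starting AI-guided session..."  # normal product flow


print(start_session("IL"))  # Illinois user sees the compliance message
print(start_session("WA"))  # users elsewhere proceed as usual
```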
The WOPR Act’s immediate implementation and strict penalties signal Illinois’ urgency about this issue. The state’s willingness to ban an entire category of AI applications rather than regulate them suggests deep skepticism about whether AI therapy can be made safe—a perspective that’s now gaining traction in other states.
Why other states will follow
When Stanford psychologist Alison Darcy launched Woebot in 2017, she was motivated by a lack of access to mental health care. As she explains in her recent TED Talk, “Most people aren’t getting in front of a therapist, and even if they are, they’re not there beside you as you live your life.”
This gap was exactly what AI therapy tools were intended to fill. But cases like Adam Raine’s reveal the darker possibilities of that same accessibility. And unfortunately, Adam’s case isn’t isolated: in February 2024, Megan Garcia’s 14-year-old son, Sewell Setzer III, died by suicide after a months-long relationship with a Character.AI bot.
Parents like the Raines and Garcia aren’t just grieving—they’re demanding legislative action to prevent other families from experiencing similar losses. When grieving families testify about AI systems encouraging their children’s suicidal thoughts, lawmakers face intense pressure to act, regardless of party affiliation.
We can already see this happening in other states. Utah paved the way with its HB 452, which regulates AI mental health chatbots by requiring clear disclosures and strong privacy protections. Illinois escalated the response with its comprehensive ban, while Nevada's AB 406 restricts AI therapy tools that operate without human supervision. The escalation from disclosure rules to outright bans shows regulators losing confidence that AI therapy tools can be safely overseen.
Action items for health system executives
WOPR-style laws are spreading fast, and regulators are clearly losing faith that AI therapy can be overseen safely. Here's what healthcare executives should do now to prepare for stricter AI oversight:
Audit and govern AI tools immediately
Conduct a comprehensive review of all AI systems, particularly those with patient engagement or mental health functions. Update institutional policies to prohibit unsupervised AI therapy where required by law and establish clear governance frameworks for permissible AI uses. Train staff on new restrictions and requirements.
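Even a lightweight inventory makes the audit concrete. The sketch below uses hypothetical tool names and a simplified prohibition rule to show one way to classify deployed AI systems and flag those a WOPR-style ban would cover:

```python
# Minimal sketch of an AI tool inventory for a compliance audit.
# Tool names, categories, and the prohibition rule are illustrative
# assumptions modeled loosely on a WOPR-style ban on direct AI therapy.

from dataclasses import dataclass


@dataclass
class AITool:
    name: str
    function: str             # "administrative", "wellness", "clinical_support", "direct_therapy"
    patient_facing: bool
    mental_health_scope: bool


INVENTORY = [
    AITool("SchedulerBot", "administrative", patient_facing=True, mental_health_scope=False),
    AITool("NoteSummarizer", "clinical_support", patient_facing=False, mental_health_scope=True),
    AITool("CopingCoach", "direct_therapy", patient_facing=True, mental_health_scope=True),
]


def is_prohibited(tool: AITool) -> bool:
    """Flag tools that talk to patients about mental health without a clinician in the loop."""
    return tool.patient_facing and tool.mental_health_scope and tool.function == "direct_therapy"


for tool in INVENTORY:
    status = "PROHIBITED - remediate" if is_prohibited(tool) else "permitted - verify state rules"
    print(f"{tool.name}: {status}")
```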
Strengthen vendor oversight and data protection
Require AI vendors to certify compliance with state laws and provide detailed privacy policies. Update contracts with specific compliance warranties and enhance data protection protocols to meet the most restrictive state requirements. Establish clear consent mechanisms and breach notification processes.
Build monitoring and communication systems
Create real-time reporting systems for AI interactions and develop incident response protocols for adverse events. Communicate transparently with patients and staff about AI limitations and your commitment to responsible technology use. Frame this as leadership in ethical AI implementation rather than just compliance.
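As a starting point for the reporting piece, the sketch below flags AI interactions that mention self-harm so a clinician can review them. The keyword list and alert path are illustrative assumptions; a production system would rely on clinically validated screening and route alerts into an existing incident-response workflow:

```python
# Minimal sketch of flagging AI interactions for clinician review.
# The keyword list and alert path are illustrative assumptions only.

from datetime import datetime, timezone

RISK_TERMS = {"suicide", "self-harm", "end my life", "hurt myself"}


def needs_review(user_message: str) -> bool:
    """Return True if the message should be escalated to a clinician."""
    text = user_message.lower()
    return any(term in text for term in RISK_TERMS)


def log_interaction(user_id: str, user_message: str) -> dict:
    record = {
        "user_id": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "flagged": needs_review(user_message),
    }
    if record["flagged"]:
        # Placeholder for the real escalation path (on-call clinician, ticket, pager).
        print(f"ALERT: interaction from {user_id} needs clinician review")
    return record


log_interaction("patient-123", "Lately I keep thinking about how to end my life.")
```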
The regulatory environment for AI mental health tools will continue evolving rapidly. Healthcare organizations that establish robust governance frameworks now will be prepared for whatever restrictions emerge next, while those that wait risk costly compliance scrambles and potential violations.
Key takeaways
The WOPR Act represents a fundamental shift in how regulators view AI mental health tools, from breakthrough solutions to risks requiring elimination. This change reflects growing evidence that AI systems can cause real harm when they cross therapeutic boundaries, as shown by tragic cases like Adam Raine’s and Sewell Setzer’s.
The regulatory momentum is accelerating, creating both compliance risks and competitive opportunities for health systems. Organizations that act now will avoid expensive reactive measures later while demonstrating leadership in responsible tech adoption. The challenge isn’t abandoning AI entirely, but developing frameworks that protect the most vulnerable patients from its risks.


