This is a summary of the AI and clinical landscape as of Spring 2026. The environment is changing quickly and standards are not yet settled. Regulatory and legal responses are uneven and still developing across jurisdictions. This update reflects ongoing clinical work alongside review of emerging law and literature. Its purpose is to reduce blind spots for clinicians and programs and to identify emerging risks to the profession and the public. Future updates will incorporate international developments as relevant.
State Regulation Is Moving First
Colorado HB26-1195. This bill proposes to prohibit clinicians from using AI to communicate directly with patients. It also restricts AI from generating treatment plans or therapeutic recommendations without review by a licensed professional.
New York S7263. This bill imposes liability for damages caused by a chatbot imitating certain licensed professions, including counseling.
New Jersey S3668. This bill requires disclosure in certain AI-driven communications.
Washington State HB2225 and S5984. These bills mandate that AI chatbots inform users that they are not interacting with a human and require periodic reminders of that fact.
The Colorado bill is the most aggressive in protecting the public and clinicians, while New York is laying the groundwork for users to litigate against AI companies when they receive unlicensed or harmful advice. New Jersey and Washington address only AI disclosure. There is no unified federal standard, so clinicians are operating across inconsistent legal environments. None of these bills go far enough.
The Courts Are Beginning to Test Harm and Liability
Google was sued in federal court on March 4 by the family of a Florida man who took his own life. The family alleges that the Gemini chatbot encouraged his self-harm.
The parents of a girl who was critically wounded in a mass shooting in Canada have sued OpenAI in the Supreme Court of British Columbia. They allege that OpenAI failed to act on warning signs from the shooter's ChatGPT account.
Liability theories are forming, and courts will shape standards for foreseeability, duty to warn and liability. Regardless of how these and other cases play out, they will influence practice, insurance and the legal environment. I expect a significant increase in lawsuits, including families bringing claims for harm, negligence and failure to intervene.
Clinical Risk
A new scientific review in The Lancet Psychiatry discusses how AI chatbots can encourage delusional thinking in vulnerable people. It describes how AI can present incorrect or reinforcing content. The biggest concern is the evidence that chatbots can reinforce distorted thinking.
How I Am Using AI
I use AI to edit some of my writing, generate counterfactuals and conduct hostile reviews of my articles. I independently verify any research AI conducts, and when I use AI research in an article, I cite it. I use AI to organize and analyze data and to produce basic visualizations, and I have used it to synthesize themes and evaluate prior work in my career. A written doctrine on AI Use and Writing Standards, available on my website, goes into much greater detail.
I created a Student AI Use Guide for my Rutgers seniors. At the end of the semester, I will post an updated version on my website.
Provisional Practices
Documentation and Writing: AI may assist with structure or editing. My best-practice recommendation is that clinicians not have AI write notes, treatment plans, treatment plan reviews or discharge summaries. Writing is an act of both thinking and learning. Clinicians who do all their own writing understand their cases better and keep their skills sharp. The more one uses AI to write, the more one loses the ability to write.
Diagnosis and Clinical Decision-Making: Do not rely on AI to determine diagnoses or make treatment decisions. A medical student described using AI so frequently for diagnostic support that he realized he had gone an entire day without independently thinking through a single patient encounter (The New Yorker, “If A.I. Can Diagnose Patients, What Are Doctors For?”). He deliberately pulled back after recognizing the impact on his own thinking.
Input Discipline: Do not enter identifiable client information into AI systems unless you are certain that doing so complies with privacy laws and organizational policy. I suspect widespread HIPAA violations are happening all over the country in this regard.
Supervision and Consultation: AI is not supervision. AI carries no ethical duty or liability. Even if the legal landscape changes, a good clinician should continue to seek human supervision and consultation. My abilities and career were shaped by my supervisors. Woe to the therapist who learns mostly or solely from a computer.
Client Use of AI: Clinicians should ask their clients about AI use and find out whether they rely on AI for guidance or emotional support. College students have told me they have friends who spend entire weekend evenings chatting with AI. One woman I talked to on a train told me she was getting advice from AI about how to handle her breakup. To be clear, some people are using AI as a substitute for human interaction. Clinicians should evaluate the impact of a client's AI use on symptoms, avoidance and functioning.
Verification: Do not rely on unverified AI outputs. Any information used in clinical or professional work should be independently confirmed.
AI Agents: Do not delegate communication or decisions to AI. This introduces legal and ethical risk.
Disclosure and Risk Awareness: Assume AI-assisted content is discoverable. Check your organization's policy on AI disclosure practices. That's a bit of a joke, as most companies' AI policies are outdated, incomplete or nonexistent.
In Closing
AI is advancing faster than regulation, and only a few states are beginning to regulate AI as it pertains to mental health. Legal standards will emerge through court cases, both in the US and internationally. Many people are using AI for guidance and emotional support; the risk is particular for people who are isolated or prone to delusional-type thinking. Clinicians should be talking to their clients about AI use and setting boundaries. AI can provide information and simulated interaction, but it does not replace lived experience, real-time clinical judgment or the relational work that happens between people. I wrote about AI's potential impact on social work in 2024. At that time, I had not yet used these tools directly, but I was already concerned about how they might affect clinical work and professional roles. The pace and scope of change since then have been significant. I plan to provide updates quarterly.