Oregon Governor Tina Kotek signed Senate Bill 1546 into law on April 1, 2026. It is Oregon’s first attempt to address the risks associated with AI chatbots: it aims to protect minors, require safeguards around suicidal ideation, and ensure individuals know they are communicating with a chatbot.
In December, President Trump issued an Executive Order establishing an AI Litigation Task Force to challenge state AI laws, on the theory that a patchwork of state regulations would burden businesses. To date, however, the federal government has not enacted legislation aimed at limiting AI-related harm, protecting licensed professions, or establishing enforceable consumer safeguards.
As a result, states are moving forward with their own regulatory approaches.
The Oregon law has five main components:
Disclosure requirements (A-level regulation under the framework described below)
The law requires chatbots to clearly disclose that they are not human. This aligns with legislation in other states, including California (SB 243) and Nevada (HB 2225), which likewise focuses on representation and transparency.
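The statute does not dictate how the disclosure must be delivered. As an illustration only, a minimal compliance pattern might prepend a disclosure to the first reply in any session; the function, session model, and message wording below are all assumptions, not statutory text.

```python
# Hypothetical disclosure-on-first-contact pattern. The wording and
# session model are illustrative assumptions, not statutory text.

DISCLOSURE = "You are chatting with an AI assistant, not a human."

def respond(session: dict, reply: str) -> str:
    """Prepend an AI disclosure to the first reply in a session."""
    if not session.get("disclosed"):
        session["disclosed"] = True
        return f"{DISCLOSURE}\n\n{reply}"
    return reply
```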
Suicide and self-harm intervention (C-level regulation)
AI chatbots are required to:
- detect suicidal ideation and self-harm signals
- provide referrals to 988 and other crisis resources
This is a meaningful shift. It treats chatbots as systems that must identify and respond to risk, not simply provide information.
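The statute does not prescribe a detection method. As a rough sketch, a compliance layer might combine a risk score from a separate self-harm classifier with a referral message pointing to 988; the threshold, referral wording, and logging hook below are all assumptions.

```python
# Hypothetical crisis-escalation layer. The risk score is assumed to come
# from a separate self-harm classifier; the threshold and referral wording
# are illustrative assumptions, not statutory text.

CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, you can call or text "
    "988 (Suicide & Crisis Lifeline) to reach a trained counselor."
)

def handle_message(text: str, risk_score: float, threshold: float = 0.8) -> str | None:
    """Return a crisis referral when the risk score crosses a threshold."""
    if risk_score >= threshold:
        log_crisis_referral()  # these counts would feed the annual report
        return CRISIS_REFERRAL
    return None

def log_crisis_referral() -> None:
    """Record the referral event for annual reporting."""
    ...
```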
Protections for minors
The law includes several provisions designed to reduce compulsive or harmful engagement:
- suggesting breaks after extended use (every three hours)
- prohibiting sexually explicit content
- banning emotional manipulation tactics designed to prolong engagement
- restricting reward loops that encourage continued interaction
The restriction on reward loops is particularly significant. It represents an early attempt to regulate what can be described as engagement-driven reinforcement mechanisms in AI systems. This has been a particular concern of mine for all users, not just minors and the emotionally or mentally vulnerable.
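Of these provisions, the three-hour break prompt is the most mechanical to implement. The sketch below assumes session start times are tracked server-side and that one reminder per interval suffices; the session fields and reminder wording are hypothetical.

```python
# Hypothetical break-reminder check for extended sessions. The session
# model and reminder wording are illustrative assumptions.

import time

BREAK_INTERVAL_SECONDS = 3 * 60 * 60  # three hours, per the statute

def break_reminder(session: dict, now: float | None = None) -> str | None:
    """Return a break suggestion once per three hours of continuous use."""
    now = now if now is not None else time.time()
    last = session.get("last_break_prompt", session["started_at"])
    if now - last >= BREAK_INTERVAL_SECONDS:
        session["last_break_prompt"] = now
        return "You've been chatting for a while. Consider taking a break."
    return None
```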
Reporting requirements
Companies operating AI chatbots must file annual reports that include:
- the number of crisis referrals (988)
- a description of intervention protocols
This introduces a form of system-level visibility into how often high-risk interactions occur and how they are handled. We’ll see whether companies actually comply with this part of the law.
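The two required elements map naturally onto a small record. The field names and JSON output below are assumptions; the statute presumably leaves the actual reporting format to state rulemaking.

```python
# Hypothetical shape for the annual report's two required elements.
# Field names and JSON format are assumptions, not the state's schema.

import json
from dataclasses import asdict, dataclass

@dataclass
class AnnualSafetyReport:
    year: int
    crisis_referrals_988: int    # count of 988 referrals issued
    intervention_protocols: str  # description of escalation procedures

report = AnnualSafetyReport(
    year=2026,
    crisis_referrals_988=0,  # placeholder; populated from referral logs
    intervention_protocols="Classifier-triggered 988 referral with human review.",
)
print(json.dumps(asdict(report), indent=2))
```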
Private right of action
The law allows individuals to sue for $1,000 per violation. This creates direct liability exposure and removes the possibility of relying solely on internal compliance or voluntary standards. A private right of action also means companies cannot hide behind a liability shield. Corporate behavior and future legislation will certainly be shaped by court cases in which plaintiffs allege harm.
Significance of this law
Oregon is moving beyond regulating what AI systems say and beginning to regulate what they are required to do in situations involving risk.
That is a significant change. It places AI systems closer to functioning as actors within a clinical or quasi-clinical space, with corresponding expectations around intervention and accountability.
How companies respond, both in compliance and in litigation, will likely shape the next phase of state and federal AI policy.
A/B/C/D Framework for AI in Clinical and Professional Practice
To make sense of emerging AI laws and policies, I classify how AI is being used and regulated across four categories:
A — Representation and Communication
AI systems that:
- interact with users
- provide information
- simulate conversation
Regulatory focus:
- disclosure (AI vs human)
- prohibition on impersonating licensed professionals
- consumer protection
Example: laws requiring chatbots to identify themselves as AI
B — Structured Intake and Screening
AI systems that:
- gather user information
- ask structured questions
- assist in initial assessment
Regulatory focus:
- accuracy of screening
- escalation protocols
- documentation and oversight
Key issue: missed risk signals or incomplete assessment
C — Triage and Gatekeeping
AI systems that:
- prioritize cases
- determine urgency
- decide whether and how users receive care or intervention
Regulatory focus:
- duty to escalate (suicidal ideation)
- limits on autonomous decision-making
- liability for delayed or missed intervention
Key issue: harm caused by failure to act or improper prioritization
D — Clinical Decision Support (Post-Contact)
AI systems that:
- assist licensed professionals
- provide recommendations within ongoing care
Regulatory focus:
- clinician accountability
- documentation of AI use
- maintaining professional standards
Key issue: overreliance on AI without independent verification