AI: Oregon SB 1546 Signed Into Law

Oregon Governor Tina Kotek signed Senate Bill 1546 into law on April 1, 2026. This is Oregon’s first attempt to address risks associated with AI chatbots. It aims to protect minors, require safeguards around suicidal ideation and ensure individuals know they are communicating with a chatbot.

In December, President Trump issued an Executive Order that established an AI Litigation Task Force to challenge state laws, arguing that a patchwork of state regulations would burden businesses. To date, however, the federal government has not enacted legislation aimed at limiting AI-related harm, protecting licensed professions or establishing enforceable consumer safeguards.

As a result, states are moving forward with their own regulatory approaches.

This Oregon law has five aspects:

Disclosure requirements (A-level regulation)

The law requires chatbots to clearly disclose that they are not human. This aligns with legislation in other states, including California (SB 243) and Nevada (HB 2225), which focus on representation and transparency.

Suicide and self-harm intervention (C-level regulation)

AI chatbots are required to:

  • detect suicidal ideation and self-harm signals
  • provide referrals to 988 and other crisis resources

This is a meaningful shift. It treats chatbots as systems that must identify and respond to risk, not simply provide information.

Protections for minors

The law includes several provisions designed to reduce compulsive or harmful engagement:

  • suggesting breaks after extended use (every three hours)
  • prohibiting sexually explicit content
  • banning emotional manipulation tactics designed to prolong engagement
  • restricting reward loops that encourage continued interaction

The restriction of reward loops is particularly significant. It represents an early attempt to regulate what can be described as engagement-driven reinforcement mechanisms in AI systems. This has been a particular concern of mine for all users, not just minors and the emotionally or mentally vulnerable.

Reporting requirements

Companies operating AI systems must file annual reports that include:

  • the number of crisis referrals (988)
  • a description of intervention protocols

This introduces a form of system-level visibility into how often high-risk interactions occur and how they are handled. We’ll see whether companies actually comply with this part of the law.

Private right of action

The law allows individuals to sue for $1,000 per violation. This creates direct liability exposure and removes the option of relying solely on internal compliance or voluntary standards. A private right of action means companies cannot hide behind a liability shield. Court cases in which plaintiffs allege harm will certainly inform corporate behavior and future legislation.

Significance of this law

Oregon is moving beyond regulating what AI systems say and beginning to regulate what they are required to do in situations involving risk.

That is a meaningful shift. It places AI systems closer to functioning as actors within a clinical or quasi-clinical space, with corresponding expectations around intervention and accountability.

How companies respond, both in compliance and in litigation, will likely shape the next phase of state and federal AI policy.


A/B/C/D Framework for AI in Clinical and Professional Practice

To make sense of emerging AI laws and policies, I classify how AI is being used and regulated across four categories:

A — Representation and Communication

AI systems that:

  • interact with users
  • provide information
  • simulate conversation

Regulatory focus:

  • disclosure (AI vs human)
  • prohibition on impersonating licensed professionals
  • consumer protection

Example: laws requiring chatbots to identify themselves as AI


B — Structured Intake and Screening

AI systems that:

  • gather user information
  • ask structured questions
  • assist in initial assessment

Regulatory focus:

  • accuracy of screening
  • escalation protocols
  • documentation and oversight

Key issue: missed risk signals or incomplete assessment


C — Triage and Gatekeeping

AI systems that:

  • prioritize cases
  • determine urgency
  • decide whether and how users receive care or intervention

Regulatory focus:

  • duty to escalate (suicidal ideation)
  • limits on autonomous decision-making
  • liability for delayed or missed intervention

Key issue: harm caused by failure to act or improper prioritization


D — Clinical Decision Support (Post-Contact)

AI systems that:

  • assist licensed professionals
  • provide recommendations within ongoing care

Regulatory focus:

  • clinician accountability
  • documentation of AI use
  • maintaining professional standards

Key issue: overreliance on AI without independent verification

Monopoly Money and Moral Conviction: Teaching Critical Thinking Through Scarcity

Each semester, I hand every student $686 in Monopoly money. Five to eight students come to the front of the room. Each has two minutes to propose a federal or state policy: What does it cost? What is the intended benefit? Who does it affect? Who supports it? Who opposes it? Students have presented on abortion policy, climate initiatives, foster care reform, student loan forgiveness and affordable housing. The range reflects student interests and varies each semester.

After the presentations, students privately allocate their $686 to whichever proposals they wish. They may divide it or place it all in one proposal. We total the funds and rank the proposals from most supported to least. Then we analyze the results.
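The tally-and-rank step is simple enough to sketch in code. Everything below is illustrative: the student names, proposal names and dollar splits are hypothetical placeholders, not data from an actual class.

```python
from collections import defaultdict

BUDGET = 686  # each student's Monopoly-money allotment

# Hypothetical allocations: each student splits up to $686 across proposals.
allocations = {
    "student_1": {"affordable_housing": 400, "foster_care_reform": 286},
    "student_2": {"student_loan_forgiveness": 686},
    "student_3": {"affordable_housing": 200, "climate_initiatives": 486},
}

totals = defaultdict(int)
for student, picks in allocations.items():
    # No student may overspend; spending against a proposal is not allowed.
    assert sum(picks.values()) <= BUDGET
    for proposal, amount in picks.items():
        totals[proposal] += amount

# Rank proposals from most supported to least.
ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for proposal, total in ranking:
    print(f"{proposal}: ${total}")
```

The sketch mirrors the exercise's two rules: allocations are capped per student, and there is no mechanism for opposing a proposal, only for funding it.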

I begin my policy class with a definition from Stuart Shapiro at Rutgers’ Bloustein School: public policy is the allocation of finite public resources. In an era of misinformation and AI-generated fluency, disciplined prioritization matters more than volume.

The exercise is intentionally simple. I do not allow students to spend money to oppose a proposal. I do not allow bargaining or horse-trading. This is not a legislative simulation. It is a priority test. The goal is clarity.

Monopoly money works because people know bankruptcy, overextension and aggressive acquisition from the game. We take that structure and apply it to public budgeting.

The most important moment is not the ranking. It is the debrief. In one class composed mostly of young women, abortion policy ranked near the bottom. That result unsettled assumptions. Cultural intensity shifted when students faced finite allocation. One student later told me, “I’ve been a far left progressive activist since I’ve been aware. I now see that some of my views are impossible to pass as legislation and actually lose me other support.” That movement, from conviction to feasibility, is critical thinking.

The point is not a single ranking. Students repeatedly confront limits, defend claims and revise their reasoning until clarity becomes expectation rather than exception.

Social work education rightly emphasizes dignity and justice. Professional ethics and constitutional protections are not subject to allocation here; the exercise examines discretionary funding priorities above that floor. What students practice is something different: translating conviction into policy under fiscal and political constraint. Social workers advocate, testify and operate within constrained systems. They must distinguish between moral belief and legislative viability.

Preparing them only for the ideal world leaves them unprepared for the systems they will enter.

By the end of the semester, students anticipate scrutiny. They expect to justify cost, anticipate opposition and clarify claims before being asked. That anticipation is the habit.

Critical thinking is not simply identifying bias or critiquing sources. It is ranking priorities under constraint, articulating those priorities clearly and revising in light of feasibility. Monopoly money makes the constraint visible. Precision makes judgment more reliable under pressure.

The Proportional Distress Scale

From the Greenagel Equations

The Greenagel Equations are a set of practical frameworks developed between 2005 and 2008 in schools, outpatient and family treatment settings. They were built in rooms, not in theory, and have been used with students, families, law enforcement, veterans and therapists.


My 11-year-old niece was crying when I showed up. Flowing tears, heaving breaths, red face. I asked her what was wrong. It was hard to understand her, but she told me that her younger brother had ripped one of the eyes off of her stuffed cow.

“I don’t like it when someone breaks my stuff either. You have a right to be upset. I’m only here for a little bit though and I’d like to see you. How much longer do you want to cry for? An hour? Twenty minutes? Ten minutes? The rest of the night?”

Her crying almost stopped. “Ten minutes.”

“Ok. When you are ready, I’ll teach you a little trick to deal with stress.”

She changed her mind. “Two minutes.” She had stopped crying.

I smiled.

“I’m ready.”

“What was the worst moment of your life?” I asked.

She thought for a moment.

“Was it when your grandpa died when you were four?”

“Yes.”

“Ok. That’s a 100. That was pretty awful, wasn’t it?”

“Yeah. I was really very sad.”

“I remember. So if that’s 100, where does your cow losing an eye go?”

She thought. “A seven.”

Then she shook her head. “No. A four.”

“Are you sure? You were pretty upset.”

“Compared to other things, Uncle Frank, it isn’t a big deal.”

“Exactly. That’s perspective.”

I told her to add a few more points to the scale: 80, 60, 40 and 20. Then to use it the next time she got upset, whether it was her brother, school or sports.

Anyone can use this.

100 is the worst event of your life. Not the worst thing you can imagine, but the worst event of your life. My 100 is the death of my grandmother when I was 19. I’ve met many people whose 100 is worse than mine. Everyone is different. We don’t compare traumas. If someone isn’t comfortable telling me or writing down their 100, I tell them to put down their 90 or 80. Then I ask them to figure out what a 60, 40 and 20 would be.
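The scale’s logic can be sketched in a few lines of code. The anchor labels and the ratings below are hypothetical placeholders; real anchors are personal and come from the person building the scale.

```python
# A minimal sketch of the Proportional Distress Scale.
# Anchor labels are illustrative, not clinical data.
anchors = {
    100: "worst event of my life",
    80: "serious loss",
    60: "major setback",
    40: "bad week",
    20: "everyday aggravation",
}

def nearest_anchor(rating):
    """Return the anchor point closest to a given distress rating."""
    return min(anchors, key=lambda a: abs(a - rating))

# A new upset is rated relative to the personal 100, not in isolation.
print(nearest_anchor(25))  # → 20
```

The point of the code, like the scale itself, is the comparison step: a new upset is placed against fixed personal anchors instead of being judged on its own.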

My 4Runner was stolen in Montreal in August of 2024 while I was on vacation with four friends. I had a lot of hiking gear in it as well. I was unhappy about it, but compared to my 100 and 90 and 80, it was about a 25. My friends asked what I was going to do. “We go on with our trip. We’ll go biking and then take a boat ride and get a great dinner. Just as planned. What else is there to do? Freak out? Ruin everyone else’s time?”

When I’ve taught it to my students or clients, I start off with the story of Chicken Little. It’s an old folk tale. The townspeople knew that the sky wasn’t falling; they all looked upon Chicken Little with great annoyance. There is a valuable lesson about human nature there: people don’t like being around people who catastrophize. It’s exhausting.

A student asked, “What if you have a client who says everything is 100?”

“Great question. For Chicken Little, everything was a 100. He had no perspective. Everything was a disaster. He ends up alone. This scale is most specifically for the people who rate every problem, every aggravation as 100. We use this to teach them perspective.”

Last week, a student asked me, “What if you haven’t been through much? That nothing bad had really happened in your life?”

“Well, good for you. Try to keep that up as long as possible. Don’t apologize for it though. Whatever your worst moment is,” I told him, “that is your 100. At some point, it will almost certainly be replaced.”

I have sat with many people during the worst moment of their lives. I always tell them that they are supposed to feel awful. Usually powerless, usually scared. There is no quick way out of it. For some of the worst moments, I tell them that they have the right to collapse, though I don’t recommend it. Instead, I recommend therapy, sleep, exercise, healthy eating, time with family and friends, enjoyable activities and just moving forward. Even if you don’t feel like it.

Sometimes, something that feels like a 75 today is a 30 a year later. “It wasn’t that big a deal,” or “I didn’t think I’d ever recover” and sometimes even “I learned a lot from that” are phrases I hear from clients.

When I look back on my grandmother’s death (100) or my friend Eric’s death (85) or my Dad’s (80), I can remember how terrible I felt. The sadness. The fatigue. The utter loss. But, I don’t feel that way now. They still are a 100, 85 and 80, but they don’t cause me serious distress. I don’t feel the way I felt when those events happened.

If someone keeps experiencing a 100 or a 90, months or years later, that can be a sign of PTSD. You still feel like you are in the moment, experiencing the pain, long after the event has passed.

This isn’t treatment. It’s a way to keep everything from seeming like a 100.

I sent this article to my niece, who is almost 18 now. “I remember the lesson but I have almost no memory of the cow losing its eye. Which shows it really was a four on the scale and not important at all.”

AI: Thinking Less, Becoming Replaceable

I asked AI to write an article using a half dozen inputs. Then I wrote one using the same inputs. Both are below.


Economists have spent decades arguing that technology changes jobs more than it eliminates them. Automation replaces some tasks, creates others, and the workforce adjusts over time. That view is now shifting. Recent reporting shows a growing recognition that artificial intelligence is not just another incremental tool. It is affecting large portions of white-collar work in ways that are starting to resemble what happened to manufacturing.

In manufacturing, productivity gains reduced the number of workers needed to produce the same output. Fifty workers who produced a certain number of cars in 1970 can now produce far more. Demand did not increase at the same rate, so fewer workers were needed. Efficiency did not destroy the industry, but it reduced the number of people required to sustain it.

A similar dynamic is beginning to emerge in knowledge work. AI allows a single worker to complete tasks that previously required more time, more people, or both. Drafting, summarizing, analyzing, and communicating can all be done faster. On the surface, this looks like a clear benefit. Workers save time. Companies increase output. Stress decreases. Productivity rises.

But this is only part of the story.

Another recent analysis suggests that the vast majority of people are using AI in a very specific way: to reduce effort. They use it to write emails, draft documents, complete assignments, and move through their workday more quickly. A much smaller group uses AI differently. Instead of replacing effort, they use it to challenge their thinking, test assumptions, and improve their work.

These two approaches lead to very different outcomes.

When AI is used primarily to complete tasks, the worker becomes more efficient. But they also become more interchangeable. If the value of the job is defined by AI-assisted output, then multiple workers can produce similar results. Over time, this makes it easier for organizations to reduce headcount or replace individuals with others who can generate comparable work using the same tools.

In contrast, workers who use AI to improve their reasoning and judgment increase their value in a different way. They are not just producing output more quickly. They are producing better decisions, identifying errors, and adapting to new problems. Their work becomes less standardized and more difficult to replace.

This creates a divide within the same workforce.

On one side are workers who use AI to reduce friction. They complete tasks faster, lower their immediate stress, and meet expectations efficiently. In the short term, this improves their experience of work. In the longer term, however, it can flatten their skill set. If they rely on AI to generate answers without developing their own thinking, they risk becoming dependent on the tool in a way that limits their growth.

On the other side are workers who use AI to increase friction where it matters. They still benefit from speed, but they also use the technology to examine their own reasoning. They ask different questions. They compare outputs. They look for errors. Over time, this strengthens their ability to operate without the tool and to use it more effectively when needed.

The distinction is not about access to AI. Both groups have it. The difference is how it is used.

This is where the comparison to manufacturing becomes more precise. Efficiency gains reduce the number of people needed to perform a task. But in knowledge work, efficiency also changes the nature of the task itself. Work that can be standardized and accelerated becomes easier to consolidate. Work that depends on judgment, interpretation, and adaptation remains more resistant.

As AI continues to improve, these dynamics are likely to intensify. Tools will become more capable. Outputs will become more polished. The baseline level of performance will rise. At the same time, the gap between those who rely on AI for answers and those who use it to refine their thinking will widen.

Some workers will adapt. Others will not. Many will fall somewhere in between.

This is not simply a story about technology replacing jobs. It is also a story about how individuals respond to that technology. The same tool can lead to different outcomes depending on how it is used.

In the short term, using AI to complete tasks more quickly is appealing. It saves time. It reduces effort. It makes work more manageable. But over time, the habits that form around that use can shape what a worker is able to do without the tool.

Those habits matter.

AI will continue to change how work is done. It will increase productivity. It will alter expectations. It will reshape roles. The question is not whether these changes will occur, but how individuals position themselves within them.


I went to a speed dating event in Northern New Jersey in February with a friend. I instantly surveyed the landscape and recognized that I wasn’t interested in anyone, so I used the twelve women I talked to as a focus group to find out how they used AI.

A businesswoman who worked at Revlon told me that she used it to draft all of her emails at work, write reports and help make decisions. I asked her if she was concerned at all about losing skills or her boss finding out. She told me that she is more productive, less stressed and finally has time “to live my life again.” I didn’t have the time or desire to tell her that if I were her boss, I would find a high school graduate who could use AI as well as she could and pay them one-third her salary. I expect she’ll find that out herself in the next few years.

A middle-school teacher from Montclair told me that she used it to write her lesson plans. “If it could grade papers and tests, I’d have it do that too,” she happily told me. I asked her if she felt less like a teacher because she wasn’t designing her course. She did not like that question at all. “I’m able to spend more time actually teaching the kids, so it makes me a better teacher,” she said, a bit defensively. Her job is probably safe for a while because of a teacher shortage and the low expectations of the field. That safety removes the need to develop and improve. Hence the offloading of tasks that are key parts of teaching. I suspect other teachers are doing this as well, which means the level of instruction will slowly degrade over time. If true, this is dire for education.

A scientist from a German biotech company stated that she used AI to create, analyze and synthesize “just a massive amount” of Excel spreadsheets. She said she uses three different AI systems and then cross-checks their work to ensure there are no errors. “I am so much more efficient. The agents can create the data sets so much faster than any human worker. Taking away the grunt work allows me to utilize my other skills.” She was the only woman that night who I felt confident would still be employed in ten years.

All three of the women use AI to take care of tasks that they view as dull, tedious and time consuming. What I saw that night lines up almost exactly with what recent reporting is starting to show. A recent Business Insider article stated that at least 95% of workers use AI to think less, while the remaining 5% use it to think more. A New York Times article this morning quoted several economists who have finally come to the conclusion that AI probably will be very disruptive to the workforce, particularly if it continues to rapidly evolve and improve. The workers who are most in jeopardy of either having their salaries reduced or losing their jobs altogether are those who use it to think less. The businesswoman and the teacher are clear examples of professionals who have offloaded work to AI and are thinking less. The scientist uses AI as a collaborative partner; she uses it to challenge her assumptions and discuss the data before going to her team with it.

The professionals who are using AI to offload work, increase productivity and reduce stress are currently enjoying wonderful benefits from AI. But they are unknowingly removing the processes which once made them valuable. By using AI to increasingly complete their tasks, they are becoming more easily replaceable by another person who can do AI-assisted work. A professional with a deep knowledge base who uses AI to challenge their assumptions, test their models, edit their writing and expand upon their ideas is the rare worker who is likely to be inoculated from becoming redundant.

All three use AI. They all use it to complete some of their work. The businesswoman and the teacher are using it to think less. They are getting faster, but not better. The businesswoman will probably be the first to lose her job. The teacher’s job is safe, for now, but I believe that the quality of her work is already suffering. The scientist was the only one of the twelve women I met that night who uses AI to improve her thinking. The other 91% are actively participating in their own career extinction.


It took AI less than ten seconds to write a competent, passable article about how workers are using AI to complete tasks without increasing their knowledge or skills. Mine took about an hour.

AI explains the situation. It’s a boring, monotonous piece. It’s interchangeable. Any AI system could have written it.

Mine makes an argument. It uses real examples. It passes judgment. It’s entertaining and informative and could only have been written by me.

Some people would argue that both approaches work. The difference is obvious.

These are the choices workers are making every day.