Bearing False Witness: American Addiction Centers’ Client Outcome Studies


By Andrew Walsh

Edited by Frank L. Greenagel II


In February 2018, American Addiction Centers (AAC) released a report that summarized three years of patient tracking and patient outcomes research. It was full of distortions and lies. Their press release falsely bragged, “American Addiction Centers is breaking new ground in addiction treatment with the release of its first patient outcome studies.”[i] AAC then offered an even more farcical utterance: the patient outcome studies show that 63% of AAC clients maintain abstinence one year after treatment. I was shocked to read that such a high percentage of clients stayed sober (this was actually the greatest treatment outcome claim I had ever heard). AAC continued to fluff themselves when they reported that the national benchmark for other treatment providers one year after treatment was only 30%. Intrigued by the news release, which painted AAC in such a positive light, I dug into the report to learn more (editor’s note: What he found was horrifying).

The patient outcome studies began in 2015 and were the result of a partnership between American Addiction Centers and Centerstone Research Institute (CRI). AAC is a large, publicly traded, for-profit organization that provides[ii] substance abuse treatment in several American states. CRI is an independent, non-profit research organization. Working together, AAC and CRI conducted three studies. They started with a sample size of 4,399 patients. Patients from five different AAC inpatient locations were included in the study. The size and scope of the studies seemingly addressed any potential for geographical influences on the data (i.e., the study was so large and spread out that the results should have been representative of the U.S. overall). The study was designed to include regular follow-up intervals (two months, six months, and 12 months) with clients after they discharged. This allowed AAC to track how patients did after they completed treatment.

They released the results of their studies in a 76-page report. The layout and design are visually stunning. The graphics included are clear, crisp, and informative. To a casual observer, this report portrayed AAC as one of the best treatment providers in the world. I could not reconcile this with recent news[iii] regarding American Addiction Centers[iv].

How was the company I was reading about in the report (portrayed as producing the best outcomes I had ever seen) the same company accused of fraudulent drug testing, with a history of lawsuits ranging from SEC violations to patient deaths, and which previously had five employees (including the former company president) charged with murder following the death of a client? Initially, in reviewing the report, I thought AAC had moved on from its troubled past[v] and was producing tremendous patient outcomes. However, the more I delved into the report, the more disappointed and outraged I became.

There are several areas of concern I discovered after reading and analyzing the report multiple times[vi]. In particular, I am troubled by two bold statements in it. Additionally, I am concerned about one important section that is missing. I have provided an analysis below.


Statement 1: “At 12 months, 63% of patients were abstinent from all substances”[vii]

With this statement, AAC is seemingly saying that 63% of their patients were still sober 12 months after completing treatment. However, there are several problems with this statement:

  1. It does not mean that patients have been sober for 12 months. It only means that 63% of patients who were surveyed 12 months after discharge had been sober for at least 30 days.[viii]
  2. Per the report, 48% of clients had stayed sober since discharge (12 months sober). This is still an incredibly high percentage. Why report a misleading higher statistic (63% sober at 12 months) instead of just reporting the still industry leading statistic (48% of clients remained sober for at least 12 months after discharge)? That answer is provided below.
  3. These statistics (as are all the stats in their report) are based on patients’ self-reports. This means that none of the answers provided by the patients were verified with drug tests or by speaking with study participants’ families, friends, and loved ones.
  4. These statistics are based on a sample size of only 80 patients out of the 4,399 patients who were included in this study. Why state the study included 4,399 patients but only include 80 for calculating the key statistics?[ix]


Statement 2: “Over a 3-year period, more than 4,000 people enrolled in the study”

I have taken several graduate level research courses and have professional experience in designing and running clinical trials. I have partnered with various universities including Princeton University, The University of Pennsylvania, and The University of Arizona to conduct clinical trials involving human subjects (editor’s note: He knows a bit about study designs and sample size).

Initially, when I read about the sample size used in this research (4,399 clients), I was impressed. A large sample size is ideal because it decreases the margin of error (the results are more likely to be accurate).

Including five separate locations across the U.S. was also smart because it prevents geographical differences from skewing the data. For instance, West Virginia and Kentucky have incredibly high rates of opioid abuse. If clients in the study were only from those areas, they would artificially inflate the number of people abusing opioids and seeking treatment at AAC. By using multiple locations across the U.S., the study reduced the likelihood of skewed statistics.

At first glance, the size and scope of this study seemed to be legitimate. The further I dug into the study design and statistics, the more the flaws were exposed. AAC stated they had 4,399 clients in the study, which is a massive sample size. However, this large sample size was not included in generating the statistics AAC cites as proving how amazing their program is. The chart below shows the actual number of clients included in the study at the different follow-up points.

Time Point    Eligible    Completed    Follow-Up Rate
2 Month       4,399       1,133        26%
6 Month       1,852       515          28%
12 Month      221         80           36%


The chart content and location raised several questions and concerns for me:

  • Why cite a sample size of 4,399 clients but not make it readily known that not all of those clients were used to generate the statistics referenced in the study?
  • Why is there a discrepancy between eligible patients and completed patients?
  • How were the statistics (such as the 12-month abstinence statistic) calculated?
  • Why was such an important chart buried in the report on page 46?

The more I read the American Addiction Centers’ report, the clearer the answers became. AAC included and cited such a large sample size because it sought to give the public the impression that this was a very serious study and that the results found were legitimate.

Per AAC, they were not able to contact all patients who were enrolled in the study at the regularly scheduled follow-up intervals. This is understandable and a common occurrence in almost all studies that include a follow-up component. Many study participants move or get new phone numbers or just disappear. However, AAC press releases conveyed that all 4,399 participants were followed up with 12 months after their discharge. In reality, only 1.8% of the 4,399 participants responded to follow-up at the 12-month mark[x].

The 12-month abstinence rate was calculated based on the number of patients who responded to the follow-up. At the 12-month mark, 80 patients responded and 51 of them reported that they were abstinent for at least 30 days. 51/80 ≈ 63%. While this math is relatively simple, AAC went against research norms and ethics when calculating it.
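That arithmetic is easy to check. Here is a minimal Python sketch using only the counts quoted above (the variable names are mine, not the report’s):

```python
# Counts quoted from the AAC report's 12-month follow-up
responded = 80   # patients who answered the 12-month survey
abstinent = 51   # of those, reported at least 30 days of abstinence

rate = abstinent / responded
print(f"{abstinent}/{responded} = {rate * 100:.2f}% (reported as 63%)")
```

Note that 63.75% is the rate among respondents only; the report says nothing here about the patients who could not be reached.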

In the bottom row of the chart, we see that while 221 clients were eligible to participate, AAC was only able to get in contact with 80 of them. The 141 clients AAC could not contact are referred to as lost to follow-up. A common question is: how important is loss to follow-up? Simply put, loss to follow-up is extremely important if patients lost to follow-up have different outcomes than those who completed the study. AAC was very proactive in following up with patients. Per the report, an average of 10 phone calls, 4 emails, 6 text messages, and 1 letter were sent over the course of a month to each study participant. What is the likelihood that 141 participants (those who did not respond) moved and changed both their phone numbers and email addresses? Very low. Therefore, we need to look at alternative explanations. Based on my experience working with substance abusers as well as conducting clinical trials, I believe it is more likely that a high percentage of these 141 participants relapsed and chose not to report their relapse to AAC.[xi]

Now that we have determined that loss to follow-up is important, we need to look at how it is calculated (editor’s note: bear with him here. I don’t like reading this math either, but this is a really important point. Read it a couple of times if you need to). With an eligible population of 221, 141 clients failed to follow up. We simply take the 141 and divide it by 221 (the eligible population) to get a loss-to-follow-up rate of 63.8%.

A good standard is that when a loss-to-follow-up rate exceeds 20%, it poses serious threats to the validity of the study. AAC’s study has a loss-to-follow-up rate of 63.8%, more than three times the 20% rate that threatens validity (editor’s note: the whole study is invalid. I would fail a college junior for turning this in).
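The same division, sketched in Python for anyone who wants to verify it (the 20% threshold is the rule of thumb discussed above):

```python
# Figures from the bottom row of the report's chart (12-month time point)
eligible = 221
completed = 80
lost = eligible - completed           # 141 clients lost to follow-up

loss_rate = lost / eligible
print(f"Loss to follow-up: {lost}/{eligible} = {loss_rate:.1%}")
print(f"That is {loss_rate / 0.20:.1f}x the 20% threshold for validity concerns")
```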

I employed a common research technique to address the follow-up rate problem: treat every patient lost to follow-up as a worst-case scenario (the patient relapsed). AAC claims a 63% abstinence rate. However, when we include the 141 people in the calculation and code them as having relapsed, we get an abstinence rate of 23%. This is shockingly lower than the 63% reported. Based on standard practices in conducting clinical trials and studies, the AAC study gives significant cause for concern regarding the validity of the findings.
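The worst-case recalculation works the same way; a sketch, again using only counts quoted from the report:

```python
# Worst-case analysis: count everyone lost to follow-up as having relapsed
eligible = 221    # patients eligible at the 12-month mark
abstinent = 51    # respondents who reported at least 30 days of abstinence

worst_case = abstinent / eligible
print(f"Worst-case abstinence rate: {abstinent}/{eligible} = {worst_case:.0%}")
```

The true rate presumably sits somewhere between this floor (23%) and AAC’s respondents-only figure (63%), but the burden of narrowing that range falls on the study authors, not the reader.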


The Third Major Problem: Conflict of Interest

Reporting conflicts of interest in publications is a standard best practice. However, this does not always occur. In reviewing the report, I noticed that there were no disclosures regarding conflicts of interest. They want the reader to assume that there must not be any. In researching American Addiction Centers, I made an interesting discovery: AAC hired the CEO of Centerstone Research Institute (CRI) in 2016, in the middle of the studies. Throughout the report, CRI is depicted as an “independent third-party non-profit research center.” CRI was theoretically employed in this capacity to ensure the integrity of the data. By hiring the CEO of the company that was supposed to provide independent analysis, AAC has threatened the integrity of the study.[xii]


Taken alone, any one of the concerns I have outlined is troubling:

  • American Addiction Centers crowed about their 4,399 person study but used as few as 80 people for some of their key statistics.
  • American Addiction Centers sought to distort the public’s perception by presenting the data in a manner in which it could be easily misinterpreted.
  • The company (CRI) that conducted the study was billed as an independent non-profit, but the CEO was hired by AAC before the study was completed. It is a vicious conflict of interest.

Examining all three concerns while simultaneously taking into account recent events involving AAC, including locking out employees at its NJ facility in 2017 and a $7 million jury verdict following yet another patient death in February 2018, an apparent and devastating pattern starts to emerge. American Addiction Centers has a long and well-documented history of putting profits before patients. With gleaming websites, shining brochures, and a visually beautiful report, AAC portrays itself as a strong treatment program. The websites and brochures hide the dirty reality. The report is invalid, though it is not useless. It gives a firsthand account (written by them) of the fraud and lies that they regularly and willfully engage in.



[ii] Provides is used very loosely here. I also thought of adding (substandard) ahead of “substance abuse treatment services” but decided that you already probably knew that.  – Frank Greenagel



[v] “moved on from their troubled past.” Andrew must have been joking here. We mean to show that lies, distortions, death, stealing and dozens of other ethical and legal violations are baked into American Addiction Centers’ DNA. – Frank Greenagel

[vi] He really did. The first time he read it, he gave me a summary. I said we needed an article on it. He went back and read it again and provided more details to me. We then discussed how he should take AAC’s report down, blow-by-blow. This required several readings and copious notes. – Frank Greenagel


[viii] What a horrific distortion. – Frank Greenagel

[ix] A few brutally honest answers before you get to Andrew’s more measured response: their substandard programs produce terrible results, so they need to cherry pick their data, manipulate figures, distort perceptions and sometimes straight up lie. They assume that most people won’t read all 76 pages, so they bury these problems deep within it. – Frank Greenagel


[xi] Let us speculate a step further – American Addiction Centers knew that the clients relapsed and then cut them from the study in order to improve their reported percentages of sober clients. – Frank Greenagel

[xii] Andrew is too kind here. The integrity of the study wasn’t threatened. There is no integrity. The people that conducted the studies are either inept or evil, and possibly both. – Frank Greenagel


Andrew Walsh earned a Master’s Degree in Social Work and a Master’s Degree in Human Resource Management from Rutgers. Prior to working in the behavioral health field, Andrew worked in the Gulf of Mexico oilfield as an internal business consultant focused on innovation and improving the efficiency of people, processes, and procedures. He eventually decided to return to NJ to practice innovation in the behavioral health field. In his free time, Andrew enjoys hiking, cooking, and reading.

He has written previous articles about Addiction Hotlines, Medicaid, Medicare, A Fix For Addiction Hotlines, and how treatment programs keep clients rather than sending them to get appropriate care.