Ghost busters: why even high ranking psychotherapy studies might be lousy

The last half year saw two high-ranking psychotherapy studies in depression, published in the most prestigious journals of our profession. While in principle this is laudable, in these specific cases it does no good: the studies fall short of proving what they actually claim, and they sacrifice medical care for the sake of cost. Let’s have a closer look.

The first paper (https://www.ncbi.nlm.nih.gov/pubmed/27461440), on the “COBRA” study, was published in the Lancet (!!!) and claimed that “behavioural activation” delivered by “junior mental health workers” (i.e., relatively untrained and, first and foremost, relatively low-paid staff) is just as effective as routine CBT delivered by trained psychologists (the gold-standard psychotherapy treatment in depression). So depression treatment can be quite unspecific and cheap! Yay!

Not.

The most important drawback is that the primary endpoint was set at twelve months. As the average length of an untreated depressive episode is six to eight months, this makes no sense at all. Imagine a study on two treatments for the common cold that set its primary endpoint at four weeks. Would you buy that? As neither a survival plot (Kaplan-Meier plot) nor any other time course is shown, suspicion is appropriate.

A further, unfortunately quite common, drawback is the lack of a sham psychotherapy group (i.e., the study is not placebo-controlled). Given the year-long course, a sham condition might well have proven just as effective as the two active treatments.

This is made even worse by the fact that 80% of participants in both groups received antidepressant drug treatment. A ceiling effect is therefore likely, further obscuring any effect of whatever psychotherapy is delivered.

This is not a non-inferiority trial. This is a failed trial.

Another study, published in JAMA Psychiatry (https://www.ncbi.nlm.nih.gov/pubmed/27487573), echoes the COBRA study, although with a different flavor. Here, psychodynamic therapy (not behavioural activation) was compared against CBT in depression. I don’t even want to nag about the sample being underpowered for a non-inferiority trial, especially when looking at how many patients attended more than five sessions (116 in total). Again, we have a rather late endpoint (five months) without any description of time courses, and again a sham psychotherapy condition is lacking.

Even worse, the average Hamilton score (HAM-D, likely the 17-item HAM-D) was 21±6 points. This is quite low and barely reaches the threshold for moderate depression; the usual cut-off for inclusion in pharmaceutical trials lies between 20 and 22. This means that many patients with mild depression were included, who are usually not the target population for depression studies. Any difference from placebo/sham is hard to demonstrate due to floor effects, especially considering the low number of patients adhering to therapy and the measured effect size of 0.6 (Cohen’s d, corresponding to a medium effect).

Considering all this, watchful waiting, mere psychoeducation, or having a beer every week or so would likely have had the same effect, namely a reduction of five points on the HAM-D 17, as this is just the naturalistic course of mild to moderate depression. One indicator of this is the substantially overlapping standard deviations of the pre- and post-treatment measurements (wisely enough, the authors did not go for a graphical display of their data). The most parsimonious interpretation of the data is thus that both treatments are equally ineffective!
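To give a feeling for how much two score distributions overlap at a “medium” effect size, here is a small illustrative sketch. It assumes normally distributed scores with equal variance; the effect size of 0.6 is the figure quoted above, and the 5-point drop against an SD of 6 HAM-D points is used as a second, purely illustrative scenario.

```python
# Illustrative only: overlap of two normal distributions (equal SD)
# for a given Cohen's d. Assumes normality; numbers below are taken
# from the figures discussed in the post, not from trial raw data.
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def overlap_coefficient(d):
    """Overlapping coefficient (OVL) of two unit-variance normals
    whose means differ by d standard deviations: 2 * Phi(-|d|/2)."""
    return 2.0 * norm_cdf(-abs(d) / 2.0)

print(round(overlap_coefficient(0.6), 2))    # d = 0.6 (reported): ~0.76
print(round(overlap_coefficient(5 / 6), 2))  # 5-point drop, SD 6: ~0.68
```

In other words, even at d = 0.6 roughly three quarters of the two distributions overlap, which is consistent with the suspicion that the pre- and post-treatment scores are hard to tell apart.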

You may ask about antidepressant use here as well. There is a simple answer: we don’t know. The numbers of patients on antidepressants are not given at all! That they were used we do know, thanks to a small subclause: “We found no statistically significant interaction between the use of psychotropic medication and treatment group on the rate of change in the HAM-D”.

This is another failed trial, even though it was prominently published…

Unsurprisingly, the relapse rate (reducing relapse being a major effect of psychotherapy) is not given in either study.

It is very surprising, however, that these studies were published so prominently despite these obvious flaws. I can only speculate on the reasons. Regarding the Lancet paper (COBRA), I assume that economic reasons play a major role: cheap BA treatment by “juniors” (did we not just learn to strike that word from our vocabulary?) is as effective as CBT by expensive, greedy psychologists. That makes treatment less expensive, which, however, is somewhat tainted by the fact that it is ineffective (notwithstanding the commonsense experience that unspecific BA, especially in early-stage depression, may be quite helpful). Never mind. The psychodynamic study might have undergone a “wishful thinking” review process: so many people out there desperately wish that psychodynamic therapy worked as well as CBT… so this one came in quite handy. However, no favor was done to the field; on the contrary. We do not need such badly designed (or at least badly presented) studies. What we do need are psychotherapy trials adhering to the highest standards, in analogy to drug trials: presenting time courses, studying severe cases, being well powered with low attrition rates, and, most importantly, including a sham (= placebo) condition.
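As a rough sense of what “well powered” means for a non-inferiority comparison, the standard normal-approximation formula can be sketched as follows. The SD of 6 HAM-D points is the figure quoted above; the non-inferiority margins are my own illustrative assumptions, not values taken from either trial.

```python
# Rough non-inferiority sample-size sketch (normal approximation,
# equal groups, known SD, true difference assumed zero). The margins
# used below are illustrative assumptions, not trial parameters.
from math import ceil

Z_ALPHA = 1.96  # one-sided alpha = 0.025
Z_BETA = 0.84   # power = 0.80

def n_per_group(sd, margin):
    """Patients per arm needed to show non-inferiority within `margin`:
    n = 2 * (z_alpha + z_beta)^2 * sd^2 / margin^2."""
    return ceil(2 * (Z_ALPHA + Z_BETA) ** 2 * sd ** 2 / margin ** 2)

print(n_per_group(6, 3))  # 3-point margin: ~63 per arm
print(n_per_group(6, 2))  # 2-point margin: ~142 per arm
```

With an SD of 6 points, even a generous 3-point margin calls for about 63 patients per arm, and a tighter 2-point margin for about 142 per arm; 116 adherent patients in total falls well short of the latter.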

2 thoughts on “Ghost busters: why even high ranking psychotherapy studies might be lousy”

  1. That’s a provocative and thoughtful comment on these studies. However, sham conditions in psychotherapy research are much harder to implement than in pharmacotherapy. How do you blind the therapist? Train him in a pseudo-therapy while he believes he gets state-of-the-art training?


    1. Dear neurosemiotics, you’re touching upon a very important point: in psychotherapy we ideally need triple-blind rather than merely double-blind conditions, as therapist, patient and rater all need to be blinded. Rater blinding is not a problem at all (apart from organisational matters). Patient blinding is more difficult to achieve, as a “placebo psychotherapy” needs to be devised, which can be tricky. However, it is not impossible; the COMPASS study in ADHD, for example, was very successful in doing so, although it was a tough job. Therapist blinding is the crucial issue, and thanks for highlighting it. It cannot actually be fully achieved, as the therapists need to know what they are doing (hopefully); it also touches upon a very important principle of action in psychotherapy, namely conveying trust in an effective treatment, i.e. inducing hope. This of course will fall short if the therapist is not convinced that (s)he is doing any good. When comparing two “real” treatments (e.g. CBT vs. psychoanalysis), this is not much of a problem, but if you include a sham arm, it indeed is. I don’t think this bias can be completely eliminated; you can try to reduce it (e.g. by videotaping sessions and reviewing them), but there will always be a slight residual bias. Having said this, one should at least try to reduce it as much as possible, and having a sham arm (double- instead of triple-blind) is, in my opinion, better than not accounting for it at all.
      Thanks for raising this issue! Best, Andreas

