New England Journal of Medicine – Journal Watch Psychiatry Top Stories of 2016 – ADHD is a hot topic!

Now that the year is coming to an end, we are flooded with reviews of the year. For many reasons, 2016 wasn’t a particularly good year: some of the “democratic” decisions made this year cast doubt on the so-called “swarm intelligence”, which in 2016 apparently turned into “swarm dullness”. With the alt-right, fake news and the post-factual world being an imminent threat to mental sanity, we can only hope for a better 2017. Anyway – that’s not the topic of this blog post. As many other journals did, the NEJM – the top journal of the medical world – has nominated its top articles in each speciality (http://www.jwatch.org/na43004/2016/12/23/nejm-journal-watch-psychiatry-top-stories-2016).

Amazingly, amongst the Top 10 papers in psychiatry, three dealt with ADHD – and even better, two of them featured IMpACT / MiND / Aggressotype / CoCA researchers in the author list! Here are the papers in detail:

  • the finding that the use of stimulants is safe in bipolar disorder with comorbid ADHD (Viktorin et al.; http://ajp.psychiatryonline.org/doi/10.1176/appi.ajp.2016.16040467 – also one of my favourite studies this year) (with H. Larsson, IMpACT / MiND / CoCA)
  • a meta-analysis showing that EEG-based neurofeedback does not have a significant beneficial effect in ADHD, and also suggesting that unblinding of the raters might have influenced positive reports (http://www.jaacap.com/article/S0890-8567(16)30095-8/abstract) (with Dani Brandeis, Aggressotype)
  • the report, as sad as it is important, that young children (aged 5 to 11) who died by suicide more frequently showed symptoms of ADHD than depressive features (almost 60% of 87 children). Even for this most devastating outcome, it is thus very important to diagnose ADHD adequately (http://pediatrics.aappublications.org/content/138/4/e20160436), especially considering that ADHD goes along with a lifelong increased risk for suicide, which can be lowered by MPH treatment.

In my opinion, the fact that the editors picked three ADHD-relevant papers for their top 10 list demonstrates that ADHD is a hot topic and that we provide cutting-edge research in the field – and we will continue to do so in 2017! Watch this space for more news on ADHD / ASD, my personal top picks of 2016 and more exciting research in the coming year! Happy New Year and all the best for 2017 to all of you – may it bring peace, happiness and reason to this discomposed world.

Ghost busters: why even high-ranking psychotherapy studies might be lousy

The last half year saw two high-ranking psychotherapy studies in depression, published in the most prestigious journals of our profession. While in principle this is laudable, in these specific cases it does no good, as the studies fall short of proving what they actually claim and because they sacrifice medical care for the sake of cost. Let’s have a closer look.

The first paper (https://www.ncbi.nlm.nih.gov/pubmed/27461440), on the “COBRA” study, was published in the Lancet (!!!) and claims that “behavioral activation” delivered by “junior mental health workers” (i.e. relatively untrained and, first and foremost, receiving relatively small wages) is just as effective as routine CBT delivered by trained psychologists (the gold-standard psychotherapy for depression). So depression treatment can be quite unspecific and cheap! Yay!

Not.

The most important drawback is that the primary endpoint was set at twelve months. As the average length of an untreated depressive episode is six to eight months, this makes no sense at all. Imagine a study of two treatments for the common cold that set its primary endpoint at 4 weeks. Would you buy that? As neither a survival plot (Kaplan-Meier plot) nor any other time course is shown, being suspicious is appropriate.
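To make the point concrete, here is a minimal toy model in Python – with made-up parameters, not COBRA data – of why a late endpoint is a poor place to look for a treatment effect: if symptoms decay towards remission over the natural course of an episode, the between-arm difference peaks around the natural episode length and then shrinks as both arms drift towards the floor.

```python
# Toy model (made-up numbers, NOT COBRA data): mean symptom scores decaying
# exponentially towards remission. A hypothetical faster-remitting arm differs
# clearly from the natural course mid-episode, but by month 12 both arms are
# near the floor and the difference has shrunk.
def mean_score(months, baseline=20.0, half_life_months=6.0):
    """Mean symptom score under a simple exponential-decay model of remission."""
    return baseline * 0.5 ** (months / half_life_months)

for m in (3, 6, 12):
    usual = mean_score(m)                         # natural course / comparator
    faster = mean_score(m, half_life_months=3.0)  # hypothetical faster remission
    print(f"month {m:2d}: usual {usual:4.1f}, faster {faster:4.1f}, diff {usual - faster:4.1f}")
```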

A further, unfortunately quite common, drawback is the lack of a sham psychotherapy group (i.e., this study is not placebo-controlled). Given the year-long course, a sham condition might well have turned out to be just as effective as the two active treatments.

This is made even worse by the fact that 80% of participants in both groups received antidepressant drug treatment. A ceiling effect is likely at work, further obscuring any effect of whatever psychotherapy is done.

This is not a non-inferiority trial. This is a failed trial.

Another study, published in JAMA Psychiatry (https://www.ncbi.nlm.nih.gov/pubmed/27487573), echoes the COBRA study, although with a different flavor. Here, psychodynamic therapy (not behavioral activation) was compared against CBT in depression. I don’t want to nag about the sample being underpowered for a non-inferiority trial (especially when looking at how many patients attended more than five sessions: 116 in total). Again, we have a rather late endpoint (5 months) without any description of time courses, and again a sham psychotherapy condition is lacking. Even worse, the average Hamilton score (HAM-D, likely HAM-D 17) was 21±6 points. This is quite low and barely reaches the border to moderate depression; the usual cut-off for inclusion in pharma trials lies between 20 and 22. This means that many patients with mild depression were included, who usually are not the target population for depression studies. Any difference from placebo/sham is hard to demonstrate due to floor effects, especially when considering the low number of patients adhering to therapy and the measured effect size of 0.6 (Cohen’s d, corresponding to a medium effect).

Considering all this, watchful waiting, mere psychoeducation or having a beer every week or so would have had the same effect, namely a reduction of five points on the HAM-D 17, as this is just the naturalistic course of mild to moderate depression. An indicator of this are the substantially overlapping SDs pre- and post-treatment (wisely enough, the authors did not go for a graphical display of their data). The most parsimonious interpretation of the data is thus that both treatments are equally ineffective!
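As a back-of-the-envelope check – assuming the reported d = 0.6 is standardized against the baseline SD of 6 points, which the paper may well compute differently – the implied change on the HAM-D 17 is modest:

```latex
d = \frac{\Delta \bar{x}}{s}
\;\Rightarrow\;
\Delta \bar{x} \approx 0.6 \times 6 \approx 3.6 \ \text{HAM-D points}
```

That is well under one baseline SD and no larger than the roughly five-point improvement expected from the natural course alone – hardly something a trial without a sham arm can attribute to the therapies.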

You may ask about antidepressant use here as well. There is a simple answer: we don’t know. The numbers of patients on antidepressants are not given at all! We know they were there, though, thanks to a small subclause: “We found no statistically significant interaction between the use of psychotropic medication and treatment group on the rate of change in the HAM-D”.

This is another failed trial. Although it was prominently published…

Unsurprisingly, the relapse rate (the reduction of which is a major effect of psychotherapy) is not given in either study.

It is very surprising, however, that these studies were published so prominently despite these obvious flaws. I can only speculate on the reasons. Regarding the Lancet paper (COBRA), I assume that economic reasons play a major role. Cheap BA treatment by “juniors” (didn’t we just learn to abandon that word from our vocabulary?) is as effective as CBT by expensive, greedy psychologists. That makes treatment less expensive, which however is somewhat tainted by the fact that it is ineffective (notwithstanding the common-sense experience that unspecific BA, especially in early-stage depression, may be quite helpful). Never mind. The psychodynamic study might have undergone a “wishful thinking” review process – so many people out there desperately wish that psychodynamic therapy works as well as CBT… so this one came in quite handy. However, no favor was done to the field; on the contrary. We do not need such badly designed (or at least badly presented) studies; what we do need are psychotherapy trials adhering to the highest standards, in analogy to drug trials: presenting time courses, studying severe cases, being well powered with low attrition rates, and – most importantly – including a sham (= placebo) condition.

Going bananas about methylphenidate studies!

Why did I choose to use the Minions as the feature image for this post, along with the catchy title? Simply to attract attention. Sheer clickbait. While this is perfectly acceptable for a blog post (well, almost…), it is not for scientific publications. This refers not only to the title of a paper, but also to the way it is disseminated; and in this respect, a series of manuscripts under the lead authorship of O. Storebo raised some eyebrows with their bold claim that there is no evidence that methylphenidate actually works. While most of us clinicians would readily agree that this medication requires experience, thorough assessment and responsibility, and that it is not rarely ill-prescribed (often, however, by doctors other than psychiatrists), most of us are sure that it is indeed an effective medication, given that the ADHD diagnosis is valid. So are we all deluded?

Well… probably not. At least not when it comes to methylphenidate treatment.

The group around Storebo, whose previous ADHD research consisted of one trial on social skills training (the SOSTRA study), conducted a Cochrane review on the efficacy of methylphenidate in ADHD and found that “methylphenidate may improve teacher-reported ADHD symptoms”, but that “due to the very low quality of the evidence, the magnitude of the associated improvement is uncertain”. This led to some far-fetched conclusions and statements (see e.g. the conclusions section of the abstract here: http://www.ncbi.nlm.nih.gov/pubmed/26599576), and it was widely communicated in the media that “methylphenidate is not effective”. A deleterious statement, which also outraged many patients and parents.

So far, so bad. Cochrane reviews are known for their methodological rigor, but there are many back doors, so one should always look at them critically: you can tweak your input. What was the fine-tuning done here? To start with, the effect size estimate is based on only 19 of the 185 included studies. Four of these investigated methylphenidate versus an active control, and another was undertaken in children under 6 years of age (off-label use). When these studies are excluded, as they should have been, the effect size increases to a large effect of 0.89. On the other hand, 56 studies that employed a cross-over design were excluded for no clear and good reason.
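Not a re-analysis of the Cochrane data, just a toy sketch in Python with made-up numbers to show the mechanics: under inverse-variance pooling, a handful of near-zero comparisons (such as active-comparator or off-label trials) can noticeably dilute the overall standardized mean difference, and excluding them pushes the pooled estimate back up.

```python
# Toy illustration (hypothetical numbers, NOT the Cochrane data): how a
# fixed-effect, inverse-variance pooled SMD shifts when a few near-zero
# comparisons (e.g. active-comparator arms) are removed from the pool.

def pooled_smd(studies):
    """Pool standardized mean differences; `studies` is a list of (smd, variance)."""
    weights = [1.0 / var for _, var in studies]
    return sum(w * smd for (smd, _), w in zip(studies, weights)) / sum(weights)

# Hypothetical placebo-controlled trials with moderate-to-large effects
placebo_controlled = [(0.90, 0.04), (0.80, 0.05), (1.00, 0.06), (0.85, 0.05)]
# Hypothetical active-comparator / off-label trials with near-zero effects
active_or_offlabel = [(0.10, 0.05), (0.00, 0.06), (0.20, 0.07)]

print(f"all comparisons pooled:  {pooled_smd(placebo_controlled + active_or_offlabel):.2f}")
print(f"placebo-controlled only: {pooled_smd(placebo_controlled):.2f}")
```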


This should be enough to cast doubt on the review. However, in addition, there was an unusually strict and almost arbitrary assessment of bias; this led the authors to rate ALL (!!!) 185 studies as being at high risk of bias – and hence to categorize all studies as “low quality”. The evidence to support this claim is scant, however. Personally, I find it outrageous (and as I did not take part in any of these studies, I am not biased…), especially as the most commonly assumed source of potential bias was “conflicts of interest”. While I consider disclosure of interest a very important thing, one cannot make a blanket accusation and suspect almost a whole speciality of being bribed. This is demagoguery, not science. This categorization results in a striking devaluation of decades of evidence from RCTs and also contradicts e.g. a NICE review (http://www.ncbi.nlm.nih.gov/pubmed/16796929).

Finally, Storebo’s proposal to implement long-term nocebo-controlled studies – despite the strong existing evidence from several decades of RCTs on methylphenidate – implies administering a substance with no known benefit but significant side effects to many patients, including minors, for a substantial period of time. In my opinion, this is deeply unethical and conflicts with §33 of the Declaration of Helsinki.

While the grounds for their bottom-line claim may be slippery, the authors do a good job of selling it. They published the original Cochrane review (http://www.ncbi.nlm.nih.gov/pubmed/26599576), followed by a publication of the very same data in the prestigious BMJ (http://www.ncbi.nlm.nih.gov/pubmed/26608309) and another publication of the same dataset in JAMA (http://www.ncbi.nlm.nih.gov/pubmed/27163989). Bear with me, but haven’t I been told in grad school that one of the Ten Commandments of Science is “Thou shalt not publish the same data twice”?

By simply repeating their interpretation over and over in high-impact journals, the notion that methylphenidate does not work will trickle into the general consciousness, even though the empirical basis suggests otherwise. This will harm our patients, and this is why we have to actively address and refute these papers.