Mental health

Antidepressant paroxetine study 'under-reported data on harms'

"Seroxat [paroxetine] study under-reported harmful effects on young people, say scientists," The Guardian reports. Researchers have reanalysed data about the antidepressant paroxetine – no longer prescribed to young people – and claim important details were not made public.

Researchers who looked at data from the now infamous 1990s "study 329" trial of the antidepressant paroxetine found reports of suicide attempts that were not included in the original research paper.

The makers of paroxetine, GlaxoSmithKline (GSK), marketed paroxetine as a safe and effective antidepressant for children, despite the evidence of harms. GSK later paid the US Department of Justice a record $3 billion settlement, in part over false claims made about its drugs.

The new analysis of thousands of pages of data contradicted the original claims that paroxetine was "generally well-tolerated and effective" for treating adolescents with depression. By contrast, the new analysis found "no advantage" from paroxetine and an "increase in harms", compared to placebo.

This new analysis found that the original study paper over-reported the effectiveness of paroxetine and under-reported potential harms. It raises questions about how much we can rely on the reported results of medical trials without independent access to the raw trial data.

Where did the story come from?

The study was carried out by researchers from Bangor University in Wales, Emory University in Atlanta, US, the University of Adelaide in Australia and the University of Toronto in Canada. The researchers say they had no specific funding source for their work.

The study was published in the peer-reviewed British Medical Journal (BMJ). It was made available on an open-access basis, meaning it is free for anyone to read online.

The story was, in the main, accurately reported in The Independent, The Guardian and the Mail Online.

What kind of research was this?

This was an unusual study, in that it was a re-analysis of a previously reported placebo-controlled double-blind randomised controlled trial.

This type of trial is seen as very high-quality, because researchers can directly compare what happens to people taking one type of drug compared to another type, or to a placebo.

However, there have been concerns about how accurately adverse effects are reported in randomised controlled trials, especially those funded by drug manufacturers.

What did the research involve?

The independent researchers asked the manufacturer of paroxetine, GSK, for access to the original trial data. They re-analysed the data according to the original trial protocol (the document setting out how the trial should be run). They then compared their findings to the research paper that reported the trial results, which was published in 2001.

The original study reported on 275 young people aged 12 to 18 with major depression, who were randomly allocated to either paroxetine, an older antidepressant drug called imipramine, or placebo, for eight weeks.

The documents studied by the researchers included the clinical study report showing the researchers’ raw data, and one third of the original case reports on the young people who took part in the trial. 

They checked this sample of 93 patients for reports of adverse events, recorded these, and compared them to the events recorded in the clinical study report and the 2001 published research paper.

Because research practices have changed since the 1990s, they analysed the data in several ways, to show how the results would have been reported under current best practice compared with best practice at the time.

What were the basic results?

The researchers found that neither paroxetine nor imipramine was more effective than placebo, using the outcome measures specified in the original research protocol. However, the 2001 research paper picked a different set of outcome measures, which they said showed that paroxetine worked better than placebo. This is suspicious, because it suggests that the new outcome measures were chosen specifically to show a positive result, after the original outcome measures failed.

The researchers also found that the 2001 paper seriously under-reported cases of suicidal or self-harming behaviour. The 2001 paper reported five cases of suicidal behaviour for people taking paroxetine, three taking imipramine and one taking placebo. Yet the clinical study report on which the paper should have been based reported seven events for people taking paroxetine.

When the researchers included new cases identified from the case reports of 93 of the 275 patients in the study, they found 11 reports that could be classed as suicidal behaviour. They also found that many hundreds of pages of data were missing from the reports they looked at, without clear reason.

They said the 2001 paper reported 265 adverse events for people taking paroxetine, while the clinical study report showed 338. They said their analysis of the clinical study report identified 481 adverse events, and their scrutiny of case records found another 23 that had not been previously reported.

How did the researchers interpret the results?

The researchers said their findings showed "evidence of protocol violations" with the addition of new outcome measures after the results were known, and "unreliable" coding of adverse events, such as suicidal behaviour.

They said the extent of the serious adverse events associated with paroxetine was only apparent when they looked at the individual case reports – a huge task, which involved trawling through 77,000 pages of data made available by GSK.

Conclusion

This study stands as a warning about how supposedly neutral scientific research papers may mislead readers by presenting findings in a certain way.

The differences between the independent analysis published in the BMJ and the 2001 research paper are stark. They cannot both be right. The "authors" of the 2001 paper appear to have picked outcome measures to suit their results, in the way they present evidence of effectiveness. 

It has subsequently come to light that the first draft paper was not actually written by the 22 academics named on the paper, but by a "ghostwriter" paid by GSK.

The study also seems to have under-reported adverse events, even those that were included in the researchers’ clinical study report.

The re-analysis does have some potential flaws. The researchers admit to some uncertainty about how to classify adverse events that happened after the end of the main eight-week phase of the trial, which could be seen as either withdrawal effects or effects of the drug. Because the number of young people reported as having suicidal behaviour is relatively small, the re-coding of adverse effects has a large impact.

It is possible that an alternative coding of adverse effects would change the results again. However, re-coding does not explain why adverse effects from the researchers’ clinical study report did not make it into the 2001 paper. The researchers were also able to look at only 93 of the 275 case reports, because they had insufficient time or resources. It is possible that a full re-analysis might change the overall message.

We don’t know how many young people may have been prescribed paroxetine for depression as a result of the 2001 paper. It was prescribed to 8,000 under-18s in the UK in 2001, before the regulatory authorities in the UK banned it for under-18s. However, paroxetine was used much more widely in the US.

The National Institute for Health and Care Excellence (NICE) recommends that only one antidepressant, fluoxetine, should be used for under-18s with moderate to severe depression, and only alongside psychological therapy. Two other antidepressants (sertraline and citalopram) are recommended as additional options for children who have not responded to treatment or who have recurrent depression.

This new analysis seems to show that paroxetine was not effective or safe for the young people in the trial. The fact that the 2001 paper reported it to be both effective and safe raises serious questions about the reliability of industry-funded clinical trials.


NHS Attribution