http://www.forbes.com/sites/larryhusten/2014/04/28/stem-cell-therapy-to-fix-the-heart-a-house-of-cards-about-to-fall/#42529b047341
For more than a decade, cardiac stem cell therapy has attracted an enormous amount of attention, promise, and research dollars. Now an original and important new study published in BMJ finds that many of the most promising results in the field are illusory and that the potential benefits of stem cells to treat heart disease are probably far more modest than we’ve been led to believe. The study also raises disturbing questions about ethics and research conduct (and misconduct) in a high-flying field.
Researchers in the UK, led by Darrel Francis, closely scrutinized 133 reports from 49 different clinical trials testing autologous bone marrow stem cells in patients with heart disease. They found an astonishingly large number of discrepancies in the reports: more than 600 altogether, ranging from minor oversights to serious unexplained errors and apparent deceptions. Many were mathematical or statistical errors; others were more general, such as conflicting descriptions of the same study as either a prospective randomized trial or a retrospective observational study.
The key finding of the study is a very strong correlation between the number of discrepancies in a study and the reported improvement in heart function, as measured by left ventricular ejection fraction (LVEF). The 5 trials with no discrepancies reported no improvement in LVEF (-0.4%). In stark contrast, the 5 trials with the highest number of discrepancies, each with more than 30, reported a very large and, if true, clinically significant improvement in LVEF (+7.7%). The effect was consistent: the more errors a study contained, the larger the treatment effect it was likely to report.
The authors summarized their finding:
Our study shows that scientists who achieve progressively better consistency of reporting find progressively smaller effects on ejection fraction of treatment with stem cells derived from bone marrow. In trials with a discrepancy count of zero, the ejection fraction effect seems to be zero.
By sheer coincidence, the BMJ publication occurs simultaneously with the publication of a Cochrane review of stem cells for heart disease. The two papers don’t completely overlap, but they are in many ways congruent. Analyzing the literature, the Cochrane reviewers found “some evidence that stem cell treatment may be of benefit.” But, they noted, “the quality of the evidence is relatively low because there were few deaths and hospital readmissions in the studies, and individual study results varied. Further research involving a large number of participants is required to confirm these results.”
Similarly, the BMJ authors note that “viewing all the studies together as a single entity, there is on average a positive effect.” But, they warn, “averaging effect size across all studies might therefore not be wise because it does not reflect their varying factual accuracy.” Although Cochrane is known for its rigorous methodology, it is generally not within the purview of Cochrane reviewers to seek out errors and inconsistencies in individual studies.
It should be noted that the BMJ paper is only the latest, though perhaps the most sweeping, of a series of setbacks to cardiac stem cell research. Earlier this month two important papers, one published in Circulation and one in the Lancet, from the group of Piero Anversa, a very prominent and high-profile stem cell researcher, were discredited as a result of an ongoing investigation at Harvard Medical School and Brigham and Women’s Hospital. Last year, in what amounts to a dry run for the BMJ paper, Francis and colleagues published devastating critiques of multiple papers from the German research group led by Bodo-Eckehard Strauer and of the C-CURE study published in JACC.
“Shocking, profoundly disappointing and… very sad”
I asked several stem cell and clinical trial experts to comment on the study. All agreed that this sort of tough scrutiny is long overdue. Well-known Yale University cardiologist Harlan Krumholz said that “this important article emphasizes how we cannot allow enthusiasm to get ahead of the science – or even pervert the science to fit our expectations. It is also a clarion call for transparency – new exciting claims (like all science) need to have all the data available for independent scrutiny.”
Steven Epstein is a pioneering gene therapy and stem cell researcher who is currently the director of translational and vascular biology research at the MedStar Heart Institute. He was for many years the chief of the cardiology branch at the NHLBI. His extended comments follow:
The results and conclusions are shocking, profoundly disappointing and, from a personal perspective, very sad. Nonetheless, the findings and conclusions are not at all surprising to me, as for many years I’ve been aware of investigators presenting results as more positive than they actually were, or even indicating a negative trial was “positive” by emphasizing the effects on one of several secondary endpoints despite the primary endpoint showing no effect.
We certainly need to avoid lumping investigators guilty of actual fraud with those either guilty of distorting their results by hyping them beyond the point that can be scientifically justified, or guilty of sloppy data collection and analysis. Clearly, if fraud is proven, the responsible investigators have to be dealt with severely. However, even the lesser of the offenses in which data have been misrepresented lead, unfortunately, to similar outcomes.
First, funding is misdirected toward investigators who successfully hype their unproven data, while investigators demonstrating small incremental effects, or even negative effects, can’t possibly compete for public or private funding opportunities. It also follows that investigators, recognizing that funding is now usually directed to studies that are “hyped” and an “apparent success,” feel enormous pressure to present their data in a way that appears as successful as possible, even when such a result requires some distortion of the hard scientific results.
Second, many innocent investigators are misled by such publications and wind up dedicating years of their own research to paths that lead down blind alleys.
And it’s not just the individual investigators who are to blame. Journals want to be the first to publish what appears to be cutting edge research, and so they, in my view, have markedly lowered their standards as to the level of documentation needed to be achieved before a preclinical or clinical trial is accepted for publication. It seems the criteria have moved from accepting studies only after they provide compelling evidence proving the validity of the hypotheses examined, toward accepting studies that would be most exciting and newsworthy if true—regardless of the level of proof.
Major funding agencies also deserve major blame, as they too seem to have been seduced into supporting investigators who have successfully marketed exciting concepts rather than those who have supported their concepts with solid empirical evidence.
This is a critical time in the medical research field. It’s almost as though investigators don’t really know what to believe any more, even if the study that has caught their attention is published in a prestigious journal. On the other hand, we’ve got to avoid assaulting the integrity of investigators in an entire field, the huge majority of whom are hardworking, creative, and honest.
The article continues on page 2 at the link above.