Thursday, October 16, 2014

AHRQ Evidence Review - Comments From the Advocates

This post comes via Mary Dimmock, Claudia Goodell, Denise Lopez-Majano, and Jennie Spotila. You are welcome to publish it on your site with attribution and a link back to Jennie's post. You are also welcome to use this (and other material they’ve gathered) as a framework for your own comments on the draft evidence review - due October 20th.

Evidence Review Comments Preview

From Occupy CFS, October 15, 2014

By Jennie Spotila

It’s been a challenging few weeks, digesting and analyzing the AHRQ Draft Systematic Evidence Review on Diagnosis and Treatment of ME/CFS. We continue to be deeply concerned about the many flaws in the review, in terms of both the approach it took and how it applied the study protocol.

Our comments on the Review will reflect our significant concerns about how the Evidence Review was conducted, the conclusions it draws about diagnosis, subgroups, treatment, and harms, and the risk of undue harm that this report creates for patients with ME. We believe a final version should not be published until these scientific issues are resolved.

Most fundamentally, the Evidence Review is grounded in the flawed assumption that eight CFS and ME definitions all represent the same group of patients that are appropriately studied and treated as a single entity or group of closely related entities. Guided by that assumption, this Evidence Review draws conclusions on subgroups, diagnostics, treatments and harms for all CFS and ME patients based on studies done in any of these eight definitions. In doing so, the Evidence Review disregards its own concerns, as well as the substantial body of evidence that these definitions do not all represent the same disease and that the ME definitions are associated with distinguishing biological pathologies. It is unscientific, illogical and risky to lump disparate patients together without regard to substantive differences in their underlying conditions.

Compounding this flawed assumption are the a priori choices in the Review Protocol, which focused on a narrower set of questions than originally planned and applied restrictive inclusion and exclusion criteria. As a result, evidence that would have refuted the flawed starting assumption, or that was required to accurately answer the questions, was never considered. Some examples of how these assumptions and protocol choices negatively impacted this Evidence Review include:
  • Evidence about the significant differences among patient populations, and about the unreliability and inaccuracy of some of these definitions, was ignored or dismissed. This includes: Dr. Leonard Jason’s work undermining the Reeves Empirical definition; a study showing the instability of the Fukuda definition over time in the same patients; studies demonstrating that Fukuda and Reeves encompass different populations; and differences in inclusion and exclusion criteria, especially regarding PEM and psychological disorders.
  • Diagnostic methods were assessed without first establishing a valid reference standard. Since no gold standard exists, each definition was allowed to stand as its own reference standard without any demonstration that it was valid.
  • Critical biomarker and cardiopulmonary studies, some of which are in clinical use today, were ignored because they were judged to be intended to address etiology, regardless of the importance of the data. This included most of Dr. Snell’s and Dr. Keller’s work on two-day CPET, Dr. Cook’s functional imaging studies, Dr. Gordon Broderick’s systems networking studies, Dr. Klimas’s and Dr. Fletcher’s work on NK cells and immune function, and all of the autonomic tests. None of it was considered.
  • Treatment outcomes associated with all symptoms except fatigue were disregarded, potentially resulting in a slanted view of treatment effectiveness and harm. This decision excluded Dr. Lerner’s antiviral work, as well as entire classes of pain medications, antidepressants, anti-inflammatories, immune modulators, sleep treatments and more. If the treatment study looked at changes in objective measures like cardiac function or viral titers, it was excluded. If the treatment study looked at outcomes for a symptom other than fatigue, it was excluded.
  • Treatment trials shorter than 12 weeks were excluded, even if the treatment duration was therapeutically appropriate. The big exclusion here was the rituximab trial; despite following patients for 12 months, it was excluded because administration of rituximab was not continuous for 12 weeks (even though rituximab is not approved for 12 weeks of continuous administration in ANY disease). Many other medication trials were also excluded for not meeting the 12-week mark.
  • Counseling and CBT treatment trials were inappropriately pooled without regard for the vast differences in therapeutic intent across these trials. This meant that CBT treatments aimed at correcting false illness beliefs were lumped together with pacing and supportive counseling studies, and treated as equivalent.
  • Conclusions about treatment effects and harms failed to consider what is known about ME and its likely response to the therapies being recommended. This means that the results of the PACE trial (an Oxford-definition study) for CBT and GET were not only accepted (despite the many flaws in those data), but were determined to be broadly applicable to people meeting any of the case definitions. Data on the abnormal physiological response to exercise in ME patients were excluded, and so the Review did not conclude that CBT and GET could be harmful to these patients (although it did allow that harm might be possible).
  • The Evidence Review states that its findings are applicable to all patients meeting any CFS or ME definition, regardless of the case definition used in a particular study.
The problems with this Evidence Review are substantial in number, magnitude and extent. At its root is the assumption that any case definition is as good as the rest, and that studies done on one patient population are applicable to every other patient population, despite the significant and objective differences among these patients. The failure to differentiate between patients with the symptom of subjective unexplained fatigue on the one hand, and objective immunological, neurological and metabolic dysfunction on the other, calls into question the entire Evidence Review and all conclusions made about diagnostic methods, the nature of this disease and its subgroups, the benefits and harms of treatment, and the future directions for research.

As the Evidence Review states, the final version of this report may be used in the development of clinical practice guidelines or as a basis for reimbursement and coverage policies. It will also be used in the P2P Workshop and in driving NIH’s research strategy. Given the likelihood of those uses and the Evidence Review’s claim of broad applicability to all CFS and ME patients, the flaws within this report create an undue risk of significant harm to patients with ME and will likely confound research for years to come. These issues must be addressed before this Evidence Review is issued in its final form.