
Monday, May 5, 2014

Protocol for Disaster?

After reading the AHRQ study protocol, I was left with the distinct impression that they had already reached their conclusions, and the "systematic review" had simply been adjusted to fit them.

One question in particular had the undeniable scent of prior judgment:

"What harms are associated with diagnosing ME/CFS?"

This question undoubtedly refers to the CDC's refusal to include the 2-day CPET as part of its multi-site study. Beth Unger's justification for the exclusion was that the "physical toll would be too high" for patients.

But by the same token, the AHRQ review is designed to include the PACE trial. What kind of verbal gymnastics will the P2P perform in order to conclude that the 2-day CPET is harmful while simultaneously recommending GET?

Jennie Spotila has done an in-depth analysis of the review (below) which clarifies what is wrong with it and spells out the consequences.

Reprinted with permission

Protocol for Disaster?

By Jennie Spotila on OccupyCFS

The study protocol for the systematic review of ME/CFS was posted by the Agency for Healthcare Research and Quality yesterday.

It’s a recipe for disaster on its own, and within the broader context of the NIH P2P Workshop it’s even worse. Let me show you some of the reasons why.

Remind Me What This Is

The systematic evidence review is the cornerstone of the P2P process. The P2P meeting on ME/CFS will feature a panel of non-ME/CFS experts who will produce a set of recommendations on diagnosis, treatment, and research.

Because the P2P Panel members are not ME/CFS experts, they need background information to do their job. This systematic evidence review, conducted by Oregon Health & Science University under contract to AHRQ, will be that background information. The systematic evidence report will be presented to the Panel in advance of the public P2P meeting, and will be used to establish the structure of the meeting as well.

The systematic review is the foundation. If done correctly, it would be a strong basis for a meaningful workshop. If done poorly, then everything that follows – the workshop and the resulting recommendations – will crumble. Based on the protocol published yesterday, I think “crumble” is putting it mildly.

The Key Questions
"You can’t get the right answer if you don’t ask the right questions."
 ~Dr. Beth Collins-Sharp, CFSAC Minutes, May 23, 2013, p. 12
As I wrote in January, the original draft questions for the evidence review included whether CFS and ME were separate diseases. That question is GONE, my friends. Now the review is only looking at two things:

  1. What methods are available to clinicians to diagnose ME/CFS, and how does the use of these methods vary by patient subgroups?
  2. What are the benefits and harms of therapeutic interventions for patients with ME/CFS, and how do they vary by patient subgroups?

These questions are based upon a single, critical assumption: ME and CFS are the same disease. Differences among patient groups represent subtypes, not separate diseases. The first and most important question is whether the ME and CFS case definitions all describe one disease. But they’re not asking that question; they have already decided the answer is yes.

The study protocol and other communications from HHS (including today’s CFSAC listserv message) state that the P2P Working Group refined these study questions. The implication is that since ME/CFS experts and one patient served on the Working Group, we should be satisfied that these questions were appropriately refined. But what I’m piecing together from various sources indicates that the Working Group did not sign off on these questions as stated in the protocol.

Regardless of who drafted these questions, they cannot lead to the right answers because they are not the right questions. And when you examine the protocol of how the evidence review will be conducted, these questions get even worse.

Protocol Problems


The real danger signals come from the description of how this evidence review will be done. The issue is what research will be included and assessed in the review. For example, when asking about diagnostic methods, what definitions will be considered?

This evidence review will include studies using “Fukada [sic], Canadian, International, and others”, and the Oxford definition is listed in the table of definitions on page 2 of the protocol. That’s right, the Oxford definition. Oxford requires only one thing for a CFS diagnosis: six months of fatigue. So studies done on people with long-lasting fatigue are potentially eligible for inclusion in this review.

The description of the population to be covered in the review makes that abundantly clear. For the key question on diagnostic methods, the study population will be: “Symptomatic adults (aged 18 years or older) with fatigue.” There’s not even a time limit there. Three months fatigue? Four? Six? Presence of other symptoms? Nope, fatigue is enough.

There is a specific exclusion: “Patients with other underlying diagnosis,” but which conditions are exclusionary is not specified. So will they exclude studies of patients with depression? Because the Oxford definition does not exclude people with depression and anxiety. We’ve seen this language about excluding people with other underlying diagnosis before – and it results in lumping everyone with medically “unexplained” fatigue into one group. This protocol is set up to result in exactly that. It erases the lines between people with idiopathic chronic fatigue and people with ME, and it puts us all in the same bucket for analysis.

And what about the key question on treatment? What studies will be included there? All of them. CBT, GET, complementary/alternative medicine, and symptom-based medication management. It’s not even restricted to placebo trials; trials with no treatment, usual care, and head-to-head trials are all included.

Let’s do the math. Anyone with unexplained fatigue, diagnosed using Oxford or any other definition, and any form of treatment. This adds up to the PACE trial, and studies like that.

But it’s even worse. The review will look at studies published since January 1988 because that was the year “the first set of clinical criteria defining CFS were published.” (page 6) Again, let’s do the math: everything published on ME prior to 1988 will be excluded.

Finally, notice the stated focus of the review: “This report focuses on the clinical outcomes surrounding the attributes of fatigue, especially post-exertional malaise and persistent fatigue, and its impact on overall function and quality of life because these are unifying features of ME/CFS that impact patients.” (page 2) In other words, PEM = fatigue. And fatigue is a unifying concept in ME/CFS. Did anyone involved in drafting this protocol actually listen to anything we said at last year’s FDA meeting?

Bad Science


Maybe you’re thinking it’s better for this review to cast a broad net. Capture as much science as possible and then examine it to answer the key questions. But that’s not going to help us in this case.

This review will include Oxford studies. It will take studies that only require patients to have fatigue and consider them as equivalent to studies that require PEM (or even just fatigue plus other symptoms). In other words, the review will include studies like PACE, and compare them to studies like the rituximab and antiviral trials, as if both patient cohorts were the same.

That assumption – that patients with fatigue are the same as patients with PEM and cognitive dysfunction – is where this whole thing falls apart. That assumption contaminates the entire evidence base of the study.

In fact, this review protocol makes an assumption about how the Institute of Medicine study will answer the same question. It is possible (though not assured) that IOM will design diagnostic criteria for the disease characterized by PEM and cognitive dysfunction. But this evidence review is based on an entirely different patient population that includes people with just fatigue. The conclusions of this evidence review may or may not apply to the population defined by the IOM.

It’s ridiculous!

But it’s the end use that really scares me. Remember that this systematic evidence review report will be provided to that P2P Panel of non-ME/CFS experts. The Panel will not be familiar with the ME/CFS literature before they get this review. And the review will conflate all these definitions and patient populations together as if they are equivalent. I think it’s obvious what conclusion the P2P Panel is likely to draw from this report.

I would love to be wrong about this. I would love for someone to show me how this protocol will result in GOOD science, and how it will give the P2P Panel the right background and foundation for the recommendations they will draft. Please, scientists and policy makers who read this blog – can you show me how this protocol will produce good science? Because I am just not seeing it.

What Do We Do?

This protocol is bad news but it is by no means the last word. Plans are already in motion for how the advocacy community can respond. I will keep you posted as those plans are finalized.

Make no mistake, this evidence review and P2P process are worse than the IOM study. We must respond. We must insist on good science. We must insist that our disease be appropriately defined and studied.