Friday, February 12, 2016

New NIH Study Raises Questions, Concerns, Hackles

After the NIH posted the protocol for its in-house study on ME/CFS, there was a firestorm of protest, which took Dr. Walter Koroshetz, the man in charge of the new NIH ME/CFS research program, by surprise. 

The protest centered on the use of the Reeves definition (also known as the Empirical Definition) to identify the patient cohort. The Reeves definition has barely been used in research since it was devised in 2005, largely because of its inadequacies. 

There were a number of other problems with the protocol as well, which are referenced in the Solve ME/CFS Initiative's questions below. 

NIH hastily removed the protocol. Then they did something without precedent. Dr. Koroshetz called Carol Head, the CEO of Solve ME/CFS Initiative. 

(Below you will find the questions raised by Solve ME/CFS Initiative after the protocol was posted, as well as Carol Head's summary of her conversation with Dr. Koroshetz.) 

What was clear from the initial protocol was that NIH was on a fishing expedition. Rather than build on previous research, they decided to conduct a vague series of tests (blood tests, stool samples, etc.) in hopes that something would turn up. This is precisely the sort of study we don't need right now. Why reinvent the wheel when there are three decades of research on brain anomalies, immune system dysfunction, mitochondrial dysfunction, and endocrine dysfunction begging for follow-up? (Some of those studies have lain dormant for over a decade for lack of replication.) Do we really need to start from scratch - again?


Reprinted with the kind permission of Solve ME/CFS Initiative.

The National Institutes of Health is beginning to recruit participants for its in-house study of ME/CFS patients. The Solve ME/CFS Initiative has identified a number of significant questions and concerns with the design protocol of this research effort. Our organization—represented by our Vice President for Research and Scientific Programs—was immediately in contact with the NIH officials with whom we have an existing, ongoing relationship to express these serious concerns. We will push forward to determine what may be done to address them and ensure that this study is leveraged to the full benefit of ME/CFS patients. The Solve ME/CFS Initiative also will work in concert with other advocates to ensure maximum impact as a community.

Questions regarding the NIH study protocol that the Solve ME/CFS Initiative will be seeking answers to in the days and weeks to come include:

Is the protocol too broad in its inclusion criteria, and as such of little value, or too narrow, in that it excludes by design the bulk of relevant patients? Why is it not based on the Canadian Consensus Criteria, which are regarded as the “gold standard” for this complex disease?

Did the protocol examine in-depth the recent advances in the field, including the wealth of information compiled in the 2015 Institute of Medicine report and the slew of commentaries and analysis since, especially on criteria definition beyond the 2003 Reeves criteria? While the study does not rely solely on the Reeves criteria, a clear rationale behind the protocol must be provided. For example, have characteristic ME/CFS symptoms like post-exertional malaise been incorporated under this protocol? If not, why not?

Is this protocol timely and current? In other words, has it benefited from or clearly incorporated the most recent developments in technology, clinical management, basic research or scholarly advances in the field, for instance, the literature and recommendations included in the IOM report?

Has the issue of comorbidity been carefully considered? More specifically, has a clear distinction been made between primary, pathway-specific diagnoses/manifestations and secondary, pleiotropic symptoms like depression or lethargy often associated with a plethora of chronic diseases, such as cancer and diabetes?

Are the study endpoints themselves–both qualitative and quantitative–well defined, established and objective? Additionally, has the number of patient participants been determined according to a bio-statistical analysis for each endpoint? Is the control group the one most relevant to assess the changes in each endpoint or between groups? Are there follow-up plans/alternatives built into the protocol, given its focus on the aspect of immunity and inflammation as an initial stage?

This is the response NIH made to those questions:

From Carol Head...

Walter Koroshetz called me moments ago; Zaher Nahle and I had a 20-minute conversation with him, and I want to share it.

What was most pointed was his statement that the NIH often posts protocol-related information online. Most postings attract zero comment; he is not aware of any posting ever that has attracted the kind of burst of response, all of it negative, that this one elicited. They were shocked.

While we and others have been telling NIH staff about the intense interest in the ME/CFS community, now it has been clearly illustrated. This gave us the opportunity to explain why: among other things, the intense interest reflects the desperate desire of patients for research progress in this disease. He experienced our decades-long pent-up demand for federal attention, along with the anger and frustration that accompany it. We also noted that, while virtually all the feedback to NIH was negative, it was HIGHLY informed. We are a patient community that is highly attuned to the science and to the enormous differences among the several historical diagnostic/clinical criteria.

I told him that some in the patient community plan to boycott the study; he was genuinely mystified. “Why do patients not want ME/CFS research to be done?” We noted that bad research is worse than no research, and that “garbage in / garbage out” will occur if the criteria for defining “ME/CFS patients” are not meticulous and highly attuned.

We also stated that funds for this disease MUST come from the federal government; he cited other diseases in which patients have initiated research by amassing significant funds (e.g. he mentioned Huntington’s disease). We discussed the differences in our disease, among them:

  • It is (generally) not fatal (It’s a life sentence, not a death sentence.)
  • There is significant stigma so patients don’t self-identify
  • Patients are often impoverished by this disease
  • It’s difficult to be diagnosed; most are not
  • There are still no clear causes on which to build research budgets.

This makes it quite different from Huntington’s and most other diseases. My sense was that he may not have considered the disease from this “marketing challenge” perspective before, and therefore may not have understood our unique difficulties in raising private research funds; we are so much more dependent on our federal government than most.

He now recognizes that the posting was a significant faux pas; they will post a new protocol. He did not say when.

Dr. Koroshetz has demonstrated his goodwill and genuine desire to move forward in a positive way by proactively calling. He did not have to do so, and most others probably would not have.

Overall, I summarize this as an individual who is committed to doing the right thing and who was shocked by the response. It was/is a wakeup call regarding the intensity of interest and anger in our patient community, and that’s good. We cannot and do not speak for everyone in our patient community (No one can…) and at the same time, I believe that we were able to make incremental progress in closing the enormous gap of understanding that exists between patients and the NIH. I am glad that I was able to contain my longstanding, burning anger long enough to have an intense and candid discussion.

Onward, Carol

Carol Head
President, Solve ME/CFS Initiative

Thursday, February 4, 2016

HHS Ignores Request to Review PACE Trial

On February 3, 2016, a group of advocates wrote to the Agency for Healthcare Research and Quality (AHRQ) asking them to reconsider the inclusion of the PACE trial in their review.

The AHRQ Evidence Review for ME/CFS formed the basis for the P2P report, in which GET and CBT were reported as beneficial treatments. This conclusion was based on the results of the PACE trial, a study which has been roundly criticized for its flawed methodology.

In November 2015, a group of U.S. organizations sent a letter to the U.S. Department of Health and Human Services (HHS) asking them to address concerns raised in a series of articles about the PACE trial by journalist David Tuller. Based on these concerns and the call by the National Institutes of Health (NIH) Pathways to Prevention report to retire the Oxford definition because it could “impair progress and cause harm,” the letter recommended the following steps as appropriate and necessary to protect patients:
  • The AHRQ revise its evidence review to reflect the issues with PACE and with studies using the Oxford case definition in general; 
  • The Centers for Disease Control and Prevention (CDC) remove findings based on PACE and other Oxford case definition studies from current and planned medical education; 
  • HHS use its leadership position to communicate these concerns to other medical education providers; 
  • HHS call for The Lancet to seek an independent reanalysis of PACE.
In the AHRQ’s response, the authors of the evidence review said that they had already considered some of the concerns raised by Tuller and that the additional information would not change the review’s conclusions. This is completely inconsistent with the published review. (The evidence review ranked PACE as a “good” study with “undetected” reporting bias.) AHRQ’s response failed to address the use of the Oxford case definition as the basis of clinical trials for ME/CFS patients. 

The CDC’s response stated that the IOM and P2P “have placed the findings of the PACE trial in an appropriate context for moving the field forward.” (That is bureaucratese for "we are doing nothing.") Like the AHRQ, the CDC failed to address the inclusion of studies based on the Oxford case definition.

HHS did not respond to the request to call on The Lancet to seek an independent review.

If you have not done so, please sign this petition calling for AHRQ and CDC to investigate the PACE trial.


To: Dr. Arlene Bierman

CC: Dr. Suchitra Iyer, Dr. Wendy Perry

Subject: AHRQ response to community request on PACE and Oxford studies

Date: February 3, 2016

Thank you for your December 24 response to the November 15 patient community letter requesting that AHRQ and CDC investigate the concerns with the PACE trial raised by journalist Dr. David Tuller.1 As you know, this issue is of paramount importance because of the risk of harm to patients from inappropriate treatment recommendations based on flawed studies.

The patient community has requested that AHRQ investigate Dr. Tuller’s concerns and then revise its Evidence Review in light of those concerns and of the problems with Oxford definition studies more broadly. The evidence review authors responded that the provided information would not change the conclusions of the report. We disagree.

First, while the authors acknowledge some of the problems with the PACE trial in the full evidence review posted on the AHRQ site, they did not report these problems in the article published in Annals,2 leaving the journal readers unaware of these issues. Of greater concern, in spite of recognizing these issues and stating that they are considered in rating the evidence, the authors still rated PACE as a “good” study with “undetected” reporting bias.3 Such ratings are incompatible with the known flaws in this study and call into question the validity of the evidence-based methods used for such a controversial evidence base. Even based on just the information available at the time of AHRQ’s evidence review, the rating of this study and its subsequent impact on the overall treatment conclusions need to be reassessed.

Second, the patient community had also raised concerns with the inclusion of treatment recommendations based on Oxford studies. This problem was highlighted to AHRQ staff when the evidence review protocol was first issued.4 The review itself acknowledged that the Oxford criteria are problematic because Oxford can include patients “with other fatiguing illnesses.” The Pathways to Prevention report stated that the Oxford criteria could “impair progress and cause harm” and called for it to be “retired.”

And yet, the evidence review made general conclusions about the benefits and harms of CBT and GET. For example, the report stated, “GET improved measures of function, fatigue, global improvement as measured by the clinical global impression of change score, and work impairment.” It also concluded that CBT resulted in improvement in physical function scores.

Notably, the evidence review did not qualify these conclusions on treatment effects by case definition. Such statements can reasonably be inferred to apply to all “ME/CFS” patients.5

The obvious question is whether these conclusions would still be true if the Oxford studies had been removed and analyzed separately. In a reply to a published comment on the Annals article raising this issue, the authors acknowledged the importance of analyzing treatment benefits by case definition. They then stated that the improvement in physical function following CBT was seen in Oxford studies, but not in Fukuda studies.6 However, this finding was not stated in the Annals article itself nor did the article report other differences in benefits and harms by case definition. Given that Oxford is acknowledged to include patients with other diseases and given P2P’s call to retire Oxford, the failure to report Oxford findings separate from findings with other case definitions is a serious flaw of this evidence review. The study limitation statements do not compensate for this flaw.

This has real world consequences for patients. The evidence review’s general conclusions about treatment benefits are already being incorporated into clinical guidelines. For instance, referencing AHRQ’s evidence review along with the PACE trial, UpToDate recommends CBT and GET for patients diagnosed by the IOM criteria.7 But CBT and GET have been studied in Oxford cohorts, where they are used to reverse presumed deconditioning, fear of activity, and false beliefs of having an organic disease. Such treatments are obviously inappropriate for the disease that the IOM said is organic, not deconditioning, and characterized by a systemic intolerance to exertion.8 Mixing and matching patient populations in both the evidence review and in clinical guidelines is of questionable medical ethics and creates a significant risk of harm for patients.

We strongly urge AHRQ to work with the evidence review authors to ensure that the PACE trial and its impact on the evidence review’s treatment recommendations are reassessed. Further, it is critical that the authors explicitly report findings for Oxford studies separately from findings in studies using other case definitions. Finally, to ensure that findings in one group of patients are not being harmfully applied to another group of patients, it is essential that the authors explicitly state that treatment conclusions based only on Oxford studies should not be applied to patients meeting other case definitions, particularly those that require post-exertional malaise.

This matter is of the utmost importance to patients. We hope you will give it the full attention that it deserves. Contact Mary Dimmock if you need additional clarification or background.


Massachusetts CFIDS/ME & FM Association
The Myalgic Encephalomyelitis Action Network (#MEAction)
New Jersey ME/CFS Association, Inc.
Open Medicine Foundation (OMF)
Phoenix Rising
Solve ME/CFS Initiative
Pandora Org
Wisconsin ME and CFS Association, Inc.
Mary Dimmock
Claudia Goodell
Sonya Heller Irey
Denise Lopez-Majano (Speak Up About ME )
Matina Nicolson
Donna Pearson
Jennifer Spotila JD (OccupyCFS)
Meghan Shannon
Erica Verrillo (Onward Through the Fog)

1 Patient organizations’ letter to AHRQ and CDC on PACE and Oxford definitions. November 15, 2015.

AHRQ response to November 15, 2015 letter. December 24, 2015.

David Tuller. “TRIAL BY ERROR: The Troubling Case of the PACE Chronic Fatigue Syndrome Study.” Virology Blog. October 21-23, 2015.

First installment:

Second installment:

Third installment:

2 Smith MB, Haney E, McDonagh M, Pappas M, Daeges M, Wasson N, et al. “Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: A Systematic Review for a National Institutes of Health Pathways to Prevention Workshop.” Ann Intern Med. 2015; 162: 841-850.

The full report is posted on the AHRQ site at this location:

3 Ibid.

• Appendix E: Quality Rating Criteria states that in a “good” study, for instance, “important outcomes are considered.” But as Tuller reported, outcomes were changed midtrial, criteria were changed, and some outcomes/analyses were not reported.

• Appendix F: Strength of Evidence Criteria defines reporting bias, which it states includes: “Study publication bias, i.e., nonreporting of the full study; Selective outcome reporting bias, i.e., nonreporting (or incomplete reporting) of planned outcomes or reporting of unplanned outcomes; and Selective analysis reporting bias, i.e., reporting of one or more favorable analyses for a given outcome while not reporting other, less favorable analyses.”

• Appendix K: Strength of Evidence tables reported reporting bias as “Undetected” in rows that included PACE, even when the only study covered for a particular outcome was PACE.

4 The authors state that they correctly included the Oxford criteria because the protocol specified it. However, the problems with inclusion of Oxford were raised immediately after the protocol was issued on May 1, 2014.

• Spotila, J. “Protocol for Disaster.” OccupyCFS. May 3, 2014.

• M. Dimmock contacted Dr. Beth Collins-Sharpe (of AHRQ and ex officio to CFSAC at the time) by email on May 4, 2014 with this concern. On June 4, 2014, Collins-Sharpe responded that she didn’t “think that the different diagnoses will be lumped together for analysis.” She added, “You’re right that it would be comparing Oxford apples to CCC oranges.” But then the authors lumped together all definitions in the analyses.

Additionally, this issue was raised directly with NIH Director Dr. Francis Collins:

• Spotila J, Dimmock M. Letter submitted to Dr. Francis Collins, Director of NIH, regarding the Pathways to Prevention Workshop on ME/CFS. May 28, 2014.

Attachment 2 (page 20) discussed the Review Protocol and stated: “At least one case definition [Oxford] requires no more than unexplained fatigue for a diagnosis of CFS, despite the mounting evidence that such case definitions capture a different study population than definitions that require post-exertional malaise, cognitive dysfunction or other multisystem impairments.”

5 UpToDate is one example where the conclusions of this evidence review have been interpreted as applying to all patients.

It is important to note that PACE trial publications have said that the PACE trial findings also apply to patients defined by the CDC CFS criteria and the London ME criteria. But this patient characterization was done after first selecting patients by Oxford. As Dr. Bruce Levin said in Dr. Tuller’s series on PACE, it is not correct to extrapolate from a subgroup selected from a group of Oxford patients to ME patients as a class. Further, the PACE recovery publication noted that CDC CFS criteria had been modified to require symptoms for just one week and that these modifications could result in inaccurate patient characterizations. The impact of PACE modifications to the ME criteria is unknown.

• White PD, Goldsmith K, Johnson AL, Chalder T, Sharpe M, PACE Trial Management Group. “Recovery from chronic fatigue syndrome after treatments given in the PACE trial.” Psychol Med. October 2013; 43(10): 2226-2235. PMID: 23363640.

6 Smith MB, Haney E, McDonagh M, Pappas M, Daeges M, Wasson N, et al. “Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: A Systematic Review for a National Institutes of Health Pathways to Prevention Workshop.” Ann Intern Med. 2015; 162: 841-850.

In the comments section, the authors stated, “Dr. Chu’s comment regarding the importance of analyzing data based on case definitions used for inclusion to trials is consistent with our approach. For example, in the trials of cognitive behavioral therapy (CBT) using the SF-36 physical function item as an outcome measure, the two studies using Oxford criteria indicated improvement, while the two using CDC criteria reported no improvement.”

7 UpToDate clinical guidelines, updated in July 2015, include:

o Gluckman, Stephen. “Clinical features and diagnosis of chronic fatigue syndrome (systemic exertion intolerance disease)” UpToDate. Deputy Editor Park, L. Last updated July 30, 2015. Literature review current through August 2015.

o Gluckman, Stephen. “Treatment of chronic fatigue syndrome (systemic exertion intolerance disease)” UpToDate. Deputy Editor Lee, P. Last updated July 30, 2015. Literature Review current through August 2015.

8 Tucker, M. “IOM Gives Chronic Fatigue Syndrome a New Name and Definition” Medscape Multispecialty. February 10, 2015.

Dr. Clayton, IOM panel chair, noted, "The level of response is much more than would be seen with deconditioning," with reference to the belief voiced by some clinicians that physical abnormalities in these patients are merely a result of their lack of activity.

Wednesday, January 27, 2016

The Answer is No, A Thousand Times No

Last month, four respected researchers asked for data from the PACE trial. This is the umpteenth request, and, predictably, it was denied.

But this time the reason given was not that 1) the request was "vexatious," 2) the trial participants might somehow be harmed, 3) it might infringe on intellectual property rights, or 4) the study might be criticized.

None of the above. Their excuse this time was that "participants may be less willing to participate in a planned feasibility follow up study." In other words, if people with ME/CFS knew how bad the PACE trial really was, they might not be willing to participate in another trial.

Imagine a situation in which a drug is administered to a group of ill people, who then become more ill. But the authors of the trial hide the data and claim that the ill people appear to benefit. When asked for the data they refuse, because they don't want participants to know the drug is harmful in case they do future studies.

How illegal would that be?

The PACE trial authors have no moral compass. They are planning on rehashing their study endlessly to milk it for all it's worth. They will continue to spin their "results" until someone in authority puts a stop to it.


At least we’re not vexatious

19 JANUARY 2016, Virology Blog

On 17 December 2015, Ron Davis, Bruce Levin, David Tuller and I requested trial data from the PACE study of treatments for ME/CFS published in The Lancet in 2011. Below is the response to our request from the Records & Compliance Manager of Queen Mary University of London. The bolded portion of our request, noted in the letter, is the following: “we would like the raw data for all four arms of the trial for the following measures: the two primary outcomes of physical function and fatigue (both bimodal and Likert-style scoring), and the multiple criteria for “recovery” as defined in the protocol published in 2007 in BMC Neurology, not as defined in the 2013 paper published in Psychological Medicine. The anonymized, individual-level data for “recovery” should be linked across the four criteria so it is possible to determine how many people achieved “recovery” according to the protocol definition.”


Dear Prof. Racaniello

Thank you for your email of 17th December 2015. I have bolded your request below, made under the Freedom of Information Act 2000.

You have requested raw data, linked at an individual level, from the PACE trial. I can confirm that QMUL holds this data but I am afraid that I cannot supply it. Over the last five years QMUL has received a number of similar requests for data relating to the PACE trial. One of the resultant refusals, relating to Decision Notice FS50565190, is due to be tested at the First-tier Tribunal (Information Rights) during 2016. We believe that the information requested is similarly exempt from release in to the public domain. At this time, we are not in a position to speculate when this ongoing legal action will be concluded.

Any release of information under FOIA is a release to the world at large without limits. The data consists of (sensitive) personal data which was disclosed in the context of a confidential relationship, under a clear obligation of confidence. This is not only in the form of explicit guarantees to participants but also since this is data provided in the context of medical treatment, under the traditional obligation of confidence imposed on medical practitioners. See generally, General Medical Council, ‘Confidentiality’ (2009) available at The information has the necessary quality of confidence and release to the public would lead to an actionable breach.

As such, we believe it is exempt from disclosure under s.41 of FOIA. This is an absolute exemption.

The primary outcomes requested are also exempt under s.22A of FOIA in that these data form part of an ongoing programme of research.

This exemption is subject to the public interest test. While there is a public interest in public authorities being transparent generally and we acknowledge that there is ongoing debate around PACE and research in to CFS/ME, which might favour disclosure, this is outweighed at this time by the prejudice to the programme of research and the interests of participants. This is because participants may be less willing to participate in a planned feasibility follow up study, since we have promised to keep their data confidential and planned papers from PACE, whether from QMUL or other collaborators, may be affected.

On balance we believe that the public interest in withholding this information outweighs the public interest in disclosing it.

In accordance with s.17, please accept this as a refusal notice.

For your information, the PACE PIs and their associated organisations are currently reviewing a data sharing policy.

If you are dissatisfied with this response, you may ask QMUL to conduct a review of this decision. To do this, please contact the College in writing (including by fax, letter or email), describe the original request, explain your grounds for dissatisfaction, and include an address for correspondence. You have 40 working days from receipt of this communication to submit a review request. When the review process has been completed, if you are still dissatisfied, you may ask the Information Commissioner to intervene. Please see for details.

Yours sincerely

Paul Smallcombe
Records & Information Compliance Manager

Friday, January 22, 2016

Mitochondrial DNA and ME/CFS - One Pathogen, Many Responses

Differences in nuclear DNA vs. mitochondrial DNA:
1) Nuclear DNA is a double helix; mitochondrial DNA is circular.
2) Nuclear DNA is both maternal and paternal; mitochondrial DNA is maternal only (mitochondria are inherited from the mother).
3) Mitochondrial DNA haplogroups are groups of people descended from a single ancestor.

A new study of mitochondrial DNA in ME/CFS patients has provided some important clues as to the variation of symptoms seen in patients.

Four important points brought out in this study were:

1) None of the patients showed any evidence of a mitochondrial genetic disease.

2) No difference was seen in the types of mitochondrial DNA between patients and healthy individuals.

3) There was no increased susceptibility to ME/CFS among people with different mitochondrial SNPs (single variations in DNA).

4) However, there were associations of SNPs with certain symptoms and/or their severity. Individuals who carry a particular SNP, for example, are predicted to be at greater risk of experiencing particular types of symptoms once they become ill. (Single nucleotide polymorphisms, frequently called SNPs (pronounced “snips”), are the most common type of genetic variation among people.)

What this means is that 1) ME/CFS is not a mitochondrial genetic disease, 2) ME/CFS patients do not have a mitochondrial genetic predisposition for the disease, and 3) a single pathogen may be causing the disease.

The authors conclude:

"A puzzling aspect of ME/CFS has been the diversity of symptoms and the variation of their severity among different individuals. These differences should not be taken as proof that more than one insult was the initiating factor, nor that different patients have different underlying problems. It remains possible that much of the diversity of the manifestation of the illness results from genetic diversity rather than the existence of multiple fundamental causes."

This study provides a rationale for outbreaks and clusters. It also accounts for both the discrepancies among the Fukuda, CCC, and ICC clinical case definitions and the large number of possible combinations of symptoms. These case definitions may be describing the same illness, caused by the same pathogen, as it is experienced by people with distinct genetic variations.

The findings of this study represent a major shift in thinking, not just about ME/CFS, but about all diseases. This study explains how a single pathogen can create multiple symptoms, and how those symptoms may manifest themselves depending on genetic variations in the host. The findings also may account for ranges in severity.

You can read about the Chronic Fatigue Initiative HERE.


By Maureen Hanson

This is a simplified explanation of the 2016 academic paper published in the Journal of Translational Medicine.

Mitochondrial DNA variants correlate with symptoms in myalgic encephalomyelitis/chronic fatigue syndrome by Paul Billing-Ross, Arnaud Germain, Kaixiong Ye, Alon Keinan, Zhenglong Gu, and Maureen R. Hanson. J. Translational Medicine. 2016, 14:19

Patients with ME/CFS experience a profound lack of energy and severe fatigue, along with a variety of other symptoms, including one or more of the following: muscle pain, headaches, gastrointestinal discomfort, difficulty concentrating, exacerbation of symptoms following exercise, abnormal regulation of blood pressure and heart rate, and unrefreshing sleep. Mitochondria, sub-cellular organelles, are responsible for producing ATP, the energy currency of the cell, through the conversion of glucose. Therefore, a logical approach to learning more about a disease affecting energy is to probe the function of mitochondria.

Mitochondria are made up of molecules encoded by the nuclear genome--DNA located in the nucleus--as well as the mitochondrial genome—a small amount of DNA present within each organelle. Defects in mitochondrial DNA lead to devastating genetic diseases, with such symptoms as brain abnormalities, severe fatigue, blindness or defective heart function—and can be fatal. The mitochondrial genome of healthy humans also exhibits some natural variation—a single component of the mitochondrial DNA sometimes differs between one human and another—this is known as a SNP (single nucleotide polymorphism, "snip"). Often more than one SNP differs between one population of humans and another—for example, mitochondrial genomes whose origin can be traced to France differ in a number of SNPs from those in people in Central Asia. These different types of mitochondrial genomes, based on a specific set of SNPs, are referred to as haplogroups. Even people whose mitochondrial DNA belongs to the same haplogroup can differ among one another because of some variation in additional SNPs. Some mitochondrial SNPs have been associated with various characteristics, such as adaptation to cold weather or high altitude environments and have been implicated in susceptibility to diabetes and various inflammatory diseases. An informative review of the role of mitochondria in disease has been written by Wallace and Chalkia, researchers at the University of Pennsylvania.
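The relationship between SNPs and haplogroups described above can be pictured with a small sketch. Every sequence, position, and haplogroup definition below is invented for illustration; real mitochondrial haplogroups are defined by curated sets of variants, not these toy values:

```python
# Toy illustration of SNPs and haplogroups. The sequences, positions,
# and "Haplogroup X" definition are invented for this example.

REFERENCE = "ACGTACGTACGT"

def find_snps(reference, sample):
    """Return {position: (ref_base, sample_base)} wherever the sequences differ."""
    return {i: (r, s)
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s}

# A hypothetical haplogroup defined by two specific substitutions.
HAPLOGROUP_X = {3: "G", 7: "A"}   # position -> expected sample base

def matches_haplogroup(snps, defining_snps):
    """True if the sample carries every defining SNP of the haplogroup."""
    return all(pos in snps and snps[pos][1] == base
               for pos, base in defining_snps.items())

sample = "ACGGACGAACGT"            # differs from the reference at positions 3 and 7
snps = find_snps(REFERENCE, sample)
print(snps)                                     # {3: ('T', 'G'), 7: ('T', 'A')}
print(matches_haplogroup(snps, HAPLOGROUP_X))   # True
```

Two people in the same haplogroup would share the defining SNPs but could still differ at additional positions, which is the within-haplogroup variation mentioned above.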

A further complexity of mitochondrial genetics is that there are many individual mitochondria within the same cell, and thus many copies of mitochondrial DNA in each cell. Sometimes new mutations arise so that some of the copies of DNA within the same cell, and therefore within the same person, differ from one another. This situation is called “heteroplasmy”. As cells grow and multiply, by chance there can be uneven distribution of normal vs. abnormal DNA to different cells. If mitochondrial DNA with a harmful mutation becomes the predominant type in a particular tissue, serious symptoms will emerge.
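The chance segregation of normal vs. mutant copies described above can be sketched with a toy simulation (all parameters are hypothetical, chosen only for illustration): at each cell division, the daughter cell's mtDNA copies are resampled from the mother's mutant fraction, so the fraction wanders over generations.

```python
# Toy simulation of heteroplasmy drift (hypothetical parameters, not a
# biological model from the paper). Each division binomially resamples
# the mutant fraction, so lineages diverge by chance alone.
import random

def simulate_heteroplasmy(start_fraction: float, copies: int,
                          divisions: int, rng: random.Random) -> float:
    """Follow one cell lineage and return its final mutant mtDNA fraction."""
    fraction = start_fraction
    for _ in range(divisions):
        # Each of the daughter's copies is mutant with probability equal
        # to the mother's current mutant fraction.
        mutant = sum(rng.random() < fraction for _ in range(copies))
        fraction = mutant / copies
    return fraction

rng = random.Random(42)
# Five lineages that all start at 20% mutant mtDNA.
outcomes = [simulate_heteroplasmy(0.20, copies=100, divisions=50, rng=rng)
            for _ in range(5)]
print([round(f, 2) for f in outcomes])
```

Because the resampling is pure chance, some lineages drift toward few or no mutant copies while others accumulate a high mutant load, mirroring how heteroplasmy can come to differ between tissues of the same person.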

In our JTM (Journal of Translational Medicine) paper, work that was primarily supported by the Chronic Fatigue Initiative, we sequenced the mitochondrial DNA of a cohort of ME/CFS patients and healthy individuals, using DNA extracted from white blood cells stored in the biobank developed by the Chronic Fatigue Initiative.

We asked four primary questions:
  1. Were any of the ME/CFS patients identified by 6 well-known ME/CFS experts misdiagnosed, and are they actually victims of a mitochondrial genetic disease?
  2. Do people with ME/CFS carry more copies of mitochondrial DNA with harmful mutations than healthy people (heteroplasmy)?
  3. Are people belonging to one haplogroup more likely to fall victim to ME/CFS than another? 
  4. Are people who have particular SNPs more likely to experience particular symptoms or have increased severity of symptoms?
Our work showed that none of the blood samples obtained from 193 patients identified by the CFI’s 6 expert M.D.s gave any indication of a mitochondrial genetic disease.

Furthermore, we found no difference in the degree of heteroplasmy between patients and healthy individuals.

We also observed no increased susceptibility to ME/CFS among individuals carrying particular haplogroups or SNPs within a haplogroup.

However, we did detect associations of particular SNPs with certain symptoms and/or their severity. For example, individuals with particular SNPs were more likely to have gastrointestinal distress, chemical or light sensitivity, disrupted sleep, or flu-like symptoms. This finding does NOT mean that if your mitochondrial DNA carries one of these SNPs, you will inevitably experience a particular symptom or have higher severity of some symptoms. Instead, because a particular SNP was seen more often in ME/CFS patients with certain characteristics, individuals who carry that SNP are predicted to be at greater risk of experiencing particular types of symptoms once they become ill.
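An association of this kind is conventionally assessed with a 2x2 contingency table. The sketch below uses made-up counts (not the paper's data) to compute an odds ratio and a one-sided Fisher exact p-value with only the standard library; the function names are mine:

```python
# Illustrative sketch (made-up counts): is a SNP seen more often among
# patients reporting a symptom? Odds ratio plus one-sided Fisher exact test.
from math import comb

def odds_ratio(a, b, c, d):
    """a=carrier+symptom, b=carrier no symptom, c=non-carrier+symptom, d=rest."""
    return (a * d) / (b * c)

def fisher_one_sided(a, b, c, d):
    """P(observing at least `a` carrier+symptom cases) under the
    hypergeometric null of no association."""
    n_carriers, n_symptom, total = a + b, a + c, a + b + c + d
    upper = min(n_carriers, n_symptom)
    return sum(comb(n_symptom, k) * comb(total - n_symptom, n_carriers - k)
               for k in range(a, upper + 1)) / comb(total, n_carriers)

# Hypothetical cohort of 193: 30 of 50 SNP carriers report the symptom,
# vs. 40 of 143 non-carriers.
print(odds_ratio(30, 20, 40, 103))       # -> 3.8625
print(fisher_one_sided(30, 20, 40, 103)) # small p: association unlikely by chance
```

With small cohorts, even a sizeable odds ratio needs replication, which is exactly the caution the author raises below about larger numbers of subjects.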

This study demonstrates the importance of a well-characterized cohort of patients and controls along with detailed clinical information about their experience of illness. Without the data from the lengthy patient questionnaires collected along with the subjects’ blood, we could not have correlated SNPs with patient characteristics. While the materials from the CFI subjects are extremely valuable and our results are statistically significant, greater numbers of subjects must be analyzed to determine whether the correlations we detected hold up when more patients are studied, and whether such correlations exist within people carrying other haplogroups.

Due to the European origin of most of the CFI subjects' ancestors, most belong to haplogroup H, the most common European haplogroup. It will be necessary to analyze a much larger number of haplogroup H subjects, as well as large cohorts of individuals with other haplogroups, to dissect out other possible correlations and to determine whether any of the correlations we detected in a relatively small population are spurious. With more subjects, we might also be able to detect additional correlations that were not obvious from our initial study.

Whether or not the genetic correlations we have observed are verified through further work, our study points to an important hypothesis that should be tested in ME/CFS: how much of the variation in symptoms between different individuals results from their different nuclear and/or mitochondrial genetic makeup, rather than from variation in the inciting cause?

A puzzling aspect of ME/CFS has been the diversity of symptoms and the variation of their severity among different individuals. These differences should not be taken as proof that more than one insult was the initiating factor, nor that different patients have different underlying problems. It remains possible that much of the diversity of the manifestation of the illness results from genetic diversity rather than the existence of multiple fundamental causes.

This article was written by Professor Maureen Hanson and is licensed under a Creative Commons Attribution 4.0 International License.

Maureen R. Hanson
Liberty Hyde Bailey Professor
Phone: 607-254-4833
Fax: 607-255-6249

Hanson Laboratory
Department of Molecular Biology and Genetics
321 Biotechnology Building
Cornell University
Ithaca, NY 14853
Phone: 607-254-4832

Wednesday, January 20, 2016

Trial By Error, Continued: More Nonsense from The Lancet Psychiatry

In David Tuller's most recent post on Virology Blog, he discusses the absurdity of the PACE authors' claims regarding bias.

To break it down:

1) The PACE trial team distributed a newsletter to the trial participants during the trial.

2) The newsletter contained glowing testimonials from patients and doctors about the success of their treatment, but did not specify which treatment was being administered.

3) The same newsletter stated that the government had endorsed GET and CBT as the best treatments.

4) The PACE trial authors claim that because they presented a positive view of ALL the treatments, this did not constitute bias, regardless of their pitch regarding government endorsement.

Imagine finding these statements in a newsletter.
"This treatment literally saved my life. I don't know what I would have done without it!" ~Jane Doe, patient.
"My patients have never felt better! From now on, I am going to give this treatment to all of my patients!" Dr. John Doe.
The FDA endorses Mutilen as the best treatment for insomnia based on all available evidence.
What reader isn't going to put two and two together?

I do not believe for one minute that the newsletter was an "amateurish" mistake on the part of the researchers. It was a painfully obvious attempt to glean positive results from a trial that had clearly failed.

No matter how you look at it, the PACE trial is a poster child for fraudulent research.

Reprinted with permission.

Trial By Error, Continued: More Nonsense from The Lancet Psychiatry

19 JANUARY 2016

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

The PACE authors have long demonstrated great facility in evading questions they don’t want to answer. They did this in their response to correspondence about the original 2011 Lancet paper. They did it again in the correspondence about the 2013 recovery paper, and in their response to my Virology Blog series. Now they have done it in their answer to critics of their most recent paper on follow-up data, published last October in The Lancet Psychiatry.

(They published the paper just a week after my investigation ran. Wasn’t that a lucky coincidence?)

The Lancet Psychiatry follow-up had null findings: Two years or more after randomization, there were no differences in reported levels of fatigue and physical function between those assigned to any of the groups. The results showed that cognitive behavior therapy and graded exercise therapy provided no long-term benefits because those in the other two groups reported improvement during the year or more after the trial was over. Yet the authors, once again, attempted to spin this mess as a success.

In their letters, James Coyne, Keith Laws, Frank Twist, and Charles Shepherd all provide sharp and effective critiques of the follow-up study. I’ll let others tackle the PACE team’s counter-claims about study design and statistical analysis. I want to focus once more on the issue of the PACE participant newsletter, which they again defend in their Lancet Psychiatry response.

Here’s what they write: “One of these newsletters included positive quotes from participants. Since these participants were from all four treatment arms (which were not named) these quotes were [not]…a source of bias.”

Let’s recap what I wrote about this newsletter in my investigation. The newsletter was published in December 2008, with at least a third of the study’s sample still undergoing assessment. The newsletter included six glowing testimonials from participants about their positive experiences with the trial, as well as a seventh statement from one participant’s primary care doctor. None of the seven statements recounted any negative outcomes, presumably conveying to remaining participants that the trial was producing a 100% satisfaction rate. The authors argue that the absence of the specific names of the study arms means that these quotes could not be “a source of bias.”

This is a preposterous claim. The PACE authors apparently believe that it is not a problem to influence all of your participants in a positive direction, and that this does not constitute bias. They have repeated this argument multiple times. I find it hard to believe they take it seriously, but perhaps they actually do. In any case, no one else should. As I have written before, they have no idea how the testimonials might have affected anyone in any of the four groups—so they have no basis for claiming that this uncontrolled co-intervention did not alter their results.

Moreover, the authors now ignore the other significant effort in that newsletter to influence participant opinion: publication of an article noting that a federal clinical guidelines committee had selected cognitive behavior therapy and graded exercise therapy as effective treatments “based on the best available evidence.” Given that the trial itself was supposed to be assessing the efficacy of these treatments, informing participants that they have already been deemed to be effective would appear likely to impact participants’ responses. The PACE authors apparently disagree.

It is worth remembering what top experts have said about the publication of this newsletter and its impact on the trial results. “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism,” Bruce Levin, a biostatistician at Columbia University, told me.

My Berkeley colleague, epidemiologist Arthur Reingold, said he was flabbergasted to see that the researchers had distributed material promoting the interventions being investigated, whether they were named or not. This fact alone, he noted, made him wonder if other aspects of the trial would also raise methodological or ethical concerns.

“Given the subjective nature of the primary outcomes, broadcasting testimonials from those who had received interventions under study would seem to violate a basic tenet of research design, and potentially introduce substantial reporting and information bias,” he said. “I am hard-pressed to recall a precedent for such an approach in other therapeutic trials. Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”

Saturday, January 9, 2016

ME/CFS: The Last Great Medical Cover Up

"The Last Great Medical Cover Up" is a powerful documentary featuring interviews with ME/CFS patients in the UK. Patients recount how they were "pushed to the breaking point" with GET, misdiagnosed with psychiatric illnesses, and told "everybody's tired." 

These interviews will move you to tears, but they will also resonate with many who have had the same experiences in the US.

Everyone should watch this film.

Wednesday, January 6, 2016

Trial By Error, Continued: Questions for Dr. White and his PACE Colleagues

David Tuller has been relentless in his pursuit of the truth.

He has doggedly, insistently, and ceaselessly sought answers from PACE authors about the validity of their results, and about their research methods.

The authors of the PACE trial have turned down his efforts to speak with them, refusing him as they have every other person who has requested access to their data.

Recently, David Tuller posted the questions he has attempted, unsuccessfully, to pose to the authors of the PACE trial. All of these are reasonable questions about methodology, trial results, intent, possible conflicts of interest, and trial participant rights. All of these questions should have been asked by the various journals that published the PACE trial results. All of these questions should have been posed by the institutions that sponsored the trial.

And all of these questions - Every. Single. One - should have been asked by the researchers themselves.

That is how responsible scientists behave.


Reprinted with permission.

Trial By Error, Continued: Questions for Dr. White and his PACE Colleagues

4 JANUARY 2016, Virology Blog, By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

I have been seeking answers from the PACE researchers for more than a year. At the end of this post, I have included the list of questions I’d compiled by last September, when my investigation was nearing publication. Most of these questions remain unanswered.

The PACE researchers are currently under intense criticism for having rejected as “vexatious” a request for trial data from psychologist James Coyne—an action called “unforgivable” by Columbia statistician Andrew Gelman and “absurd” by Retraction Watch. Several colleagues and I have filed a subsequent request for the main PACE results, including data for the primary outcomes of fatigue and physical function and for “recovery” as defined in the trial protocol. The PACE team has two more weeks to release this data, or explain why it won’t.

Any data from the PACE trial will likely confirm what my Virology Blog series has already revealed: The results cannot stand up to serious scrutiny. But the numbers will not provide answers to the questions I find most compelling. Only the researchers themselves can explain why they made so many ill-advised choices during the trial.

In December, 2014, after months of research, I e-mailed Peter White, Trudie Chalder and Michael Sharpe—the lead PACE researcher and his two main colleagues–and offered to fly to London to meet them. They declined to talk with me. In an email, Dr. White cited my previous coverage of the illness as a reason. (The investigators and I had already engaged in an exchange of letters in The New York Times in 2011, involving a PACE-related story I had written.) “I have concluded that it would not be worthwhile our having a conversation,” Dr. White wrote in his e-mail.

I decided to postpone further attempts to contact them for the story until it was near completion. (Dr. Chalder and I did speak in January 2015 about a new study from the PACE data, and I previously described our differing memories of the conversation.) In the meantime, I wrote and rewrote the piece and tweaked it and trimmed it and then pasted back in stuff that I’d already cut out. Last June, I sent a very long draft to Retraction Watch, which had agreed to review it for possible publication.

I still hoped Dr. White would relent and decide to talk with me. Over the summer, I drew up a list of dozens of questions that covered every single issue addressed in my investigation.

I had noticed the kinds of non-responsive responses Dr. White and his colleagues provided in journal correspondence and other venues whenever patients made cogent and incontrovertible points. They appeared to excel at avoiding hard questions, ignoring inconvenient facts, and misstating key details. I was surprised and perplexed that smart journal editors, public health officials, reporters and others accepted their replies without pointing out glaring methodological problems—such as the bizarre fact that the study’s outcome thresholds for improvement on its primary measures indicated worse health status than the entry criteria required to demonstrate serious disability.

So my list of questions included lots of follow-ups that would help me push past the PACE team’s standard portfolio of evasions. And if, as I suspected, I wouldn’t get the chance to pose the questions myself, I hoped the list would be a useful guide for anyone who wanted to conduct a rigorous interview with Dr. White or his colleagues about the trial’s methodological problems. (Dr. White never agreed to talk with me; I sent my questions to Retraction Watch as part of the fact-checking process.)

In September, Retraction Watch interviewed Dr. White in connection with my piece, as noted in a recent post about Dr. Coyne’s data request. Retraction Watch and I subsequently determined that we differed on the best approach and direction for the story. On October 21st to 23rd, Virology Blog ran my 14,000-word investigation.

But I still don’t have the answers to my questions.


List of Questions, September 1, 2015:

I am posting this list verbatim, although if I were pulling it together today I would add, subtract and rephrase some questions. (I might have misstated a statistical concept or two.) The list is by no means exhaustive. Patients and researchers could easily come up with a host of additional items. The PACE team seems to have a lot to answer for.

1) In June, a report commissioned by the National Institutes of Health declared that the Oxford criteria should be “retired” because the case definition impeded progress and possibly caused harm. As you know, the concern is that it is so non-specific that it leads to heterogeneous study samples that include people with many illnesses besides ME/CFS. How do you respond to that concern?

2) In published remarks after Dr. White’s presentation in Bristol last fall, Dr. Jonathan Edwards wrote: “What Dr White seemed not to understand is that a simple reason for not accepting the conclusion is that an unblinded trial in a situation where endpoints are subjective is valueless.” What is your response to Dr. Edwards’ position?

3) The December 2008 PACE participants’ newsletter included an article about the UK NICE guidelines. The article noted that the recommended treatments, “based on the best available evidence,” included two of the interventions being studied–CBT and GET. (The article didn’t mention that PACE investigator Jessica Bavington also served on the NICE guidelines committee.) The same newsletter included glowing testimonials from satisfied participants about their positive outcomes from the trial “therapy” and “treatment” but included no statements from participants with negative outcomes. According to the graph illustrating recruitment statistics in the same newsletter, about 200 or so participants were still slated to undergo one or more of their assessments after publication of the newsletter.

Were you concerned that publishing such statements would bias the remaining study subjects? If not, why not? A biostatistics professor from Columbia told me that for investigators to publish such information during a trial was “the height of clinical trial amateurism,” and that at the very least you should have assessed responses before and after disseminating the newsletter to ensure that there was no bias resulting from the statements. What is your response? Also, should the article about the NICE guidelines have disclosed that Jessica Bavington was on the committee and therefore playing a dual role?

4) In your protocol, you promised to abide by the Declaration of Helsinki. The declaration mandates that obtaining informed consent requires that prospective participants be “adequately informed” about “any possible conflicts of interest” and “institutional affiliations of the researcher.” In the Lancet and other papers, you disclosed financial and consulting ties with insurance companies as “conflicts of interest.” But trial participants I have interviewed said they did not find out about these “conflicts of interest” until after they completed the trial. They felt this violated their rights as participants to informed consent. One demanded her data be removed from the study after the fact. I have reviewed participant information and consent forms, including those from version 5.0 of the protocol, and none contain the disclosures mandated by the Declaration of Helsinki.

Why did you decide not to inform prospective participants about your “conflicts of interest” and “institutional affiliations” as part of the informed consent process? Do you believe this omission violates the Declaration of Helsinki’s provisions on disclosure to participants? Can you document that any PACE participants were told of your “possible conflicts of interest” and “institutional affiliations” during the informed consent process?

5) For both fatigue and physical function, your thresholds for “normal range” (Lancet) and “recovery” (Psych Med) indicated a greater level of disability than the entry criteria, meaning participants could be fatigued or physically disabled enough for entry but “recovered” at the same time. Thirteen percent of the sample was already “within normal range” on physical function, fatigue or both at baseline, according to information obtained under a freedom-of-information request.

Can you explain the logic of that overlap? Why did the Lancet and Psych Med papers not specifically mention or discuss the implication of the overlaps, or disclose that 13 percent of the study sample were already “within normal range” on an indicator at baseline? Do you believe that such overlaps affect the interpretation of the results? If not, why not? What oversight committee specifically approved this outcome measure? Or was it not approved by any committee, since it was a post-hoc analysis?

6) You have explained these “normal ranges” as the product of taking the mean value +/- 1 SD of the scores of representative populations–the standard approach to obtaining normal ranges when data are normally distributed. Yet the values in both those referenced source populations (Bowling for physical function, Chalder for fatigue) are clustered toward the healthier ends, as both papers make clear, so the conventional formula does not provide an accurate normal range. In a 2007 paper, Dr. White mentioned this problem of skewed populations and the challenge they posed to calculation of normal ranges.

Why did you not use other methods for determining normal ranges from your clustered data sets from Bowling and Chalder, such as basing them on percentiles? Why did you not mention the concern or limitation about using conventional methods in the PACE papers, as Dr. White did in the 2007 paper? Is this application of conventional statistical methods for non-normally distributed data the reason why you had such broad normal ranges that ended up overlapping with the fatigue and physical function entry criteria?
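To see why mean - 1 SD misbehaves on ceiling-skewed scores, here is a toy calculation with a hypothetical score distribution (not the Bowling data): most "healthy" scores sit at the top of the scale, but a small low tail inflates the SD, dragging the conventional cutoff well below where nearly everyone actually scores.

```python
# Toy demonstration (made-up scores): on ceiling-skewed data, the
# conventional mean - 1 SD cutoff is looser than a percentile-based one.
import statistics

# Hypothetical 0-100 physical-function scores for 100 healthy adults,
# clustered near the ceiling with a small low tail.
scores = [100]*55 + [95]*20 + [90]*10 + [85]*5 + [70]*4 + [50]*3 + [30]*3

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)
cutoff_sd = mean - sd                           # conventional mean - 1 SD
cutoff_pct = sorted(scores)[len(scores) // 10]  # a 10th-percentile cutoff

print(f"mean - 1 SD cutoff: {cutoff_sd:.1f}")   # -> 77.4
print(f"10th-percentile cutoff: {cutoff_pct}")  # -> 85
```

In this made-up population, 90% of healthy adults score 85 or above, yet the mean - 1 SD formula would count scores down to about 77 as "normal" - the same direction of distortion the question raises about the PACE normal ranges.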

7) According to the protocol, the main finding from the primary measures would be rates of “positive outcomes”/”overall improvers,” which would have allowed for individual-level assessment. Instead, the main finding was a comparison of the mean performances of the groups–aggregate results that did not provide important information about how many got better or worse. Who approved this specific change? Were you concerned about losing the individual-level assessments?

8) The other two methods of assessing the primary outcomes were both post-hoc analyses. Do you agree that post-hoc analyses carry significantly less weight than pre-specified results? Did any PACE oversight committees specifically approve the post-hoc analyses?

9) The improvement required to achieve a “clinically useful benefit” was defined as 8 points on the SF-36 scale and 2 points on the continuous scoring for the fatigue scale. In the protocol, categorical thresholds for a “positive outcome” were designated as 75 on the SF-36 and 3 on the Chalder fatigue scale, so achieving that would have required an increase of at least 10 points on the SF-36 and 3 points (bimodal) for fatigue. Do you agree that the protocol measure required participants to demonstrate greater improvements to achieve the “positive outcome” scores than the post-hoc “clinically useful benefit”?

10) When you published your protocol in BMC Neurology in 2007, the journal appended an “editor’s comment” that urged readers to compare the published papers with the protocol “to ensure that no deviations from the protocol occurred during the study.” The comment urged readers to “contact the authors” in the event of such changes. In asking for the results per the protocol, patients and others followed the suggestion in the editor’s comment appended to your protocol. Why have you declined to release the data upon request? Can you explain why Queen Mary has considered requests for results per the original protocol “vexatious”?

11) In cases when protocol changes are absolutely necessary, researchers often conduct sensitivity analyses to assess the impact of the changes, and/or publish the findings from both the original and changed sets of assumptions. Why did you decide not to take either of these standard approaches?

12) You made it clear, in your response to correspondence in the Lancet, that the 2011 paper was not addressing “recovery.” Why, then, did Dr. Chalder refer at the 2011 press conference to the “normal range” data as indicating that patients got “back to normal”–i.e. they “recovered”? And since you had input into the accompanying commentary in the Lancet before publication, according to the Press Complaints Commission, why did you not dissuade the writers from declaring a 30 percent “recovery” rate? Do you agree with the commentary that PACE used “a strict criterion for recovery,” given that in both of the primary outcomes participants could get worse and be counted as “recovered,” or “back to normal” in Dr. Chalder’s words?

13) Much of the press coverage focused on “recovery,” even though the paper was making no such claim. Were you at all concerned that the media was mis-interpreting or over-interpreting the results, and did you feel some responsibility for that, given that Dr. Chalder’s statement of “back to normal” and the commentary claim of a 30 percent “recovery” rate were prime sources of those claims?

14) You changed your fatigue outcome scoring method from bimodal to continuous mid-trial, but cited no references in support of this that might have caused you to change your mind since the protocol. Specifically, you did not explain that the FINE trial reported benefits for its intervention only in a post-hoc re-analysis of its fatigue data using continuous scoring.

Were the FINE findings the impetus for the change in scoring in your paper? If so, why was this reason not mentioned or cited? If not, what specific change prompted your mid-trial decision to alter the protocol in this way? And given that the FINE trial was promoted as the “sister study” to PACE, why were that trial and its negative findings not mentioned in the text of the Lancet paper? Do you believe those findings are irrelevant to PACE? Moreover, since the Likert-style analysis of fatigue was already a secondary outcome in PACE, why did you not simply provide both bimodal and continuous analyses rather than drop the bimodal scoring altogether?

15) The “number needed to treat” (NNT) for CBT and GET was 7, as Dr. Sharpe indicated in an Australian radio interview after the Lancet publication. But based on the “normal range” data, the NNT for SMC was also 7, since those participants achieved a 15% rate of “being within normal range,” accounting for half of the rate experienced under the rehabilitative interventions.

Is that what Dr. Sharpe meant in the radio interview when he said: “What this trial wasn’t able to answer is how much better are these treatments and really not having very much treatment at all”? If not, what did Dr. Sharpe mean? Wasn’t the trial designed to answer the very question Dr. Sharpe cited? Since each of the rehabilitative intervention arms as well as the SMC arm had an NNT of 7, would it be accurate to interpret the “normal range” findings as demonstrating that CBT and GET worked as well as SMC, but not any better?
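The arithmetic behind these NNT figures is simply the reciprocal of the absolute difference in response rates. A sketch using the rates quoted above (the function name is mine, and treating SMC's comparison as a 0% no-treatment baseline is an assumption of the illustration):

```python
# Illustrative NNT arithmetic: number needed to treat is the reciprocal
# of the absolute difference in response rates between two arms.
def nnt(rate_treated: float, rate_comparison: float) -> float:
    """Number needed to treat = 1 / absolute risk difference."""
    return 1.0 / (rate_treated - rate_comparison)

# Rates from the discussion above: ~30% "within normal range" after
# CBT/GET, 15% after SMC, and (assumed) 0% with no treatment at all.
print(round(nnt(0.30, 0.15)))  # CBT/GET vs. SMC -> 7
print(round(nnt(0.15, 0.00)))  # SMC vs. assumed 0% baseline -> 7
```

This is why the question notes that CBT/GET and SMC alone come out with the same NNT of 7 on the "normal range" measure.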

16) The PACE paper was widely interpreted, based on your findings and statements, as demonstrating that “pacing” isn’t effective. Yet patients describe “pacing” as an individual, flexible, self-help method for adapting to the illness. Would packaging and operationalizing it as a “treatment” to be administered by a “therapist” alter its nature and therefore its impact? If not, why not? Why do you think the evidence from APT can be extrapolated to what patients themselves call “pacing”? Also, given your partnership with Action4ME in developing APT, how do you explain the organization’s rejection of the findings in the statement issued after the study was published?

17) In your response to correspondence in the Lancet, you acknowledged a mistake in describing the Bowling sample as a “working age” rather than “adult” population–a mistake that changes the interpretation of the findings. Comparing the PACE participants to a sicker group but mislabeling it a healthier one makes the PACE results look better than they were; the percentage of participants scoring “within normal range” would clearly have been even lower had they actually been compared to the real “working age” population rather than the larger and more debilitated “adult” population. Yet the Lancet paper itself has not been corrected, so current readers are provided with misinformation about the measurement and interpretation of one of the study’s two primary outcomes.

Why hasn’t the paper been corrected? Do you believe that everyone who reads the paper also reads the correspondence, making it unnecessary to correct the paper itself? Or do you think the mistake is insignificant and so does not warrant a correction in the paper itself? Lancet policy calls for corrections–not mentions in correspondence–for mistakes that affect interpretation or replicability. Do you disagree that this mistake affects interpretation or replicability?

18) In our exchange of letters in the NYTimes four years ago, you argued that PACE provided “robust” evidence for treatment with CBT and GET “no matter how the illness is defined,” based on the two sub-group analyses. Yet Oxford requires that fatigue be the primary complaint–a requirement that is not a part of either of your other two sub-group case definitions. (“Fatigue” per se is not part of the ME definition at all, since post-exertional malaise is the core symptom; the CDC obviously requires “fatigue,” but not that it be the primary symptom, and patients can present with post-exertional malaise or cognitive problems as being their “primary” complaint.)

Given that discrepancy, why do you believe the PACE findings can be extrapolated to others “no matter how the illness is defined,” as you wrote in the NYTimes? Is it your assumption that everyone who met the other two criteria would automatically be screened in by the Oxford criteria, despite the discrepancies in the case definitions?

19) None of the multiple outcomes you cited as “objective” in the protocol supported the subjective outcomes suggesting improvement (excluding the extremely modest increase in the six-minute walking test for the GET group). Does this lack of objective support for improvement and recovery concern you? Should the failure of the objective measures raise questions about whether people have achieved any actual benefits or improvements in performance?

20) If the actometer was considered too much of a burden for patients to wear at the end of the trial, when presumably many of them would have improved, why wasn’t it too much of a burden for patients at the beginning of the trial? In retrospect, given that your other objective findings failed, do you regret having made that decision?

21) In your response to correspondence after publication of the Psych Med paper, you mentioned multiple problems with the “objectivity” of the six-minute walking test that invalidated comparisons with other studies. Yet PACE started assessing people using this test when the trial began recruitment in 2005, and the serious limitations–the short corridors requiring patients to turn around more than was standard, the decision not to encourage patients during the test, etc.–presumably became apparent quickly.

Why then, in the published protocol in 2007, did you describe the walking test as an “objective” measure of function? Given that the study had been assessing patients for two years already, why had you not already recognized the limitations of the test and realized that it was apparently useless as an objective measure? When did you actually recognize these limitations?

22) In the Psych Med paper, you described “recovery” as recovery only from the current episode of illness–a limitation of the term not mentioned in the protocol. Since this definition describes what most people would refer to as “remission,” not “recovery,” why did you choose to use the word “recovery”–in the protocol and in the paper–in the first place? Would the term “remission” have been more accurate and less misleading? Not surprisingly, the media coverage focused on “recovery,” not on “remission.” Were you concerned that this coverage gave readers and viewers an inaccurate impression of the findings, since few readers or viewers would understand that what the Psych Med paper examined was in fact “remission” and not “recovery,” as most people would understand the terms?

23) In the Psychological Medicine definition of “recovery,” you relaxed all four of the criteria. For the first two, you adopted the “normal range” scores for fatigue and physical function from the Lancet paper, with “recovery” thresholds lower than the entry criteria. For the Clinical Global Impression scale, “recovery” in the Psych Med paper required a 1 or 2, rather than just a 1, as in the protocol. For the fourth element, you split the single category of not meeting any of the three case definitions into two separate categories–one less restrictive (‘trial recovery’) than the original proposed in the protocol (now renamed ‘clinical recovery’).

What oversight committee approved the changes in the overall definition of recovery from the protocol, including the relaxation of all four elements of the definition? Can you cite any references for your reconsideration of the CGI scale, and explain what new information prompted this reconsideration after the trial? Can you provide any references for the decision to split the final “recovery” element into two categories, and explain what new information prompted this change after the trial?

24) The Psychological Medicine paper, in dismissing the original “recovery” threshold of 85 on the SF-36, asserted that 50 percent of the population would score below this mean value and that it was therefore not an appropriate cut-off. But that statement conflates the mean and median values; given that this is not a normally distributed sample and that the median value is much higher than the mean in this population, the statement about 50 percent performing below 85 is clearly wrong.

Since the source populations were skewed and not normally distributed, can you explain this claim that 50 percent of the population would perform below the mean? And since this reasoning for dismissing the threshold of 85 is wrong, can you provide another explanation for why that threshold needed to be revised downward so significantly? Why has this erroneous claim not been corrected?

25) What are the results, per the protocol definition of “recovery”?

26) The PLoS One paper reported that a sensitivity analysis found that the findings of the societal cost-effectiveness of CBT and GET would be “robust” even when informal care was measured not by replacement cost of a health-care worker but using alternative assumptions of minimum wage or zero pay. When readers challenged this claim that the findings would be “robust” under these alternative assumptions, the lead author, Paul McCrone, agreed in his responses that changing the value for informal care would, in fact, change the outcomes. He then criticized the alternative assumptions because they did not adequately value the family’s caregiving work, even though they had been included in the PACE statistical plan.

Why did the PLoS One paper include an apparently inaccurate sensitivity analysis that claimed the societal cost-effectiveness findings for CBT and GET were “robust” under the alternative assumptions, even though that wasn’t the case? And if the alternative assumptions were “controversial” and “restrictive,” as the lead author wrote in one of his posted responses, then why did the PACE team include them in the statistical plan in the first place?