Wednesday, January 27, 2016

The Answer is No, A Thousand Times No

Last month, four respected researchers asked for data from the PACE trial. This is the umpteenth request, and, predictably, it was denied.

But this time the reason given was not that 1) the request was "vexatious," 2) the trial participants might somehow be harmed, 3) it might infringe on intellectual property rights, or 4) the study might be criticized.

None of the above. Their excuse this time was that "participants may be less willing to participate in a planned feasibility follow up study." In other words, if people with ME/CFS knew how bad the PACE trial really was, they might not be willing to participate in another trial.

Imagine a situation in which a drug is administered to a group of ill people, who then become more ill. But the authors of the trial hide the data and claim that the ill people appear to benefit. When asked for the data they refuse, because they don't want participants to know the drug is harmful in case they do future studies.

How illegal would that be?

The PACE trial authors have no moral compass. They are planning to rehash their study endlessly to milk it for all it's worth. They will continue to spin their "results" until someone in authority puts a stop to it.


At least we’re not vexatious

19 JANUARY 2016, Virology Blog

On 17 December 2015, Ron Davis, Bruce Levin, David Tuller and I requested trial data from the PACE study of treatments for ME/CFS published in The Lancet in 2011. Below is the response to our request from the Records & Compliance Manager of Queen Mary University of London. The bolded portion of our request, noted in the letter, is the following: “we would like the raw data for all four arms of the trial for the following measures: the two primary outcomes of physical function and fatigue (both bimodal and Likert-style scoring), and the multiple criteria for “recovery” as defined in the protocol published in 2007 in BMC Neurology, not as defined in the 2013 paper published in Psychological Medicine. The anonymized, individual-level data for “recovery” should be linked across the four criteria so it is possible to determine how many people achieved “recovery” according to the protocol definition.”


Dear Prof. Racaniello

Thank you for your email of 17th December 2015. I have bolded your request below, made under the Freedom of Information Act 2000.

You have requested raw data, linked at an individual level, from the PACE trial. I can confirm that QMUL holds this data but I am afraid that I cannot supply it. Over the last five years QMUL has received a number of similar requests for data relating to the PACE trial. One of the resultant refusals, relating to Decision Notice FS50565190, is due to be tested at the First-tier Tribunal (Information Rights) during 2016. We believe that the information requested is similarly exempt from release in to the public domain. At this time, we are not in a position to speculate when this ongoing legal action will be concluded.

Any release of information under FOIA is a release to the world at large without limits. The data consists of (sensitive) personal data which was disclosed in the context of a confidential relationship, under a clear obligation of confidence. This is not only in the form of explicit guarantees to participants but also since this is data provided in the context of medical treatment, under the traditional obligation of confidence imposed on medical practitioners. See generally, General Medical Council, ‘Confidentiality’ (2009). The information has the necessary quality of confidence and release to the public would lead to an actionable breach.

As such, we believe it is exempt from disclosure under s.41 of FOIA. This is an absolute exemption.

The primary outcomes requested are also exempt under s.22A of FOIA in that these data form part of an ongoing programme of research.

This exemption is subject to the public interest test. While there is a public interest in public authorities being transparent generally and we acknowledge that there is ongoing debate around PACE and research in to CFS/ME, which might favour disclosure, this is outweighed at this time by the prejudice to the programme of research and the interests of participants. This is because participants may be less willing to participate in a planned feasibility follow up study, since we have promised to keep their data confidential and planned papers from PACE, whether from QMUL or other collaborators, may be affected.

On balance we believe that the public interest in withholding this information outweighs the public interest in disclosing it.

In accordance with s.17, please accept this as a refusal notice.

For your information, the PACE PIs and their associated organisations are currently reviewing a data sharing policy.

If you are dissatisfied with this response, you may ask QMUL to conduct a review of this decision. To do this, please contact the College in writing (including by fax, letter or email), describe the original request, explain your grounds for dissatisfaction, and include an address for correspondence. You have 40 working days from receipt of this communication to submit a review request. When the review process has been completed, if you are still dissatisfied, you may ask the Information Commissioner to intervene. Please see the Information Commissioner's Office website for details.

Yours sincerely

Paul Smallcombe
Records & Information Compliance Manager

Friday, January 22, 2016

Mitochondrial DNA and ME/CFS - One Pathogen, Many Responses

Differences between nuclear DNA and mitochondrial DNA:
1) Nuclear DNA is a double helix; mitochondrial DNA is circular
2) Nuclear DNA is inherited from both parents; mitochondrial DNA is inherited only from the mother
3) Mitochondrial DNA haplogroups are groups of people descended from a common maternal ancestor

A new study of mitochondrial DNA in ME/CFS patients has provided some important clues as to the variation of symptoms seen in patients.

Four important points brought out in this study were:

1) None of the patients showed any evidence of a mitochondrial genetic disease.

2) No difference was seen in the types of mitochondrial DNA between patients and healthy individuals.

3) There was no increased susceptibility to ME/CFS among people with different mitochondrial SNPs (single-nucleotide variations in DNA).

4) However, there were associations of SNPs with certain symptoms and/or their severity. Individuals who carry a particular SNP, for example, are predicted to be at greater risk of experiencing particular types of symptoms once they become ill. (Single nucleotide polymorphisms, frequently called SNPs (pronounced “snips”), are the most common type of genetic variation among people.)

What this means is that 1) ME/CFS is not a mitochondrial genetic disease, 2) ME/CFS patients do not have a mitochondrial predisposition for getting the disease, and 3) there may be a single pathogen causing the disease.

The authors conclude:

"A puzzling aspect of ME/CFS has been the diversity of symptoms and the variation of their severity among different individuals. These differences should not be taken as proof that more than one insult was the initiating factor, nor that different patients have different underlying problems. It remains possible that much of the diversity of the manifestation of the illness results from genetic diversity rather than the existence of multiple fundamental causes."

This study provides a rationale for outbreaks and clusters. It also accounts both for the discrepancies among the Fukuda, CCC, and ICC clinical case definitions and for the large number of possible combinations of symptoms. These case definitions may be describing the same illness, caused by the same pathogen, as it is experienced by people with distinct genetic variations.

The findings of this study represent a major shift in thinking, not just about ME/CFS, but about all diseases. This study explains how a single pathogen can create multiple symptoms, and how those symptoms may manifest themselves depending on genetic variations in the host. The findings also may account for ranges in severity.

You can read more about the Chronic Fatigue Initiative on its website.


By Maureen Hanson

This is a simplified explanation of the 2016 academic paper published in the Journal of Translational Medicine.

Mitochondrial DNA variants correlate with symptoms in myalgic encephalomyelitis/chronic fatigue syndrome, by Paul Billing-Ross, Arnaud Germain, Kaixiong Ye, Alon Keinan, Zhenglong Gu, and Maureen R. Hanson. Journal of Translational Medicine, 2016, 14:19.

Patients with ME/CFS experience a profound lack of energy and severe fatigue, along with a variety of other symptoms, including one or more of the following: muscle pain, headaches, gastrointestinal discomfort, difficulty concentrating, exacerbation of symptoms following exercise, abnormal regulation of blood pressure and heart rate, and unrefreshing sleep. Mitochondria, sub-cellular organelles, are responsible for producing ATP, the energy currency of the cell, through the conversion of glucose. Therefore, a logical approach to learning more about a disease affecting energy is to probe the function of mitochondria.

Mitochondria are made up of molecules encoded by the nuclear genome--DNA located in the nucleus--as well as the mitochondrial genome—a small amount of DNA present within each organelle. Defects in mitochondrial DNA lead to devastating genetic diseases, with such symptoms as brain abnormalities, severe fatigue, blindness or defective heart function—and can be fatal. The mitochondrial genome of healthy humans also exhibits some natural variation—a single component of the mitochondrial DNA sometimes differs between one human and another—this is known as a SNP (single nucleotide polymorphism, "snip"). Often more than one SNP differs between one population of humans and another—for example, mitochondrial genomes whose origin can be traced to France differ in a number of SNPs from those in people in Central Asia. These different types of mitochondrial genomes, based on a specific set of SNPs, are referred to as haplogroups. Even people whose mitochondrial DNA belongs to the same haplogroup can differ among one another because of some variation in additional SNPs. Some mitochondrial SNPs have been associated with various characteristics, such as adaptation to cold weather or high altitude environments and have been implicated in susceptibility to diabetes and various inflammatory diseases. An informative review of the role of mitochondria in disease has been written by Wallace and Chalkia, researchers at the University of Pennsylvania.
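As a toy sketch of how haplogroup assignment works in principle (the positions and alleles below are invented for illustration, not real haplogroup-defining SNPs): a haplogroup is defined by a characteristic set of SNPs, and a sample is assigned to the group whose defining set it matches best.

```python
# Toy sketch of haplogroup assignment. A haplogroup is defined by a set of
# SNPs (genome position -> allele); a sample is assigned to the haplogroup
# whose defining SNPs it matches most. Positions and alleles are invented.
HAPLOGROUP_SNPS = {
    "H": {263: "G", 8860: "G", 15326: "G"},
    "U": {263: "G", 12308: "G", 11467: "G"},
}

def assign_haplogroup(sample_snps):
    """Return the haplogroup whose defining SNP set the sample matches best."""
    def score(defining):
        return sum(1 for pos, allele in defining.items()
                   if sample_snps.get(pos) == allele)
    return max(HAPLOGROUP_SNPS, key=lambda hg: score(HAPLOGROUP_SNPS[hg]))

sample = {263: "G", 8860: "G", 15326: "G", 12308: "A"}
print(assign_haplogroup(sample))  # -> H
```

Real haplogroup calling works from full mitochondrial sequences against curated phylogenetic trees, but the principle is the same: membership is determined by a shared set of defining variants.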

A further complexity of mitochondrial genetics is that there are many individual mitochondria within the same cell, and thus many copies of mitochondrial DNA in each cell. Sometimes new mutations arise so that some of the copies of DNA within the same cell, and therefore within the same person, differ from one another. This situation is called “heteroplasmy”. As cells grow and multiply, by chance there can be uneven distribution of normal vs. abnormal DNA to different cells. If mitochondrial DNA with a harmful mutation becomes the predominant type in a particular tissue, serious symptoms will emerge.
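The random segregation described above can be sketched as a simple simulation (a toy model for intuition, not the paper's methods): each cell division effectively re-samples the mutant fraction, so over many generations a lineage drifts by chance toward all-normal or all-mutant mitochondrial DNA.

```python
import random

def segregate(mutant_fraction, copies=200, generations=30, seed=42):
    """Toy model of heteroplasmy drift: at each division, a daughter cell
    draws each of its mtDNA copies from the parent's mutant fraction, so
    the fraction wanders by chance toward 0 (all normal) or 1 (all mutant)."""
    random.seed(seed)
    frac = mutant_fraction
    history = [frac]
    for _ in range(generations):
        mutants = sum(random.random() < frac for _ in range(copies))
        frac = mutants / copies
        history.append(frac)
    return history

lineage = segregate(0.3)  # a lineage starting at 30% mutant copies
```

Running many such lineages from the same starting fraction shows why two tissues in the same person can end up with very different mutant loads.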

In our JTM paper, work that was primarily supported by the Chronic Fatigue Initiative, we sequenced the mitochondrial DNA from a cohort of ME/CFS patients and healthy individuals, using DNA extracted from white blood cells stored in the biobank developed by the Chronic Fatigue Initiative.

We asked four primary questions:
  1. Were any of the ME/CFS patients identified by 6 well-known ME/CFS experts misdiagnosed, and actually victims of a mitochondrial genetic disease?
  2. Do people with ME/CFS carry more copies of mitochondrial DNA with harmful mutations than healthy people (heteroplasmy)?
  3. Are people belonging to one haplogroup more likely to fall victim to ME/CFS than another? 
  4. Are people who have particular SNPs more likely to experience particular symptoms or have increased severity of symptoms?
Our work showed that none of the blood samples obtained from 193 patients identified by the CFI’s 6 expert M.D.s gave any indication of a mitochondrial genetic disease.

Furthermore, we found no difference in the degree of heteroplasmy between patients and healthy individuals.

We also observed no increased susceptibility to ME/CFS among individuals carrying particular haplogroups or SNPs within a haplogroup.

However, we did detect associations of particular SNPs with certain symptoms and/or their severity. For example, we found that individuals with particular SNPs were more likely to have gastrointestinal distress, chemical or light sensitivity, disrupted sleep, or flu-like symptoms. This finding does NOT mean that if your mitochondrial DNA carries one of these SNPs, you will inevitably experience a particular symptom or have higher severity of some symptoms. Instead, because a particular SNP was seen more often in ME/CFS patients with certain characteristics, individuals who carry that SNP are predicted to be at greater risk of experiencing particular types of symptoms once they become ill.
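What "association" means here can be made concrete with a toy 2×2 calculation (the counts are invented for illustration, not taken from the paper): among ill patients, carriers of a given SNP have higher odds of reporting a symptom than non-carriers, which is a statement about elevated risk, not inevitability.

```python
# Invented counts, among ill patients only: does carrying a SNP raise the
# odds of reporting a particular symptom?
carriers_with_symptom, carriers_without = 30, 20
noncarriers_with_symptom, noncarriers_without = 25, 75

# Odds ratio: carriers' odds of the symptom divided by non-carriers' odds.
odds_ratio = (carriers_with_symptom / carriers_without) / (
    noncarriers_with_symptom / noncarriers_without)
print(f"odds ratio: {odds_ratio:.1f}")  # 4.5 with these invented counts
```

Even with an odds ratio this large, 40% of the hypothetical carriers never report the symptom, which is exactly the distinction the paragraph above draws between risk and certainty.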

This study demonstrates the importance of a well-characterized cohort of patients and controls along with detailed clinical information about their experience of illness. Without the data from the lengthy patient questionnaires collected along with the subjects' blood, we could not have correlated SNPs with patient characteristics. While the materials from the CFI subjects are extremely valuable and our results are statistically significant, greater numbers of subjects must be analyzed to determine whether the correlations we detected hold up when more patients are studied, and whether such correlations exist within people carrying other haplogroups.

Due to the European origin of most of the ancestors of the CFI subjects, most belong to haplogroup H, the most common European haplogroup. A much larger number of haplogroup H subjects, as well as large cohorts of individuals with other haplogroups, will need to be analyzed to dissect out other possible correlations and to determine whether any of the correlations we detected in a relatively small population are spurious. With more subjects, we might also be able to detect additional correlations that were not obvious from our initial study.

Whether or not the genetic correlations we have observed are verified through further work, our study points to an important hypothesis that should be tested in ME/CFS: How much of the variation in symptoms between different individuals results from their different nuclear and/or mitochondrial genetic makeup, rather than from variation in the inciting cause?

A puzzling aspect of ME/CFS has been the diversity of symptoms and the variation of their severity among different individuals. These differences should not be taken as proof that more than one insult was the initiating factor, nor that different patients have different underlying problems. It remains possible that much of the diversity of the manifestation of the illness results from genetic diversity rather than the existence of multiple fundamental causes.

This article was written by Professor Maureen Hanson and is licensed under a Creative Commons Attribution 4.0 International License.

Maureen R. Hanson
Liberty Hyde Bailey Professor
Phone: 607-254-4833
Fax: 607-255-6249

Hanson Laboratory
Department of Molecular Biology and Genetics
321 Biotechnology Building
Cornell University
Ithaca, NY 14853
Phone: 607-254-4832

Wednesday, January 20, 2016

Trial By Error, Continued: More Nonsense from The Lancet Psychiatry

In David Tuller's most recent post on Virology Blog, he discusses the absurdity of the PACE authors' claims regarding bias.

To break it down:

1) The PACE trial team distributed a newsletter to the trial participants during the trial.

2) The newsletter contained glowing testimonials from patients and doctors about the success of their treatment, but did not specify which treatment was being administered.

3) The same newsletter stated that the government had endorsed GET and CBT as the best treatments.

4) The PACE trial authors claim that because they presented a positive view of ALL the treatments, this did not constitute bias, regardless of their pitch regarding government endorsement.

Imagine finding these statements in a newsletter.
"This treatment literally saved my life. I don't know what I would have done without it!" ~Jane Doe, patient.
"My patients have never felt better! From now on, I am going to give this treatment to all of my patients!" Dr. John Doe.
The FDA endorses Mutilen as the best treatment for insomnia based on all available evidence.
What reader isn't going to put two and two together?

I do not believe for one minute that the newsletter was an "amateurish" mistake on the part of the researchers. It was a painfully obvious attempt to extract positive results from a trial that had clearly failed.

No matter how you look at it, the PACE trial is a poster child for fraudulent research.

Reprinted with permission.

Trial By Error, Continued: More Nonsense from The Lancet Psychiatry

19 JANUARY 2016

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master's degree program in public health and journalism at the University of California, Berkeley.

The PACE authors have long demonstrated great facility in evading questions they don’t want to answer. They did this in their response to correspondence about the original 2011 Lancet paper. They did it again in the correspondence about the 2013 recovery paper, and in their response to my Virology Blog series. Now they have done it in their answer to critics of their most recent paper on follow-up data, published last October in The Lancet Psychiatry.

(They published the paper just a week after my investigation ran. Wasn’t that a lucky coincidence?)

The Lancet Psychiatry follow-up had null findings: Two years or more after randomization, there were no differences in reported levels of fatigue and physical function between those assigned to any of the groups. The results showed that cognitive behavior therapy and graded exercise therapy provided no long-term benefits because those in the other two groups reported improvement during the year or more after the trial was over. Yet the authors, once again, attempted to spin this mess as a success.

In their letters, James Coyne, Keith Laws, Frank Twist, and Charles Shepherd all provide sharp and effective critiques of the follow-up study. I’ll let others tackle the PACE team’s counter-claims about study design and statistical analysis. I want to focus once more on the issue of the PACE participant newsletter, which they again defend in their Lancet Psychiatry response.

Here’s what they write: “One of these newsletters included positive quotes from participants. Since these participants were from all four treatment arms (which were not named) these quotes were [not]…a source of bias.”

Let’s recap what I wrote about this newsletter in my investigation. The newsletter was published in December 2008, with at least a third of the study’s sample still undergoing assessment. The newsletter included six glowing testimonials from participants about their positive experiences with the trial, as well as a seventh statement from one participant’s primary care doctor. None of the seven statements recounted any negative outcomes, presumably conveying to remaining participants that the trial was producing a 100% satisfaction rate. The authors argue that the absence of the specific names of the study arms means that these quotes could not be “a source of bias.”

This is a preposterous claim. The PACE authors apparently believe that it is not a problem to influence all of your participants in a positive direction, and that this does not constitute bias. They have repeated this argument multiple times. I find it hard to believe they take it seriously, but perhaps they actually do. In any case, no one else should. As I have written before, they have no idea how the testimonials might have affected anyone in any of the four groups—so they have no basis for claiming that this uncontrolled co-intervention did not alter their results.

Moreover, the authors now ignore the other significant effort in that newsletter to influence participant opinion: publication of an article noting that a federal clinical guidelines committee had selected cognitive behavior therapy and graded exercise therapy as effective treatments “based on the best available evidence.” Given that the trial itself was supposed to be assessing the efficacy of these treatments, informing participants that they have already been deemed to be effective would appear likely to impact participants’ responses. The PACE authors apparently disagree.

It is worth remembering what top experts have said about the publication of this newsletter and its impact on the trial results. “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism,” Bruce Levin, a biostatistician at Columbia University, told me.

My Berkeley colleague, epidemiologist Arthur Reingold, said he was flabbergasted to see that the researchers had distributed material promoting the interventions being investigated, whether they were named or not. This fact alone, he noted, made him wonder if other aspects of the trial would also raise methodological or ethical concerns.

“Given the subjective nature of the primary outcomes, broadcasting testimonials from those who had received interventions under study would seem to violate a basic tenet of research design, and potentially introduce substantial reporting and information bias,” he said. “I am hard-pressed to recall a precedent for such an approach in other therapeutic trials. Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”

Saturday, January 9, 2016

ME/CFS: The Last Great Medical Cover Up

"The Last Great Medical Cover Up" is a powerful documentary featuring interviews with ME/CFS patients in the UK. Patients recount how they were "pushed to the breaking point" with GET, misdiagnosed with psychiatric illnesses, and told "everybody's tired." 

These interviews will move you to tears, but they will also resonate with many who have had the same experiences in the US.

Everyone should watch this film.

Wednesday, January 6, 2016

Trial By Error, Continued: Questions for Dr. White and his PACE Colleagues

David Tuller has been relentless in his pursuit of the truth.

He has doggedly, insistently, and ceaselessly sought answers from PACE authors about the validity of their results, and about their research methods.

The authors of the PACE trial have turned down his efforts to speak with them, refusing him as they have every other person who has requested access to their data.

Recently, David Tuller posted the questions he has attempted, unsuccessfully, to pose to the authors of the PACE trial. All of these are reasonable questions about methodology, trial results, intent, possible conflicts of interest, and trial participant rights. All of these questions should have been asked by the various journals that published the PACE trial results. All of these questions should have been posed by the institutions that sponsored the trial.

And all of these questions - Every. Single. One - should have been asked by the researchers themselves.

That is how responsible scientists behave.


Reprinted with permission.

Trial By Error, Continued: Questions for Dr. White and his PACE Colleagues

4 JANUARY 2016, Virology Blog, By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master's degree program in public health and journalism at the University of California, Berkeley.

I have been seeking answers from the PACE researchers for more than a year. At the end of this post, I have included the list of questions I’d compiled by last September, when my investigation was nearing publication. Most of these questions remain unanswered.

The PACE researchers are currently under intense criticism for having rejected as “vexatious” a request for trial data from psychologist James Coyne—an action called “unforgivable” by Columbia statistician Andrew Gelman and “absurd” by Retraction Watch. Several colleagues and I have filed a subsequent request for the main PACE results, including data for the primary outcomes of fatigue and physical function and for “recovery” as defined in the trial protocol. The PACE team has two more weeks to release this data, or explain why it won’t.

Any data from the PACE trial will likely confirm what my Virology Blog series has already revealed: The results cannot stand up to serious scrutiny. But the numbers will not provide answers to the questions I find most compelling. Only the researchers themselves can explain why they made so many ill-advised choices during the trial.

In December 2014, after months of research, I e-mailed Peter White, Trudie Chalder and Michael Sharpe—the lead PACE researcher and his two main colleagues—and offered to fly to London to meet them. They declined to talk with me. In an email, Dr. White cited my previous coverage of the illness as a reason. (The investigators and I had already engaged in an exchange of letters in The New York Times in 2011, involving a PACE-related story I had written.) “I have concluded that it would not be worthwhile our having a conversation,” Dr. White wrote in his e-mail.

I decided to postpone further attempts to contact them for the story until it was near completion. (Dr. Chalder and I did speak in January 2015 about a new study from the PACE data, and I previously described our differing memories of the conversation.) In the meantime, I wrote and rewrote the piece and tweaked it and trimmed it and then pasted back in stuff that I’d already cut out. Last June, I sent a very long draft to Retraction Watch, which had agreed to review it for possible publication.

I still hoped Dr. White would relent and decide to talk with me. Over the summer, I drew up a list of dozens of questions that covered every single issue addressed in my investigation.

I had noticed the kinds of non-responsive responses Dr. White and his colleagues provided in journal correspondence and other venues whenever patients made cogent and incontrovertible points. They appeared to excel at avoiding hard questions, ignoring inconvenient facts, and misstating key details. I was surprised and perplexed that smart journal editors, public health officials, reporters and others accepted their replies without pointing out glaring methodological problems—such as the bizarre fact that the study’s outcome thresholds for improvement on its primary measures indicated worse health status than the entry criteria required to demonstrate serious disability.

So my list of questions included lots of follow-ups that would help me push past the PACE team’s standard portfolio of evasions. And if, as I suspected, I wouldn’t get the chance to pose the questions myself, I hoped the list would be a useful guide for anyone who wanted to conduct a rigorous interview with Dr. White or his colleagues about the trial’s methodological problems. (Dr. White never agreed to talk with me; I sent my questions to Retraction Watch as part of the fact-checking process.)

In September, Retraction Watch interviewed Dr. White in connection with my piece, as noted in a recent post about Dr. Coyne’s data request. Retraction Watch and I subsequently determined that we differed on the best approach and direction for the story. On October 21st to 23rd, Virology Blog ran my 14,000-word investigation.

But I still don’t have the answers to my questions.


List of Questions, September 1, 2015:

I am posting this list verbatim, although if I were pulling it together today I would add, subtract and rephrase some questions. (I might have misstated a statistical concept or two.) The list is by no means exhaustive. Patients and researchers could easily come up with a host of additional items. The PACE team seems to have a lot to answer for.

1) In June, a report commissioned by the National Institutes of Health declared that the Oxford criteria should be “retired” because the case definition impeded progress and possibly caused harm. As you know, the concern is that it is so non-specific that it leads to heterogeneous study samples that include people with many illnesses besides ME/CFS. How do you respond to that concern?

2) In published remarks after Dr. White’s presentation in Bristol last fall, Dr. Jonathan Edwards wrote: “What Dr White seemed not to understand is that a simple reason for not accepting the conclusion is that an unblinded trial in a situation where endpoints are subjective is valueless.” What is your response to Dr. Edwards’ position?

3) The December 2008 PACE participants’ newsletter included an article about the UK NICE guidelines. The article noted that the recommended treatments, “based on the best available evidence,” included two of the interventions being studied–CBT and GET. (The article didn’t mention that PACE investigator Jessica Bavington also served on the NICE guidelines committee.) The same newsletter included glowing testimonials from satisfied participants about their positive outcomes from the trial “therapy” and “treatment” but included no statements from participants with negative outcomes. According to the graph illustrating recruitment statistics in the same newsletter, about 200 or so participants were still slated to undergo one or more of their assessments after publication of the newsletter.

Were you concerned that publishing such statements would bias the remaining study subjects? If not, why not? A biostatistics professor from Columbia told me that for investigators to publish such information during a trial was “the height of clinical trial amateurism,” and that at the very least you should have assessed responses before and after disseminating the newsletter to ensure that there was no bias resulting from the statements. What is your response? Also, should the article about the NICE guidelines have disclosed that Jessica Bavington was on the committee and therefore playing a dual role?

4) In your protocol, you promised to abide by the Declaration of Helsinki. The declaration mandates that obtaining informed consent requires that prospective participants be “adequately informed” about “any possible conflicts of interest” and “institutional affiliations of the researcher.” In the Lancet and other papers, you disclosed financial and consulting ties with insurance companies as “conflicts of interest.” But trial participants I have interviewed said they did not find out about these “conflicts of interest” until after they completed the trial. They felt this violated their rights as participants to informed consent. One demanded her data be removed from the study after the fact. I have reviewed participant information and consent forms, including those from version 5.0 of the protocol, and none contain the disclosures mandated by the Declaration of Helsinki.

Why did you decide not to inform prospective participants about your “conflicts of interest” and “institutional affiliations” as part of the informed consent process? Do you believe this omission violates the Declaration of Helsinki’s provisions on disclosure to participants? Can you document that any PACE participants were told of your “possible conflicts of interest” and “institutional affiliations” during the informed consent process?

5) For both fatigue and physical function, your thresholds for “normal range” (Lancet) and “recovery” (Psych Med) indicated a greater level of disability than the entry criteria, meaning participants could be fatigued or physically disabled enough for entry but “recovered” at the same time. Thirteen percent of the sample was already “within normal range” on physical function, fatigue or both at baseline, according to information obtained under a freedom-of-information request.

Can you explain the logic of that overlap? Why did the Lancet and Psych Med papers not specifically mention or discuss the implications of these overlaps, or disclose that 13 percent of the study sample were already “within normal range” on an indicator at baseline? Do you believe that such overlaps affect the interpretation of the results? If not, why not? What oversight committee specifically approved this outcome measure? Or was it not approved by any committee, since it was a post-hoc analysis?

6) You have explained these “normal ranges” as the product of taking the mean value +/- 1 SD of the scores of representative populations–the standard approach to obtaining normal ranges when data are normally distributed. Yet the values in both those referenced source populations (Bowling for physical function, Chalder for fatigue) are clustered toward the healthier ends, as both papers make clear, so the conventional formula does not provide an accurate normal range. In a 2007 paper, Dr. White mentioned this problem of skewed populations and the challenge they posed to calculation of normal ranges.

Why did you not use other methods for determining normal ranges from your clustered data sets from Bowling and Chalder, such as basing them on percentiles? Why did you not mention the concern or limitation about using conventional methods in the PACE papers, as Dr. White did in the 2007 paper? Is this application of conventional statistical methods for non-normally distributed data the reason why you had such broad normal ranges that ended up overlapping with the fatigue and physical function entry criteria?
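For readers unfamiliar with the statistical point, here is a minimal sketch using made-up, ceiling-clustered scores (not the actual Bowling or Chalder data) of why mean − 1 SD yields an overly broad “normal range” when scores are skewed, and how a percentile-based floor differs:

```python
# Illustrative sketch with synthetic, ceiling-clustered scores on a 0-100
# scale (NOT the actual Bowling SF-36 data), showing why a "mean - 1 SD"
# lower bound misbehaves when the data are skewed.
import statistics

# 100 hypothetical people, clustered toward the healthy end of the scale.
scores = sorted([100] * 50 + [95] * 20 + [90] * 10 + [80] * 8 +
                [60] * 6 + [40] * 4 + [20] * 2)

mean = statistics.mean(scores)        # 90.0
sd = statistics.pstdev(scores)        # about 17.7
conventional_floor = mean - sd        # about 72.3

# Percentile-based alternative: the 16th percentile is the point that
# mean - 1 SD would mark IF the data were normally distributed.
percentile_floor = scores[int(0.16 * len(scores))]   # 80

print(f"mean - 1 SD floor: {conventional_floor:.1f}")
print(f"16th-percentile floor: {percentile_floor}")
```

In this toy sample the conventional formula pushes the “normal” floor well below the 16th percentile, widening the range in exactly the direction that makes overlap with trial entry criteria more likely.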

7) According to the protocol, the main finding from the primary measures would be rates of “positive outcomes”/”overall improvers,” which would have allowed for individual-level assessments. Instead, the main finding was a comparison of the mean performances of the groups–aggregate results that did not provide important information about how many got better or worse. Who approved this specific change? Were you concerned about losing the individual-level assessments?

8) The other two methods of assessing the primary outcomes were both post-hoc analyses. Do you agree that post-hoc analyses carry significantly less weight than pre-specified results? Did any PACE oversight committees specifically approve the post-hoc analyses?

9) The improvement required to achieve a “clinically useful benefit” was defined as 8 points on the SF-36 scale and 2 points on the continuous scoring for the fatigue scale. In the protocol, categorical thresholds for a “positive outcome” were designated as 75 on the SF-36 and 3 on the Chalder fatigue scale, so achieving that would have required an increase of at least 10 points on the SF-36 and 3 points (bimodal) for fatigue. Do you agree that the protocol measure required participants to demonstrate greater improvements to achieve the “positive outcome” scores than the post-hoc “clinically useful benefit”?

10) When you published your protocol in BMC Neurology in 2007, the journal appended an “editor’s comment” that urged readers to compare the published papers with the protocol “to ensure that no deviations from the protocol occurred during the study.” The comment urged readers to “contact the authors” in the event of such changes. In asking for the results per the protocol, patients and others followed the suggestion in the editor’s comment appended to your protocol. Why have you declined to release the data upon request? Can you explain why Queen Mary has considered requests for results per the original protocol “vexatious”?

11) In cases when protocol changes are absolutely necessary, researchers often conduct sensitivity analyses to assess the impact of the changes, and/or publish the findings from both the original and changed sets of assumptions. Why did you decide not to take either of these standard approaches?

12) You made it clear, in your response to correspondence in the Lancet, that the 2011 paper was not addressing “recovery.” Why, then, did Dr. Chalder refer at the 2011 press conference to the “normal range” data as indicating that patients got “back to normal”–i.e. they “recovered”? And since you had input into the accompanying commentary in the Lancet before publication, according to the Press Complaints Commission, why did you not dissuade the writers from declaring a 30 percent “recovery” rate? Do you agree with the commentary that PACE used “a strict criterion for recovery,” given that in both of the primary outcomes participants could get worse and be counted as “recovered,” or “back to normal” in Dr. Chalder’s words?

13) Much of the press coverage focused on “recovery,” even though the paper was making no such claim. Were you at all concerned that the media was misinterpreting or over-interpreting the results, and did you feel some responsibility for that, given that Dr. Chalder’s statement of “back to normal” and the commentary claim of a 30 percent “recovery” rate were prime sources of those claims?

14) You changed your fatigue outcome scoring method from bimodal to continuous mid-trial, but cited no references in support of the change or any new information that might have caused you to change your mind since the protocol. Specifically, you did not explain that the FINE trial reported benefits for its intervention only in a post-hoc re-analysis of its fatigue data using continuous scoring.

Were the FINE findings the impetus for the change in scoring in your paper? If so, why was this reason not mentioned or cited? If not, what specific change prompted your mid-trial decision to alter the protocol in this way? And given that the FINE trial was promoted as the “sister study” to PACE, why were that trial and its negative findings not mentioned in the text of the Lancet paper? Do you believe those findings are irrelevant to PACE? Moreover, since the Likert-style analysis of fatigue was already a secondary outcome in PACE, why did you not simply provide both bimodal and continuous analyses rather than drop the bimodal scoring altogether?

15) The “number needed to treat” (NNT) for CBT and GET was 7, as Dr. Sharpe indicated in an Australian radio interview after the Lancet publication. But based on the “normal range” data, the NNT for SMC was also 7, since those participants achieved a 15% rate of “being within normal range,” accounting for half of the rate experienced under the rehabilitative interventions.

Is that what Dr. Sharpe meant in the radio interview when he said: “What this trial wasn’t able to answer is how much better are these treatments than really not having very much treatment at all”? If not, what did Dr. Sharpe mean? Wasn’t the trial designed to answer the very question Dr. Sharpe cited? Since each of the rehabilitative intervention arms as well as the SMC arm had an NNT of 7, would it be accurate to interpret the “normal range” findings as demonstrating that CBT and GET worked as well as SMC, but not any better?
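As a back-of-the-envelope check, here is the arithmetic implied by these figures, using the rounded rates quoted above (15% “within normal range” for SMC alone, roughly double that for SMC plus CBT or GET) rather than recomputed trial data:

```python
# Sketch of the NNT arithmetic using the round figures quoted in the text
# (an illustrative assumption, not recomputed PACE data).

def nnt(rate_treated: float, rate_comparator: float) -> float:
    """Number needed to treat = 1 / absolute difference in response rates."""
    return 1 / (rate_treated - rate_comparator)

# CBT or GET added to SMC (about 30%) vs SMC alone (15%):
print(round(nnt(0.30, 0.15)))   # 7
# SMC alone (15%) vs an assumed 0% response with no treatment:
print(round(nnt(0.15, 0.00)))   # 7
```

Both differences are 15 percentage points, so both comparisons produce the same NNT of about 7, which is the point of the question above.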

16) The PACE paper was widely interpreted, based on your findings and statements, as demonstrating that “pacing” isn’t effective. Yet patients describe “pacing” as an individual, flexible, self-help method for adapting to the illness. Would packaging and operationalizing it as a “treatment” to be administered by a “therapist” alter its nature and therefore its impact? If not, why not? Why do you think the evidence from APT can be extrapolated to what patients themselves call “pacing”? Also, given your partnership with Action4ME in developing APT, how do you explain the organization’s rejection of the findings in the statement it issued after the study was published?

17) In your response to correspondence in the Lancet, you acknowledged a mistake in describing the Bowling sample as a “working age” rather than “adult” population–a mistake that changes the interpretation of the findings. Comparing the PACE participants to a sicker group but mislabeling it a healthier one makes the PACE results look better than they were; the percentage of participants scoring “within normal range” would clearly have been even lower had they actually been compared to the real “working age” population rather than the larger and more debilitated “adult” population. Yet the Lancet paper itself has not been corrected, so current readers are provided with misinformation about the measurement and interpretation of one of the study’s two primary outcomes.

Why hasn’t the paper been corrected? Do you believe that everyone who reads the paper also reads the correspondence, making it unnecessary to correct the paper itself? Or do you think the mistake is insignificant and so does not warrant a correction in the paper itself? Lancet policy calls for corrections–not mentions in correspondence–for mistakes that affect interpretation or replicability. Do you disagree that this mistake affects interpretation or replicability?

18) In our exchange of letters in the NYTimes four years ago, you argued that PACE provided “robust” evidence for treatment with CBT and GET “no matter how the illness is defined,” based on the two sub-group analyses. Yet Oxford requires that fatigue be the primary complaint–a requirement that is not a part of either of your other two sub-group case definitions. (“Fatigue” per se is not part of the ME definition at all, since post-exertional malaise is the core symptom; the CDC obviously requires “fatigue,” but not that it be the primary symptom, and patients can present with post-exertional malaise or cognitive problems as being their “primary” complaint.)

Given that discrepancy, why do you believe the PACE findings can be extrapolated to others “no matter how the illness is defined,” as you wrote in the NYTimes? Is it your assumption that everyone who met the other two criteria would automatically be screened in by the Oxford criteria, despite the discrepancies in the case definitions?

19) None of the multiple outcomes you cited as “objective” in the protocol supported the subjective outcomes suggesting improvement, excluding the extremely modest increase in the six-minute walking test for the GET group. Does this lack of objective support for improvement and recovery concern you? Should the failure of the objective measures raise questions about whether people have achieved any actual benefits or improvements in performance?

20) If wearing the actometer was considered too much of a burden for patients to wear at the end of the trial, when presumably many of them would have been improved, why wasn’t it too much of a burden for patients at the beginning of the trial? In retrospect, given that your other objective findings failed, do you regret having made that decision?

21) In your response to correspondence after publication of the Psych Med paper, you mentioned multiple problems with the “objectivity” of the six-minute walking test that invalidated comparisons with other studies. Yet PACE started assessing people using this test when the trial began recruitment in 2005, and the serious limitations–the short corridors requiring patients to turn around more than was standard, the decision not to encourage patients during the test, etc.–presumably became apparent quickly.

Why then, in the published protocol in 2007, did you describe the walking test as an “objective” measure of function? Given that the study had been assessing patients for two years already, why had you not already recognized the limitations of the test and realized that it was apparently useless as an objective measure? When did you actually recognize these limitations?

22) In the Psych Med paper, you described “recovery” as recovery only from the current episode of illness–a limitation of the term not mentioned in the protocol. Since this definition describes what most people would refer to as “remission,” not “recovery,” why did you choose to use the word “recovery”–in the protocol and in the paper–in the first place? Would the term “remission” have been more accurate and less misleading? Not surprisingly, the media coverage focused on “recovery,” not on “remission.” Were you concerned that this coverage gave readers and viewers an inaccurate impression of the findings, since few readers or viewers would understand that what the Psych Med paper examined was in fact “remission” and not “recovery,” as most people would understand the terms?

23) In the Psychological Medicine definition of “recovery,” you relaxed all four of the criteria. For the first two, you adopted the “normal range” scores for fatigue and physical function from the Lancet paper, with “recovery” thresholds lower than the entry criteria. For the Clinical Global Impression scale, “recovery” in the Psych Med paper required a 1 or 2, rather than just a 1, as in the protocol. For the fourth element, you split the single category of not meeting any of the three case definitions into two separate categories–one less restrictive (‘trial recovery’) than the original proposed in the protocol (now renamed ‘clinical recovery’).

What oversight committee approved the changes in the overall definition of recovery from the protocol, including the relaxation of all four elements of the definition? Can you cite any references for your reconsideration of the CGI scale, and explain what new information prompted this reconsideration after the trial? Can you provide any references for the decision to split the final “recovery” element into two categories, and explain what new information prompted this change after the trial?

24) The Psychological Medicine paper, in dismissing the original “recovery” threshold of 85 on the SF-36, asserted that 50 percent of the population would score below this mean value and that it was therefore not an appropriate cut-off. But that statement conflates the mean and median values; given that this is not a normally distributed sample and that the median value is much higher than the mean in this population, the statement about 50 percent performing below 85 is clearly wrong.

Since the source populations were skewed and not normally distributed, can you explain this claim that 50 percent of the population would perform below the mean? And since this reasoning for dismissing the threshold of 85 is wrong, can you provide another explanation for why that threshold needed to be revised downward so significantly? Why has this erroneous claim not been corrected?
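To make the mean/median point concrete, here is a toy example with made-up, ceiling-clustered scores (not the actual population data) showing that in a skewed sample far fewer than 50 percent of people score below the mean:

```python
# Toy example with synthetic, ceiling-clustered (left-skewed) scores, NOT
# the actual source population data: when the median exceeds the mean,
# far fewer than 50% of people score below the mean.
import statistics

scores = [100] * 60 + [90] * 20 + [70] * 10 + [40] * 6 + [10] * 4
mean = statistics.mean(scores)                 # 87.8
median = statistics.median(scores)             # 100.0
below_mean = sum(s < mean for s in scores)     # 20 of 100 people

print(f"mean={mean}, median={median}, scoring below mean: {below_mean}%")
```

The “50 percent score below the mean” claim holds only for symmetric distributions, where mean and median coincide; for skewed populations like these it fails, as the question above notes.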

25) What are the results, per the protocol definition of “recovery”?

26) The PLoS One paper reported that a sensitivity analysis found that the findings of the societal cost-effectiveness of CBT and GET would be “robust” even when informal care was measured not by replacement cost of a health-care worker but using alternative assumptions of minimum wage or zero pay. When readers challenged this claim that the findings would be “robust” under these alternative assumptions, the lead author, Paul McCrone, agreed in his responses that changing the value for informal care would, in fact, change the outcomes. He then criticized the alternative assumptions because they did not adequately value the family’s caregiving work, even though they had been included in the PACE statistical plan.

Why did the PLoS One paper include an apparently inaccurate sensitivity analysis that claimed the societal cost-effectiveness findings for CBT and GET were “robust” under the alternative assumptions, even though that wasn’t the case? And if the alternative assumptions were “controversial” and “restrictive,” as the lead author wrote in one of his posted responses, then why did the PACE team include them in the statistical plan in the first place?