Tuesday, November 17, 2015

Why Was the FINE Study Disappeared?

David Tuller has posted another wonderful article in his series on the PACE trial. In this article he asks the important question: Whatever happened to the FINE study?

The FINE study was published shortly before the PACE trial. (You can read the FINE study here.) 

It used treatment strategies and a timeline similar to those of PACE, but unlike the PACE authors, the FINE investigators concluded at the end of the 70-week study that patients had not significantly benefited from either of the treatments tested: "pragmatic rehabilitation" (which included gradually increasing activity) and supportive listening.

Tuller's question is perfectly valid. Why does everyone cite the PACE trial, with its dubious claims of "recovery," while ignoring the FINE trial, which made no claims that patients had gotten better, let alone recovered?

Please feel free to express what you think of the PACE Trial.

You can send an email to the Lancet's editor, Richard Horton: richard.horton@lancet.com. 

Refer to: "Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial" published by The Lancet, Volume 377, No. 9768, p 823–836, 5 March 2011. (You can read the full study here.)

You can sign a petition asking the Lancet to retract their publication of the PACE trial HERE.

Reprinted with the kind permission of David Tuller. This article first appeared on Dr. Vincent Racaniello's Virology Blog.

________________________________________________________

Trial By Error, Continued: Why has the PACE Study’s “Sister Trial” been “Disappeared” and Forgotten?

By David Tuller, DrPH, 9 NOVEMBER 2015

David Tuller is academic coordinator of the concurrent master's degree program in public health and journalism at the University of California, Berkeley.

In 2010, the BMJ published the results of the Fatigue Intervention by Nurses Evaluation, or FINE. The investigators for this companion trial to PACE, also funded by the Medical Research Council, reported no benefits to ME/CFS patients from the interventions tested.

In medical research, null findings often get ignored in favor of more exciting “positive” results. In this vein, the FINE trial seems to have vanished from the public discussion of the controversial findings from the PACE study. I thought it was important to re-focus some attention on this related effort to prove that “deconditioning” is the cause of the devastating symptoms of ME/CFS. (This piece is also too long but hopefully not quite as dense.)

An update on something else: I want to thank the public relations manager from Queen Mary University of London for clarifying his previous assertion that I did not seek comment from the PACE investigators before Virology Blog posted my story. In an e-mail, he explained that he did not mean to suggest that I hadn’t contacted them for interviews. He only meant, he wrote, that I hadn’t sent them my draft posts for comment before publication. He apologized for the misunderstanding.

I accept his apology, so that’s the end of the matter. In my return e-mail, however, I did let him know I was surprised at the expectation that I might have shared the draft with the PACE investigators before publication. I would not have done that whether or not they had granted me interviews. This is journalism, not peer-review. Different rules.


************************************************************************

In 2003, with much fanfare, the U.K. Medical Research Council announced that it would fund two major studies of non-pharmacological treatments for chronic fatigue syndrome. In addition to PACE, the agency decided to back a second, smaller study called “Fatigue Intervention by Nurses Evaluation,” or FINE. Because the PACE trial was targeting patients well enough to attend sessions at a medical clinic, the complementary FINE study was designed to test treatments for more severely ill patients.

(Chronic fatigue syndrome is also known as myalgic encephalomyelitis, CFS/ME, and ME/CFS, which has now been adopted by U.S. government agencies. The British investigators of FINE and PACE prefer to call it chronic fatigue syndrome, or sometimes CFS/ME.)

Alison Wearden, a psychologist at the University of Manchester, was the lead FINE investigator. She also sat on the PACE Trial Steering Committee and wrote an article about FINE for one of the PACE trial’s participant newsletters. The Medical Research Council and the PACE team referred to FINE as PACE’s “sister” trial. The two studies included the same two primary outcome measures, self-reported fatigue and physical function, and used the same scales to assess them.

The FINE results were published in BMJ in April, 2010. Yet when the first PACE results were published in The Lancet the following year, the investigators did not mention the FINE trial in the text. The trial has also been virtually ignored in the subsequent public debate over the results of the PACE trial and the effectiveness, or lack thereof, of the PACE approach.

What happened? Why has the FINE trial been “disappeared”?

*****

The main goal of the FINE trial was to test a treatment for homebound patients that adapted and combined elements of cognitive behavior therapy and graded exercise therapy, the two rehabilitative therapies being tested in PACE. The approach, called “pragmatic rehabilitation,” had been successfully tested in a small previous study. In FINE, the investigators planned to compare “pragmatic rehabilitation” with another intervention and with standard care from a general practitioner.

Here’s what the Medical Research Council wrote about the main intervention in an article in its newsletter, MRC Network, in the summer of 2003: “Pragmatic rehabilitation…is delivered by specially trained nurses, who give patients a detailed physiological explanation of symptom patterns. This is followed by a treatment programme focussing on graded exercise, sleep and relaxation.”

The second intervention arm featured a treatment called “supportive listening,” a patient-centered and non-directive counseling approach. This treatment presumed that patients might improve if they felt that the therapist empathized with them, took their concerns seriously, and allowed them to find their own approach to addressing the illness.

The Medical Research Council committed 1.3 million pounds to the FINE trial. The study was conducted in northwest England, with 296 patients recruited from primary care. Each intervention took place over 18 weeks and consisted of ten sessions–five home visits lasting up to 90 minutes alternating with five telephone conversations of up to 30 minutes.

As in the PACE trial, patients were selected using the Oxford criteria for chronic fatigue syndrome, defined as the presence of six months of medically unexplained fatigue, with no other symptoms required. The Oxford criteria have been widely criticized for yielding heterogeneous samples, and a report commissioned by the National Institutes of Health this year recommended that the case definition be “retired” for that reason.

More specific case definitions for the illness require the presence of core symptoms like post-exertional malaise, cognitive problems and sleep disorders, rather than just fatigue per se. Because the symptom called post-exertional malaise means that patients can suffer severe relapses after minimal exertion, many patients and advocacy organizations consider increases in activity to be potentially dangerous.

To be eligible for the FINE trial, participants needed to score 70 or less out of 100 on the physical function scale, the Medical Outcomes Study 36-Item Short Form Health Survey, known as the SF-36. They also needed to score a 4 or more out of 11 on the 11-item Chalder Fatigue Scale, with each item scored as either 0 or 1. On the fatigue scale, a higher score indicated greater fatigue.
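To make those entry rules concrete, here is a minimal sketch in Python of the screening logic as described above; the function and variable names are illustrative, not taken from the trial's actual materials:

```python
# Minimal sketch of the FINE entry screen described above. Thresholds are
# those reported in the text; names are illustrative, not the trial's own.

def eligible_for_fine(sf36_physical: int, chalder_bimodal: int) -> bool:
    """SF-36 physical function runs 0-100 (higher = better functioning);
    the bimodal Chalder Fatigue Scale runs 0-11 (higher = more fatigue)."""
    return sf36_physical <= 70 and chalder_bimodal >= 4

print(eligible_for_fine(sf36_physical=65, chalder_bimodal=6))  # True
print(eligible_for_fine(sf36_physical=85, chalder_bimodal=6))  # False: functioning too high
```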

Among other measures, the trial also included a key objective outcome–the “time to take 20 steps, (or number of steps taken, if this is not achieved) and maximum heart rate reached on a step-test.”

Participants were to be assessed on these measures at 20 weeks, which was right after the end of the treatment period, and again at 70 weeks, which was one year after the end of treatment. The FINE trial protocol, published in the journal BMC Medicine in 2006, warned that “short-term assessments of outcome in a chronic health condition such as CFS/ME can be misleading” and declared the 70-week assessment to be the “primary outcome point.”

*****

The theoretical model behind the FINE trial and pragmatic rehabilitation paralleled the PACE concept. The physical symptoms were presumed to be the result not of a pathological disease process but of “deconditioning” or “dysregulation” caused by sedentary behavior, accompanied by disrupted sleep cycles and stress. The sedentary behavior was itself presumed to be triggered by patients’ “unhelpful” conviction that they suffered from a progressive medical illness. Counteracting the deconditioning involved re-establishing normal sleep cycles, reducing anxiety levels and gently increasing physical exertion, even if patients remained homebound.

“The treatment [pragmatic rehabilitation] is based on a model proposing that CFS/ME is best understood as a consequence of physiological dysregulation associated with inactivity and disturbance of sleep and circadian rhythms,” stated the FINE trial protocol. “We have argued that these conditions…are often maintained by illness beliefs that lead to exercise-avoidance. The essential feature of the treatment is the provision of a detailed explanation for patients’ symptoms, couched in terms of the physiological dysregulation model, from which flows the rationale for a graded return to activity.”

On the FINE trial website, a 2004 presentation about pragmatic rehabilitation explained the illness in somewhat simpler terms, comparing it to “very severe jetlag.” After explaining how and why pragmatic rehabilitation led to physical improvement, the presentation offered this hopeful message, in boldface: “There is no disease–you have a right to full health. This is a good news diagnosis. Carefully built up exercise can reverse the condition. Go for 100% recovery.”

In contrast, patients, advocates and many leading scientists have completely rejected the PACE and FINE approach. They believe the evidence overwhelmingly points to an immunological and neurological disorder triggered by an initial infection or some other physiological insult. Last month, the National Institutes of Health ratified this perspective when it announced a major new push to seek biomedical answers to the disease, which it refers to as ME/CFS.

As in PACE, patients in the FINE trial were issued different treatment manuals depending upon their assigned study arm. The treatment manual for pragmatic rehabilitation repeatedly informed participants that the therapy could help them get better—even though the trial itself was designed to test the effectiveness of the therapy. (In the PACE trial, the manuals for the cognitive behavior and graded therapy arms also included many statements promoting the idea that the therapies could successfully treat the illness.)

“This booklet has been written with the help of patients who have made a full recovery from Chronic Fatigue Syndrome,” stated the FINE pragmatic rehabilitation manual on its second page. “Facts and information which were important to them in making this recovery have been included.” The manual noted that the patients who helped write it had been treated at the Royal Liverpool University Hospital but did not include more specific details about their “full recovery” from the illness.

Among the “facts and information” included in the manual were assertions that the trial participants, contrary to what they might themselves believe, had no persistent viral infection and “no underlying serious disease.” The manual promised them that pragmatic rehabilitation could help them overcome the illness and the deconditioning perpetuating it. “Instead of CFS controlling you, you can start to regain control of your body and your life,” stated the manual.

Finally, as in PACE, participants were encouraged to change their beliefs about their condition by “building the right thoughts for your recovery.” Participants were warned that “unhelpful thoughts”—such as the idea that continued symptoms indicated the presence of an organic disease and could not be attributed to deconditioning—“can put you off parts of the treatment programme and so delay or prevent recovery.”

The supportive listening manual did not similarly promote the idea that “recovery” from the illness was possible. During the sessions, the manual explained, “The listener, your therapist, will provide support and encourage you to find ways to cope by using your own resources to change, manage or adapt to difficulties…She will not tell you what to do, advise, coach or direct you.”

*****

A qualitative study about the challenges of the FINE research process, published by the investigators in the journal Implementation Science in 2011, shed light on how much the theoretical framework and the treatment approaches frustrated and angered trial participants. According to the interviews with some of the nurses, nurse supervisors, and participants involved in FINE, the home visits often bristled with tension over the different perceptions of what caused the illness and which interventions could help.

“At times, this lack of agreement over the nature of the condition and lack of acceptance as to the rationale behind the treatment led to conflict,” noted the FINE investigators in the qualitative paper. “A particularly difficult challenge of interacting with patients for the nurses and their supervisors was managing patients’ resistance to the treatment.”

One participant in the pragmatic rehabilitation arm attributed this resistance to the insistence that deconditioning caused the symptoms and that activity would reverse them. “If all that was standing between me and recovery was the reconditioning I could work it out and do it, but what I have got is not just a reconditioning problem,” the participant said. “I have got something where there is damage and a complete lack of strength actually getting into the muscles and you can’t work with what you haven’t got in terms of energy.”

Another participant in the pragmatic rehabilitation arm was more blunt. “I kept arguing with her [the nurse administering the treatment] all the time because I didn’t agree with what she said,” said the participant, who ended up dropping out of the trial.

Some participants in the supportive listening arm also questioned the value of the treatment they were receiving, according to the study. “I mostly believe it was more physical than anything else, and I didn’t see how talking could truthfully, you know, if it was physical, do anything,” said one.

In fact, the theoretical orientation also alienated some prospective participants, according to interviews the investigators conducted with patients who declined to enter the trial. “It [the PR intervention] insisted that physiologically there was nothing wrong,” said one such patient. “There was nothing wrong with my glands, there was nothing wrong, that it was just deconditioned muscles. And I didn’t believe that…I can’t get well with treatment you don’t believe in.”

When patients challenged or criticized the therapeutic interventions, the study found, nurses sometimes felt their authority and expertise to be under threat. “They are testing you all the time,” said one nurse. Another reported: “That anger…it’s very wearing and demoralizing.”

One nurse remembered the difficulties she faced with a particular participant. “I used to go there and she would totally block me, she would sit with her arms folded, total silence in the house,” said the nurse. “It was tortuous for both of us.”

At times, nurses themselves responded to these difficult interactions with bouts of anger directed at the participants, according to a supervisor.

“Their frustration has reached the point where they sort of boiled over,” said the supervisor. “There is sort of feeling that the patient should be grateful and follow your advice, and in actual fact, what happens is the patient is quite resistant and there is this thing like you know, ‘The bastards don’t want to get better.’”

*****

BMJ published the FINE results in 2010. The FINE investigators found no statistically significant benefits to either pragmatic rehabilitation or supportive listening at 70 weeks, one year after the end of the 18-week course of treatment. At the 20-week assessment, however, the mean scores of those in the pragmatic rehabilitation arm had shown a “clinically modest” but statistically significant reduction in fatigue—a drop of just over one point on the 11-point fatigue scale. Even with that slight improvement, participants on average still scored as fatigued enough to have qualified for trial entry, and the benefit was no longer statistically significant by the final assessment.

Despite the null findings at 70 weeks, the authors put a positive gloss on the results, reporting first in the abstract that fatigue was “significantly improved” at 20 weeks. Given the very modest one-point change in average fatigue scores, perhaps the FINE investigators intended to report instead that there was a “statistically significant improvement” at 20 weeks—an accurate phrase with a somewhat different meaning.
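For readers unfamiliar with the distinction, a rough back-of-the-envelope calculation shows how a roughly one-point difference on an 11-point scale can be statistically significant without being clinically impressive. The group sizes, standard deviation and mean difference below are invented for illustration; they are not FINE's actual numbers:

```python
# Illustrative only: a ~1-point mean difference on an 11-point scale can
# reach p < 0.05 with trial-sized groups. All numbers below are invented.
from math import sqrt
from scipy import stats

n1, n2 = 95, 95    # hypothetical participants per arm
diff = 1.0         # hypothetical between-group difference in mean fatigue score
sd = 3.0           # hypothetical pooled standard deviation

se = sd * sqrt(1 / n1 + 1 / n2)              # standard error of the difference
t = diff / se                                # two-sample t statistic
p = 2 * stats.t.sf(abs(t), df=n1 + n2 - 2)   # two-sided p-value
print(f"t = {t:.2f}, p = {p:.3f}")           # ~t = 2.30, p = 0.023
```

With enough participants, even a small average difference clears the significance threshold; whether that difference matters to a patient's life is a separate question.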

The abstract included another interesting linguistic element. While the trial protocol had designated the 70-week assessment as “the primary outcome point,” the abstract of the paper itself now stated that “the primary clinical outcomes were fatigue and physical functioning at the end of treatment (20 weeks) and 70 weeks from recruitment.”

After redefining their primary outcome points to include the 20-week as well as the 70-week assessment, the abstract promoted the positive effects found at the earlier point as the study’s main finding. Only after communicating the initial benefits did they note that these advantages for pragmatic rehabilitation later wore off. The FINE paper cited no oversight committee approval for this expanded interpretation of the trial’s primary outcome points to include the 20-week assessment, nor did it mention the protocol’s caveat about the “misleading” nature of short-term assessments in chronic health conditions.

In fact, within the text of the paper, the investigators noted that the “pre-designated outcome point” was 70 weeks. But they did not explain why they then decided to highlight most in the abstract what was not the pre-designated but instead a post-hoc “primary” outcome point—the 20-week assessment.

A BMJ editorial that accompanied the FINE trial also accentuated the positive results at 20 weeks rather than the bad news at 70 weeks. According to the editorial’s subhead, pragmatic rehabilitation “has a short term benefit, but supportive listening does not.” The editorial did not note that this was not the pre-designated primary outcome point. The null results for that outcome point—the 70-week assessment—were not mentioned until later in the editorial.

*****

Patients and advocates soon began criticizing the study in the “rapid response” section of the BMJ website, citing its theoretical framework, the use of the broad Oxford criteria as a case definition, and the failure to provide the step-test outcomes, among other issues.

“The data provide strong evidence that the anxiety and deconditioning model of CFS/ME on which the trial is predicated is either wrong or, at best, incomplete,” wrote one patient. “These results are immensely important because they demonstrate that if a cure for CFS/ME is to be found, one must look beyond the psycho-behavioural paradigm.”

Another patient wrote that the study was “a wake-up call to the whole of the medical establishment” to take the illness seriously. One predicted “that there will [be] those who say that this trial failed because the patients were not trying hard enough.”

A physician from Australia sought to defend the interests not of patients but of the English language, decrying the lack of hyphens in the paper’s full title: “Nurse led, home based self help treatment for patients in primary care with chronic fatigue syndrome: randomised controlled trial.”

“The hyphen is a coupling between carriages of words to ensure unambiguous transmission of thought,” wrote the doctor. “Surely this should read ‘Nurse-led, home-based, self-help…’

“Lest English sink further into the Great Despond of ambiguity and non-sense [hyphen included in the original comment], may I implore the co-editors of the BMJ to be the vigilant watchdogs of our mother tongue which at the hands of a younger ‘texting’ generation is heading towards anarchy.” [The original comment did not include the expected comma between ‘tongue’ and ‘which.’]

*****

In a response on the BMJ website a month after publishing the study, the FINE investigators reported that they had conducted a post-hoc analysis with a different kind of scoring for the Chalder Fatigue Scale.

Instead of scoring the answers as 0 or 1 using what was called a bimodal scale, they rescored them using what was called a continuous scale, with values ranging from 0 to 3. The full range of possible scores now ran from 0 to 33, rather than 0 to 11. (As collected, the data for the Chalder Fatigue Scale allowed for either scoring system; however, the original entry criterion of 4 on the bimodal scale would translate into a range from 4 to as high as 19 on the revised scale.)
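A small sketch may help clarify how the same questionnaire answers yield two very different totals. It assumes the standard Chalder scale convention, in which each of the 11 items has four response options scored 0 to 3 on the continuous scale and the top two options count as 1 on the bimodal scale; the response values below are invented for illustration:

```python
# Sketch of the two Chalder Fatigue Scale scoring methods described above,
# assuming the standard convention (each item scored 0-3; bimodal scoring
# collapses 0-1 to 0 and 2-3 to 1). The answers below are invented.

responses = [0, 1, 2, 2, 3, 1, 2, 0, 1, 2, 3]   # 11 items, one hypothetical patient

bimodal = sum(1 for r in responses if r >= 2)   # 0-11 scale
continuous = sum(responses)                     # 0-33 scale

print(f"bimodal score:    {bimodal}")     # 6  (meets the entry criterion of 4+)
print(f"continuous score: {continuous}")  # 17
```

The same person thus carries two different-looking scores, which is why switching scoring systems after the fact can change whether a between-group difference clears the bar of statistical significance.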

With the revised scoring, they now reported a “clinically modest, but statistically significant effect” of pragmatic rehabilitation at 70 weeks—a reduction from baseline of about 2.5 points on the 0 to 33 scale. This final score represented some increase in fatigue from the 20-week interim assessment point.

In their comment on the website, the FINE investigators now reaffirmed that the 70-week assessment was “our primary outcome point.” This statement conformed to the protocol but differed from the suggestion in the BMJ paper that the 20-week results also represented “primary” outcomes. Given that the post-hoc rescoring allowed the investigators to report statistically significant results at the 70-week endpoint, this zig-zag back to the protocol language was perhaps not surprising.

In their comment, the FINE investigators also explained that they did not report their step-test results—their one objective measure of physical capacity–“due to a significant amount of missing data.” They did not provide an explanation for the missing data. (One obvious possible reason for missing data on an objective fitness test is that participants were too disabled to perform it at all.)

The FINE investigators did not address the question of whether the title of their paper should have included hyphens.

In the rapid comments, Tom Kindlon, a patient and advocate from a Dublin suburb, responded to the FINE investigators’ decision to report their new post-hoc analysis of the fatigue scale. He noted that the investigators themselves had chosen the bimodal scoring system for their study rather than the continuous method.

“I’m sure many pharmacological and non-pharmacological studies could look different if investigators decided to use a different scoring method or scale at the end, if the results weren’t as impressive as they’d hoped,” he wrote. “But that is not normally how medicine works. So, while it is interesting that the researchers have shared this data, I think the data in the main paper should be seen as the main data.”

*****

The FINE investigators have published a number of other papers arising from their study. In a 2013 paper on mediators of the effects of pragmatic rehabilitation, they reported that there were no differences between the three groups on the objective measure of physical capacity, the step test, despite their earlier decision not to publish the data in the BMJ paper.

Wearden herself presented the trial as a high point of her professional career in a 2013 interview for the website of the University of Manchester’s School of Psychological Sciences. “I suppose the thing I did that I’m most proud of is I ran a large treatment trial of pragmatic rehabilitation treatment for patients with chronic fatigue syndrome,” she said in the interview. “We successfully carried that trial out and found a treatment that improved patients’ fatigue, so that’s probably the thing that I’m most proud of.”

The interview did not mention that the improvement at 20 weeks proved transient, or that a statistically significant benefit at 70 weeks emerged only after the investigators performed a post-hoc analysis and rescored the fatigue scale.

*****

The Science Media Centre, a self-styled “independent” purveyor of information about science and scientific research to journalists, has consistently shown an interest in research on what it calls CFS/ME. It held a press briefing for the first PACE results published in The Lancet in 2011, and has helped publicize the release of subsequent studies from the PACE team.

However, the Science Media Centre does not appear to have done anything to publicize the 2010 release of the FINE trial, despite its interest in the topic. A search of the center’s website for the lead FINE investigator, Alison Wearden, yielded no results. And a search for CFS/ME indicated that the first study embraced by the center’s publicity machine was the 2011 Lancet paper.

That might help explain why the FINE trial was virtually ignored by the media. A search on the LexisNexis database for “PACE trial” and “chronic fatigue syndrome” yielded 21 “newspaper” articles. (I use the quotation marks here because I don’t know if that number includes articles on newspaper websites that did not appear in the print product; the accuracy of the number is also in question because the list did not include two PACE-related articles that I wrote for The New York Times.)

Searches on the database combining “chronic fatigue syndrome” with either “FINE trial” or “pragmatic rehabilitation” yielded no results. (I used the version of LexisNexis Academic available to me through the University of California library system.)

Other researchers have also paid scant attention to the FINE trial, especially when compared to the PACE study. According to Google Scholar, the 2011 PACE paper in The Lancet has been cited 355 times. In contrast, the 2010 FINE paper in BMJ has only been cited 39 times.

*****

The PACE investigators likely exacerbated this virtual disappearance of the FINE trial by their decision not to mention it in their Lancet paper, despite its longstanding status as a “sister trial” and the relevance of the findings to their own study of cognitive behavior therapy and graded exercise therapy. The PACE investigators have not explained their reasons for ignoring the FINE trial. (I wrote about this lapse in my Virology Blog story, but in their response the PACE investigators did not mention it.)

This absence is particularly striking in light of the decision made by the PACE investigators to drop their protocol method of assessing the Chalder Fatigue Scale. In the protocol, their primary fatigue outcome was based on bimodal scoring on the 11-item fatigue scale. The protocol included continuous scoring on the fatigue scale, with the 0 to 33 scale, as a secondary outcome.

In the PACE paper itself, the investigators announced that they had dropped the bimodal scoring in favor of the continuous scoring “to more sensitively test our hypotheses of effectiveness.” They did not explain why they simply didn’t provide the findings under both scoring methods, since the data as collected allowed for both analyses. They also did not cite any references to support this mid-trial decision, nor did they explain what prompted it.

They certainly did not mention that PACE’s “sister” study, the FINE trial, had reported null results at the 70-week endpoint—that is, until the investigators rescored the data using a continuous scale rather than the bimodal scale used in the original paper.

The three main PACE investigators—psychiatrists Peter White and Michael Sharpe, and behavioral psychologist Trudie Chalder—did not respond to an e-mail request for comment on why their Lancet paper did not mention the FINE study, especially in reference to their post-hoc decision to change the method of scoring the fatigue scale. Lancet editor Richard Horton also did not respond to an e-mail request for an interview on whether he believed the Lancet paper should have included information about the FINE trial and its results.

Friday, November 13, 2015

Researchers Urge Independent Review of PACE Trial

The PACE Trial has been very much in the news lately with the virtually simultaneous exposé of the flaws of the trial by investigative journalist David Tuller and the publication of the PACE follow-up. (You can read David Tuller's articles here. The PACE follow-up is here.)

Given the many methodological flaws of the PACE trial, and the investigators' consistent refusal to release data from the trial for independent review, it is a wonder that it was ever published. However, not only was the trial rushed to publication, there is evidence that the editor of The Lancet, Richard Horton, was himself biased. In an interview on Australian radio in 2011, Horton dismissed criticisms of PACE as an attack by a "highly organised, very vocal and very damaging group of individuals who have, I would say, actually hijacked this agenda and distorted the debate."

Ironically, Richard Horton recently issued a statement that a lot of published research is incorrect.  “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue," he said. With even deeper irony, he pointed to "tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance." Science has, to quote Horton again, "taken a turn towards darkness.” And Horton himself did much to turn it.

Calling for a review of the PACE trial in an open letter, six prominent researchers have urged Dr. Horton to live up to his own words. According to the researchers, the reviewers should be "completely independent of, and have no conflicts of interests involving, the PACE investigators and the funders of the trial."

If you would like to express what you think of the PACE Trial, you can send an email to the Lancet's editor, Richard Horton: richard.horton@lancet.com. Refer to: "Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial" published by The Lancet, Volume 377, No. 9768, p 823–836, 5 March 2011. (You can read the full study here.)

You can sign a petition asking the Lancet to retract their publication of the PACE trial HERE.

________________________________________________


An open letter to Dr. Richard Horton and The Lancet

Virology Blog, 13 NOVEMBER 2015


Dr. Richard Horton
The Lancet
125 London Wall
London, EC2Y 5AS, UK

Dear Dr. Horton:

In February, 2011, The Lancet published an article called “Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomized trial.” The article reported that two “rehabilitative” approaches, cognitive behavior therapy and graded exercise therapy, were effective in treating chronic fatigue syndrome, also known as myalgic encephalomyelitis, ME/CFS and CFS/ME. The study received international attention and has had widespread influence on research, treatment options and public attitudes.

The PACE study was an unblinded clinical trial with subjective primary outcomes, a design that requires strict vigilance in order to prevent the possibility of bias. Yet the study suffered from major flaws that have raised serious concerns about the validity, reliability and integrity of the findings. The patient and advocacy communities have known this for years, but a recent in-depth report on this site, which included statements from five of us, has brought the extent of the problems to the attention of a broader public. The PACE investigators have replied to many of the criticisms, but their responses have not addressed or answered key concerns.

The major flaws documented at length in the recent report include, but are not limited to, the following:

*The Lancet paper included an analysis in which the outcome thresholds for being “within the normal range” on the two primary measures of fatigue and physical function demonstrated worse health than the criteria for entry, which already indicated serious disability. In fact, 13 percent of the study participants were already “within the normal range” on one or both outcome measures at baseline, but the investigators did not disclose this salient fact in the Lancet paper. In an accompanying Lancet commentary, colleagues of the PACE team defined participants who met these expansive “normal ranges” as having achieved a “strict criterion for recovery.” The PACE authors reviewed this commentary before publication.

*During the trial, the authors published a newsletter for participants that included positive testimonials from earlier participants about the benefits of the “therapy” and “treatment.” The same newsletter included an article that cited the two rehabilitative interventions pioneered by the researchers and being tested in the PACE trial as having been recommended by a U.K. clinical guidelines committee “based on the best available evidence.” The newsletter did not mention that a key PACE investigator also served on the clinical guidelines committee. At the time of the newsletter, two hundred or more participants—about a third of the total sample–were still undergoing assessments.

*Mid-trial, the PACE investigators changed their protocol methods of assessing their primary outcome measures of fatigue and physical function. This is of particular concern in an unblinded trial like PACE, in which outcome trends are often apparent long before outcome data are seen. The investigators provided no sensitivity analyses to assess the impact of the changes and have refused requests to provide the results per the methods outlined in their protocol.

*The PACE investigators based their claims of treatment success solely on their subjective outcomes. In the Lancet paper, the results of a six-minute walking test—described in the protocol as “an objective measure of physical capacity”–did not support such claims, notwithstanding the minimal gains in one arm. In subsequent comments in another journal, the investigators dismissed the walking-test results as irrelevant, non-objective and fraught with limitations. All the other objective measures in PACE, presented in other journals, also failed. The results of one objective measure, the fitness step-test, were provided in a 2015 paper in The Lancet Psychiatry, but only in the form of a tiny graph. A request for the step-test data used to create the graph was rejected as “vexatious.”

*The investigators violated their promise in the PACE protocol to adhere to the Declaration of Helsinki, which mandates that prospective participants be “adequately informed” about researchers’ “possible conflicts of interest.” The main investigators have had financial and consulting relationships with disability insurance companies, advising them that rehabilitative therapies like those tested in PACE could help ME/CFS claimants get off benefits and back to work. They disclosed these insurance industry links in The Lancet but did not inform trial participants, contrary to their protocol commitment. This serious ethical breach raises concerns about whether the consent obtained from the 641 trial participants is legitimate.

Such flaws have no place in published research. This is of particular concern in the case of the PACE trial because of its significant impact on government policy, public health practice, clinical care, and decisions about disability insurance and other social benefits. Under the circumstances, it is incumbent upon The Lancet to address this matter as soon as possible.

We therefore urge The Lancet to seek an independent re-analysis of the individual-level PACE trial data, with appropriate sensitivity analyses, from highly respected reviewers with extensive expertise in statistics and study design. The reviewers should be from outside the U.K. and outside the domains of psychiatry and psychological medicine. They should also be completely independent of, and have no conflicts of interests involving, the PACE investigators and the funders of the trial.

Thank you very much for your quick attention to this matter.

Sincerely,

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University

Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London

Leonard A. Jason, PhD
Professor of Psychology
DePaul University

Bruce Levin, PhD
Professor of Biostatistics
Columbia University

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University

Arthur L. Reingold, MD
Professor of Epidemiology
University of California, Berkeley

Friday, November 6, 2015

NIH's Promise to Increase Research Funding for ME/CFS - A Patient's Perspective

Rivka Solomon is a long-time advocate for ME/CFS patients. She has been ill for 25 years, and has been witness to the many changes - political and social - affecting the patient community over the past two decades. On November 2, WBUR published her reaction to NIH's promise to increase funding for ME/CFS research.

Reprinted with permission.

Often Bedridden For 25 Years, Advocate Welcomes NIH Move On Fatigue Syndrome

By Rivka Solomon, WBUR, November 2, 2015

Last week, the National Institutes of Health announced a welcome change: They promised to help the more than 1 million Americans who have the devastating disease commonly known as chronic fatigue syndrome.

This is akin to the NIH finally recognizing multiple sclerosis or Parkinson’s disease, two other debilitating neurological illnesses that also have no known cause or cure.

The name chronic fatigue syndrome, which trivializes the true horrors of the disease, was adopted by the government decades ago and has been truly detrimental to the patients. The stigmatizing name has allowed doctors, the media and even families whose loved ones got sick to dismiss patients as mere lazy malingerers.

After all, who isn’t fatigued in today’s hustle and bustle world? Take a nap. Get over it. Exercise it away.

Well, I tried. But napping didn’t help, and exercise made me significantly sicker.

So for the last 25 years in which I have had myalgic encephalomyelitis, or ME — the name the World Health Organization uses and the name most patients prefer — I have been forced to spend much of my life in or near bed.

Try doing that for a quarter of a century.

Myalgic encephalomyelitis means, literally, pain and inflammation of the brain and spinal cord. But what does that translate to in the real world? I often struggle with exhaustion so crushing it is hard to get to the bathroom, let alone lift my arms to shampoo my hair; brain fog so thick that formulating and finishing thoughts is a struggle; vertigo that makes it hard to see or stand up straight; numb hands; Jello-like legs; joint and muscle pain; and a hypersensitivity to chemicals and perfumes that turns me into a canary in the coal mine.

The hallmark of the disease, though, is the inability to exert any energy — physical or intellectual — without a relapse or flare of unknown length. Sometimes it can take days or weeks to regain my strength after a phone conversation. It is as if my body can’t replace the cellular energy required to do, well, just about anything.

All this came on after mono. That was all it took. Mononucleosis. Other patients have gotten myalgic encephalomyelitis/chronic fatigue syndrome — abbreviated as ME/CFS — from other assaults that apparently slapped down their immune systems, too, and/or triggered an autoimmune response. The result? With no commonly accepted diagnosis and no FDA-approved treatments, many of us have been languishing for years.

Then, in 2014 and 2015, the NIH sponsored two initiatives: a report generated by the Pathways to Prevention program and a report from the prestigious Institute of Medicine. Between the two, they found ME/CFS was a serious disease that can significantly impair the lives of those who get it. They also found that research into ME/CFS was seriously underfunded and there is an urgent need to invest in it.

How true.

For years, the NIH has been allocating a pittance to ME/CFS research. This is most strikingly seen when compared with other neuro-immune diseases. Multiple sclerosis, with 400,000 U.S. patients, receives $102 million per year, or roughly $255 per patient. ME/CFS, with more than 1 million U.S. patients, gets a paltry $5 million per year, or about $5 per patient. The NIH gives more money to research on hay fever than ME/CFS. And yet people with hay fever don’t spend decades in bed, too weak to function.

The question is why?

Why, in the past 30 years, when ME/CFS reared its ugly head on the American scene and we were given the moniker of chronic fatigue syndrome (that’d be like calling Parkinson’s “shaky person’s syndrome”), did the government ignore us? Worse, why did they delegitimize, marginalize and psychologize this disease by funding studies to supposedly show we have personality disorders, a fear of leaving our homes or childhood trauma? (Yes, these are real studies.) Why tell us exercise will help, when that would be like giving sugar to a person with diabetes?

Perhaps they wanted to spare insurance companies the expense of treating us? Perhaps leadership at the National Institutes of Health had a bias against us? Or perhaps governments respond only to pressure, and the patients are so sick that few can lobby on Capitol Hill or demonstrate in the streets, as the highly effective HIV/AIDS activists from ACT UP did.

But with last week’s NIH press release promising to bolster ME/CFS research, the tide is turning.

Surely, the years of patient advocates struggling, often from bed, to get the government’s attention made an impact. It was also likely personal relationships: The disease is now so prevalent that NIH Director Dr. Francis Collins has current and past employees with the disease, and top-notch scientist colleagues with family members too sick to feed themselves. All petitioned Dr. Collins for help.

Whatever turned the tide, I spent the day that the NIH put out its press release crying. I was relieved that my government was essentially acknowledging for the first time that ME/CFS is a serious disease with a profound unmet need. From bed, I typed frantically on my computer with others from the online patient community: Could it be, we asked each other, that with Dr. Collins’ promise of help, our own government may actually, finally, come to our rescue?

I have already lost my 30s, my 40s and some of my 50s to this disease. Could the end of my nightmare be in sight?

Dr. Collins, now is the time to attach a dollar sign to your promise of help. And please make it happen fast. We patients are petitioning for funding equity, making the funding commensurate with the burden of the disease and the population in need — that is, at least $250 million per year. I and at least 1 million other Americans, 17 million worldwide, can’t wait to get our lives back. The help can’t come soon enough.

All in all, it’s been an astonishing week full of hope for ME/CFS patients. We’ve had recognition and a promise of help from the U.S. government. And, like icing on the cake, a dogged investigative journalist, David Tuller, has taken on the most noted study upholding the idea that ME/CFS is a trivial condition that is all in our heads. See his article here: “Trial By Error: The Troubling Case Of The PACE Chronic Fatigue Syndrome Study.”

Finally, we ME/CFS patients are being taken seriously. What a welcome change.

Rivka’s Suggested Do’s and Don’ts:

— Do have great compassion for those with ME/CFS.
— Do ask how you can help and make offers of specific help, from vacuuming to cooking meals to rides to doctor’s appointments.
— Do remember them, even if they have not been able to leave the house for weeks or months: Call and remind them you care.
— Don’t think that ME/CFS is “all in their head.”
— Don’t suggest they exercise when they can’t.
— Don’t tell patients that you are tired too.

Rivka Solomon is a Massachusetts writer whose commentaries have been featured on WBUR. She created a women’s empowerment program, That Takes Ovaries, based on her book and play of the same name, and is now writing a book about her 25 years with ME/CFS and Lyme disease.

Wednesday, November 4, 2015

David Tuller Responds to the PACE Investigators

David Tuller's exposé of the PACE trial aroused considerable attention in the media.

It also aroused the normally complacent authors of the PACE trial itself. Having published their "findings" repeatedly in some of the most prestigious medical journals in the world, and having fulfilled their mission to provide a rationale for eliminating costly treatments in favor of those requiring little expense and only rudimentary expertise, the PACE authors have contented themselves with denying access to their data and cloaking themselves in silence.

Tuller's articles awakened them from their stupor. On October 30, Professors Peter White, Trudie Chalder and Michael Sharpe (co-principal investigators of the PACE trial) responded to three blog posts by David Tuller. (You can read their response here.) The first part of their response is a reiteration of the trial's methods and claims. Later, in an attempt to obfuscate Tuller's main criticisms, they counter several statements that Tuller did not make with an impressive amount of ultimately meaningless verbiage. (This is known as a "snow job.")

It is interesting to examine the techniques White, Chalder, and Sharpe used to sidestep Tuller's critique because they are all used in propaganda: obfuscation, misdirection, minimization ("bias could not possibly have been caused by touting their treatments as 'approved by NHS'"), cherry-picking (interpreting data “in light of their context and validity”), and vilification (FOIA requests were "vexatious").

The PACE authors have repeatedly - and shamelessly - exaggerated, speculated upon, and, in all likelihood, falsified results. (We can't know the full extent of their manipulation of the results if the primary data are not released.) It is about time they were taken to task.

You can read Part 1 of David Tuller's exposé here.

You can read Part 2 here.

You can read Part 3 (final installment) here.

If you would like to express your view of the PACE Trial, you can send an email to the Lancet's editor, Richard Horton: richard.horton@lancet.com. Refer to: "Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial" published by The Lancet, Volume 377, No. 9768, p 823–836, 5 March 2011. (You can read the full study here.)

You can sign a petition asking the Lancet to retract their publication of the PACE trial HERE.
______________________________________

Reprinted with permission.

By David Tuller, Virology Blog, 30 OCTOBER 2015

David Tuller’s three-installment investigation of the PACE trial for chronic fatigue syndrome, “Trial By Error,” has received enormous attention. Although the PACE investigators declined David’s efforts to interview them, they have now requested the right to reply. Today, virology blog posts their response to David’s story, and below, his response to their response. 

According to the communications department of Queen Mary University, the PACE investigators have been receiving abuse on social media as a result of David Tuller’s posts. When I published Mr. Tuller’s articles, my intent was to provide a forum for discussion of the controversial PACE results. Abuse of any kind should not have been, and must not be, part of that discourse. -vrr
_________________________________________________________

Last December, I offered to fly to London to meet with the main PACE investigators to discuss my many concerns. They declined the offer. Dr. White cited my previous coverage of the issue as the reason and noted that “we think our work speaks for itself.” Efforts to reach out to them for interviews two weeks ago also proved unsuccessful.

After my story ran on virology blog last week, a public relations manager for medicine and dentistry in the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello. He requested, on behalf of the PACE authors, the right to respond. (Queen Mary University is Dr. White’s home base.)

That response arrived Wednesday. My first inclination, when I read it, was that I had already rebutted most of their criticisms in my 14,000-word piece, so it seemed like a waste of time to engage in further extended debate.

Later in the day, however, the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello again, with an urgent request to publish the response as soon as possible. The PACE investigators, he said, were receiving “a lot of abuse” on social media as a result of my posts, so they wanted to correct the “misinformation” as soon as possible.

Because I needed a day or two to prepare a careful response to the PACE team’s rebuttal, Dr. Racaniello agreed to post them together on Friday morning.

On Thursday, Dr. Racaniello received yet another appeal from the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University. Dissatisfied with the Friday publishing timeline, he again urged expedited publication because “David’s blog posts contain a number of inaccuracies, may cause a considerable amount of reputational damage, and he did not seek comment from any of the study authors before the virology blog was published.”

The charge that I did not seek comment from the authors was at odds with the facts, as Dr. Racaniello knew. (It is always possible to argue about accuracy and reputational damage.) Given that much of the argument for expedited posting rested on the public relations manager’s obviously “dysfunctional cognition” that I had unfairly neglected to provide the PACE authors with an opportunity to respond, Dr. Racaniello decided to stick with his pre-planned posting schedule.

Before addressing the PACE investigators’ specific criticisms, I want to apologize sincerely to Dr. White, Dr. Chalder, Dr. Sharpe and their colleagues on behalf of anyone who might have interpreted my account of what went wrong with the PACE trial as license to target the investigators for “abuse.” That was obviously not my intention in examining their work, and I urge anyone engaging in such behavior to stop immediately. No one should have to suffer abuse, whether online or in the analog world, and all victims of abuse deserve enormous sympathy and compassion.

However, in this case, it seems I myself am being accused of having incited a campaign of social media “abuse” and potentially causing “reputational damage” through purportedly inaccurate and misinformed reporting. Because of the seriousness of these accusations, and because such accusations have a way of surfacing in news reports, I feel it is prudent to rebut the PACE authors’ criticisms in far more detail than I otherwise would. (I apologize in advance to the obsessives and others who feel they need to slog through this rebuttal; I urge you to take care not to over-exert yourself!)

In their effort to correct the “misinformation” and “inaccuracies” in my story about the PACE trial, the authors make claims and offer accounts similar to those they have previously presented in published comments and papers. In the past, astonishingly, journal editors, peer reviewers, reporters, public health officials, and the British medical and academic establishments have accepted these sorts of non-responsive responses as adequate explanations for some of the study’s fundamental flaws. I do not.

None of what they have written in their response actually addresses or resolves the core issues that I wrote about last week. They have ignored many of the questions raised in the article. In their response, they have also not mentioned the devastating criticisms of the trial from top researchers from Columbia, Stanford, University College London, and elsewhere. They have not addressed why major reports this year from the Institute of Medicine and the National Institutes of Health have presented portraits of the disease starkly at odds with the PACE framework and approach.

I will ignore their overview of the findings and will focus on the specific criticisms of my work. (I will, however, mention here that my piece discussed why their claims of cost-effectiveness for cognitive behavior therapy and graded exercise therapy are based on inaccurate statements in a paper published in PLoS One in 2012).

"13% of patients had already “recovered” on entry into the trial"

I did not write that 13% of the participants were “recovered” at baseline, as the PACE authors state. I wrote that they were “recovered” or already at the “recovery” thresholds for two specific indicators, physical function and fatigue, at baseline—a different statement, and an accurate one.

The authors acknowledge, in any event, that 13% of the sample was “within normal range” at baseline. For the 2013 paper in Psychological Medicine, these “normal range” thresholds were re-purposed as two of the four required “recovery” criteria.

And that raises the question: Why, at baseline, was 13% of the sample “within normal range” or “recovered” on any indicator in the first place? Why did entry criteria for disability overlap with outcome scores for being “within the normal range” or “recovered”? The PACE authors have never provided an explanation of this anomaly.
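To make the overlap concrete, here is a minimal sketch using the physical-function thresholds reported for PACE (trial entry required an SF-36 score of 65 or less, while the Lancet "normal range" began at 60); the code is purely illustrative, not the investigators' own analysis:

```python
# Sketch of the overlap described above, using the published PACE
# physical-function thresholds (entry: SF-36 <= 65; Lancet "normal
# range": SF-36 >= 60). Purely illustrative.

ENTRY_MAX = 65         # disabled enough to enter the trial
NORMAL_RANGE_MIN = 60  # counted as "within the normal range"

both = [s for s in range(101) if NORMAL_RANGE_MIN <= s <= ENTRY_MAX]
print(both)  # [60, 61, 62, 63, 64, 65]: scores that qualify as both at once
```

Any participant scoring in that band was simultaneously disabled enough to enter the trial and already "within the normal range" on that outcome before receiving any treatment.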

In their response, the authors state that they outlined other criteria that needed to be met for someone to be called “recovered.” This is true; as I wrote last week, participants needed to meet “recovery” criteria on four different indicators to be considered “recovered.” The PACE authors did not provide data for two of the indicators in the 2011 Lancet paper, so in that paper they could not report results for “recovery.”

However, at the press conference presenting the 2011 Lancet paper, Trudie Chalder referred to people who met the overlapping disability/”normal range” thresholds as having gotten “back to normal”—an explicit “recovery” claim. In a Lancet comment published along with the PACE study itself, colleagues of the PACE team referred to these bizarre “normal range” thresholds for physical function and fatigue as a “strict criterion for recovery.” As I documented, the Lancet comment was discussed with the PACE authors before publication; the phrase “strict criterion for recovery” obviously survived that discussion.

Much of the coverage of the 2011 paper reported that patients got “back to normal” or “recovered,” based on Dr. Chalder’s statement and the Lancet comment. The PACE authors made no public attempt to correct the record in the months after this apparently inaccurate news coverage, until they published a letter in the Lancet. In the response to Virology Blog, they say that they were discussing “normal ranges” in the Lancet paper, and not “recovery.” Yet they have not explained why Chalder spoke about participants getting “back to normal” and why their colleagues wrote that the nonsensical “normal range” thresholds represented a “strict criterion for recovery.”

Moreover, they still have not responded to the essential questions: How does this analysis make sense? What are the implications for the findings if 13% are already “within normal range” or “recovered” on one of the two primary outcome measures? How can they be “disabled” enough on the two primary measures to qualify for the study if they’re already “within normal range” or “recovered”? And why did the PACE team use the wrong statistical methods for calculating their “normal ranges” when they knew that method was wrong for the data sources they had?
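On that last point, the "normal range" was reportedly derived as the mean minus one standard deviation of scores from normative samples, a formula that behaves sensibly only when data are roughly normally distributed. Population SF-36 scores instead pile up near the 100-point ceiling with a long tail of low scores. The synthetic data below are invented simply to mimic that shape and show how the formula misbehaves; they are not the actual normative datasets:

```python
# Illustration: on ceiling-skewed data, a "normal range" cutoff of
# mean - 1 SD lands far below what most people actually score, because
# the long low tail inflates the standard deviation. Synthetic data only.
import numpy as np

rng = np.random.default_rng(42)
healthy = np.clip(rng.normal(95, 8, size=8_500), 0, 100)  # most score near the ceiling
unwell = np.clip(rng.normal(45, 20, size=1_500), 0, 100)  # a minority score low
population = np.concatenate([healthy, unwell])

cutoff = population.mean() - population.std()  # the normality-assuming formula
print(f"median score:        {np.median(population):.0f}")  # up in the 90s
print(f"mean - 1 SD cutoff:  {cutoff:.0f}")                 # down in the 60s
print(f"share above cutoff:  {(population > cutoff).mean():.0%}")  # the vast majority
```

On data shaped like this, a percentile-based threshold would sit far higher than the mean-minus-SD cutoff, which is the substance of the critics' objection.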

"Bias was caused by a newsletter for patients giving quotes from patients and mentioning UK government guidance on management. A key investigator was on the guideline committee."

The PACE authors apparently believe it is appropriate to disseminate positive testimonials during a trial as long as the therapies or interventions are not mentioned. (James Coyne dissected this unusual position yesterday.)

This is their argument: “It seems very unlikely that this newsletter could have biased participants as any influence on their ratings would affect all treatment arms equally.” Apparently, the PACE investigators believe that if you bias all the arms of your study in a positive direction, you are not introducing bias into your study. It is hard to know what to say about this argument.
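
A toy simulation makes the point; all the numbers below are invented, not PACE data. The sketch simply shows that an influence nudging self-ratings upward “equally in all treatment arms” leaves the between-arm difference roughly intact while inflating any outcome judged against a fixed threshold, such as a “normal range” cutoff:

import random

random.seed(0)

def recovery_rate(scores, threshold=60):
    # Proportion of participants at or above a fixed "recovered" cutoff.
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical subjective physical-function scores in two arms.
arm_a = [random.gauss(50, 15) for _ in range(1000)]
arm_b = [random.gauss(55, 15) for _ in range(1000)]

BIAS = 8  # a uniform upward shift, e.g. from glowing mid-trial testimonials

for name, arm in (("arm A", arm_a), ("arm B", arm_b)):
    before = recovery_rate(arm)
    after = recovery_rate([s + BIAS for s in arm])
    print(f"{name}: {before:.2f} 'recovered' without bias, {after:.2f} with")

The between-arm gap barely moves, but the absolute “recovery” rates rise in every arm: the bias has not cancelled out, it has been baked into every arm’s results.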

Furthermore, the PACE authors argue that the U.K. government’s new treatment guidelines had been widely reported. Therefore, they contend, it didn’t matter that–in the middle of a trial to test the efficacy of cognitive behavior therapy and graded exercise therapy–they had informed participants that the government had already approved cognitive behavior therapy and graded exercise therapy “based on the best available evidence.”

They are wrong. They introduced an uncontrolled, unpredictable co-intervention into their study, and they have no idea what the impact might have been on any of the four arms.

In their response, the PACE authors note that the participants’ newsletter article, in addition to cognitive behavior therapy and graded exercise therapy, included a third intervention, Activity Management. As they correctly note, I did not mention this third intervention in my Virology Blog story. The PACE authors now write: “These three (not two as David Tuller states) therapies were the ones being tested in the trial, so it is hard to see how this might lead to bias in the direction of one or other of these therapies.”

This statement is nonsense. Their third intervention was called “Adaptive Pacing Therapy,” and they developed it specifically for testing in the PACE trial. It is unclear why they now state that their third intervention was Activity Management, or why they think participants would know that Activity Management was synonymous with Adaptive Pacing Therapy. After all, cognitive behavior therapy and graded exercise therapy also involve some form of “activity management.” Precision in language matters in science.

Finally, the investigators say that Jessica Bavington, a co-author of the 2011 paper, had already left the PACE team before she served on the government committee that endorsed the PACE therapies. That might be, but it is irrelevant to the question that I raised in my piece: whether her dual role presented a conflict of interest that should have been disclosed to participants in the newsletter article about the U.K. treatment guidelines. The PACE newsletter article presented the U.K. guideline committee’s work as if it were independent of the PACE trial itself, when it was not.

"Bias was caused by changing the two primary outcomes and how they were analyzed"

The PACE authors seem to think it is acceptable to change methods of assessing primary outcome measures during a trial as long as they get committee approval, announce it in the paper, and provide some sort of reasonable-sounding explanation as to why they made the change. They are wrong.

They also need to justify the changes with references or citations that support their new interpretations of their indicators, and they need to conduct sensitivity analyses to assess the impact of the changes on their findings. Then they need to explain why their preferred findings are more robust than the initial, per-protocol findings. They did not take these steps for any of the many changes they made from their protocol.

The PACE authors mention the change from bimodal to Likert-style scoring on the Chalder Fatigue Scale. They repeat their previous explanation of why they made this change. But they have ignored what I wrote in my story—that the year before PACE was published, its “sister” study, called the FINE trial, had no significant findings on the physical function and fatigue scales at the end of the trial and only found modest benefits in a post-hoc analysis after making the same change in scoring that PACE later made. The FINE study was not mentioned in PACE. The PACE authors have not explained why they left out this significant information about their “sister” study.
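
For readers unfamiliar with the two schemes: the Chalder Fatigue Questionnaire has 11 items, each with four response options. Bimodal scoring collapses each item to 0 or 1, for a total out of 11; Likert scoring grades each item 0 to 3, for a total out of 33. A minimal sketch, using an invented set of answers:

# Item responses coded 0 ("less than usual") through 3 ("much more than usual").
def bimodal_score(responses):
    # Bimodal: the two "worse than usual" options count as 1, the rest as 0.
    return sum(1 for r in responses if r >= 2)

def likert_score(responses):
    # Likert: each item keeps its full 0-3 grade.
    return sum(responses)

answers = [2, 1, 3, 2, 1, 2, 3, 1, 2, 2, 1]  # one hypothetical participant

print(bimodal_score(answers))  # 7 out of 11
print(likert_score(answers))   # 20 out of 33

Same questionnaire, same answers, two different numbers; which scoring is used, and when that choice is made, can matter a great deal to a trial’s headline results.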

Regarding the abandonment of the original method of assessing the physical function scores, this is what they say in their response: “We decided this composite method [their protocol method] would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms. We therefore chose to compare mean scores of each outcome measure between treatment arms instead.” They mention that they received committee approval, and that the changes were made before examining the outcome data.

The authors have presented these arguments previously. However, they have not responded to the questions I raised in my story. Why did they not report any sensitivity analyses for the changes in methods of assessing the primary outcome measures? (Sensitivity analyses can assess how changes in assumptions or variables affect outcomes.) What prompted them to reconsider their assessment methods in the middle of the trial? Were they concerned that a mean-based measure, unlike their original protocol measure, did not provide any information about the proportions of participants who improved or got worse? Any such information came from post-hoc analyses—one of which was the perplexing “normal range” analysis.
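
For illustration, a sensitivity analysis here need be nothing more elaborate than running the headline calculation under both the protocol method and the revised method and reporting both. A sketch with invented scores and invented cutoffs (neither is the actual PACE rule):

def improved_protocol(baseline, followup, cutoff=75):
    # Protocol-style dichotomy: improved only if follow-up clears a
    # pre-specified absolute cutoff.
    return followup >= cutoff

def improved_revised(baseline, followup, delta=8):
    # Revised-style criterion: improved if the gain is at least `delta` points.
    return followup - baseline >= delta

# (baseline, follow-up) pairs for five hypothetical participants.
participants = [(45, 55), (60, 70), (50, 80), (65, 68), (40, 52)]

for rule in (improved_protocol, improved_revised):
    n = sum(rule(b, f) for b, f in participants)
    print(rule.__name__, f"{n}/{len(participants)} improved")

If the two rules give very different proportions—here 1/5 versus 4/5—the change of method, not the treatment, may be driving the headline finding. That is exactly what a sensitivity analysis is meant to reveal.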

Moreover, this was an unblinded trial, and researchers generally have an idea of outcome trends before examining outcome data. When the PACE authors made the changes, did they already have an idea of outcome trends? They have not answered that question.

"Our interpretation was misleading after changing the criteria for determining recovery"

The PACE authors relaxed all four of their criteria for “recovery” in their 2013 paper and cited no committee approval for this overall redefinition of a critical concept. Three of these relaxations involved expanded thresholds; the fourth involved splitting one category into two sub-categories—one less restrictive and one more restrictive. The authors gave the full results for the less restrictive category of “recovery.”

The PACE authors now say that they changed the “recovery” thresholds on three of the variables “since we believed that the revised thresholds better reflected recovery.” Again, they apparently think that simply stating their belief that the revisions were better justifies making the changes.

Let’s review for a second. The physical function threshold for “recovery” fell from 85 out of 100 in the protocol, to a score of 60 in the 2013 paper. And that “recovery” score of 60 was lower than the entry score of 65 to qualify for the study. The PACE authors have not explained how the lower score of 60 “better reflected recovery”—especially since the entry score of 65 already represented serious disability. Similar problems afflicted the fatigue scale “recovery” threshold.
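
The arithmetic of the overlap, using the thresholds reported above, fits in a few lines:

# SF-36 physical function, 0-100 scale; thresholds as reported in this piece.
ENTRY_MAX = 65          # a score of 65 or less was required to enter the trial
RECOVERY_PROTOCOL = 85  # protocol threshold for "recovery"
RECOVERY_REVISED = 60   # revised threshold in the 2013 paper

for score in (60, 65):
    print(f"baseline score {score}:",
          "eligible" if score <= ENTRY_MAX else "ineligible",
          "and",
          "'recovered'" if score >= RECOVERY_REVISED else "not 'recovered'")

A baseline score of 60 or 65 satisfies both conditions at once. Under the protocol threshold of 85, no eligible score could have done so.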

The PACE authors also report that “we included those who felt ‘much’ (and ‘very much’) better in their overall health” as one of the criteria for “recovery.” This is true. They are referring to the Clinical Global Impression scale. In the protocol, participants needed to score a 1 (“very much better”) on this scale to be considered “recovered” on that indicator. In the 2013 paper, participants could score a 1 (“very much better”) or a 2 (“much better”). The PACE authors provided no citations to support this expanded interpretation of the scale. They simply explained in the paper that they now thought “much better” reflected the process of recovery, and so those who gave a score of 2 should also be considered to have achieved the scale’s “recovery” threshold.

With the fourth criterion—not meeting any of the three case definitions used to define the illness in the study—the PACE authors gave themselves another option. Those who did not meet the study’s main case definition but still met one or both of the other two were now eligible for a new category called “trial recovery.” They did not explain why or when they made this change.

The PACE authors provided no sensitivity analyses to measure the impact of the significant changes in the four separate criteria for “recovery,” as well as in the overall re-definition. And remember, participants at baseline could already have achieved the “recovery” requirements for one or two of the four criteria—the physical function and fatigue scales. And 13% of them already had.

"Requests for data under the freedom of information act were rejected as vexatious"

The PACE authors have rejected requests for the results per the protocol and many other requests for documents and data as well—at least two for being “vexatious,” as they now report. In my story, I incorrectly stated that requests for per-protocol data were rejected as “vexatious.” In fact, earlier requests for per-protocol data were rejected for other reasons.

One recent request rejected as “vexatious” involved the PACE investigators’ 2015 paper in The Lancet Psychiatry. In this paper, they published their last “objective” outcome measure (except for wages, which they still have not published)—a measure of fitness called a “step-test.” But they only published a tiny graph on a page with many other tiny graphs, not the actual numbers from which the graph was drawn.

The graph was too small to extract any data, but it appeared that the cognitive behavior therapy and graded exercise therapy groups did worse than the other two. A request for the step-test data from which they created the graph was rejected as “vexatious.”

However, I apologize to the PACE authors that I made it appear they were using the term “vexatious” more extensively in rejecting requests for information than they actually have been. I also apologize for stating incorrectly that requests for per protocol data specifically had been rejected as “vexatious.”

This is probably a good time to address the PACE authors’ repeated refrain that concerns about patient confidentiality prevent them from releasing raw data and other information from the trial. They state: “The safe-guarding of personal medical data was an undertaking enshrined in the consent procedure and therefore is ethically binding; so we cannot publicly release these data. It is important to remember that simple methods of anonymization does [sic] not always protect the identity of a person, as they may be recognized from personal and medical information.”

This argument against the release of data doesn’t really hold up, given that researchers share data all the time without compromising confidentiality. Really, it’s not that difficult to do!

(It also bears noting that the PACE authors’ dedication to participant protection did not extend to fulfilling their protocol promise to inform participants of their “possible conflicts of interest”—see below.)

"Subjective and objective outcomes"

The PACE authors included multiple objective measures in their protocol. All of them failed to demonstrate real treatment success or “recovery.” The extremely modest improvements in the exercise therapy arm on the walking test still left participants more disabled than people with pacemakers, patients with cystic fibrosis, and relatively healthy women in their 70s.

The authors now write: “We interpreted these data in the light of their context and validity.”

What the PACE team actually did was dismiss their own objective data as irrelevant, or as not actually objective after all. In doing so, they cited various reasons they should have considered before including these measures in the study as “objective” outcomes. They provide one example in their response. They selected employment data as an objective measure of function, and then, as they explain in their response and have explained previously, they decided afterwards that it wasn’t an objective measure of function after all, for one reason or another.

The PACE authors consider this interpreting data “in light of their context and validity.” To me, it looks like tossing data they don’t like.

What they should do, but have not done, is ask whether the failure of all their objective measures might mean they should start questioning the meaning, reliability, and validity of their reported subjective results.

"There was a bias caused by many investigators’ involvement with insurance companies and a failure not to declare links with insurance companies in information regarding consent"

The PACE authors here seriously misstate the concerns I raised in my piece. I did not assert that bias was caused by their involvement with insurance companies. I asserted that they violated an international research ethics document and broke a commitment they made in their protocol to inform participants of “any possible conflicts of interest.” Whether bias actually occurred is not the point.

In their approved protocol, the authors promised to adhere to the Declaration of Helsinki, a foundational human rights document that is explicit on what constitutes legitimate informed consent: prospective participants must be “adequately informed” of “any possible conflicts of interest.” The PACE authors now suggest this disclosure was unnecessary because 1) the conflicts weren’t really conflicts after all; 2) they disclosed these “non-conflicts” as potential conflicts of interest in the Lancet and other publications; 3) they had a lot of investigators but only three had links with insurers; and 4) they informed participants about who funded the research.

These responses are not serious. They do nothing to explain why the PACE authors broke their own commitment to inform participants about “any possible conflicts of interest.” It is not acceptable to promise to follow a human rights declaration, receive approvals for a study, and then ignore inconvenient provisions. No one is much concerned about PACE investigator #19; people are concerned because the three main PACE investigators have advised disability insurers that cognitive behavior therapy and graded exercise therapy can get claimants off benefits and back to work.

That the PACE authors made the appropriate disclosures to journal editors is irrelevant; it is unclear why they are raising this as a defense. The Declaration of Helsinki is about protecting human research subjects, not about protecting journal editors and journal readers. And providing information to participants about funding sources, however ethical that might be, is not the same as disclosing information about “any possible conflicts of interest.” The PACE authors know this.

Moreover, the PACE authors appear to define “conflict of interest” quite narrowly. Just because the insurers were not involved in the study itself does not mean there is no conflict of interest, and it does not absolve the PACE authors of the promise they made to inform trial participants of these affiliations. No one required them to cite the Declaration of Helsinki in their protocol as part of the process of gaining approvals for their trial.

As it stands, the PACE study appears to have no legitimate informed consent for any of the 641 participants, per the commitments the investigators themselves made in their protocol. This is a serious ethical breach.

I raised other concerns in my story that the authors have not addressed. I will save everyone much grief and not go over them again here.

I want to acknowledge two additional minor errors. In the last section of the piece, I referred to the drug rituximab as an “anti-inflammatory.” While it does have anti-inflammatory effects, rituximab should more properly be referred to as an “immunomodulatory” drug.

Also, in the first section of the story, I wrote that Dr. Chalder and Dr. Sharpe did not return e-mails I sent them last December, seeking interviews. However, during a recent review of e-mails from last December, I found a return e-mail from Dr. Sharpe that I had forgotten about. In the e-mail, Dr. Sharpe declined my request for an interview.

I apologize to Dr. Sharpe for suggesting he hadn’t responded to my e-mail last December.

Monday, November 2, 2015

TRIAL BY ERROR: The Troubling Case of the PACE Chronic Fatigue Syndrome Study (final installment)

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley. He has written many articles on ME/CFS, including pieces for the New York Times.

In this series, he takes an in-depth look at the PACE trial, examining not just the flaws of its methodology, but conflicts of interest among its researchers, inflated claims of success, unjustified conclusions, and a serious lack of diligence on the part of the medical journals which printed the results.

Predictably, the authors of the PACE trial did not respond to Tuller's requests for an interview. However, once his series had garnered attention in the media, they demanded a rebuttal, which the Virology Blog printed. On October 30, Professors Peter White, Trudie Chalder and Michael Sharpe (co-principal investigators of the PACE trial) responded to David Tuller's posts with what can only be called "doublespeak." Yes, they said, 13% of the trial's participants met the cut-off for "recovered" before the trial, but they hadn't actually recovered. More obfuscation followed.

Tuller, it would be safe to say, shredded the rebuttal.

You can read Part 1 here.

You can read Part 2 here.

If you would like to express what you think of the PACE Trial, you can send an email to the Lancet's editor, Richard Horton: richard.horton@lancet.com. Refer to: "Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial" published by The Lancet, Volume 377, No. 9768, p 823–836, 5 March 2011. (You can read the full study here.)

You can sign a petition asking the Lancet to retract their publication of the PACE trial HERE.

Reprinted with the kind permission of David Tuller. This article first appeared on Dr. Vincent Racaniello's Virology Blog.

_____________________

TRIAL BY ERROR: The Troubling Case of the PACE Chronic Fatigue Syndrome Study (final installment)

By David Tuller, DrPH, Virology Blog, 23 OCTOBER 2015

A few years ago, Dr. Racaniello let me hijack this space for a long piece about the CDC’s persistent incompetence in its efforts to address the devastating illness the agency itself had misnamed “chronic fatigue syndrome.” Now I’m back with an even longer piece about the U.K.’s controversial and highly influential PACE trial. The $8 million study, funded by British government agencies, purportedly proved that patients could “recover” from the illness through treatment with one of two rehabilitative, non-pharmacological interventions: graded exercise therapy, involving a gradual increase in activity, and a specialized form of cognitive behavior therapy. The main authors, a well-established group of British mental health professionals, published their first results in The Lancet in 2011, with additional results in subsequent papers.

Much of what I report here will not be news to the patient and advocacy communities, which have produced a voluminous online archive of critical commentary on the PACE trial. I could not have written this piece without the benefit of that research and the help of a few statistics-savvy sources who talked me through their complicated findings. I am also indebted to colleagues and friends in both public health and journalism, who provided valuable suggestions and advice on earlier drafts. Today’s Virology Blog installment is the final one; the first and second installments were published previously. I was originally working on this piece with Retraction Watch, but we could not ultimately agree on the direction and approach.

SUMMARY

This examination of the PACE trial of chronic fatigue syndrome identified several major flaws:

*The study included a bizarre paradox: participants’ baseline scores for the two primary outcomes of physical function and fatigue could qualify them simultaneously as disabled enough to get into the trial but already “recovered” on those indicators–even before any treatment. In fact, 13 percent of the study sample was already “recovered” on one of these two measures at the start of the study.

*In the middle of the study, the PACE team published a newsletter for participants that included glowing testimonials from earlier trial subjects about how much the “therapy” and “treatment” helped them. The newsletter also included an article informing participants that the two interventions pioneered by the investigators and being tested for efficacy in the trial, graded exercise therapy and cognitive behavior therapy, had been recommended as treatments by a U.K. government committee “based on the best available evidence.” The newsletter article did not mention that a key PACE investigator was also serving on the U.K. government committee that endorsed the PACE therapies.

*The PACE team changed all the methods outlined in its protocol for assessing the primary outcomes of physical function and fatigue, but did not take necessary steps to demonstrate that the revised methods and findings were robust, such as conducting sensitivity analyses. The researchers also relaxed all four of the criteria outlined in the protocol for defining “recovery.” They have rejected as “vexatious” requests from patients for the findings as originally promised in the protocol.

*The PACE claims of successful treatment and “recovery” were based solely on subjective outcomes. All the objective measures from the trial—a walking test, a step test, and data on employment and the receipt of financial benefits—failed to provide any evidence to support such claims. Afterwards, the PACE authors dismissed their own main objective measures as non-objective, irrelevant, or unreliable.

*In seeking informed consent, the PACE authors violated their own protocol, which included an explicit commitment to tell prospective participants about any possible conflicts of interest. The main investigators have had longstanding financial and consulting ties with disability insurance companies, having advised them for years that cognitive behavior therapy and graded exercise therapy could get claimants off benefits and back to work. Yet prospective participants were not told about any insurance industry links and the information was not included on consent forms. The authors did include the information in the “conflicts of interest” sections of the published papers.

Top researchers who have reviewed the study say it is fraught with indefensible methodological problems. Here is a sampling of their comments:

Dr. Bruce Levin, Columbia University: “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism.”

Dr. Ronald Davis, Stanford University: “I’m shocked that the Lancet published it…The PACE study has so many flaws and there are so many questions you’d want to ask about it that I don’t understand how it got through any kind of peer review.”

Dr. Arthur Reingold, University of California, Berkeley: “Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”

Dr. Jonathan Edwards, University College London: “It’s a mass of un-interpretability to me…All the issues with the trial are extremely worrying, making interpretation of the clinical significance of the findings more or less impossible.”

Dr. Leonard Jason, DePaul University: “The PACE authors should have reduced the kind of blatant methodological lapses that can impugn the credibility of the research, such as having overlapping recovery and entry/disability criteria.”

************************************************************************

PART FOUR:

The Publication Aftermath

Publication of the paper triggered what The Lancet described in an editorial as “an outpouring of consternation and condemnation from individuals or groups outside our usual reach.” Patients expressed frustration and dismay that once again they were being told to exercise and seek psychotherapy. They were angry as well that the paper ignored the substantial evidence pointing to patients’ underlying biological abnormalities.

Even Action For ME, the organization that developed the adaptive pacing therapy with the PACE investigators, declared in a statement that it was “surprised and disappointed” at “the exaggerated claims” being made about the rehabilitative therapies. And the findings that the treatments did not cause relapses, noted Peter Spencer, Action For ME’s chief executive officer, in the statement, “contradict the considerable evidence of our own surveys and those of other patient groups.”

Many believed the use of the broad Oxford criteria helped explain some of the reported benefits and the lack of adverse effects. Although people with psychosis, bipolar disorder, substance “misuse,” organic brain disorder, or an eating disorder were screened out of the PACE sample, 47 percent of the participants were nonetheless diagnosed with “mood and anxiety disorders,” including depression. As DePaul psychologist Leonard Jason had noted, cognitive and behavioral interventions have proven successful for people suffering from primary depression; by the same logic, increased activity was also unlikely to harm such participants if they did not also experience the core ME/CFS symptom of post-exertional malaise.

Others, like Tom Kindlon, speculated that many of the patients in the two rehabilitative arms, even if they had reported subjective improvements, might not have significantly increased their levels of exertion. To bolster this argument, he noted the poor results from the six-minute walking test, which suggested little or no improvement in physical functioning.

“If participants did not follow the directives and did not gradually increase their total activity levels, they might not suffer the relapses and flare-ups that patients sometimes report with these approaches,” said Kindlon.

During an Australian radio interview, Lancet editor Richard Horton denounced what he called the “orchestrated response” from patients, based on “the flimsiest and most unfair allegations,” seeking to undermine the credibility of the research and the researchers. “One sees a fairly small, but highly organized, very vocal and very damaging group of individuals who have, I would say, actually hijacked this agenda and distorted the debate so that it actually harms the overwhelming majority of patients,” he said.

In fact, he added, “what the investigators did scrupulously was to look at chronic fatigue syndrome from an utterly impartial perspective.”

In explaining The Lancet’s decision to publish the results, Horton told the interviewer that the paper had undergone “endless rounds of peer review.” Yet the ScienceDirect database version of the article indicated that The Lancet had “fast-tracked” it to publication. According to current Lancet policy, a standard fast-tracked article is published within four weeks of receipt of the manuscript.

Michael Sharpe, one of the lead investigators, also participated in the Australian radio interview. In response to a question from the host, he acknowledged that only one in seven participants received a “clinically important treatment benefit” from the rehabilitative therapies of graded exercise therapy and cognitive behavior therapy—a key data point not mentioned in the Lancet paper.

“What this trial isn’t able to answer is how much better are these treatments than really not having very much treatment at all,” Sharpe told the radio host in what might have been an unguarded moment, given that the U.K. government had spent five million pounds on the PACE study to find out the answer. Sharpe’s statement also appeared to contradict the effusive “recovery” and “back-to-normal” news stories that had greeted the reported findings.

***

In correspondence published three months after the trial results, the PACE authors gave no ground. In response to complaints about changes from the protocol, they wrote that the mid-trial revisions “were made to improve either recruitment or interpretability” and “were approved by the Trial Steering Committee, were fully reported in our paper, and were made before examining outcome data to avoid outcome reporting bias.” They did not mention whether, since it was an unblinded trial, they already had a general sense of outcome trends even before examining the actual outcome data. And they did not explain why they did not conduct sensitivity analyses to measure the impact of the protocol changes.

They defended their post-hoc “normal ranges” for fatigue and physical function as having been calculated through the “conventional” statistical formula of taking the mean plus/minus one standard deviation. As in the Lancet paper itself, however, they did not mention or explain the unusual overlaps between the entry criteria for disability and the outcome criteria for being within the “normal range.” And they did not explain why they used this “conventional” method for determining normal ranges when their two population-based data sources did not have normal distributions, a problem White himself had acknowledged in his 2007 study.
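
To see why the “conventional” formula misbehaves here: mean minus one standard deviation carves off roughly the bottom 16% only when the data are normally distributed. On a ceiling-skewed sample, the long lower tail inflates the standard deviation and drags the threshold far below the median. A toy example with invented scores, not the actual population data PACE used:

import statistics

# Invented, ceiling-skewed sample: most healthy adults score at or near 100
# on SF-36 physical function, with a long tail of lower scores.
sample = [100] * 55 + [95] * 15 + [90] * 10 + [75] * 8 + [50] * 7 + [25] * 5

mean = statistics.mean(sample)   # 89.0
sd = statistics.pstdev(sample)   # ~20.0, inflated by the lower tail

print("median:", statistics.median(sample))            # 100.0
print("mean - 1 SD threshold:", round(mean - sd, 1))   # ~69.0

On a normal distribution this cutoff would mark roughly the bottom 16%; here it lands some 30 points below the median, so a score indicating substantial impairment still falls “within the normal range.”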

The authors clarified that the Lancet paper had not discussed “recovery” at all; they promised to address that issue in a future publication. But they did not explain why Chalder, at the press conference, had declared that patients got “back to normal.”

They also did not explain why they had not objected to the claim in the accompanying commentary, written by their colleagues and discussed with them pre-publication, that 30 percent of participants in the rehabilitative arms had achieved “recovery” based on a “strict criterion” —especially since that “strict criterion” allowed participants to get worse and still be “recovered.” Finally, they did not explain why, if the paper was not about “recovery,” they had not issued public statements to correct the apparently inaccurate news coverage that had reported how study participants in the graded exercise therapy and cognitive behavior therapy arms had “recovered” and gotten “back to normal.”

The authors acknowledged one error. They had described their source for the “normal range” for physical function as a “working-age” population rather than what it actually was–an “adult” population. (Unlike a “working-age” population, an “adult” population includes elderly people and is therefore less healthy. Had the PACE participants’ scores on the SF-36 physical function scale actually been compared to the SF-36 responses of the working-age subset of the adult population used as the source for the “normal range,” the percentages achieving the “normal range” threshold of this healthier group would have been even lower than the reported results.)

Yet The Lancet did not append a correction to the article itself, leaving readers completely unaware that it contained—and still contains–a mistake that involved a primary outcome and made the findings appear better than they actually were. (Lancet policy calls for correcting “any substantial error” and “any numerical error in the results, or any factual error in interpretation of results.”)

***

A 2012 paper in PLoS One, on financial aspects of the illness, included outcomes for some additional objective measures. Instead of a decrease in financial benefits received by those in the rehabilitative therapy arms, as would be expected if disabled people improved enough to increase their ability to work, the paper reported a modest average increase in the receipt of benefits across all the arms of the study. There were also no differences among the groups in days lost from work.

The investigators did not include the promised information on wages. They also had still not published the results of the self-paced step-test, described in the protocol as a measure of fitness.

In another finding, the PLoS One paper argued that the graded exercise and cognitive behavior therapies were the most cost-effective treatments from a societal perspective. In reaching this conclusion, the investigators valued so-called “informal” care—unpaid care provided by family and friends—at the replacement cost of a homecare worker. The PACE statistical analysis plan (approved in 2010 but not published until 2013) had included two additional, lower-cost assumptions. The first valued informal care at minimum wage, the second at zero compensation.

The PLoS One paper itself did not provide these additional findings, noting only that “sensitivity analyses revealed that the results were robust for alternative assumptions.”

Commenters on the PLoS One website, including Tom Kindlon, challenged the claim that the findings would be “robust” under the alternative assumptions for informal care. In fact, they pointed out, the lower-cost conditions would reduce or fully eliminate the reported societal cost-benefit advantages of the cognitive behavior and graded exercise therapies.

In a posted response, the paper’s lead author, Paul McCrone, conceded that the commenters were right about the impact that the lower-cost, alternative assumptions would have on the findings. However, McCrone did not explain or even mention the apparently erroneous sensitivity analyses he had cited in the paper, which had found the societal cost-benefit advantages for graded exercise therapy and cognitive behavior therapy to be “robust” under all assumptions. Instead, he argued that the two lower-cost approaches were unfair to caregivers because families deserved more economic consideration for their labor.

“In our opinion, the time spent by families caring for people with CFS/ME has a real value and so to give it a zero cost is controversial,” McCrone wrote. “Likewise, to assume it only has the value of the minimum wage is also very restrictive.”

In a subsequent comment, Kindlon chided McCrone, pointing out that he had still not explained the paper’s claim that the sensitivity analyses showed the findings were “robust” for all assumptions. Kindlon also noted that the alternative, lower-cost assumptions were included in PACE’s own statistical plan.

“Remember it was the investigators themselves that chose the alternative assumptions,” wrote Kindlon. “If it’s ‘controversial’ now to value informal care at zero value, it was similarly ‘controversial’ when they decided before the data was looked at, to analyse the data in this way. There is not much point in publishing a statistical plan if inconvenient results are not reported on and/or findings for them misrepresented.”
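
A toy costing example, with invented figures, shows why the choice of unit cost for informal care can make or break a “societal” cost-effectiveness claim:

# Hypothetical hourly valuations of informal care (not the PACE figures).
UNIT_COSTS = {"homecare replacement": 15.0, "minimum wage": 6.0, "zero": 0.0}

arms = {
    # (direct healthcare cost, hours of informal care) -- hypothetical
    "therapy arm":    (1200.0, 300),
    "comparison arm": (700.0, 340),
}

for label, rate in UNIT_COSTS.items():
    costs = {arm: hc + hours * rate for arm, (hc, hours) in arms.items()}
    diff = costs["therapy arm"] - costs["comparison arm"]
    print(f"{label:>20}: therapy arm is {diff:+.0f} vs comparison")

At the homecare rate, the therapy arm comes out cheaper overall; at minimum wage or zero valuation, the saving shrinks or reverses. That is precisely the pattern the commenters described: the headline conclusion survives only under the costliest assumption.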

***

The journal Psychological Medicine published the long-awaited findings on “recovery” in January 2013. In the paper, the investigators imposed a serious limitation on their construct of “recovery.” They now defined it as recovery solely from the most recent bout of illness—a health status generally known as “remission,” not “recovery.” The protocol definition included no such limitation.

In a commentary, Fred Friedberg, a psychologist in the psychiatry department at Stony Brook University and an expert on the illness, criticized the PACE authors’ use of the term “recovery” as inaccurate. “Their central construct…refers only to recovery from the current episode, rather than sustained recovery over long periods,” he and a colleague wrote. The term “remission,” they noted, was “less prone to misinterpretation and exaggeration.”

Tom Kindlon was more direct. “No one forced them to use the word ‘recovery’ in the protocol and in the title of the paper,” he said. “If they meant ‘remission,’ they should have said ‘remission.’” As with the release of the Lancet paper, when Chalder spoke of getting “back to normal” and the commentary claimed “recovery” based on a “strict criterion,” Kindlon believed the PACE approach to naming the paper and reporting the results would once again lead to inaccurate news reports touting claims of “recovery.”

In the new paper, the PACE investigators loosened all four of the protocol’s required criteria for “recovery” but did not mention which, if any, oversight committees approved this overall redefinition of the term. Two of the four revised criteria for “recovery” were the Lancet paper’s fatigue and physical function “normal ranges.” Like the Lancet paper, the Psychological Medicine paper did not point out that these “normal ranges”—now re-purposed as “recovery” thresholds–overlapped with the study’s entry criteria for disability, so that participants could already be “recovered” on one or both of these two indicators from the outset.

The four revised “recovery” criteria were:

*For physical function, “recovery” required a score of 60 or more. In the protocol, “recovery” required a score of 85 or more. At entry, a score of 65 or less was required to demonstrate enough disability to be included in the trial. This entry threshold of 65 indicated better health than the new “recovery” threshold of 60.

*For fatigue, a score of 18 or less out of 33 (on the fatigue scale, a higher score indicated more fatigue). In the protocol, “recovery” required a score of 3 or less out of 11 under the original scoring system. At entry, a score of at least 12 on the revised scale was required to demonstrate enough fatigue to be included in the trial. This entry threshold of 12 indicated better health than the new “recovery” threshold of 18.

*A score of 1 (“very much better”) or 2 (“much better”) out of 7 on the Clinical Global Impression scale. In the protocol, “recovery” required a score of 1 (“very much better”) on the Clinical Global Impression scale; a score of 2 (“much better”) was not good enough. The investigators made this change, they wrote, because “we considered that participants rating their overall health as ‘much better’ represented the process of recovery.” They did not cite references to justify their post-protocol reconsideration of the meaning of the Clinical Global Impression scale, nor did they explain when and why they changed their minds about how to interpret it.

*The last protocol requirement for “recovery”—not meeting any of the three case definitions used in the study–was now divided into less and more restrictive sub-categories. Presuming participants met the relaxed fatigue, physical function, and Clinical Global Impression thresholds, those who no longer met the Oxford criteria were now defined as having achieved “trial recovery,” even if they still met one of the other two case definitions, the CDC’s chronic fatigue syndrome case definition and the ME definition. Those who fulfilled the protocol’s stricter criteria of not meeting any of the three case definitions were now defined as having achieved “clinical recovery.” The authors did not explain when or why they decided to divide this category into two.

After these multiple relaxations of the protocol definition of “recovery,” the paper reported the full data for the less restrictive category of “trial recovery,” not the more restrictive category of “clinical recovery.” The authors found that the odds of “trial recovery” in the cognitive behavior therapy and graded exercise therapy arms were more than triple those in the adaptive pacing therapy and specialist medical care arms. They did not report having conducted any sensitivity analyses to measure the impact of all the changes in protocol definition of “recovery.”

They acknowledged that the “trial recovery” rate from the two rehabilitative treatments, at 22 percent in each group, was low. They suggested that increasing the total number of graded exercise therapy and cognitive behavior therapy sessions and/or bundling the two interventions could boost the rates.

***

Like the Lancet paper, the “recovery” findings received uncritical media coverage—and as Tom Kindlon feared, the news accounts did not generally mention “remission.” Nor did they discuss the dramatic changes in all four of the criteria from the original protocol definition of “recovery.” Not surprisingly, the report drew fire from patients and advocacy groups.

Commenters on the journal’s website and on patient and advocacy blogs challenged the revised definition for “recovery,” including the use of the overlapping “normal ranges” for fatigue and physical function as two of the four criteria. They wondered why the PACE authors used the term “recovery” at all, given the serious limitation they had placed on its meaning. They also noted that the investigators were ignoring the Lancet paper’s objective results from the six-minute walking test in assessing whether people had recovered, as well as the employment and benefits data from the PLoS One paper—all of which failed to support the “recovery” claims.

In their response, White and his colleagues defended their use of the term “recovery” by noting that they explained clearly what they meant in the paper itself. “We were careful to give a precise definition of recovery and to emphasize that it applied at one particular point only and to the current episode of illness,” they wrote. But they did not explain why, given that narrow definition, they simply did not use the standard term “remission,” since there was always the possibility that the word “recovery” would lead to misunderstanding of the findings.

Once again, they did not address or explain why the entry criteria for disability and the outcome criteria for the physical function and fatigue “normal ranges”—now redefined as “recovery” thresholds–overlapped. They again did not explain why they used the statistical formula to find “normal ranges” for normally distributed populations on samples that they knew were skewed. And they now disavowed the significance of objective measures they themselves had selected, starting with the walking test, which had been described as “an objective outcome measure of physical capacity” in the protocol.

“We dispute that in the PACE trial the six-minute walking test offered a better and more ‘objective’ measure of recovery,” they now wrote, citing “practical limitations” with the data.

For one thing, the researchers now explained that during the walking test, in deference to participants’ poor health, they did not verbally encourage them, in contrast to standard practice. For another, they did not have follow-up walking tests for more than a quarter of the sample, a significant data gap that they did not explain. (One possible explanation is that participants were too sick to do the walking test at all, suggesting that the findings might have looked significantly worse if they had included actual results from those missing subjects.)

Finally, the PACE investigators explained, they had only 10 meters of corridor space for conducting the test, rather than the standard of 30 to 50 meters–although they did not explain whether all six of their study centers around the country, or just some of them, suffered from this deficiency. “This meant that participants had to stop and turn around more frequently, slowing them down and thereby vitiating comparisons with other studies,” wrote the investigators.

This explanation raised further questions, however. The investigators had started assessing participants–and administering the walking-test–in 2005. Yet two years later, in the protocol published in BMC Neurology, they did not mention any comparison-vitiating problems; instead, they described the walking test as an “objective” measure of physical capacity. While the protocol itself was written before the trial started, the authors posted a comment on the BMC Neurology web page in 2008, in response to patient comments, that reaffirmed the six-minute walking test as one of “several objective outcome measures.”

In their response in the Psychological Medicine correspondence, White and his colleagues did not explain if they had recognized the walking test’s comparison-vitiating limitations by the time they published their protocol in 2007 or their comment on BMC Neurology’s website in 2008–and if not, why not.

In their response, they also dismissed the relevance of their employment and benefits outcomes, which had been described as “another more objective measure of function” in the protocol. “Recovery from illness is a health status, not an economic one, and plenty of working people are unwell, while well people do not necessarily work,” they now wrote. “In addition, follow-up at 6 months after the end of therapy may be too short a period to affect either benefits or employment. We therefore disagree…that such outcomes constitute a useful component of recovery in the PACE trial.”

In conclusion, they wrote in their Psychological Medicine response, cognitive behavior therapy and graded exercise therapy “should now be routinely offered to all those who may benefit from them.”

***

Each published paper fueled new questions. Patients and advocates filed dozens of freedom-of-information requests for PACE-related documents and data with Queen Mary University of London, White’s institutional home and the designated administrator for such matters.

How many PACE participants, patients wanted to know, were “recovered” according to the much stricter criteria in the 2007 protocol? How many participants were already “within the normal range” on fatigue or physical function when they entered the study? When exactly were the changes made to the assessment strategies promised in the protocol, what oversight committees approved them, and why?

Some requests were granted. One response revealed that 85 participants—or 13 percent of the total sample–were already “recovered” or “within the normal range” for fatigue or physical function even as they qualified as disabled enough for the study. (Almost all of these, 78 participants, achieved the threshold for physical function alone; four achieved it for fatigue, and three for both.)

But many other requests have been turned down. Anna Sheridan, a long-time patient with a doctorate in physics, requested data last year on how the patients deemed “recovered” by the investigators in the 2013 Psychological Medicine paper had performed on the six-minute walking test. Queen Mary University rejected the request as “vexatious.”

Sheridan asked for an internal review. “As a scientist, I am seeking to understand the full implications of the research,” she wrote. “As a patient, the distance that I can walk is of incredible concern…When deciding to undertake a treatment such as CBT and GET, it is surely not unreasonable to want to know how far the patients who have recovered using these treatments can now walk.”

The university re-reviewed the request and informed Sheridan that it was not, in fact, “vexatious.” But her request was again being rejected, wrote the university, because the resources needed to locate and retrieve the information “would exceed the appropriate limit” designated by the law. Sheridan appealed the university’s decision to the next level, the U.K. Information Commissioner’s Office, but was recently turned down.

The Information Commissioner’s Office also turned down a request from a plaintiff seeking meeting minutes for PACE oversight committees to understand when and why outcome measures were changed. The plaintiff appealed to a higher-level venue, the First-Tier Tribunal. The tribunal panel–a judge and two lay members—upheld the decision, declaring that it was “pellucidly clear” that release of the minutes would threaten academic freedom and jeopardize future research.

The tribunal panel defended the extensive protocol changes as “common to most clinical trials” and asserted that the researchers “did not engineer the results or undermine the integrity of the findings.” The panel framed the many requests for trial documents and data as part of a campaign of harassment against the researchers, and sympathetically cited the heavy time burdens that the patients’ demands placed on White. In conclusion, wrote the panel, the tribunal “has no doubt that properly viewed in its context, this request should have been seen as vexatious–it was not a true request for information–rather its function was largely polemical.”

To date, the PACE investigators have rejected requests to release raw data from the trial for independent analysis. Patients and other critics say the researchers have a particular obligation to release the data because the trial was conducted with public funds.

Since the Lancet publication, much media coverage of the PACE investigators and their colleagues has focused on what The Guardian has called the “campaign of abuse and violence” purportedly being waged by “militants…considered to be as dangerous and uncompromising as animal rights extremists.” In a news account in the BMJ, White portrayed the protestors as hypocrites. “The paradox is that the campaigners want more research into CFS, but if they don’t like the science they campaign to stop it,” he told the publication. While news reports have also repeated the PACE authors’ claims of treatment success and “recovery,” these accounts have not generally examined the study itself in depth or investigated whether patients’ complaints about the trial are valid.

Tom Kindlon has often heard these arguments about patient activists and says they are used to deflect attention away from the PACE trial’s flaws. “They’ve said that the activists are unstable, the activists have illogical reasons and they are unfair or prejudiced against psychiatry, so they’re easy to dismiss,” said Kindlon.

What patients oppose, he and others explain, is not psychiatry or psychiatrists, but being told that their debilitating organic disease requires treatments based on the hypothesis that they have false cognitions about it.

***

In January of this year, the PACE authors published their paper on mediators of improvement in The Lancet Psychiatry. Not surprisingly, they found that reducing participants’ presumed fears of activity was the main mechanism through which the rehabilitative interventions of graded exercise therapy and cognitive behavior therapy delivered their purported benefits. News stories about the findings suggested that patients with ME/CFS could get better if they were able to rid themselves of their fears of activity.

Unmentioned in the media reports was a tiny graph tucked into a page with 13 other tiny graphs: the results of the self-paced step-test, the fitness measure promised in the protocol. The small graph indicated no advantages for the two rehabilitative intervention groups on the step-test. In fact, it appeared to show that those in the other two groups might have performed better. However, the paper did not include the data on which the graph was based, and the graph was too small to extract any useful data from it.

After publication of the study, a patient filed a request to obtain the actual step-test results that were used to create the graph. Queen Mary University rejected the request as “vexatious.”

With the publication of the step-test graph, the study’s key “objective” outcomes—except for the still-unreleased data on wages—had now all failed to support the claims of “recovery” and treatment success from the two rehabilitative therapies. The Lancet Psychiatry paper did not mention this serious lack of support for the study’s subjective findings from all its key objective measures.

Some scientific developments since the 2011 Lancet paper–such as this year’s National Institutes of Health and Institute of Medicine panel reports, the Columbia University findings of distinct immune system signatures, further promising findings from Norwegian research into the anti-inflammatory drug pioneered by rheumatoid arthritis expert Jonathan Edwards, and a growing body of evidence documenting patients’ abnormal responses to activity–have helped shift the focus to biomedical factors and away from PACE, at least outside Great Britain.

In the U.K. itself, the Medical Research Council, in a modest shift, has awarded some grants for biomedical research, but the PACE approach remains the dominant framework for treatment within the national health system. Two years ago, the disparate scientific and political factions launched the CFS/ME Research Collaborative, conceived as an umbrella organization representing a range of views. At the collaborative’s inaugural two-day gathering in Bristol in September of 2014, many speakers presented on promising biomedical research. Peter White’s talk, called “PACE: A Trial and Tribulations,” focused on the response to his study from disaffected patients.

According to the conference report, White cited the patient community’s “campaign against the PACE trial” for recruitment delays that forced the investigators to seek more time and money for the study. He spoke about “vexatious complaints” and demands for PACE-related data, and said he had so far fielded 168 freedom-of-information requests. (He’d received a freedom-of-information request asking how many freedom-of-information requests he’d received.) This type of patient activity “damages” research efforts, he said.

Jonathan Edwards, the rheumatoid arthritis expert now working on ME/CFS, filed a separate report on the conference for a popular patient forum. “I think I can only describe Dr. White’s presentation as out of place,” he wrote. After White briefly discussed the trial outcomes, noted Edwards, “he then spent the rest of his talk saying how unreasonable it was that patients did not gratefully accept this conclusion, indicating that this was an attack on science…

“I think it was unfortunate that Dr. White suggested that people were being unreasonable over the interpretation of the PACE study,” concluded Edwards. “Fortunately nobody seemed to take offence.”