
Wednesday, September 7, 2016

Tribunal Orders Release of PACE Trial Data: Is This the End of an Error?


On August 16, the First-tier Tribunal (UK) ordered the release of the PACE trial data to Alem Matthees, marking the end of a two-year battle. (You can read the order HERE.)

Mr. Matthees is an Australian researcher and ME/CFS patient who has made repeated attempts under the Freedom of Information Act to obtain anonymized data from the PACE trial. Queen Mary University of London (QMUL) has managed to quash every request - until now.

In this historic ruling, the tribunal determined that:

1) The information Mr. Matthees requested is not personal, and therefore an exemption based on the possibility that people in the trial could be identified does not apply.

2) Because data are anonymized, invasion of privacy does not apply.

3) There is no indication that the release of anonymized data would discourage future research.

4) There is a strong public interest in releasing the data.

This last point is especially important, as it directly addresses the issue of transparency in research, a topic that has been much in the news lately.

In a 2005 article published in PLoS Medicine, Stanford professor John Ioannidis argued that most published research findings are false. There are a number of reasons why published findings turn out to be false, including outright plagiarism, conflicts of interest (especially where research is paid for by pharmaceutical companies), poor methodology, scientific malfeasance, false premises, and general incompetence.

In the case of the PACE trial, conflicts of interest led directly to scientific malfeasance. The conflict here stemmed from the unwillingness of the NHS to pay for treatment for ME patients. In comparison to treatments such as Ampligen, IVIG, and other immunotherapies, cognitive behavior therapy (CBT) and graded exercise therapy (GET) are relatively cheap to administer.

The PACE trial is not the first trial to claim that CBT and GET are beneficial for ME/CFS patients. Trudie Chalder, one of the principals in the PACE study, has been publishing articles touting the benefits of CBT and exercise for ME/CFS patients since 1989. Nor is she a stranger to faulty methodology: the statistics in some of those studies were questionable.

The PACE trial was the crowning glory of more than two decades of research for Chalder, as well as for several other psychiatrists involved in the study. While it is unlikely QMUL will spend any more money on challenging the tribunal's decision, it is equally unlikely that the PACE trial group will abandon its "research" into CBT and GET. In fact, a second, PACE-style study involving adolescents is already under way.

_______________________

Press Release from ME Action:

Thursday, 18th August 2016, London, UK - A tribunal has ruled that data from a treatment trial into Chronic Fatigue Syndrome (CFS) must be released, rejecting an appeal from Queen Mary University of London (QMUL).

PACE was a £5 million, publicly-funded clinical trial of exercise and cognitive behavioural therapy for CFS. It has been highly influential in determining treatment in the UK and abroad, but has been controversial. Academics and patients have both voiced concerns over “misleading” claims. Dr Richard Smith, former editor of the British Medical Journal, said in December 2015 of QMUL’s failure to release the data, “…the inevitable conclusion is that they have something to hide”.

QMUL spent over £200,000 on legal fees in this case, to appeal the Information Commissioner’s decision that they should release anonymised data from the trial. The request for data was made under the Freedom of Information Act by Mr Alem Matthees, to allow analysis of the data according to the study’s original published protocol.

QMUL made several arguments why the data should not be released, their main claims being that the data was personally identifiable information, and was not sufficiently anonymised. However, the tribunal rejected these arguments, noting that QMUL had already shared the data with a small selection of other scientists, stating, "In our view, they are tacitly acknowledging that anonymization is effective, or else they would be in breach of the consent agreement and the DPA principles."

The tribunal was satisfied that the data “...has been anonymised to the extent that the risk of identification is remote.” The tribunal also noted the "strong public interest in releasing the data given the continued academic interest" and "the seeming reluctance for Queen Mary University to engage with other academics they thought were seeking to challenge their findings."

In his correspondence with the court, Mr Matthees expressed “concerns that QMUL are restricting the registered researchers to whom they disclose the data upon request.” The tribunal said, “The evidence before us is not clear but if QMUL are cherry-picking who analyses their data from within the recognised scientific research sphere to only sympathetic researchers, there could be legitimate concerns that they wish to suppress criticism and proper scrutiny of their trial.”

In its submissions QMUL made a number of accusations of harassment from patients, while QMUL’s expert witness characterized PACE trial critics as "young men, borderline sociopathic or psychopathic", remarks the Information Commissioner dismissed as "wild speculations".

When pushed to provide evidence of these threats and harassment under cross examination, witnesses speaking for QMUL were unable to do so, and ultimately conceded that "no threats have been made either to researchers or participants."

The tribunal found QMUL's assessment of activist behaviour to be “grossly exaggerated”, stating that “the only actual evidence was that an individual at a seminar had heckled Professor Chalder.”
[Professor Chalder is a leading researcher in the PACE trial and a key witness for QMUL.]

Expert reaction to the decision

Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London

“I think this is the right decision and I congratulate Mr Matthees on persevering with a very reasonable request. The report indicates that the Tribunal considered arguments from both sides very thoroughly. It has become clear that the reasons given for not providing the information requested are essentially groundless. It is also clearly appreciated that critics of the PACE trial are not young sociopaths - they include senior medical scientists like myself, concerned about poor science!”

Bruce Levin, PhD
Professor and Past Chair
Department of Biostatistics
Columbia University
Mailman School of Public Health
722 West 168th Street
MSPH Box 12, Room 647
New York, NY 10032

“I am heartened by the Tribunal’s finding that the Commissioner had reached a correct decision in ordering release of anonymized data for the PACE trial. The Tribunal’s assessment that the perceived risks of data release were neither substantiated nor demonstrated in the evidence before them and that such minimum risk as had been expressed to them would not in their view outweigh the public interest in disclosure of the disputed information is quite important, not only for patients in this trial and around the world, but also because it underscores how essential transparency and open, critical review of clinical trials are to the scientific method.”

Keith Geraghty, PhD
Honorary Research Fellow
University of Manchester

"I read the tribunal decision with great interest. I was surprised that the PACE authors declared in
evidence that they had shared their trial data with other researchers. I contacted lead author Prof.
Peter White to request access to PACE data to run an independent analysis, but my request was first
ignored, then later refused. I now understand that the authors shared the data with a select few
academics who they picked to co-write papers, but they have failed to share the data with the
broader scientific community. Selectively sharing this publicly-funded data with collaborators but
refusing to share data with anyone else, is not in the best interests of patients or science, and it
creates a perception that the PACE team do not want independent critical analysis of this trial. I find
it regrettable that the Medical Research Council, who partly funded this very expensive study, did
not specify that the trial data be made available to other researchers.”

Dr Charles Shepherd
Hon Medical Advisor, ME Association

“The tribunal decision to firmly reject the QMUL case for not releasing anonymised PACE trial data will be widely welcomed by the ME/CFS patient community.

This means that there can now be an independent analysis of data from the PACE trial that has been used to support a number of conclusions and recommendations regarding the benefits of CBT and GET in ME/CFS that are just not consistent with patient evidence for these interventions.

Having attended the hearing, where a number of unsubstantiated and serious accusations were made against the patient community, I am pleased to see that this 'red herring' was also rejected by the tribunal. I hope that QMUL will now accept this judgement to release the data and do so without further delay, and that they will not spend any more public money on an appeal.”

David Tuller, DrPH, Investigative journalist and public health expert
University of California, Berkeley

"This decision is a thorough repudiation of the efforts by the PACE investigators to protect their
claims and findings from being exposed as utter nonsense. You don't actually need the data to
determine that the trial is a piece of garbage, but having the data at last will make it clear to
everyone. They will likely appeal, but they will ultimately lose."

Alem Matthees
Patient and Second Respondent
Australia

I am very pleased with this outcome. Both the Tribunal’s decision and commentary are a long overdue victory for the patient community, as well as for advocates of clinical trial transparency and open data sharing. I want to thank everyone who gave support, advice or assistance, as well as anyone who engaged in debate over the PACE trial and the sharing of clinical trial data. This case ended up costing me greatly in time, energy, and health (currently bedridden).

I utilised the FOIA to loosen the vice grip control over the data and allow truly independent and open analyses that do not rely on the approval of QMUL or the PACE trial investigators. All this came about largely because of their refusal to publish or release the protocol-specified outcomes, and their generally questionable and poorly or erroneously justified changes to the published trial protocol, i.e. outcome switching, after the trial was over and/or after seeing trial data. Claims of clinically significant improvement may be open to interpretation, but false or misleading claims of recovery or remission from debilitating illness simply have no place in the scientific literature.

Tom Kindlon
Information Officer
Irish ME/CFS Association

I hope Queen Mary University of London won't appeal again and cause more public money and resources to be spent on the case. Now that a court has ruled that the data is non-identifiable and that releasing it will not contravene agreements with trial participants, there is no good reason to continue to withhold it. If QMUL appeal, people may suspect this case was at least partly about trying to hide inconvenient results. Indeed, the tribunal decision notice itself raised the question of whether QMUL may wish to avoid proper scrutiny of their trial.

Patients want nothing more than to recover from this condition, so misleading claims about recovery rates are a particularly serious matter. Many are very sceptical of suggestions they can recover with talk therapy or by steadily increasing their levels of exercise. This is not their experience.

Extraordinary claims require extraordinary evidence but the researchers have not yet released such evidence: they revised all four aspects of the recovery criteria to make it much, much easier to be classed as recovered and have so far failed to provide valid justifications for these changes. Some of the PACE Trial investigators have conflicts of interest, such as doing work for insurance companies, which can make people concerned about bias.

This is a huge victory for patients, who have a right to examine the evidence for the treatments that affect their lives. I expect that the recovery rate will only be a small fraction of what the PACE researchers claimed, due to the dramatic changes they made to the criteria.

Jane Colby
Tymes Trust Executive Director

"Tymes Trust is pleased at the judge's ruling. We believe that, pending independent analysis of PACE
data, the MAGENTA (PACEstyle) study in children should be suspended immediately."

Leonard A. Jason, PhD
Professor of Psychology and Director
Center for Community Research
DePaul University
990 W. Fullerton Ave.
Suite 3100
Chicago, IL 60614

“I believe that an independent analysis of the controversial trial would be in the best interest of scientists, clinicians and patients.”

Thursday, February 4, 2016

HHS Ignores Request to Review PACE Trial

On February 3, 2016, a group of advocates wrote to the Agency for Healthcare Research and Quality (AHRQ) asking them to reconsider the inclusion of the PACE trial in their evidence review.

The AHRQ Evidence Review for ME/CFS formed the basis for the P2P report, in which GET and CBT were reported as beneficial treatments. This conclusion was based on the results of the PACE trial, a study which has been roundly criticized for its flawed methodology.

In November 2015, a group of U.S. organizations sent a letter to the U.S. Department of Health and Human Services (HHS) asking them to address concerns raised in a series of articles about the PACE trial by journalist David Tuller. Based on these concerns and the call by the National Institutes of Health (NIH) Pathways to Prevention report to retire the Oxford definition because it could “impair progress and cause harm,” the letter recommended the following steps as appropriate and necessary to protect patients:
  • The AHRQ revise its evidence review to reflect the issues with PACE and with studies using the Oxford case definition in general; 
  • The Centers for Disease Control and Prevention (CDC) remove findings based on PACE and other Oxford case definition studies from current and planned medical education; 
  • HHS use its leadership position to communicate these concerns to other medical education providers; 
  • HHS call for The Lancet to seek an independent reanalysis of PACE.
In the AHRQ’s response, the authors of the evidence review said that they had already considered some of the concerns raised by Tuller and that the additional information would not change the review’s conclusions. This is completely inconsistent with the published review. (The evidence review ranked PACE as a “good” study with “undetected” reporting bias.) AHRQ’s response failed to address the use of the Oxford case definition as the basis of clinical trials for ME/CFS patients. 

The CDC’s response stated that the IOM and P2P “have placed the findings of the PACE trial in an appropriate context for moving the field forward.” (That is bureaucratese for "we are doing nothing.") Like the AHRQ, the CDC failed to address the inclusion of studies based on the Oxford case definition.

HHS did not respond to the request to call on The Lancet to seek an independent review.

If you have not done so, please sign this petition calling for AHRQ and CDC to investigate the PACE trial.

__________________________________________

To: Dr. Arlene Bierman

CC: Dr. Suchitra Iyer, Dr. Wendy Perry

Subject: AHRQ response to community request on PACE and Oxford studies

Date: February 3, 2016

Thank you for your December 24 response to the November 15 patient community letter requesting that AHRQ and CDC investigate the concerns with the PACE trial raised by journalist Dr. David Tuller [1]. As you know, this issue is of paramount importance because of the risk of harm to patients from inappropriate treatment recommendations based on flawed studies.

The patient community has requested that AHRQ investigate Dr. Tuller’s concerns and then revise its Evidence Review in light of those concerns and of the problems with Oxford definition studies more broadly. The evidence review authors responded that the provided information would not change the conclusions of the report. We disagree.

First, while the authors acknowledge some of the problems with the PACE trial in the full evidence review posted on the AHRQ site, they did not report these problems in the article published in Annals [2], leaving the journal readers unaware of these issues. Of greater concern, in spite of recognizing these issues and stating that they are considered in rating the evidence, the authors still rated PACE as a “good” study with “undetected” reporting bias [3]. Such ratings are incompatible with the known flaws in this study and call into question the validity of the evidence-based methods used for such a controversial evidence base. Even based on just the information available at the time of AHRQ’s evidence review, the rating of this study and its subsequent impact on the overall treatment conclusions need to be reassessed.

Second, the patient community had also raised concerns with the inclusion of treatment recommendations based on Oxford studies. This problem was highlighted to AHRQ staff when the evidence review protocol was first issued [4]. The review itself acknowledged that the Oxford criteria are problematic because Oxford can include patients “with other fatiguing illnesses.” The Pathways to Prevention report stated that the Oxford criteria could “impair progress and cause harm” and called for it to be “retired.”

And yet, the evidence review made general conclusions about the benefits and harms of CBT and GET. For example, the report stated, “GET improved measures of function, fatigue, global improvement as measured by the clinical global impression of change score, and work impairment.” It also concluded that CBT resulted in improvement in physical function scores.

Notably, the evidence review did not qualify these conclusions on treatment effects by case definition. Such statements can reasonably be inferred to apply to all “ME/CFS” patients [5].

The obvious question is whether these conclusions would still be true if the Oxford studies had been removed and analyzed separately. In a reply to a published comment on the Annals article raising this issue, the authors acknowledged the importance of analyzing treatment benefits by case definition. They then stated that the improvement in physical function following CBT was seen in Oxford studies, but not in Fukuda studies [6]. However, this finding was not stated in the Annals article itself, nor did the article report other differences in benefits and harms by case definition. Given that Oxford is acknowledged to include patients with other diseases and given P2P’s call to retire Oxford, the failure to report Oxford findings separate from findings with other case definitions is a serious flaw of this evidence review. The study limitation statements do not compensate for this flaw.

This has real-world consequences for patients. The evidence review’s general conclusions about treatment benefits are already being incorporated into clinical guidelines. For instance, referencing AHRQ’s evidence review along with the PACE trial, UpToDate recommends CBT and GET for patients diagnosed by the IOM criteria [7]. But CBT and GET have been studied in Oxford cohorts, where they are used to reverse presumed deconditioning, fear of activity, and false beliefs of having an organic disease. Such treatments are obviously inappropriate for the disease that the IOM said is organic, not deconditioning, and characterized by a systemic intolerance to exertion [8]. Mixing and matching patient populations in both the evidence review and in clinical guidelines is of questionable medical ethics and creates a significant risk of harm for patients.

We strongly urge AHRQ to work with the evidence review authors to ensure that the PACE trial and its impact on the evidence review’s treatment recommendations is reassessed. Further, it is critical that the authors explicitly report findings for Oxford studies separately from findings in studies using other case definitions. Finally, to ensure that findings in one group of patients are not being harmfully applied to another group of patients, it is essential that the authors explicitly state that treatment conclusions based only on Oxford studies should not be applied to patients meeting other case definitions, particularly those that require post-exertional malaise.

This matter is of the utmost importance to patients. We hope you will give it the full attention that it deserves. Contact Mary Dimmock if you need additional clarification or background.

Signed

Massachusetts CFIDS/ME & FM Association
The Myalgic Encephalomyelitis Action Network (#MEAction)
MEadvocacy.org
New Jersey ME/CFS Association, Inc.
Open Medicine Foundation (OMF)
Phoenix Rising
Solve ME/CFS Initiative
Pandora Org
Wisconsin ME and CFS Association, Inc.
Mary Dimmock
Claudia Goodell
Denise Lopez-Majano (Speak Up About ME )
Matina Nicolson
Donna Pearson
Jennifer Spotila JD (OccupyCFS)
Meghan Shannon
Erica Verrillo (Onward Through the Fog)

[1] Patient organizations’ letter to AHRQ and CDC on PACE and Oxford definitions. November 15, 2015. https://dl.dropboxusercontent.com/u/89158245/CDC-AHRQ%20Request%20PACE%20Nov%202015.pdf

AHRQ response to November 15, 2015 letter. December 24, 2015. https://dl.dropboxusercontent.com/u/89158245/AHRQ%20response%20to%20PACE%20request%20-%20Dec%2024%202015.pdf

David Tuller. “TRIAL BY ERROR: The Troubling Case of the PACE Chronic Fatigue Syndrome Study.” Virology Blog. October 21-23, 2015.

First installment: http://www.virology.ws/2015/10/21/trial-by-error-i/

Second installment: http://www.virology.ws/2015/10/22/trial-by-error-ii/

Third installment: http://www.virology.ws/2015/10/23/trial-by-error-iii/

[2] Smith MB, Haney E, McDonagh M, Pappas M, Daeges M, Wasson N, et al. “Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: A Systematic Review for a National Institutes of Health Pathways to Prevention Workshop.” Ann Intern Med. 2015; 162: 841-850. http://dx.doi.org/10.7326/M15-0114

The full report is posted on the AHRQ site at this location: http://effectivehealthcare.ahrq.gov/ehc/products/586/2004/chronic-fatigue-report-141209.pdf

[3] Ibid.

• Appendix E: Quality Rating Criteria states that in a “good” study, for instance, “important outcomes are considered.” But as Tuller reported, outcomes were changed mid-trial, criteria were changed, and some outcomes/analyses were not reported.

• Appendix F: Strength of Evidence Criteria defines reporting bias, which it states includes: ”Study publication bias, i.e., nonreporting of the full study; Selective outcome reporting bias, i.e., nonreporting (or incomplete reporting) of planned outcomes or reporting of unplanned outcomes; and Selective analysis reporting bias, i.e., reporting of one or more favorable analyses for a given outcome while not reporting other, less favorable analyses.”

• Appendix K: Strength of Evidence tables reported Reporting bias as “Undetected” in rows that included PACE, even when the only study covered in a particular outcome was PACE.

[4] The authors state that they correctly included the Oxford criteria because the protocol specified it. However, the problems with inclusion of Oxford were raised immediately after the protocol was issued on May 1, 2014.

• Spotila, J. “Protocol for Disaster.” OccupyCFS. May 3, 2014. http://www.occupycfs.com/2014/05/02/protocol-for-disaster/

• M. Dimmock contacted Dr. Beth Collins Sharpe (of AHRQ and ex-officio to CFSAC at the time) by email on May 4, 2014 with this concern. On 6/4/2014, Collins Sharpe responded that she didn’t “think that the different diagnoses will be lumped together for analysis.” She added, “You’re right that it would be comparing Oxford apples to CCC oranges.” But then the authors lumped together all definitions in the analyses.

Additionally, this issue was raised directly with Dr. Collins:

• Spotila J, Dimmock M. Letter submitted to Dr. Francis Collins, Director of NIH, regarding Pathways to Prevention Workshop on ME/CFS.” May 28, 2014.

https://dl.dropboxusercontent.com/u/89158245/Dr%20Collins%20P2P%20Letter%20Public%20052814%20w%20attachments.pdf

Attachment 2 (page 20) discussed the Review Protocol and stated: “At least one case definition [Oxford] requires no more than unexplained fatigue for a diagnosis of CFS, despite the mounting evidence that such case definitions capture a different study population than definitions that require post-exertional malaise, cognitive dysfunction or other multisystem impairments.”

[5] UpToDate is one example where the conclusions of this evidence review have been interpreted as applying to all patients.

It is important to note that PACE trial publications have said that the PACE trial findings also apply to patients defined by the CDC CFS criteria and the London ME criteria. But this patient characterization was done after first selecting patients by Oxford. As Dr. Bruce Levin said in Dr. Tuller’s series on PACE, it is not correct to extrapolate from a subgroup selected from a group of Oxford patients to ME patients as a class. Further, the PACE recovery publication noted that CDC CFS criteria had been modified to require symptoms for just one week and that these modifications could result in inaccurate patient characterizations. The impact of the PACE modifications to the ME criteria is unknown.

• White PD, Goldsmith K, Johnson AL, Chalder T, Sharpe M, PACE Trial Management Group. “Recovery from chronic fatigue syndrome after treatments given in the PACE trial.” Psychol Med. October 2013; 43(10): 2226-2235. PMID: 23363640. http://dx.doi.org/10.1017/S0033291713000020

[6] Smith MB, Haney E, McDonagh M, Pappas M, Daeges M, Wasson N, et al. “Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: A Systematic Review for a National Institutes of Health Pathways to Prevention Workshop.” Ann Intern Med. 2015; 162: 841-850. http://dx.doi.org/10.7326/M15-0114

In the comments section, the authors stated, “Dr. Chu’s comment regarding the importance of analyzing data based on case definitions used for inclusion to trials is consistent with our approach. For example, in the trials of cognitive behavioral therapy (CBT) using the SF-36 physical function item as an outcome measure, the two studies using Oxford criteria indicated improvement, while the two using CDC criteria reported no improvement.”

[7] UpToDate clinical guidelines, updated in July 2015, include:

• Gluckman, Stephen. “Clinical features and diagnosis of chronic fatigue syndrome (systemic exertion intolerance disease).” UpToDate. Deputy Editor Park, L. Last updated July 30, 2015. Literature review current through August 2015. http://www.uptodate.com/contents/clinical-features-and-diagnosis-of-chronic-fatigue-syndrome-systemic-exertion-intolerance-disease#H15

• Gluckman, Stephen. “Treatment of chronic fatigue syndrome (systemic exertion intolerance disease).” UpToDate. Deputy Editor Lee, P. Last updated July 30, 2015. Literature review current through August 2015. http://www.uptodate.com/contents/treatment-of-chronic-fatigue-syndrome-systemic-exertion-intolerance-disease?source=see_link

[8] Tucker, M. “IOM Gives Chronic Fatigue Syndrome a New Name and Definition.” Medscape Multispecialty. February 10, 2015. http://www.medscape.com/viewarticle/839532

Dr. Clayton, IOM panel chair, noted, "The level of response is much more than would be seen with deconditioning," with reference to the belief voiced by some clinicians that physical abnormalities in these patients are merely a result of their lack of activity.

Wednesday, January 27, 2016

The Answer is No, A Thousand Times No

Last month, four respected researchers asked for data from the PACE trial. This is the umpteenth request, and, predictably, it was denied.

But this time the reason given was not that 1) the request was "vexatious," 2) the trial participants might somehow be harmed, 3) it might infringe on intellectual property rights, or 4) the study might be criticized.

None of the above. Their excuse this time was that "participants may be less willing to participate in a planned feasibility follow up study." In other words, if people with ME/CFS knew how bad the PACE trial really was, they might not be willing to participate in another trial.

Imagine a situation in which a drug is administered to a group of ill people, who then become more ill. But the authors of the trial hide the data and claim that the ill people appear to benefit. When asked for the data they refuse, because they don't want participants to know the drug is harmful in case they do future studies.

How illegal would that be?

The PACE trial authors have no moral compass. They are planning on rehashing their study endlessly to milk it for all it's worth. They will continue to spin their "results" until someone in authority puts a stop to it.

__________________________


At least we’re not vexatious

19 JANUARY 2016, Virology Blog


On 17 December 2015, Ron Davis, Bruce Levin, David Tuller and I requested trial data from the PACE study of treatments for ME/CFS published in The Lancet in 2011. Below is the response to our request from the Records & Compliance Manager of Queen Mary University of London. The bolded portion of our request, noted in the letter, is the following: “we would like the raw data for all four arms of the trial for the following measures: the two primary outcomes of physical function and fatigue (both bimodal and Likert-style scoring), and the multiple criteria for “recovery” as defined in the protocol published in 2007 in BMC Neurology, not as defined in the 2013 paper published in Psychological Medicine. The anonymized, individual-level data for “recovery” should be linked across the four criteria so it is possible to determine how many people achieved “recovery” according to the protocol definition.”
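
To make the shape of the request concrete, here is a minimal sketch in Python of what such an anonymized, individual-level table might look like. All column names and values are hypothetical illustrations, not the actual PACE dataset or its variable names.

# One row per participant: trial arm, the two primary outcomes (with both
# fatigue scorings), and the four protocol "recovery" criteria, linked at
# the individual level. Everything here is illustrative, not PACE data.
rows = [
    {"id": 1, "arm": "CBT", "sf36_pf": 70, "cfq_bimodal": 4, "cfq_likert": 18,
     "crit_fatigue": True, "crit_function": True, "crit_cgi": False, "crit_case": True},
    {"id": 2, "arm": "APT", "sf36_pf": 45, "cfq_bimodal": 9, "cfq_likert": 27,
     "crit_fatigue": False, "crit_function": False, "crit_cgi": False, "crit_case": False},
]

# Protocol-defined "recovery" requires meeting all four criteria at once,
# which is why the request asks for the criteria linked per individual:
criteria = ("crit_fatigue", "crit_function", "crit_cgi", "crit_case")
recovered = [r["id"] for r in rows if all(r[k] for k in criteria)]
print(recovered)  # [] here: participant 1 meets three criteria but not all four

The linkage matters because per-criterion percentages alone cannot reveal how many individuals satisfied all four criteria simultaneously.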

_________________________________

Dear Prof. Racaniello

Thank you for your email of 17th December 2015. I have bolded your request below, made under the Freedom of Information Act 2000.

You have requested raw data, linked at an individual level, from the PACE trial. I can confirm that QMUL holds this data but I am afraid that I cannot supply it. Over the last five years QMUL has received a number of similar requests for data relating to the PACE trial. One of the resultant refusals, relating to Decision Notice FS50565190, is due to be tested at the First-tier Tribunal (Information Rights) during 2016. We believe that the information requested is similarly exempt from release in to the public domain. At this time, we are not in a position to speculate when this ongoing legal action will be concluded.

Any release of information under FOIA is a release to the world at large without limits. The data consists of (sensitive) personal data which was disclosed in the context of a confidential relationship, under a clear obligation of confidence. This is not only in the form of explicit guarantees to participants but also since this is data provided in the context of medical treatment, under the traditional obligation of confidence imposed on medical practitioners. See generally, General Medical Council, ‘Confidentiality’ (2009) available at http://www.gmc-uk.org/guidance/ethical_guidance/confidentiality.asp The information has the necessary quality of confidence and release to the public would lead to an actionable breach.

As such, we believe it is exempt from disclosure under s.41 of FOIA. This is an absolute exemption.

The primary outcomes requested are also exempt under s.22A of FOIA in that these data form part of an ongoing programme of research.

This exemption is subject to the public interest test. While there is a public interest in public authorities being transparent generally and we acknowledge that there is ongoing debate around PACE and research in to CFS/ME, which might favour disclosure, this is outweighed at this time by the prejudice to the programme of research and the interests of participants. This is because participants may be less willing to participate in a planned feasibility follow up study, since we have promised to keep their data confidential and planned papers from PACE, whether from QMUL or other collaborators, may be affected.

On balance we believe that the public interest in withholding this information outweighs the public interest in disclosing it.


In accordance with s.17, please accept this as a refusal notice.

For your information, the PACE PIs and their associated organisations are currently reviewing a data sharing policy.

If you are dissatisfied with this response, you may ask QMUL to conduct a review of this decision. To do this, please contact the College in writing (including by fax, letter or email), describe the original request, explain your grounds for dissatisfaction, and include an address for correspondence. You have 40 working days from receipt of this communication to submit a review request. When the review process has been completed, if you are still dissatisfied, you may ask the Information Commissioner to intervene. Please see www.ico.org.uk for details.

Yours sincerely

Paul Smallcombe
Records & Information Compliance Manager

Wednesday, January 20, 2016

Trial By Error, Continued: More Nonsense from The Lancet Psychiatry


In David Tuller's most recent post on Virology Blog, he discusses the absurdity of the PACE authors' claims regarding bias.

To break it down:

1) The PACE trial team distributed a newsletter to the trial participants during the trial.

2) The newsletter contained glowing testimonials from patients and doctors about the success of their treatment, but did not specify which treatment was being administered.

3) The same newsletter stated that the government had endorsed GET and CBT as the best treatments.

4) The PACE trial authors claim that because they presented a positive view of ALL the treatments, this did not constitute bias, regardless of their pitch regarding government endorsement.

Imagine finding these statements in a newsletter.
"This treatment literally saved my life. I don't know what I would have done without it!" ~Jane Doe, patient.
"My patients have never felt better! From now on, I am going to give this treatment to all of my patients!" Dr. John Doe.
The FDA endorses Mutilen as the best treatment for insomnia based on all available evidence.
What reader isn't going to put two and two together?

I do not believe for one minute that the newsletter was an "amateurish" mistake on the part of the researchers. It was a painfully obvious attempt to glean positive results from a trial that had clearly failed.

No matter how you look at it, the PACE trial is a poster child for fraudulent research.
____________________

Reprinted with permission.

Trial By Error, Continued: More Nonsense from The Lancet Psychiatry

19 JANUARY 2016


By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.


The PACE authors have long demonstrated great facility in evading questions they don’t want to answer. They did this in their response to correspondence about the original 2011 Lancet paper. They did it again in the correspondence about the 2013 recovery paper, and in their response to my Virology Blog series. Now they have done it in their answer to critics of their most recent paper on follow-up data, published last October in The Lancet Psychiatry.

(They published the paper just a week after my investigation ran. Wasn’t that a lucky coincidence?)

The Lancet Psychiatry follow-up had null findings: Two years or more after randomization, there were no differences in reported levels of fatigue and physical function between those assigned to any of the groups. The results showed that cognitive behavior therapy and graded exercise therapy provided no long-term benefits because those in the other two groups reported improvement during the year or more after the trial was over. Yet the authors, once again, attempted to spin this mess as a success.

In their letters, James Coyne, Keith Laws, Frank Twisk, and Charles Shepherd all provide sharp and effective critiques of the follow-up study. I’ll let others tackle the PACE team’s counter-claims about study design and statistical analysis. I want to focus once more on the issue of the PACE participant newsletter, which they again defend in their Lancet Psychiatry response.

Here’s what they write: “One of these newsletters included positive quotes from participants. Since these participants were from all four treatment arms (which were not named) these quotes were [not]…a source of bias.”

Let’s recap what I wrote about this newsletter in my investigation. The newsletter was published in December 2008, with at least a third of the study’s sample still undergoing assessment. The newsletter included six glowing testimonials from participants about their positive experiences with the trial, as well as a seventh statement from one participant’s primary care doctor. None of the seven statements recounted any negative outcomes, presumably conveying to remaining participants that the trial was producing a 100% satisfaction rate. The authors argue that the absence of the specific names of the study arms means that these quotes could not be “a source of bias.”

This is a preposterous claim. The PACE authors apparently believe that it is not a problem to influence all of your participants in a positive direction, and that this does not constitute bias. They have repeated this argument multiple times. I find it hard to believe they take it seriously, but perhaps they actually do. In any case, no one else should. As I have written before, they have no idea how the testimonials might have affected anyone in any of the four groups—so they have no basis for claiming that this uncontrolled co-intervention did not alter their results.

Moreover, the authors now ignore the other significant effort in that newsletter to influence participant opinion: publication of an article noting that a federal clinical guidelines committee had selected cognitive behavior therapy and graded exercise therapy as effective treatments “based on the best available evidence.” Given that the trial itself was supposed to be assessing the efficacy of these treatments, informing participants that they have already been deemed to be effective would appear likely to impact participants’ responses. The PACE authors apparently disagree.

It is worth remembering what top experts have said about the publication of this newsletter and its impact on the trial results. “To let participants know that interventions have been selected by a government committee ‘based on the best available evidence’ strikes me as the height of clinical trial amateurism,” Bruce Levin, a biostatistician at Columbia University, told me.

My Berkeley colleague, epidemiologist Arthur Reingold, said he was flabbergasted to see that the researchers had distributed material promoting the interventions being investigated, whether they were named or not. This fact alone, he noted, made him wonder if other aspects of the trial would also raise methodological or ethical concerns.

“Given the subjective nature of the primary outcomes, broadcasting testimonials from those who had received interventions under study would seem to violate a basic tenet of research design, and potentially introduce substantial reporting and information bias,” he said. “I am hard-pressed to recall a precedent for such an approach in other therapeutic trials. Under the circumstances, an independent review of the trial conducted by experts not involved in the design or conduct of the study would seem to be very much in order.”

Wednesday, January 6, 2016

Trial By Error, Continued: Questions for Dr. White and his PACE Colleagues

David Tuller has been relentless in his pursuit of the truth.

He has doggedly, insistently, and ceaselessly sought answers from PACE authors about the validity of their results, and about their research methods.

The authors of the PACE trial have turned down his efforts to speak with them, refusing him as they have every other person who has requested access to their data.

Recently, David Tuller posted the questions he has attempted, unsuccessfully, to pose to the authors of the PACE trial. All of these are reasonable questions about methodology, trial results, intent, possible conflicts of interest, and trial participant rights. All of these questions should have been asked by the various journals that published the PACE trial results. All of these questions should have been posed by the institutions that sponsored the trial.

And all of these questions - Every. Single. One - should have been asked by the researchers themselves.

That is how responsible scientists behave.

_____________________________________

Reprinted with permission.

Trial By Error, Continued: Questions for Dr. White and his PACE Colleagues

4 JANUARY 2016, Virology Blog, By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

I have been seeking answers from the PACE researchers for more than a year. At the end of this post, I have included the list of questions I’d compiled by last September, when my investigation was nearing publication. Most of these questions remain unanswered.

The PACE researchers are currently under intense criticism for having rejected as “vexatious” a request for trial data from psychologist James Coyne—an action called “unforgivable” by Columbia statistician Andrew Gelman and “absurd” by Retraction Watch. Several colleagues and I have filed a subsequent request for the main PACE results, including data for the primary outcomes of fatigue and physical function and for “recovery” as defined in the trial protocol. The PACE team has two more weeks to release this data, or explain why it won’t.

Any data from the PACE trial will likely confirm what my Virology Blog series has already revealed: The results cannot stand up to serious scrutiny. But the numbers will not provide answers to the questions I find most compelling. Only the researchers themselves can explain why they made so many ill-advised choices during the trial.

In December 2014, after months of research, I e-mailed Peter White, Trudie Chalder and Michael Sharpe—the lead PACE researcher and his two main colleagues—and offered to fly to London to meet them. They declined to talk with me. In an email, Dr. White cited my previous coverage of the illness as a reason. (The investigators and I had already engaged in an exchange of letters in The New York Times in 2011, involving a PACE-related story I had written.) “I have concluded that it would not be worthwhile our having a conversation,” Dr. White wrote in his e-mail.

I decided to postpone further attempts to contact them for the story until it was near completion. (Dr. Chalder and I did speak in January 2015 about a new study from the PACE data, and I previously described our differing memories of the conversation.) In the meantime, I wrote and rewrote the piece and tweaked it and trimmed it and then pasted back in stuff that I’d already cut out. Last June, I sent a very long draft to Retraction Watch, which had agreed to review it for possible publication.

I still hoped Dr. White would relent and decide to talk with me. Over the summer, I drew up a list of dozens of questions that covered every single issue addressed in my investigation.

I had noticed the kinds of non-responsive responses Dr. White and his colleagues provided in journal correspondence and other venues whenever patients made cogent and incontrovertible points. They appeared to excel at avoiding hard questions, ignoring inconvenient facts, and misstating key details. I was surprised and perplexed that smart journal editors, public health officials, reporters and others accepted their replies without pointing out glaring methodological problems—such as the bizarre fact that the study’s outcome thresholds for improvement on its primary measures indicated worse health status than the entry criteria required to demonstrate serious disability.

So my list of questions included lots of follow-ups that would help me push past the PACE team’s standard portfolio of evasions. And if, as I suspected, I wouldn’t get the chance to pose the questions myself, I hoped the list would be a useful guide for anyone who wanted to conduct a rigorous interview with Dr. White or his colleagues about the trial’s methodological problems. (Dr. White never agreed to talk with me; I sent my questions to Retraction Watch as part of the fact-checking process.)

In September, Retraction Watch interviewed Dr. White in connection with my piece, as noted in a recent post about Dr. Coyne’s data request. Retraction Watch and I subsequently determined that we differed on the best approach and direction for the story. On October 21st to 23rd, Virology Blog ran my 14,000-word investigation.

But I still don’t have the answers to my questions.

*****************

List of Questions, September 1, 2015:

I am posting this list verbatim, although if I were pulling it together today I would add, subtract and rephrase some questions. (I might have misstated a statistical concept or two.) The list is by no means exhaustive. Patients and researchers could easily come up with a host of additional items. The PACE team seems to have a lot to answer for.

1) In June, a report commissioned by the National Institutes of Health declared that the Oxford criteria should be “retired” because the case definition impeded progress and possibly caused harm. As you know, the concern is that it is so non-specific that it leads to heterogeneous study samples that include people with many illnesses besides ME/CFS. How do you respond to that concern?

2) In published remarks after Dr. White’s presentation in Bristol last fall, Dr. Jonathan Edwards wrote: “What Dr White seemed not to understand is that a simple reason for not accepting the conclusion is that an unblinded trial in a situation where endpoints are subjective is valueless.” What is your response to Dr. Edwards’ position?

3) The December 2008 PACE participants’ newsletter included an article about the UK NICE guidelines. The article noted that the recommended treatments, “based on the best available evidence,” included two of the interventions being studied–CBT and GET. (The article didn’t mention that PACE investigator Jessica Bavington also served on the NICE guidelines committee.) The same newsletter included glowing testimonials from satisfied participants about their positive outcomes from the trial “therapy” and “treatment” but included no statements from participants with negative outcomes. According to the graph illustrating recruitment statistics in the same newsletter, about 200 or so participants were still slated to undergo one or more of their assessments after publication of the newsletter.

Were you concerned that publishing such statements would bias the remaining study subjects? If not, why not? A biostatistics professor from Columbia told me that for investigators to publish such information during a trial was “the height of clinical trial amateurism,” and that at the very least you should have assessed responses before and after disseminating the newsletter to ensure that there was no bias resulting from the statements. What is your response? Also, should the article about the NICE guidelines have disclosed that Jessica Bavington was on the committee and therefore playing a dual role?

4) In your protocol, you promised to abide by the Declaration of Helsinki. The declaration mandates that obtaining informed consent requires that prospective participants be “adequately informed” about “any possible conflicts of interest” and “institutional affiliations of the researcher.” In the Lancet and other papers, you disclosed financial and consulting ties with insurance companies as “conflicts of interest.” But trial participants I have interviewed said they did not find out about these “conflicts of interest” until after they completed the trial. They felt this violated their rights as participants to informed consent. One demanded her data be removed from the study after the fact. I have reviewed participant information and consent forms, including those from version 5.0 of the protocol, and none contain the disclosures mandated by the Declaration of Helsinki.

Why did you decide not to inform prospective participants about your “conflicts of interest” and “institutional affiliations” as part of the informed consent process? Do you believe this omission violates the Declaration of Helsinki’s provisions on disclosure to participants? Can you document that any PACE participants were told of your “possible conflicts of interest” and “institutional affiliations” during the informed consent process?

5) For both fatigue and physical function, your thresholds for “normal range” (Lancet) and “recovery” (Psych Med) indicated a greater level of disability than the entry criteria, meaning participants could be fatigued or physically disabled enough for entry but “recovered” at the same time. Thirteen percent of the sample was already “within normal range” on physical function, fatigue or both at baseline, according to information obtained under a freedom-of-information request.

Can you explain the logic of that overlap? Why did the Lancet and Psych Med papers not specifically mention or discuss the implication of the overlaps, or disclose that 13 percent of the study sample were already “within normal range” on an indicator at baseline? Do you believe that such overlaps affect the interpretation of the results? If not, why not? What oversight committee specifically approved this outcome measure? Or was it not approved by any committee, since it was a post-hoc analysis?

6) You have explained these “normal ranges” as the product of taking the mean value +/- 1 SD of the scores of representative populations–the standard approach to obtaining normal ranges when data are normally distributed. Yet the values in both those referenced source populations (Bowling for physical function, Chalder for fatigue) are clustered toward the healthier ends, as both papers make clear, so the conventional formula does not provide an accurate normal range. In a 2007 paper, Dr. White mentioned this problem of skewed populations and the challenge they posed to calculation of normal ranges.

Why did you not use other methods for determining normal ranges from your clustered data sets from Bowling and Chalder, such as basing them on percentiles? Why did you not mention the concern or limitation about using conventional methods in the PACE papers, as Dr. White did in the 2007 paper? Is this application of conventional statistical methods for non-normally distributed data the reason why you had such broad normal ranges that ended up overlapping with the fatigue and physical function entry criteria?
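
A quick simulation shows why the conventional formula misbehaves on data like these. This is an illustrative Python sketch with made-up numbers, not the actual Bowling or Chalder datasets:

import numpy as np

rng = np.random.default_rng(0)

# Simulated SF-36-style scores (0-100): most of a general adult population
# clusters near the ceiling, with a long lower tail of people limited by
# ill health. The result is a heavily skewed, non-normal distribution.
healthy = np.clip(rng.normal(95, 6, size=85_000), 0, 100)
limited = rng.uniform(15, 80, size=15_000)
scores = np.concatenate([healthy, limited])

# Under normality, mean - 1 SD marks roughly the 16th percentile. With
# skewed data the long tail inflates the SD and drags the bound far below
# the score that actually cuts off the lowest-scoring 16%:
print("mean - 1 SD:        ", round(scores.mean() - scores.std()))  # ~69
print("actual 16th pctile: ", round(np.percentile(scores, 16)))     # ~82

On data like these, the mean - 1 SD bound counts scores as "within normal range" that sit well below where the vast majority of the population actually scores; a percentile-based bound would not.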

7) According to the protocol, the main finding from the primary measures would be rates of “positive outcomes”/”overall improvers,” which would have allowed for individual-level assessment. Instead, the main finding was a comparison of the mean performances of the groups–aggregate results that did not provide important information about how many got better or worse. Who approved this specific change? Were you concerned about losing the individual-level assessments?

8) The other two methods of assessing the primary outcomes were both post-hoc analyses. Do you agree that post-hoc analyses carry significantly less weight than pre-specified results? Did any PACE oversight committees specifically approve the post-hoc analyses?

9) The improvement required to achieve a “clinically useful benefit” was defined as 8 points on the SF-36 scale and 2 points on the continuous scoring for the fatigue scale. In the protocol, categorical thresholds for a “positive outcome” were designated as 75 on the SF-36 and 3 on the Chalder fatigue scale, so achieving that would have required an increase of at least 10 points on the SF-36 and 3 points (bimodal) for fatigue. Do you agree that the protocol measure required participants to demonstrate greater improvements to achieve the “positive outcome” scores than the post-hoc “clinically useful benefit”?

10) When you published your protocol in BMC Neurology in 2007, the journal appended an “editor’s comment” that urged readers to compare the published papers with the protocol “to ensure that no deviations from the protocol occurred during the study.” The comment urged readers to “contact the authors” in the event of such changes. In asking for the results per the protocol, patients and others followed the suggestion in the editor’s comment appended to your protocol. Why have you declined to release the data upon request? Can you explain why Queen Mary has considered requests for results per the original protocol “vexatious”?

11) In cases when protocol changes are absolutely necessary, researchers often conduct sensitivity analyses to assess the impact of the changes, and/or publish the findings from both the original and changed sets of assumptions. Why did you decide not to take either of these standard approaches?

12) You made it clear, in your response to correspondence in the Lancet, that the 2011 paper was not addressing “recovery.” Why, then, did Dr. Chalder refer at the 2011 press conference to the “normal range” data as indicating that patients got “back to normal”–i.e. they “recovered”? And since you had input into the accompanying commentary in the Lancet before publication, according to the press complaints commission, why did you not dissuade the writers from declaring a 30 percent “recovery” rate? Do you agree with the commentary that PACE used “a strict criterion for recovery,” given that in both of the primary outcomes participants could get worse and be counted as “recovered,” or “back to normal” in Dr. Chalder’s words?

13) Much of the press coverage focused on “recovery,” even though the paper was making no such claim. Were you at all concerned that the media was mis-interpreting or over-interpreting the results, and did you feel some responsibility for that, given that Dr. Chalder’s statement of “back to normal” and the commentary claim of a 30 percent “recovery” rate were prime sources of those claims?

14) You changed your fatigue outcome scoring method from bimodal to continuous mid-trial, but cited no references in support of this that might have caused you to change your mind since the protocol. Specifically, you did not explain that the FINE trial reported benefits for its intervention only in a post-hoc re-analysis of its fatigue data using continuous scoring.

Were the FINE findings the impetus for the change in scoring in your paper? If so, why was this reason not mentioned or cited? If not, what specific change prompted your mid-trial decision to alter the protocol in this way? And given that the FINE trial was promoted as the “sister study” to PACE, why were that trial and its negative findings not mentioned in the text of the Lancet paper? Do you believe those findings are irrelevant to PACE? Moreover, since the Likert-style analysis of fatigue was already a secondary outcome in PACE, why did you not simply provide both bimodal and continuous analyses rather than drop the bimodal scoring altogether?
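
For readers unfamiliar with the two scoring schemes: the Chalder fatigue scale has 11 items, each answered on a four-point scale, and the two methods weight the same answers differently. A minimal Python sketch with hypothetical answers:

# Hypothetical responses to the 11 Chalder fatigue scale items, coded 0-3.
responses = [2, 3, 1, 2, 2, 3, 2, 1, 2, 2, 3]

# Likert (continuous) scoring: each item contributes 0-3; total range 0-33.
likert_score = sum(responses)

# Bimodal scoring: each item contributes 0 or 1 (0,0,1,1); total range 0-11.
bimodal_score = sum(1 for r in responses if r >= 2)

print(likert_score, bimodal_score)  # 23 and 9 for these hypothetical answers

Switching from bimodal to continuous scoring mid-trial therefore changes both the granularity of the outcome and any pre-specified thresholds built on the bimodal scale.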

15)  The “number needed to treat” (NNT) for CBT and GET was 7, as Dr. Sharpe indicated in an Australian radio interview after the Lancet publication. But based on the “normal range” data, the NNT for SMC was also 7, since those participants achieved a 15% rate of “being within normal range,” accounting for half of the rate experienced under the rehabilitative interventions.

Is that what Dr. Sharpe meant in the radio interview when he said: “What this trial wasn’t able to answer is how much better are these treatments and really not having very much treatment at all”? If not, what did Dr. Sharpe mean? Wasn’t the trial designed to answer the very question Dr. Sharpe cited? Since each of the rehabilitative intervention arms as well as the SMC arm had an NNT of 7, would it be accurate to interpret the “normal range” findings as demonstrating that CBT and GET worked as well as SMC, but not any better?
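
The arithmetic behind the question, using the rates quoted above (a sketch of the calculation, not a reanalysis of trial data):

# Number needed to treat (NNT) = 1 / absolute difference in outcome rates.
rate_rehab = 0.30  # ~30% "within normal range" in the CBT and GET arms
rate_smc = 0.15    # 15% "within normal range" in the SMC arm

nnt_rehab_vs_smc = 1 / (rate_rehab - rate_smc)  # 1 / 0.15, about 7
nnt_smc_alone = 1 / rate_smc                    # also about 7, against a zero baseline

print(round(nnt_rehab_vs_smc, 1), round(nnt_smc_alone, 1))  # 6.7 6.7

This is why the question asks whether the "normal range" findings show CBT and GET performing no better on this metric than SMC alone.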

16) The PACE paper was widely interpreted, based on your findings and statements, as demonstrating that “pacing” isn’t effective. Yet patients describe “pacing” as an individual, flexible, self-help method for adapting to the illness. Would packaging and operationalizing it as a “treatment” to be administered by a “therapist” alter its nature and therefore its impact? If not, why not? Why do you think the evidence from APT can be extrapolated to what patients themselves call “pacing”? Also, given your partnership with Action4ME in developing APT, how do you explain the organization’s rejection of the findings in the statement it issued after the study was published?

17) In your response to correspondence in the Lancet, you acknowledged a mistake in describing the Bowling sample as a “working age” rather than “adult” population–a mistake that changes the interpretation of the findings. Comparing the PACE participants to a sicker group but mislabeling it a healthier one makes the PACE results look better than they were; the percentage of participants scoring “within normal range” would clearly have been even lower had they actually been compared to the real “working age” population rather than the larger and more debilitated “adult” population. Yet the Lancet paper itself has not been corrected, so current readers are provided with misinformation about the measurement and interpretation of one of the study’s two primary outcomes.

Why hasn’t the paper been corrected? Do you believe that everyone who reads the paper also reads the correspondence, making it unnecessary to correct the paper itself? Or do you think the mistake is insignificant and so does not warrant a correction in the paper itself? Lancet policy calls for corrections–not mentions in correspondence–for mistakes that affect interpretation or replicability. Do you disagree that this mistake affects interpretation or replicability?

18) In our exchange of letters in the NYTimes four years ago, you argued that PACE provided “robust” evidence for treatment with CBT and GET “no matter how the illness is defined,” based on the two sub-group analyses. Yet Oxford requires that fatigue be the primary complaint–a requirement that is not part of either of your other two sub-group case definitions. (“Fatigue” per se is not part of the ME definition at all, since post-exertional malaise is the core symptom; the CDC obviously requires “fatigue,” but not that it be the primary symptom, and patients can present with post-exertional malaise or cognitive problems as their “primary” complaint.)

Given that discrepancy, why do you believe the PACE findings can be extrapolated to others “no matter how the illness is defined,” as you wrote in the NYTimes? Is it your assumption that everyone who met the other two criteria would automatically be screened in by the Oxford criteria, despite the discrepancies in the case definitions?

19) None of the multiple outcomes you cited as “objective” in the protocol supported the subjective outcomes suggesting improvement, excluding the extremely modest increase in the six-minute walking test for the GET group. Does this lack of objective support for improvement and recovery concern you? Should the failure of the objective measures raise questions about whether people achieved any actual benefits or improvements in performance?

20) If the actometer was considered too much of a burden for patients to wear at the end of the trial, when presumably many of them would have improved, why wasn’t it too much of a burden at the beginning of the trial? In retrospect, given that your other objective findings failed, do you regret having made that decision?

21) In your response to correspondence after publication of the Psych Med paper, you mentioned multiple problems with the “objectivity” of the six-minute walking test that invalidated comparisons with other studies. Yet PACE started assessing people using this test when the trial began recruitment in 2005, and the serious limitations–the short corridors requiring patients to turn around more often than was standard, the decision not to encourage patients during the test, etc.–presumably became apparent quickly.

Why then, in the published protocol in 2007, did you describe the walking test as an “objective” measure of function? Given that the study had been assessing patients for two years already, why had you not already recognized the limitations of the test and realized that it was apparently useless as an objective measure? When did you actually recognize these limitations?

22) In the Psych Med paper, you described “recovery” as recovery only from the current episode of illness–a limitation of the term not mentioned in the protocol. Since this definition describes what most people would refer to as “remission,” not “recovery,” why did you choose to use the word “recovery”–in the protocol and in the paper–in the first place? Would the term “remission” have been more accurate and less misleading? Not surprisingly, the media coverage focused on “recovery,” not on “remission.” Were you concerned that this coverage gave readers and viewers an inaccurate impression of the findings, since few readers or viewers would understand that what the Psych Med paper examined was in fact “remission” and not “recovery,” as most people would understand the terms?

23) In the Psychological Medicine definition of “recovery,” you relaxed all four of the criteria. For the first two, you adopted the “normal range” scores for fatigue and physical function from the Lancet paper, with “recovery” thresholds lower than the entry criteria. For the Clinical Global Impression scale, “recovery” in the Psych Med paper required a 1 or 2, rather than just a 1, as in the protocol. For the fourth element, you split the single category of not meeting any of the three case definitions into two separate categories–one (“trial recovery”) less restrictive than the original definition proposed in the protocol (now renamed “clinical recovery”).

What oversight committee approved the changes in the overall definition of recovery from the protocol, including the relaxation of all four elements of the definition? Can you cite any references for your reconsideration of the CGI scale, and explain what new information prompted this reconsideration after the trial? Can you provide any references for the decision to split the final “recovery” element into two categories, and explain what new information prompted this change after the trial?
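
To make the scale of the relaxation concrete, here is a minimal sketch of the two “recovery” definitions side by side. The thresholds are those reported in published critiques of the trial; treat the exact values as illustrative assumptions rather than a definitive rendering.

    def recovered_per_protocol(fatigue_bimodal, sf36, cgi, meets_any_case_def):
        return (fatigue_bimodal <= 3         # Chalder fatigue, bimodal scale (0-11)
                and sf36 >= 85               # SF-36 physical function (0-100)
                and cgi == 1                 # CGI: "very much better" only
                and not meets_any_case_def)  # free of all three case definitions

    def recovered_psych_med(fatigue_likert, sf36, cgi, meets_oxford):
        return (fatigue_likert <= 18         # Chalder fatigue, Likert scale (0-33)
                and sf36 >= 60               # overlaps the trial entry criterion (<= 65)
                and cgi in (1, 2)            # "very much better" or "much better"
                and not meets_oxford)        # "trial recovery": Oxford criteria only

    # A participant scoring SF-36 = 60 was ill enough to enter the trial,
    # yet counts as "recovered" under the revised definition:
    print(recovered_psych_med(fatigue_likert=18, sf36=60, cgi=2, meets_oxford=False))  # True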

24) The Psychological Medicine paper, in dismissing the original “recovery” threshold of 85 on the SF-36, asserted that 50 percent of the population would score below this mean value and that it was therefore not an appropriate cut-off. But that statement conflates the mean and median values; given that this is not a normally distributed sample and that the median value is much higher than the mean in this population, the statement about 50 percent performing below 85 is clearly wrong.

Since the source populations were skewed and not normally distributed, can you explain this claim that 50 percent of the population would perform below the mean? And since this reasoning for dismissing the threshold of 85 is wrong, can you provide another explanation for why that threshold needed to be revised downward so significantly? Why has this erroneous claim not been corrected?
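
A small invented example shows why the claim fails. In a skewed sample the mean and median diverge, so nothing guarantees that 50 percent of scores fall below the mean:

    scores = [100, 100, 95, 95, 90, 90, 85, 60, 40, 20]  # invented, left-skewed

    mean = sum(scores) / len(scores)       # 77.5
    below = sum(s < mean for s in scores)  # 3
    print(below / len(scores))             # 0.3: only 30% fall below the mean

Here the median (90) sits well above the mean (77.5), which is exactly the situation described above for SF-36 scores in the source populations.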

25) What are the results, per the protocol definition of “recovery”?

26) The PLoS One paper reported that a sensitivity analysis found the societal cost-effectiveness findings for CBT and GET to be “robust” even when informal care was valued not at the replacement cost of a health-care worker but under alternative assumptions of minimum wage or zero pay. When readers challenged this claim, the lead author, Paul McCrone, agreed in his responses that changing the value placed on informal care would, in fact, change the outcomes. He then criticized the alternative assumptions for not adequately valuing the family’s caregiving work, even though those same assumptions had been included in the PACE statistical plan.

Why did the PLoS One paper include an apparently inaccurate sensitivity analysis that claimed the societal cost-effectiveness findings for CBT and GET were “robust” under the alternative assumptions, even though that wasn’t the case? And if the alternative assumptions were “controversial” and “restrictive,” as the lead author wrote in one of his posted responses, then why did the PACE team include them in the statistical plan in the first place?
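
The sensitivity-analysis point is easy to demonstrate with invented figures: the societal cost of informal care swings dramatically with the valuation assumption, so a claim of robustness across all three assumptions is hard to square with the arithmetic. The hours and hourly rates below are hypothetical.

    hours_of_informal_care = 500  # hypothetical hours of family caregiving
    other_costs = 2000.0          # hypothetical direct treatment and service costs, GBP

    valuations = {
        "replacement health-care worker": 15.0,  # assumed GBP/hour
        "minimum wage": 6.0,
        "zero pay": 0.0,
    }

    for label, rate in valuations.items():
        print(label, other_costs + hours_of_informal_care * rate)
    # replacement health-care worker 9500.0
    # minimum wage 5000.0
    # zero pay 2000.0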

Saturday, December 19, 2015

The PACE Wars Continue: A request for data, and a denial

The PACE wars are continuing, with James Coyne, David Tuller, Ronald W. Davis, Vincent R. Racaniello, Bruce Levin, and even a former editor of the BMJ on one side, and the authors of the PACE trial (along with their supporting institutions) on the other.

When the former editor of the British Medical Journal characterizes the refusal to release PACE data as "defending the indefensible," you know the tide is turning. (The BMJ published a long-term follow-up on PACE claiming continued benefits of CBT and GET.) Other researchers who, like James Coyne, are not directly connected to ME/CFS are also weighing in on the side of transparency.

In an effort to verify the claims made by PACE, four researchers have now asked for PACE data. Their request is motivated solely by the desire to get at the truth. Nothing could be more reasonable.

A similar request by James Coyne was denied on the basis that the data might be "misused" and patients' rights violated. However, in the same denial, the authors conceded that they had indeed released data, but only to researchers who had validated their study.

In short, the PACE authors seem to be saying that as long as you validate their research, patient rights will be protected.

This can't end well for the PACE team.

You can sign a petition to retract the PACE trial HERE.

__________________________________

A request for data from the PACE trial

Virology Blog, 17 DECEMBER 2015


Mr. Paul Smallcombe
Records & Information Compliance Manager
Queen Mary University of London
Mile End Road
London E1 4NS

Dear Mr Smallcombe:

The PACE study of treatments for ME/CFS has been the source of much controversy since the first results were published in The Lancet in 2011. Patients have repeatedly raised objections to the study’s methodology and results. (Full title: “Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial.”)

Recently, journalist and public health expert David Tuller documented that the trial suffered from many serious flaws that raise concerns about the validity and accuracy of the reported results. We cited some of these flaws in an open letter to The Lancet that urged the journal to conduct a fully independent review of the trial. (Dr. Tuller did not sign the open letter, but he is joining us in requesting the trial data.)

These flaws include, but are not limited to: major mid-trial changes in the primary outcomes that were not accompanied by the necessary sensitivity analyses; thresholds for “recovery” on the primary outcomes that indicated worse health than the study’s own entry criteria; publication of positive testimonials about trial outcomes and promotion of the therapies being investigated in a newsletter for participants; rejection of the study’s objective outcomes as irrelevant after they failed to support the claims of recovery; and the failure to inform participants about investigators’ significant conflicts of interest, and in particular financial ties to the insurance industry, contrary to the trial protocol’s promise to adhere to the Declaration of Helsinki, which mandates such disclosures.

Although the open letter was sent to The Lancet in mid-November, editor Richard Horton has not yet responded to our request for an independent review. We are therefore requesting that Queen Mary University of London provide some of the raw trial data, fully anonymized, under the provisions of the U.K.’s Freedom of Information law.

In particular, we would like the raw data for all four arms of the trial for the following measures: the two primary outcomes of physical function and fatigue (both bimodal and Likert-style scoring), and the multiple criteria for “recovery” as defined in the protocol published in 2007 in BMC Neurology, not as defined in the 2013 paper published in Psychological Medicine. The anonymized, individual-level data for “recovery” should be linked across the four criteria so it is possible to determine how many people achieved “recovery” according to the protocol definition.

We are aware that previous requests for PACE-related data have been rejected as “vexatious.” This includes a recent request from psychologist James Coyne, a well-regarded researcher, for data related to a subsequent study about economic aspects of the illness published in PLoS One—a decision that represents a violation of the PLoS policies on data-sharing.

Our request clearly serves the public interest, given the methodological issues outlined above, and we do not believe any exemptions apply. We can assure Queen Mary University of London that the request is not “vexatious,” as defined in the Freedom of Information law, nor is it meant to harass. Our motive is easy to explain: We are extremely concerned that the PACE studies have made claims of success and “recovery” that appear to go beyond the evidence produced in the trial. We are seeking the trial data based solely on our desire to get at the truth of the matter.

We appreciate your prompt attention to this request.

Sincerely,

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University

Bruce Levin, PhD
Professor of Biostatistics
Columbia University

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University

David Tuller, DrPH
Lecturer in Public Health and Journalism
University of California, Berkeley

Sunday, December 13, 2015

Scientific Misconduct and the PACE Trial

Dr. James Coyne has been outspoken in his criticism of the PACE trial. He has publicly described the study as "bad science" using "voodoo statistics" to cover up faulty methodology.

Recently, Dr. Coyne requested a release of the PACE trial data from King's College London, one of the sponsors of the study.

The answer Dr. Coyne received from King's College London (see below) is surprisingly honest. Their concern (as stated in the highlighted portions) is that they may be criticized for their research. In fact, King's College states quite explicitly that anyone who might be critical cannot have access to the data. They also cited an "active campaign" to discredit the PACE trial as justification for denying a perfectly legitimate request for data.

This letter exposes the increasingly obvious reason for refusing to release the PACE trial data: if the data were released, it would be revealed that the results were falsified. Considering that 19 investigators and multiple institutions participated in the trial, a public examination of the results would be deeply embarrassing. An exposure of outright fraud would also open them up to censure. Rather than face that possibility, the institutions in question have closed ranks, shouted "harassment," and invoked conspiracy theories.

In the world of science, arguing that data should be withheld because the findings might be disproved is not only unacceptable, it is ludicrous. The hallmark of scientific inquiry is reproducibility. If the results of a study cannot be reproduced, its findings are called into question.

Taking that idea one step further, this type of secrecy could be interpreted as outright misconduct. In a post on the London School of Economics blog, Nicole Janz argues that given how few researchers are willing to share data, "classifying data secrecy as misconduct may be a harsh, but necessary step." She cites political science guidelines that state that “researchers have an ethical obligation to facilitate the evaluation of their evidence based knowledge claims through data access, production transparency, and analytic transparency.”  In like fashion, the American Psychological Association has stated that "sharing data within the larger scientific community encourages a culture of openness and accountability in scientific research." [Emphasis mine]

Between conflicts of interest and lack of reproducibility, scientific integrity has been badly damaged over the past decade - to the point that most research findings are demonstrably false. The only way to stem the tide of jury-rigged results is to enact penalties. Any study whose authors refuse to share its data should be subject to an inquiry.

James Coyne has requested just that. He has asked that the PACE paper be "provisionally retracted" until the data are shared. Under the standards of good scientific conduct, nothing could be more reasonable.

You can sign the petition to retract the PACE trial here.
___________________________

Read the original letter here.

Governance & Legal Services
Information Management and Compliance
Room 2.32
Franklin Wilkins Building
150 Stamford Street
London
SE1 9NH
Tel: 020 7848 7816

Email: legal-compliance@kcl.ac.uk

Professor James Coyne
By email only to: jcoynester@gmail.com

11 December 2015

Dear Professor Coyne,

Request for information under the Freedom of Information Act 2000 (“the Act”)

Further to your recent request for information held by King’s College London, I am writing to confirm that the requested information is held by the university. The university is withholding the information in accordance with section 14(1) of the Act – Vexatious Request.

Your request
You initially requested the information in accordance with the data sharing policies of the Public Library of Science (PLOS). The university has decided to treat this as a request under section 1(1) of the Act, as the information is held by the university. We received your information request on 13 November 2015.

You requested the following information.

“I have read with interest your 2012 article in PLOS One, "Adaptive Pacing, Cognitive Behaviour Therapy, Graded Exercise, and Specialist Medical Care for Chronic Fatigue Syndrome..." 

I am interested in reproducing your empirical results, as well as conducting some additional exploratory sensitivity analyses. 

Accordingly, and consistent with PLOS journals' data sharing policies, I ask you to kindly provide me with a copy of the dataset in order to allow me to verify the substantive claims of your article through reanalysis. I can read files in SPSS, XLS[x], or any reasonable ASCII format.”

Our response

We have given careful consideration to your request and have determined that section 14(1) of the Act applies to your request.

The university has considered, in particular, the following guidance and decisions in coming to this conclusion:

  • Information Commissioner v Devon County Council and Dransfield [2012] UKUT 440 (AAC) (Dransfield)
  • John Mitchell v Information Commissioner EA/2013/0019 (Mitchell)
  • Decision of the Information Commissioner: FS50558352 – 18 March 2015
  • Guidance issued by the Information Commissioner’s Office

Background

The article referred to in the request is a cost-effectiveness analysis based on a medical paper titled “Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial”. This is more commonly known as the PACE trial. This is a collaborative project between researchers from Queen Mary University of London, Oxford University and King’s College London. This was a large scale randomised clinical trial testing treatments for chronic fatigue syndrome (CFS) also known as myalgic encephalomyelitis (ME).

The university considers itself to be a joint holder of the requested information with Queen Mary University of London.

It is acknowledged that the project has led to controversy. There have been significant efforts to publically discredit the trial following the release of the first article in the Lancet journal in 2011. Among other public campaigns, there is a Wikipedia page dedicated to criticisms of this project. The campaign has included deeply personal criticism to the researchers involved as well as significant comment around the decisions not to disclose data and information about the project.

This request

The university considers that there is a lack of value or serious purpose to your request. The university also considers that there is improper motive behind the request. The university considers that this request has caused and could further cause harassment and distress to staff.

The university considers that the motive and purpose behind this request is polemical. The university notes the view of the Information Commissioner in decision FS50558352 that the request in that case was ‘more focussed on attacking and attempting to discredit the trial than in obtaining useful information on the topic.’ The requested information relates to economic analysis undertaken by academic staff with considerable experience in this field. External sources were used as part of that analysis and the process took approximately one year to complete. We would expect any replication of data to be carried out by a trained Health Economist.

The university acknowledges the general principle that requests should be considered both applicant and motive ‘blind’. “However, the proper application of section 14 cannot side-step the question of the underlying rationale or justification for the request.” (Dransfield p9 para. 34). The university considers that it is entitled to take into account the wider dealings and publicity surrounding the project when considering the motive behind this request.

The active campaign to discredit the project has caused distress to the university’s researchers who hold legitimate concerns that they will be subject to public criticism and reputational damage. The researchers based at the university were aware of the criticism of the trial and of the comments that have been made publically. They were also aware of the numerous requests for information received by Queen Mary University of London. The researchers involved in this project are experienced and well-used to scrutiny of their work. However, since the receipt of the request, they have reported concern that they will be targeted with the same criticisms as their colleagues at Queen Mary University of London.

The university adopts the comments made in the Mitchell case at paragraphs 31-34 in relation to the need to protect academic freedom and protect academic staff who put forward ‘new ideas and controversial or unpopular opinions.’

In conclusion, the university considers that when applying a holistic approach, this request can properly be considered to be vexatious.

This completes the university’s response to your information request.

Your right to complain

If you are unhappy with the service you have received in relation to your information request or feel that it has not been properly handled you have the right to complain or request a review of our decision by contacting the Head of Information Management and Compliance within 60 days of the date of this letter.

Further information about our internal complaints procedure is available at the link below:
http://www.kcl.ac.uk/college/policyzone/assets/files/governance_and_legal/Freedom_of_Information_Policy_updated_Oct_%202011.pdf

In the event that you are not content with the outcome of your complaint you may apply to the Information Commissioner for a decision. Generally the Information Commissioner cannot make a decision unless you have exhausted the internal complaints procedure provided by King’s College London.

The Information Commissioner can be contacted at the following address:
The Information Commissioner’s Office
Wycliffe House
Water Lane
Wilmslow
Cheshire
SK9 5AF

Yours sincerely
Ben Daley
Information Compliance Manager

Monday, November 23, 2015

David Tuller Exposes Conflicts of Interest on PACE Team


David Tuller is on the warpath.

After the authors of the PACE trial brushed off Tuller's observation that conflicts of interest were not reported to trial participants, he did some research. Predictably, he found not only that three of the principal researchers had been hired as consultants by insurance companies, but that those companies' claims decisions drew directly on PACE findings and on recommendations made by PACE authors.

Corporate Collusion?

Conflicts of interest have become a major problem in the medical field. With pharmaceutical and insurance companies hiring physicians and researchers to conduct and evaluate basic research, the results of those studies have become increasingly unreliable.

It was the editor of The Lancet, Richard Horton, who observed that perhaps half of published research may simply be untrue. In an April 2015 editorial he stated:

“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.” [Emphasis mine]

The irony of this statement cannot be lost on those who have followed the PACE trial.

Some of the main players in the push to have ME/CFS labeled as a psychiatric illness have advised insurance companies. These companies have consistently used this medical advice and "research" to turn down disability and medical claims. (For an excellent article on this topic as it relates to ME/CFS, read Corporate Collusion?)

Michael Sharpe, a principal investigator for PACE, provided expert testimony and advice to several insurance companies, including UNUM Provident, Windsor Life, Bupa, Reassure, and Aegon UK.

Dr. Peter Manu, a psychiatrist employed as a consultant with Metlife, was instrumental in securing denials of disability payments to patients with CFS. In a chapter published in Disability and Chronic Fatigue Syndrome, he argued that patients with CFS are really suffering from psychiatric illnesses, but are misdiagnosed by physicians who express "strong convictions" that CFS is produced by an immune dysfunction, a virus, or "other physical cause."

Dr. Manu went on to write a book, The Psychopathology of Functional Somatic Syndromes: Neurobiology and Illness Behavior in Chronic Fatigue Syndrome, Fibromyalgia, Gulf War Illness, Irritable Bowel, and Premenstrual Dysphoria, in which he argues that all of the aforementioned are psychosomatic. (Manu drew liberally upon Sharpe to defend his statement that "most physicians have understood [these conditions] to be mental illnesses.") His recommendation was to treat these illnesses with antidepressants and CBT.

ME/CFS is not the only illness that has been the object of insurance company manipulation. As a case in point, the existence of Chronic Lyme disease has been consistently denied by the IDSA (Infectious Diseases Society of America), an organization whose ties to industry led to an investigation by the Connecticut Attorney General. The investigation concluded that "The conflicts of interest of the IDSA guidelines panel members were significant and represent the type of ties to industry and patents that are common to key academic researchers in the field."

David Tuller has done an excellent job of exposing some of the industry ties that formed the rationale for, and determined the results of, the PACE trial. It is up to us to insist on an investigation.

You can sign the petition to retract the PACE trial HERE.

You can sign the petition demanding an HHS investigation of the PACE trial HERE.

________________________________


THIS ARTICLE ORIGINALLY APPEARED ON VIROLOGY BLOG

Trial by Error, Continued: PACE Team’s Work for Insurance Companies Is “Not Related” to PACE. Really?

By David Tuller, Wednesday, November 18, 2015 at 06:45 AM EST

In my initial story on Virology Blog, I charged the PACE investigators with violating the Declaration of Helsinki, adopted in 1964 by the World Medical Association to protect human research subjects. The declaration mandates that scientists disclose "institutional affiliations" and "any possible conflicts of interest" to prospective trial participants as part of the process of obtaining informed consent.

The investigators promised in their protocol to adhere to this foundational human rights document, among other ethical codes. Despite this promise, they did not tell prospective participants about their financial and consulting links with insurance companies, including those in the disability sector. That ethical breach raises serious concerns about whether the "informed consent" they obtained from all 641 of their trial participants was truly "informed," and therefore legitimate.

The PACE investigators do not agree that the lack of disclosure is an ethical breach. In their response to my Virology Blog story, they did not even mention the Declaration of Helsinki or explain why they violated it in seeking informed consent. Instead, they defended their actions by noting that they had disclosed their financial and consulting links in the published articles, and had informed participants about who funded the research--responses that did not address the central concern.

"I find their statement that they disclosed to The Lancet but not to potential subjects bemusing," said Jon Merz, a professor of medical ethics at the University of Pennsylvania. "The issue is coming clean to all who would rely on their objectivity and fairness in conducting their science. Disclosure is the least we require of scientists, as it puts those who should be able to trust them on notice that they may be serving two masters."

In their Virology Blog response, the PACE team also stated that no insurance companies were involved in the research, that only three of the 19 investigators "have done consultancy work at various times for insurance companies," and that this work "was not related to the research." The first statement was true, but direct involvement in a study is of course only one possible form of conflict of interest. The second statement was false. According to the PACE team's conflict of interest disclosures in The Lancet, the actual number of researchers with insurance industry ties was four—along with the three principal investigators, physiotherapist Jessica Bavington acknowledged such links.

But here, I'll focus on the third claim--that their consulting work "was not related to the research." In particular, I'll examine an online article posted by Swiss Re, a large reinsurance company. The article describes a "web-based discussion group" held with Peter White, the lead PACE investigator, and reveals some of the claims-assessing recommendations arising from that presentation. White included consulting work with Swiss Re in his Lancet disclosure.

The Lancet published the PACE results in February 2011; the undated Swiss Re article appeared sometime within the following year or so. The headline: "Managing claims for chronic fatigue the active way." (Note that this headline uses "chronic fatigue" rather than "chronic fatigue syndrome," although chronic fatigue is a symptom common to many illnesses and is quite distinct from the disease known as chronic fatigue syndrome. Understanding the difference between the two would likely be helpful in making decisions about insurance claims.)

The Swiss Re article noted that the illness "can be an emotive subject" and then focused on the implications of the PACE study for assessing insurance claims. It started with a summary account of the findings from the study, reporting that the "active rehabilitation" arms of cognitive behavioral therapy and graded exercise therapy "resulted in greater reduction of patients' fatigue and larger improvement in physical functioning" than either adaptive pacing therapy or specialist medical care, the baseline condition. (The three intervention arms also received specialist medical care.)

The trial's "key message," declared the article, was that "pushing the limits in a therapeutic setting using well described treatment modalities is more effective in alleviating fatigue and dysfunction than staying within the limits imposed by the illness traditionally advocated by ‘pacing.'"

Added the article: "If a CFS patient does not gradually increase their activity, supported by an appropriate therapist, then their recovery will be slower. This seems a simple message but it is an important one as many believe that ‘pacing' is the most beneficial treatment."

This understanding of the PACE research—presumably based on information from Peter White's web-based discussion—was wrong. Pacing is not and has never been a "treatment." It is also not one of the "four most commonly used therapies," as the newsletter article declared, since it has never been a "therapy" either. It is a self-help method practiced by many patients seeking the best way to manage their limited energy reserves.

The PACE investigators did not test pacing. Instead, the intervention they dubbed "adaptive pacing therapy" was an operationalized version of "pacing" developed specifically for the study. Many patients objected to the trial's form of pacing as overly prescriptive, demanding and unlike the version they practiced on their own. Transforming an intuitive, self-directed approach into a "treatment" administered by a "therapist" was not a true test of whether the self-help approach is effective, they argued--with significant justification. Yet the Swiss Re article presented "adaptive pacing therapy" as if it were identical to "pacing."

The Swiss Re article did not mention that the reported improvements from "active rehabilitation" were based on subjective outcomes and were not supported by the study's objective data. Nor did it report any of the major flaws of the PACE study or offer any reasons to doubt the integrity of the findings.

The article next asked, "What can insurers and reinsurers do to assist the recovery and return to work of CFS claimants?" It then described the conclusions to be drawn from the discussion with White about the PACE trial—the "key takeaways for claims management."

First, Swiss Re advised its employees, question the diagnosis, because "misdiagnosis is not uncommon."

The second point was this: "It is likely that input will be required to change a claimant's beliefs about his or her condition and the effectiveness of active rehabilitation...Funding for these CFS treatments is not expensive (in the UK, around £2,000) so insurers may well want to consider funding this for the right claimants."

Translation: Patients who believe they have a medical disease are wrong, and they need to be persuaded that they are wrong and that they can get better with therapy. Insurers can avoid large payouts by covering the minimal costs of these treatments for patients vulnerable to such persuasion, given the right "input."

Finally, the article warned that private therapists might not provide the kinds of "input" required to convince patients they were wrong. Instead of appropriately "active" approaches like cognitive behavior therapy and graded exercise therapy, these therapists might instead pursue treatments that could reinforce claimants' misguided beliefs about being seriously ill, the article suggested.

"Check that private practitioners are delivering active rehabilitation therapies, such as those described in this article, as opposed to sick role adaptation," the Swiss RE article advised. (The PACE investigators, drawing on the concept known as "the sick role" in medical sociology, have long expressed concern that advocacy groups enabled patients' condition by bolstering their conviction that they suffered from a "medical disease," as Michael Sharpe, another key PACE investigator, noted in a 2002 UNUMProvident report. This conviction encouraged patients to demand social benefits and health care resources rather than focus on improving through therapy, Sharpe wrote.)

Lastly, the Swiss Re article addressed "a final point specific to claims assessment." A diagnosis of chronic fatigue syndrome, stated the article, provided an opportunity in some cases to apply a mental health exclusion, depending upon the wording of the policy. In contrast, a diagnosis of myalgic encephalomyelitis did not.

The World Health Organization's International Classification of Diseases (ICD), which clinicians and insurance companies use for coding purposes, categorizes myalgic encephalomyelitis as a neurological disorder that is synonymous with the terms "post-viral fatigue syndrome" and "chronic fatigue syndrome." But the Swiss Re article stated that, according to the ICD, "chronic fatigue syndrome" can also "alternatively be defined as neurasthenia which is in the mental health chapter."

The PACE investigators have repeatedly advanced this questionable idea. In the ICD's mental health section, neurasthenia is defined as "a mental disorder characterized by chronic fatigue and concomitant physiologic symptoms," but there is no mention of "chronic fatigue syndrome" as a discrete entity. The PACE investigators (and Swiss Re newsletter writers) believe that the neurasthenia entry encompasses the illness known as "chronic fatigue syndrome," not just the common symptom of "chronic fatigue."

This interpretation, however, appears to be at odds with an ICD rule that illnesses cannot be listed in two separate places—a rule confirmed in an e-mail from a WHO official to an advocate who had questioned the PACE investigators' argument. "It is not permitted for the same condition to be classified to more than one rubric as this would mean that the individual categories and subcategories were no longer mutually exclusive," wrote the official to Margaret Weston, the pseudonym for a longtime clinical manager in the U.K. National Health Service.

Presumably, after White disseminated the good news about the PACE results at the web-based discussion, Swiss Re's claims managers felt better equipped to help ME/CFS claimants. And presumably that help included coverage for cognitive behavior therapy and graded exercise therapy so that claimants could receive the critical "input" they needed in order to recognize and accept that they didn't have a medical disease after all.

In sum, contrary to the investigators' argument in their response to Virology Blog, the PACE research and findings appear to be very much "related to" insurance industry consulting work. The claim that these relationships did not represent "possible conflicts of interest" and "institutional affiliations" requiring disclosure under the Declaration of Helsinki cannot be taken seriously.

Update 11/17/15 12:22 PM: I should have mentioned in the story that, in the PACE trial, participants in the cognitive behavior therapy and graded exercise therapy arms were no more likely to have increased their hours of employment than those in the other arms. In other words, there was no evidence for the claims presented in the Swiss Re article, based on Peter White's presentation, that these treatments were any more effective in getting people back to work.

The PACE investigators published this employment data in a 2012 paper in PLoS One. It is unclear whether Peter White already knew these results at the time of his Swiss Re presentation on the PACE results.