Researchers at Monash University in Australia are calling on the global medical community to adopt their new six-step approach to detect and eliminate fraudulent studies before they are incorporated into meta-analyses, clinical guidelines, and “best practices.”
The RIGID (Research Integrity in Guidelines and evIDence) system is positioned as an international framework for addressing the massive worldwide problem of research fraud, which has grown to epidemic proportions in recent years. The RIGID guidelines were recently published in The Lancet’s eClinicalMedicine journal.
RIGID co-lead author Ben Mol, head of the Women’s Healthcare Research Group at Monash University’s Melbourne campus, estimates that roughly 25% of all studies that go into clinical practice guidelines are questionable at best and fraudulent at worst. The peer-review process at medical journals, he argues, is failing, and a great deal of bad research is getting into circulation.
Dr. Mol notes there were more than 10,000 published studies retracted by journals in 2023—a new, shameful record.
Retractions are important, but the process for getting a dodgy paper retracted is tedious, time-consuming, and inadequate for confronting the tidal wave of bad data flooding all clinical disciplines. And even when bad papers are retracted, they often live on in meta-analyses and systematic reviews, because the authors of these reviews seldom revise their papers to reflect subsequent retractions.
Dr. Mol and his colleagues argue that the international scientific community needs to do a much better job of proactively detecting and rejecting bogus studies before they are reified and disseminated via national and international clinical practice guidelines.
Six Key Steps
The RIGID guidelines define six steps for eliminating fraudulent science from practice guidelines and meta-analyses:
- Review: This is the standard systematic review process, which should be the starting point for any clinical review article, meta-analysis, or clinical practice consensus statement;
- Exclude: Studies that have been retracted should be categorically excluded; papers for which “expressions of concern” have been listed should be flagged for further evaluation;
- Assess: Remaining studies are assessed for integrity using an appropriate tool such as the Research Integrity Assessment (RIA) tool or Trustworthiness of Randomized Controlled Trials (TRACT) checklist and allocated an initial integrity risk rating of low, moderate, or high risk;
- Discuss: Integrity assessment results should be discussed among review committee members, with votes to determine final integrity risk ratings for each study under consideration;
- Establish contact: For studies rated moderate or high risk, reviewers should contact the authors for clarification;
- Reassess: Reviewers should reassess studies for inclusion using the RIGID author response algorithm. Questionable papers can be reclassified as ‘included’ when their primary authors have provided a satisfactory response, as ‘awaiting classification’ where authors have engaged but have not yet addressed the stated concerns, or as ‘not included,’ when authors do not respond to multiple contact attempts.
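The decision flow described in the six steps can be sketched as a small piece of Python. This is purely illustrative; the names (`Risk`, `AuthorResponse`, `classify_study`) and the exact branching are assumptions for the sake of the sketch, not the published RIGID author response algorithm:

```python
from enum import Enum
from typing import Optional


class Risk(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"


class AuthorResponse(Enum):
    SATISFACTORY = "satisfactory"  # concerns fully addressed
    ENGAGED = "engaged"            # authors replied, concerns still open
    NO_RESPONSE = "no_response"    # no reply after multiple contact attempts


def classify_study(retracted: bool, risk: Risk,
                   response: Optional[AuthorResponse]) -> str:
    """Illustrative sketch of the RIGID decision flow, not the official tool."""
    if retracted:
        return "excluded"          # Step 2: retracted papers are categorically excluded
    if risk is Risk.LOW:
        return "included"          # Steps 3-4: low integrity risk after committee vote
    # Steps 5-6: moderate/high risk triggers author contact and reassessment
    if response is AuthorResponse.SATISFACTORY:
        return "included"
    if response is AuthorResponse.ENGAGED:
        return "awaiting classification"
    return "not included"


# Example: a high-risk study whose authors never replied
print(classify_study(False, Risk.HIGH, AuthorResponse.NO_RESPONSE))  # prints "not included"
```

In practice these judgments are made by human review committees, of course; the sketch only makes explicit how the three possible author-response outcomes map onto the three final classifications.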
Ideally, the pre-publication peer-review process would be robust enough to catch and discard questionable studies before they’re published in journals of record. That would prevent bad data from ever entering meta-analyses and clinical guidelines. But it is obvious that we live in a far-from-ideal world.
Peer Review is Broken
The RIGID authors underscore the shortcomings of the peer-review process, noting that “The perpetuation of problematic research is underpinned by complex systemic shortcomings, including inadequate application of quality research reporting processes or detection systems; lack of time and resources to investigate claims; lack of incentives for journals, institutions and whistle-blowers; and barriers around reputational or legal implications.”
This means that the task of sorting out good from bad research, unfortunately, falls on the clinical community, as well as on healthcare policymakers. RIGID is a first step toward providing a systematic framework for protecting clinical guidelines from contamination by misleading data.
Proof of Concept
Last year, the RIGID review system was put to the test as part of the development process for the International Evidence-Based Guidelines for Polycystic Ovary Syndrome, published by the Royal College of Obstetricians & Gynaecologists. The development of these guidelines involved 39 ob/gyn societies and over 80 multidisciplinary experts who evaluated more than 6,000 pages of published studies.
Dr. Mol, who was on the development team, reported that roughly 45% of all the studies under review for potential inclusion in the guidelines were rated moderate to high risk for lack of scientific integrity.
“That’s a shockingly high number. Those potentially untrustworthy papers might have completely skewed the guidelines,” Dr. Mol stated in an article on the MedicalXpress website.
Bad guidelines can lead to bad clinical practice. Authoritative guidelines such as the international PCOS guidelines are widely read and widely referenced. They influence medical decision-making and often factor into large-scale healthcare policy changes.
Helena Teede, a co-author of the RIGID framework, says that it has now been embraced by other guideline development committees in the women’s health field, including the Premature Ovarian Insufficiency (POI) International Guideline and the Australian adaptation of the European Society of Human Reproduction and Embryology (ESHRE) Unexplained Infertility Guideline.
A Worldwide Problem
The Monash team are certainly not alone in calling attention to the problem of fraudulent research. Back in 2021, Richard Smith, a former editor at the British Medical Journal, posted an editorial stating that practitioners, scientists, policymakers, and the general public should “stop assuming that research actually happened and is honestly reported, and assume that the research is fraudulent until there is some evidence to support it having happened and been honestly reported.”
Many other prominent scientific and medical leaders have voiced similar opinions.
The RIGID authors note that gaps in research integrity are not always intentional. Problems sometimes arise through accident or error, inaccurate analysis, lack of oversight, or lack of experience. That said, one need not look far to find examples of data fabrication, falsification, manipulation, and plagiarism. Retraction Watch, an independent scientific watchdog group, does an excellent job of chronicling scientific malfeasance across many disciplines.
In their paper, the RIGID authors stress that, “Integrity in scientific research is predicated on the four pillars of honesty, accuracy, efficiency and objectivity. Research integrity is particularly important in fields such as medicine, where results from clinical studies…directly inform clinical guidelines, in turn shaping routine patient care.” They add that “RCTs with concerns around integrity can compromise patient care, both directly through unnecessary or harmful treatments, or indirectly through wasted resources and misdirected future medical research.”
The RIGID framework is not a fail-safe solution to the publication and proliferation of fraudulent or questionable science. But if widely embraced, it could go a long way toward ensuring that clinical practice guidelines are not contaminated by dodgy data.
END