Reducing Research Waste in (Health-Related) Quality of Life Research
What is research waste?
Research waste refers to the avoidable loss of research value caused by the inappropriate conduct and dissemination of research. As identified by Altman in 1994, the causes of research waste include:
inappropriate designs, unrepresentative samples, small samples, incorrect methods of analysis, and faulty interpretation.
(as cited in Glasziou and Chalmers (2018))
How can we reduce research waste in health-related quality of life research?
A special section titled "Reducing research waste in (health-related) quality of life research" was published in the journal Quality of Life Research in July 2022.
This section contains eight papers that consider research waste under three headings aligned with key stages of research production:
- Research design and conduct;
- Publication and reporting; and
- Usability of results (3).
These papers also provide resources, recommendations, and possible solutions for reducing research waste while maximising the value of patient-reported outcome (PRO) data. We highlight an article of interest under each of these headings that addresses research waste in studies using PRO measures (PROMs).
Careful sample size planning in HRQL studies can reduce research waste
POWER(FUL) MYTHS: MISCONCEPTIONS REGARDING SAMPLE SIZE IN QUALITY OF LIFE RESEARCH (4)
Adequate sample sizes are required for a study to have a reasonably high chance of detecting a meaningful effect on HRQL. However, some misconceptions pertaining to sample size can result in the collection of data from too many or too few participants, thus contributing to research waste. These misconceptions include:
- researchers should use rules of thumb or the largest sample size possible,
- sample size planning should always focus on power,
- planned power = actual power,
- there is only one level of power per study, and
- power is only relevant for the individual researcher.
The article discusses non-technical corrections to these misconceptions and includes a sample size reporting checklist to help researchers take a nuanced approach to sample size planning and so minimise research waste. Tutorials and additional guidance are provided for the (a) Minimally Interesting Effect Size (or Minimally Important Difference), (b) literature review, and (c) Accuracy in Parameter Estimation (AIPE) approaches commonly used in sample size planning.
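As a rough illustration of how these approaches can lead to quite different answers, the sketch below contrasts a power-based calculation anchored on a Minimally Important Difference with an AIPE-style precision calculation for a two-group HRQL comparison. It is not taken from Anderson's article: the MID of 5 points on a 0-100 scale, the SD of 15, and the use of Python's statsmodels and scipy are hypothetical planning assumptions chosen purely for illustration.

```python
# Minimal sketch (hypothetical planning values, not from the cited article):
# comparing power-based and AIPE-style sample size planning for two groups.
from math import ceil
from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower

# Assumed planning values: MID of 5 points on a 0-100 scale, SD of 15,
# giving a standardised effect size of about 0.33.
mid, sd = 5.0, 15.0
effect_size = mid / sd

# Power-based approach: n per group to detect the MID with 80% power
# in a two-sided, two-sample t-test at alpha = 0.05.
n_power = TTestIndPower().solve_power(effect_size=effect_size,
                                      alpha=0.05, power=0.80,
                                      alternative='two-sided')

# AIPE-style approach: n per group so that the 95% CI for the group
# difference has a half-width no larger than the MID (normal approximation).
z = norm.ppf(0.975)
n_aipe = 2 * (z * sd / mid) ** 2

print(f"Power-based n per group: {ceil(n_power)}")  # roughly 143
print(f"AIPE-based n per group:  {ceil(n_aipe)}")   # roughly 70
```

Under these assumptions the two rationales imply very different studies, which is exactly why the article argues that sample size planning should be justified against the study's actual goal rather than defaulting to power alone or to rules of thumb.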
Clear reporting of PROs can maximise research impact
KNOWLEDGE TRANSLATION CONCERNS FOR THE CONSORT-PRO EXTENSION REPORTING GUIDANCE: A REVIEW OF REVIEWS (5)
The CONsolidated Standards Of Reporting Trials (CONSORT)-PRO extension provides guidelines specifically related to the PRO data that should be included in clinical trial publications.
This systematic review examined how previous reviews have used the CONSORT-PRO extension as a tool to evaluate the quality of PRO reporting. Most of the included reviews omitted or substantially modified CONSORT-PRO items in their evaluations. This may have led readers to believe that certain CONSORT-PRO items are non-essential or optional, which is not the case.
The authors recommend that future studies using the CONSORT-PRO guidance include all CONSORT-PRO items (elaborations and extensions) to improve the completeness and transparency of PRO reporting. Further strategies to improve PRO reporting are also recommended, including registering trials and facilitating data sharing.
It is better to recycle an existing PROM than create a new one
USING VALIDITY THEORY AND PSYCHOMETRICS TO EVALUATE AND SUPPORT EXPANDED USES OF EXISTING SCALES (6)
Many studies have requirements that can’t be met by an existing PROM.
This article discusses how validity theory and psychometrics can be used to decide whether it might be appropriate to adapt an existing PROM for a new use, rather than create a new one. Four examples are presented, including changing the mode of administration (e.g. from a paper-and-pen measure to an electronically administered version) and adapting a "general" PROM to capture disease-specific aspects.
The authors propose that when modifying or adapting a measure, we must first determine the nature of the proposed modification, then use a validity theory framework to assess the existing validity evidence and identify what logical gaps are created by using the PROM scores in the new, proposed way. Doing so builds consensus on how and when to select, modify, or develop a measure, and provides a structure for generating and evaluating validity evidence.
References:
1. Altman DG. The scandal of poor medical research. BMJ. 1994;308(6924):283-4.
2. Glasziou P, Chalmers I. Research waste is still a scandal—an essay by Paul Glasziou and Iain Chalmers. BMJ. 2018;363:k4645.
3. Rutherford C, Boehnke JR. Introduction to the special section "Reducing research waste in (health-related) quality of life research". Qual Life Res. 2022;31(10):2881-7.
4. Anderson SF. Power(ful) myths: misconceptions regarding sample size in quality of life research. Qual Life Res. 2022;31(10):2917-29.
5. Mercieca-Bebber R, Aiyegbusi OL, King MT, Brundage M, Snyder C, Calvert M. Knowledge translation concerns for the CONSORT-PRO extension reporting guidance: a review of reviews. Qual Life Res. 2022;31(10):2939-57.
6. Houts CR, Bush EN, Edwards MC, Wirth RJ. Using validity theory and psychometrics to evaluate and support expanded uses of existing scales. Qual Life Res. 2022;31(10):2969-75.