The Role of QuesTReview™ in evaluating Patient Reported Outcome (PRO) measures
By Keith Meadows, PhD, CMRS, HealthSurveySolutions, Banbury, UK
Introduction
No research paper on the development of a patient-reported outcome (PRO) measure would rightly be considered complete or acceptable without a detailed description of the instrument’s psychometric properties in support of its reliability and validity. However, this information is only half the story of the instrument’s quality and its ability to provide insightful information.
Following good questionnaire design practice is an essential component in the development of a health survey questionnaire, whether it is a PRO measure or not, if the data collected are to be reliable and valid.
Writing good survey questions is a complex task. Every item can be worded in different ways, and it is not always clear which is optimal. For example, the following question would appear at face value to be acceptable:
When sitting down do you ever feel short of breath?
However, compare it with:
Do you ever feel short of breath when sitting down?
The latter question has a right-branching structure, in which the question starts with the subject followed by the modifier, so the respondent has to hold less information in memory to give a correct response. Design issues such as this, however, are rarely presented as part of a PRO’s developmental process.
Question characteristics such as wording, word length, question type, reading ease and the choice of appropriate response options are well known among survey designers, and over time researchers have developed recommendations for writing questions based on evidence of how these characteristics affect responses.
There are also less obvious design issues that can impact the validity of respondents’ answers to a survey questionnaire. The appropriate use of balanced and unbalanced rating scales, the left- and right-branching question structures discussed above, and placing a specific question after a general question to avoid biased responses are just three examples. However, such features will not necessarily be reflected in the psychometric properties of the instrument, as respondents will generally provide a response to a question regardless of its design qualities. Furthermore, there are other factors to take account of in the design of a questionnaire, such as maintaining respondent engagement, reducing respondent burden and long response times, and avoiding straight-lining and satisficing.
As mobile surveys increasingly become mainstream, with about 20% of all surveys now taken on a mobile device, it is important to understand best practice for mobile survey design and the expectations of mobile respondents. The design of motivational welcome screens, the number of questions per screen, inappropriate question formats, the use of scrolling and motivational techniques such as the inclusion of a progress bar are all issues to be considered in terms of increasing the quality of information obtained from the respondent.
There are a number of methods for assessing the quality of questions prior to fielding, including expert review, laboratory-based methods such as cognitive interviews (CI) and field-based methods.1,2,3
Many factors can influence the choice of method, with budget and development timelines being the major determinants. Expert review, which requires no data collection, is the least expensive, while field-based methods such as pre-testing are more costly and time-consuming.
Laboratory-based methods such as CI can provide evidence of the comprehension problems respondents might have, but CI will not necessarily pick up all the issues of comprehensibility that expert review will. Research evidence indicates that a combination of expert review and CI provides the best prediction of question accuracy.4
This article introduces QuesTReview™, our expert review tool that evaluates a survey questionnaire in terms of its ability to collect reliable and unbiased data before it goes into the field.
What is QuesTReview™?
QuesTReview™ is our structured, standardized questionnaire evaluation tool, developed to evaluate the structure and effectiveness of a questionnaire and to identify question features that are likely to lead to response error.
Grounded in recognized psychological and behavioral models of how respondents complete survey questionnaires, QuesTReview™ benchmarks the questionnaire in a step-wise manner against key parameters of good questionnaire design practice (e.g. wording, question length, knowledge and memory demands) and rates whether each question exhibits features that are likely to cause problems. Where required, these ratings are combined with recommendations for correcting each potential problem as feedback to the instrument author/developer.
Each parameter comprises a number of sub-categories. For example, the clarity parameter comprises sub-categories including word length, ambiguity, use of technical terminology, question wording (including the use of low-frequency words), sensitivity and social desirability. The resulting quantitative and qualitative feedback is provided in the following formats:
- Traffic-light rating for each question/item by parameter, combined with a detailed qualitative description of the identified weaknesses for each question/item.
- Average rating score across all parameters for each individual question, providing an overview of each question/item’s performance and need for revision (a simplified sketch of this kind of scoring is given after this list).
- Checklist for eCOA application
- Survey completion time
- Overall performance rating
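To make the rating and averaging idea concrete, the following is a minimal illustrative sketch in Python. It is not the proprietary QuesTAnalyzer™ algorithm: the parameter names, the 1–3 rating scale and the traffic-light cut-offs are all assumptions made for demonstration only.

```python
from statistics import mean

# Hypothetical per-question ratings: each design parameter is scored from
# 1 (no defects) to 3 (major defects). Parameters, scale and cut-offs are
# illustrative assumptions, not the proprietary QuesTAnalyzer(TM) algorithm.
ratings = {
    "Q6": {"clarity": 3, "question_length": 2, "memory_demands": 3},
    "Q7": {"clarity": 1, "question_length": 1, "memory_demands": 1},
    "Q8": {"clarity": 2, "question_length": 3, "memory_demands": 3},
}

def traffic_light(avg_score: float) -> str:
    """Map an average rating to a traffic-light band (assumed cut-offs)."""
    if avg_score < 1.5:
        return "green"   # acceptable as written
    if avg_score < 2.5:
        return "amber"   # minor revision suggested
    return "red"         # major revision required

for question, scores in ratings.items():
    avg = mean(scores.values())
    print(f"{question}: average rating {avg:.2f} -> {traffic_light(avg)}")
```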
Examples of QuesTReview™ feedback
- Figure 1 illustrates that, for question 6, the sub-categories of the clarity parameter have been rated as having major defects. The qualitative feedback identifies the problems and provides suggestions on how to address them.
- Based on our proprietary QuesTAnalyzer™ scoring algorithm, Figure 2 provides an overall view of the performance of each question across all parameters, with questions 6, 8, 10 and 11 requiring major revisions in line with the qualitative feedback.
- With mobile surveys becoming mainstream in collecting patient-reported outcomes and other health and quality-of-life assessments, and particularly when applying ‘bring your own device’ (BYOD) solutions, it is essential that issues specific to online and mobile surveys are included in questionnaire evaluation in addition to comprehensibility problems. Figure 3 is a checklist of some of the dos and don’ts specific to the design of online and mobile survey questionnaires.
- Survey completion time can have a significant impact on respondent burden, motivation to complete the survey, item non-response and drop-outs. Using a scoring algorithm based on word and character counts, QuesTReview™ provides feedback on estimated completion time for the questionnaire across the main administration modes. Figure 4 shows an example of this feedback, where, apart from the smartphone, the administration modes fall within their respective completion timeframes. A simplified sketch of this type of estimate follows below.
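The sketch below shows how a word-count-based completion-time estimate might work in principle. The words-per-minute figures and the per-question response overhead are rough assumptions for demonstration only; QuesTReview™’s actual algorithm is not published here.

```python
# Illustrative estimate of questionnaire completion time from word count.
# Reading speeds per administration mode and the per-question response
# overhead are assumed values, not QuesTReview(TM)'s actual parameters.
WORDS_PER_MINUTE = {
    "paper": 220,
    "desktop web": 200,
    "tablet": 180,
    "smartphone": 150,  # slower: scrolling and a smaller screen
}

def estimated_minutes(word_count: int, mode: str, n_questions: int,
                      seconds_per_response: float = 4.0) -> float:
    """Reading time plus a fixed per-question response overhead (assumed)."""
    reading = word_count / WORDS_PER_MINUTE[mode]
    responding = n_questions * seconds_per_response / 60
    return reading + responding

# Example: an 850-word questionnaire with 20 items.
for mode in WORDS_PER_MINUTE:
    minutes = estimated_minutes(word_count=850, mode=mode, n_questions=20)
    print(f"{mode}: ~{minutes:.1f} minutes")
```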
What distinguishes QuesTReview™?
We believe that what sets QuesTReview™ apart from other questionnaire evaluation methods is, first, that it provides both quantitative and comprehensive qualitative feedback on a range of less well-known good questionnaire design practices that are less likely to be identified during cognitive interviews. Secondly, adapting a paper questionnaire for, or developing, a mobile survey calls for a different set of design criteria; QuesTReview™ evaluates against a checklist of key design features to ensure the survey is mobile-friendly.
For further information on QuesTReview™ contact the author: kmeadows@dhpresearch.com
References
1. Presser S, Blair J. Survey Pretesting: Do Different Methods Produce Different Results? Sociological Methodology 1994; 24: 73-104.
2. Rothgeb J, Willis G, Forsyth S. Questionnaire Pretesting Methods: Do Different Techniques and Different Organizations Produce Similar Results? Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique 2007; 96(1): 5-31.
3. Yan T, Kreuter F, Tourangeau R. Evaluating Survey Questions: A Comparison of Methods. Journal of Official Statistics 2012; 28(4).
4. Westat AM, Presser S. Using Pretest Results to Predict Survey Question Accuracy. International Conference on Questionnaire Design, Development, Evaluation and Testing, 2016. https://www.amstat.org/meetings/qdet2/OnlineProgram/AbstractDetails.cfm?