Asking the Right Questions: “Stay-at-Home” Orders, Survey Data, and COVID-19
August 6, 2020
As the coronavirus has spread, most of us have become all too familiar with “stay-at-home” orders. This seemingly simple concept is actually quite complex in the context of the COVID-19 outbreak.
Public messaging around “lockdown,” “quarantine,” “shelter-in-place,” or “stay-at-home” varies across federal, state, and local agencies as well as the many non-governmental groups and news media covering this topic. Even within a single community, there may be competing messages and different understandings of these concepts: Is it all day, every day? Are there exemptions for going to places such as the grocery store, gas station, or work? Is the order mandatory or merely advisory?
This lack of clear and consistent messaging makes measuring public behavior difficult. As a result, collecting accurate data about respondents’ actions, 24 hours a day, 7 days a week, has become increasingly important for understanding how behavior varies in response to these differing messages. How frequently have people left their homes? When and for how long did they leave? And why did they decide to leave? Clear answers to these questions provide greater insight into how effective stay-at-home orders are, the underlying reasons why some might not adhere to them, and the effects the orders are having. Crafting survey questions that work across different locales and different understandings of “stay-at-home” orders requires serious deliberation.
Asking the Right Questions
It is a well-established principle within survey research that changes in question wording – even minor ones – can affect respondents’ answers. “Question interpretation” is the process an individual goes through to comprehend the meaning of a question. From a cognitive standpoint, this involves memory retrieval, judgment, and response selection. How a respondent interprets a question from the start frames these three steps. Clearly written and easily understood questions are critical to minimizing interpretation errors and ensuring that the data collected are as accurate as possible.
So how are survey researchers seeking to measure the concept of “staying at home”? To examine this issue in more detail, we drew upon a database we developed of more than 1,500 COVID-19-related questions asked in 50 publicly released questionnaires and reports from a range of sectors: media, higher education, commercial, think tank, and government. We leveraged this database to explore how questions about “stay-at-home” orders are being asked across various COVID-19-related surveys.
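To make that kind of classification exercise concrete, here is a minimal sketch of how questions in such a database might be tagged programmatically. The records, field names, and keyword rules below are hypothetical illustrations, not the actual schema or coding scheme of the database described above; the groupings it assigns are the three introduced next.

```python
# A toy, hypothetical sketch of tagging survey questions by grouping.
# The records, field names, and keyword rules are illustrative
# inventions, not the actual database schema or coding scheme.
questions = [
    {"source": "media",
     "text": "Are you staying at home as much as possible?"},
    {"source": "government",
     "text": "In the past 7 days, have you stayed at home instead of "
             "going to work, school, or other regular activities?"},
    {"source": "think tank",
     "text": "Did you leave your house at any time yesterday?"},
]

def classify(text: str) -> str:
    """Crude keyword heuristic for the three groupings introduced below."""
    lowered = text.lower()
    if "yesterday" in lowered:
        return "detailed"    # specific, short-term behavioral referent
    if "past 7 days" in lowered or "instead of" in lowered:
        return "qualified"   # an explicit timeframe or condition
    return "relative"        # no objective point of reference

for q in questions:
    print(f'{classify(q["text"]):>9}  {q["text"]}')
```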
We found that questions aimed at measuring adherence to stay-at-home orders fall into three basic groupings.
The catch is that not all questions are created equal:
Relative – The phrasing in these questions and their response options is quite general, with no objective point of reference. For example, “Are you staying at home as much as possible?” with a yes/no response option offers the respondent nothing concrete to anchor on. This can make analysis of the findings challenging, as respondents’ interpretations of the question can vary dramatically.
Qualified – These questions and their associated response options provide a “mental anchor,” or objective reference point, for the respondent. Specifying a timeframe for staying at home (“… in the past 7 days …”) or the conditions under which an individual is now staying home when they otherwise would not (“… stayed at home instead of going to work, school, or other regular activities”) creates an objective measure. This specificity helps respondents frame, interpret, and answer the question in the same manner, which yields findings that are easily understood and communicated.
Detailed – Measuring multiple, detailed dimensions of stay-at-home behavior is critical to pandemic response. Collecting data to better predict where new outbreaks may occur, or where public health messaging may need to be targeted, requires detailed questions. A battery of questions probing various aspects of the concept is more informative than a single question about whether an individual is following the orders. As shown in the example in Table 1, information about staying at home (or the converse, not staying at home) is captured with a battery of three questions, each with a specific reference point: whether the respondent left the house yesterday, the number of hours spent out of the house, and the main reasons for leaving. These questions give the respondent a specific focus and yield a richer set of contextual data for understanding the activity. Moreover, by focusing on a short-term time frame, they avoid some of the recall problems associated with longer or open-ended timeframes.
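For illustration, the kind of short battery just described can be encoded directly as a small survey instrument with skip logic. The item wording, response options, and field names below are hypothetical stand-ins, not the exact questions from Table 1.

```python
# A hypothetical encoding of a three-question "stay-at-home" battery
# with simple skip logic. Wording, response options, and field names
# are illustrative stand-ins, not the exact items from Table 1.
battery = [
    {"id": "left_home",
     "text": "Did you leave your home at any time yesterday?",
     "responses": ["Yes", "No"]},
    {"id": "hours_out",
     "text": "About how many hours were you out of your home yesterday?",
     "responses": ["Less than 1", "1-3", "4-8", "More than 8"],
     "ask_if": ("left_home", "Yes")},   # only asked of those who left
    {"id": "main_reason",
     "text": "What was the main reason you left your home yesterday?",
     "responses": ["Work", "Groceries/errands", "Health care", "Other"],
     "ask_if": ("left_home", "Yes")},
]

def next_items(answers: dict) -> list:
    """Unanswered items whose skip-logic condition (if any) is satisfied."""
    return [q["id"] for q in battery
            if q["id"] not in answers
            and ("ask_if" not in q
                 or answers.get(q["ask_if"][0]) == q["ask_if"][1])]

print(next_items({}))                    # ['left_home']
print(next_items({"left_home": "Yes"}))  # ['hours_out', 'main_reason']
print(next_items({"left_home": "No"}))   # []
```

Anchoring every item to “yesterday” keeps the recall window short, and the skip logic mirrors how the battery narrows from whether a respondent left, to how long, to why.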
How well survey questions are framed, worded, and, most importantly, understood by the respondent is a key driver of the quality of the information derived from a question (or set of questions). Questions with few specific referents or anchors invite broader interpretation and greater variability in how people respond. More specific questions, in contrast, can – when worded correctly – narrow this variability and, ideally, allow different respondents to answer within a similar context. When it comes to understanding public behavior during the COVID-19 pandemic, it stands to reason that sharpening the focus of questions through referents – the day of the week, the time of day, the specific activities of interest – can produce information that better informs decision-makers about adherence to public health safety measures.
Public opinion research can provide unique insights into individuals’ experiences, reactions, and needs across the array of issues raised by the pandemic. These data are critical to effective policymaking, marshaling resources, and measuring progress in helping those in need. Accurately tracking movement, and understanding the reasons individuals decide to leave their homes, is the type of information local, state, and federal officials need to sharpen critical messaging and strengthen remedial actions. How researchers ask respondents to quantify a concept can have a profound impact on the data collected and, in this case, on the course and outcome of the pandemic. That means making sure we are asking the right questions.