
Presented By: Michigan Program in Survey and Data Science

Assessing Measurement Error in Hypothetical Questions – Jennifer Sinibaldi and Adam Kaderabek - JPSM MPSDS Seminar Series

Michigan Program in Survey and Data Science and the Joint Program in Survey Methodology Seminar Series

Jennifer Sinibaldi, Research Director, National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF)

Adam Kaderabek, Research Assistant and Master's Student, Michigan Program in Survey and Data Science

Assessing Measurement Error in Hypothetical Questions
Although most opinion surveys use factual, behavioral, and attitudinal questions that ask what the respondent is, has done, or currently feels, surveys sometimes use hypothetical questions that ask the respondent to imagine a particular situation and state what they would do or feel. We expect these questions to suffer from measurement error, but without an objectively observable truth for reference, we cannot evaluate it. Psychologists and economists who have studied the error in respondents' "stated" preferences have labeled this error hypothetical bias. Survey methodologists have conducted very little research on hypothetical bias, but understanding this error could be important for correcting stated preferences on topics such as mode preference, consent to record linkage, and whether the respondent would download an app with certain features.

We use data from an experiment that asked participants assigned to the treatment group to report their reaction to the experimental survey protocol they received, while those in the control group were asked hypothetically how they would react to the protocol if they had experienced it. We compare the responses to ten respondent reaction questions presented upon completion of the survey, identifying the presence of hypothetical bias across all ten questions using both categorical and quantitative methods. We found that the control group consistently exhibited less preference for the hypothetical protocol than the treatment group. We believe these differences are representative of the measurement error resulting from the poor correspondence between the control group's hypothetical attitudes and the treatment group's reported experience. This work shows that hypothetical bias is a problem in opinion research and is the first to quantify the extent of that problem.
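The between-group comparison described above can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the simulated Likert responses, group sizes, and the specific tests (chi-square for the categorical comparison, Mann-Whitney U for the quantitative one) are all assumptions made for the example.

```python
# Hedged sketch: comparing Control (hypothetical) vs. Treatment (actual
# experience) responses on one Likert-style reaction item.
# Data are simulated; the choice of tests is an assumption.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

rng = np.random.default_rng(0)
scale = [1, 2, 3, 4, 5]  # 1 = strongly negative ... 5 = strongly positive

# Simulate the pattern reported in the abstract: the Control group's
# hypothetical ratings skew lower than the Treatment group's reported ones.
control = rng.choice(scale, size=200, p=[0.25, 0.30, 0.25, 0.15, 0.05])
treatment = rng.choice(scale, size=200, p=[0.10, 0.20, 0.25, 0.30, 0.15])

# Categorical comparison: chi-square test on the 2 x 5 contingency table
# of response-category counts by group.
table = np.array([np.bincount(control, minlength=6)[1:],
                  np.bincount(treatment, minlength=6)[1:]])
chi2, p_cat, _, _ = chi2_contingency(table)

# Quantitative comparison: Mann-Whitney U test on the raw scale values,
# which does not assume the Likert points are equally spaced.
u_stat, p_quant = mannwhitneyu(control, treatment, alternative="two-sided")

print(f"chi-square p = {p_cat:.4f}, Mann-Whitney p = {p_quant:.4f}")
```

A significant result on both tests, together with a lower mean for the Control group, would correspond to the hypothetical-bias pattern the abstract reports.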

Livestream Information

Zoom
September 29, 2021 (Wednesday) 12:00pm
Meeting ID: 97702415176
Meeting Password: 1070
