MPSDS JPSM Seminar Series - Assessing Cross-Cultural Comparability of Self-Rated Health and Its Conceptualization through Web Probing (April 5, 2023 12:00pm) https://events.umich.edu/event/103497 Event Begins: Wednesday, April 5, 2023 12:00pm
Location: Off Campus Location
Organized By: Michigan Program in Survey and Data Science

MPSDS JPSM Seminar Series
April 5, 2023
12:00 - 1:00 pm EDT

Stephanie Morales is a second-year Ph.D. student at the University of Michigan's Program in Survey and Data Science. She holds a BA in Psychology and an MA in Sociology. She is interested in cross-cultural surveys, measurement error in data collection with racial/ethnic minorities, and adaptive survey design.

Assessing Cross-Cultural Comparability of Self-Rated Health and Its Conceptualization through Web Probing

Self-rated health (SRH) is a widely used question across different fields, as it is simple to administer yet has been shown to predict mortality. SRH asks respondents to rate their overall health, typically using Likert-type response scales (i.e., excellent, very good, good, fair, poor). Although SRH is commonly used, few studies have examined its conceptualization from the respondents’ point of view, and even fewer have examined differences in its conceptualization across diverse populations. We aim to assess the comparability of SRH across different cultural groups by investigating the factors that respondents consider when responding to the SRH question. We included an open-ended probe asking what respondents thought about when responding to SRH in web surveys conducted in five countries: Great Britain, Germany, the U.S., Spain, and Mexico. In the U.S., we targeted six racial/ethnic and linguistic groups: English-dominant Koreans, Korean-dominant Koreans, English-dominant Latinos, Spanish-dominant Latinos, non-Latino Black Americans, and non-Latino White Americans. One novelty of our study is coding multiple attributes (e.g., health behaviors, illness) per respondent along with the tone (positive, negative, or neutral with respect to health) of the probing response for each attribute, allowing us 1) to assess respondents’ thinking process holistically and 2) to examine whether and how respondents mix attributes. Our study compares the number of reported attributes and their tone across cultural groups and integrates SRH responses in the analysis. This study aims to provide a deeper understanding of SRH by revealing the cognitive processes of diverse populations and is expected to shed light on its cross-cultural comparability.

Michigan Program in Survey and Data Science (MPSDS)
The University of Michigan Program in Survey Methodology was established in 2001 to train future generations of survey and data scientists. In 2021, we changed our name to the Michigan Program in Survey and Data Science. Our curriculum addresses a broad set of data sources, including survey data but also social media posts, sensor data, and administrative records, as well as analytic methods for working with these new data sources. We also bring to data science a focus on data quality, which is not at the center of traditional data science. The new name speaks to what we teach and work on at the intersection of social research and data. The program offers doctorate and master of science degrees and a certificate through the University of Michigan. The program's home is the Institute for Social Research, the world's largest academically based social science research institute.

Summer Institute in Survey Research Techniques (SISRT)
The mission of the Summer Institute is to provide rigorous, high-quality graduate training in all phases of survey research. The program teaches state-of-the-art practice and theory in the design, implementation, and analysis of surveys. The Summer Institute in Survey Research Techniques has offered courses on the sample survey every summer since 1948. Graduate-level courses through the Program in Survey and Data Science are offered from June 5 through July 28 and are available for enrollment as a Summer Scholar.

The Summer Institute uses the sample survey as the basic instrument for the scientific measurement of human activity. It presents sample survey methods in courses designed to meet the educational needs of those specializing in social and behavioral research, including professionals in business, public health, natural resources, law, medicine, nursing, social work, and many other fields.

On the use of weights when analyzing survey data: Descriptive Statistics vs. Regression Modeling (June 22, 2023 3:00pm) https://events.umich.edu/event/108505 Event Begins: Thursday, June 22, 2023 3:00pm
Location: Off Campus Location
Organized By: Inter-university Consortium for Political and Social Research

This ~45 minute talk will provide a practical overview of key considerations underlying the use of survey weights when performing common analyses of complex sample survey data sets collected from probability samples. Topics to be covered will include unequal weighting effects on variance estimates, the importance of descriptive assessments of the survey weights, whether weights are informative, to weight or not to weight when fitting regression models, testing the need for weights when fitting regression models, and more general weighting approaches for non-probability samples. Example software code will be included where relevant. Time will also be reserved for a question-and-answer session.
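As a rough illustration of the kind of checks the talk covers, the minimal Python sketch below compares weighted and unweighted estimates and applies one common informal test of whether weights are informative for a regression (an F-test of weight-by-covariate terms). The variable names (y, x, w) and data are invented and this is not the presenter's example code; design-based standard errors for a real complex sample would require additional design information.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical analysis file: outcome y, covariate x, survey weight w.
    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({"x": rng.normal(size=n), "w": rng.uniform(0.5, 3.0, size=n)})
    df["y"] = 1 + 0.5 * df["x"] + rng.normal(size=n)

    # Weighted vs. unweighted descriptive estimate of the mean of y.
    print("unweighted mean:", df["y"].mean())
    print("weighted mean:  ", np.average(df["y"], weights=df["w"]))

    # Unweighted vs. weighted regression fits (point estimates only).
    print(smf.ols("y ~ x", data=df).fit().params)
    print(smf.wls("y ~ x", data=df, weights=df["w"]).fit().params)

    # Informal check of whether the weights are informative for the regression:
    # add the weight and its interaction with x, then F-test the added terms.
    base = smf.ols("y ~ x", data=df).fit()
    augmented = smf.ols("y ~ x * w", data=df).fit()
    print(anova_lm(base, augmented))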

MPSDS JPSM Seminar Series - Everything You Need to Know When Utilizing Probability Panels: Best Practices in Planning, Fielding, and Analysis (September 27, 2023 12:00pm) https://events.umich.edu/event/112696 Event Begins: Wednesday, September 27, 2023 12:00pm
Location: Off Campus Location
Organized By: Michigan Program in Survey and Data Science

MPSDS SEMINAR SERIES
September 27, 2023
12:00 - 1:00 pm

IN PERSON AND VIA ZOOM
- In person, room 1070 Institute for Social Research.
- Via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

EVERYTHING YOU NEED TO KNOW WHEN UTILIZING PROBABILITY PANELS: BEST PRACTICES IN PLANNING, FIELDING, AND ANALYSIS

Speakers: David Dutwin & Ipek Bilgen

Probability-based panel survey research is more widespread than ever, as the continuing decline in survey response rates makes cross-sectional sample surveys less and less viable in terms of both fit-for-purpose data quality and cost. The attraction of probability panels is their ability to attain, depending on their recruiting methods, response rates comparable to cross-sectional polls, but at lower cost and with faster execution. Panels are a unique type of survey research platform: unlike cross-sections, panels recruit respondents specifically for future participation in surveys. In return, panelists are financially compensated, typically first for joining the panel and then for each survey in which they participate.

These differences from cross-sectional surveys have a range of potential implications. How do the method and effort of recruiting affect who joins, and, as a consequence, what is best practice? What do panels do to retain panelists over time, and which strategies are more successful than others? How much of a concern is panel conditioning, that is, the impact of persons repeatedly taking surveys over time, and what are the implications for how frequently panelists should take surveys? How do panels, which exclusively ask panelists to take surveys on the Internet, deal with people who do not have or are not comfortable using the Internet? What is the impact of panelist attrition, and what are the best ways to replace retired panelists? How successful are panels at executing true longitudinal surveys? And, given the additional layers of complexity, how are panel surveys properly weighted and estimated?

This seminar is meant to serve two purposes. First, it will serve as a guide for consumers of probability-based panels to understand what, in short, they are working with: what questions to ask and what features to understand about probability panels when evaluating their use for data collection, and how best to use probability-based panel data. Second, it will serve as an exploration of best practices for practitioners of surveys, raising issues of data quality, cost, and execution.

Learning Objectives:

1. For consumers of panel data: Understand the key features of panels and know the important questions to ask panel vendors when assessing their fit for the purpose of your research.
2. For researchers and practitioners: Understand the many dimensions and decision points in the building, maintenance, deployment, and delivery of multi-client panels and panel data.

Bios:

David Dutwin, PhD, is Senior Vice President for Strategic Initiatives, Business Ventures and Initiatives and Chief Scientist of AmeriSpeak at NORC at the University of Chicago. David provides scientific and programmatic thought leadership in support of NORC’s ongoing innovations. In addition to identifying new business opportunities, he lends expertise on research design conceptualization, methodological innovation, and product development. He leads the panel operations and the statistics and methods divisions of AmeriSpeak. David assists in NORC’s strategic vision and strategy, project acquisition, and the management of advanced research methods. His prior research has focused on election methodology, surveying of low-incidence populations, the use of big data in survey research, and data quality in survey panels. He is a senior fellow of the Program for Opinion Research and Election Studies at the University of Pennsylvania. An avid member of the AAPOR community, David served as its president from 2018-2019. He previously served on AAPOR’s Executive Council as conference chair and has served full terms on several committees. For over twenty years, he has taught courses in survey research and design, political polling, research methods, rhetorical theory, media effects, and other subjects as an adjunct professor at the University of Pennsylvania, the University of Arizona, and West Chester University.

Ipek Bilgen, PhD, is a Principal Research Methodologist in the Methodology and Quantitative Social Sciences Department at NORC at the University of Chicago. Ipek is the Deputy Director of NORC’s Center for Panel Survey Sciences. Additionally, she oversees AmeriSpeak’s methodological research and innovations. As part of her role within AmeriSpeak, she also provides survey design expertise, questionnaire development and review support, and leads cognitive interview and usability testing efforts for client studies. Ipek received both her Ph.D. and M.S. from the Survey Research and Methodology (SRAM) Program at the University of Nebraska-Lincoln. She has published and co-authored articles in Journal of Official Statistics, Public Opinion Quarterly, Journal of Survey Statistics and Methodology, Survey Practice, Social Currents, Social Science Computer Review, Field Methods, Journal of Quantitative Methods, SAGE Research Methods, and Quality and Quantity on issues related to interviewing methodology, web surveys, online panels, internet sampling and recruitment approaches, and nonresponse and measurement issues in surveys. In the past, she has served on AAPOR’s and MAPOR’s Executive Councils. Ipek currently teaches at the Irving B. Harris Graduate School of Public Policy Studies at the University of Chicago and serves as Associate Editor of Public Opinion Quarterly (POQ).

MPSDS JPSM Seminar Series - New data, new questions, old problems? Online behavioral data in social science research (October 11, 2023 12:00pm) https://events.umich.edu/event/113445 Event Begins: Wednesday, October 11, 2023 12:00pm
Location: Off Campus Location
Organized By: Michigan Program in Survey and Data Science

MPSDS JPSM Seminar Series
October 11, 2023
12:00 - 1:00 pm EDT

In person, room 1070 Institute for Social Research, and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

New data, new questions, old problems? Online behavioral data in social science research

Records of individuals’ online activities obtained from devices like personal computers and smartphones have received a lot of interest in the social sciences in recent years. Many have praised such data for allowing fine-grained observations of individuals’ online activities that would be impossible with more traditional data sources such as surveys. Recent work, however, warns that many data quality aspects of these novel data are so far poorly understood. As the number of observations can quickly reach several millions, researchers seem tempted to treat online behavioral data as a gold standard, ignoring what their data may be missing and which other systematic biases may be present. In this talk, I present both applied and methodological work using online behavioral data in a typical social science setting. First, using within-between random effects models, I show how online behavioral data combined with a panel survey allow us to understand the effects of news media consumption from populist alternative news platforms on individuals’ political attitudes. Second, I show that online behavioral data, although containing detailed records of individuals’ social media use, are far from complete. Using hidden Markov models that combine online behavioral data, survey records, and donated social media data, I show that the online behavioral data seem to fail completely in capturing social media use for about one third of the sample. I emphasize the need for researchers to navigate the complexities of online behavioral data, highlighting potentials and limitations.
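For readers unfamiliar with the within-between (hybrid) random effects specification mentioned above, the minimal Python sketch below shows the general idea: each time-varying predictor is split into a person mean and a within-person deviation, and both enter a random-intercept model. Variable names (pid, wave, alt_news_use, attitude) and data are invented; this is a generic illustration, not the speaker's analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format panel: one row per person-wave.
    rng = np.random.default_rng(1)
    n_persons, n_waves = 200, 4
    df = pd.DataFrame({
        "pid": np.repeat(np.arange(n_persons), n_waves),
        "wave": np.tile(np.arange(n_waves), n_persons),
        "alt_news_use": rng.poisson(3, n_persons * n_waves).astype(float),
    })
    df["attitude"] = 0.2 * df["alt_news_use"] + rng.normal(size=len(df))

    # Within-between decomposition: person mean (between) and deviation (within).
    df["use_between"] = df.groupby("pid")["alt_news_use"].transform("mean")
    df["use_within"] = df["alt_news_use"] - df["use_between"]

    # Random-intercept model with separate within- and between-person effects.
    model = smf.mixedlm("attitude ~ use_within + use_between", data=df, groups=df["pid"])
    print(model.fit().summary())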

Ruben Bach is a Research Fellow at the Mannheim Centre for European Social Research, University of Mannheim, Germany. His research is concerned with data quality in social science data products and applied computational social science (media consumption, political attitudes, socially responsible AI). In the fall of 2023, he is a visitor with the Department of Statistics and Actuarial Science, University of Waterloo, Ontario.

MPSDS JPSM Seminar Series - Implementing and Adjusting a Non-probability Web Survey: Experiences of EVENS (Survey on the Impact of COVID19 on Ethnic Minorities in the United Kingdom) (October 18, 2023 12:00pm) https://events.umich.edu/event/113847 Event Begins: Wednesday, October 18, 2023 12:00pm
Location: Off Campus Location
Organized By: Michigan Program in Survey and Data Science

MPSDS JPSM Seminar Series
October 18, 2023
12:00 - 1:00 pm EDT

In person, room 1070 Institute for Social Research and via Zoom.
The Zoom call will be locked 10 minutes after the start of the presentation.

Implementing and Adjusting a Non-probability Web Survey: Experiences of EVENS (Survey on the Impact of COVID19 on Ethnic Minorities in the United Kingdom)

Natalie Shlomo
Professor of Social Statistics, University of Manchester

This is joint work with Andrea Aparicio-Castro, Daniel Ellingworth, Angelo Moretti, Harry Taylor, Nissa Finney, and James Nazroo.

We discuss the challenges of implementing and adjusting a large-scale non-probability web survey. For the application, we focus on the 2021 Evidence for Equality National Survey (EVENS), which was led by the Centre on Dynamics of Ethnicity (CoDE) at the University of Manchester in the United Kingdom, in partnership with Ipsos-MORI. The aim was to understand the impact of the COVID19 pandemic on ethnic and religious minority groups in the UK. Standard probability-based surveys, even with ethnic minority group boosts, do not have the sample sizes required to obtain reliable estimates for small group statistics. We therefore designed a non-probability web survey of ethnic minority groups to overcome these limitations. We formed partnerships with community organizations and used innovative recruitment strategies, including digital and social media. Daily monitoring of the data collection against desired sample sizes and R-indicator calculations allowed the team to focus attention on the recruitment of specific groups in a responsive data collection mode. We also supplemented the sample with existing members of both established non-probability and probability-based panels in the UK. We describe the measures applied to improve the quality of the collected data and the statistical adjustments to correct for selection and coverage biases, based on estimating the probability of participation in the non-probability sample using combined probability reference samples followed by calibration to auxiliary information from the UK Census 2021. We demonstrate how a pseudo-population bootstrap approach can be designed to obtain bootstrap weights to allow for statistical analyses and inference.
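As a generic illustration of the participation-probability step described above (one common pseudo-weighting variant, not the exact EVENS procedure), the Python sketch below stacks a hypothetical non-probability sample with a weighted probability reference sample, fits a logistic model for membership in the non-probability sample, and forms inverse-odds pseudo-weights. All variable names and data are invented.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical data: nonprob = non-probability web sample; ref = probability
    # reference sample with design weights d; both share covariates age, female.
    rng = np.random.default_rng(2)
    nonprob = pd.DataFrame({"age": rng.integers(18, 80, 2000),
                            "female": rng.integers(0, 2, 2000)})
    ref = pd.DataFrame({"age": rng.integers(18, 80, 1000),
                        "female": rng.integers(0, 2, 1000),
                        "d": rng.uniform(50, 150, 1000)})

    # Stack the samples; z = 1 marks the non-probability cases.
    combined = pd.concat([nonprob.assign(z=1, d=1.0), ref.assign(z=0)],
                         ignore_index=True)

    # Estimate the probability of being in the non-probability sample, giving
    # the reference cases their design weights.
    fit = LogisticRegression().fit(combined[["age", "female"]], combined["z"],
                                   sample_weight=combined["d"])
    p = fit.predict_proba(nonprob[["age", "female"]])[:, 1]

    # Pseudo-weights proportional to the odds of *not* being in the web sample.
    # In practice these would then be calibrated to Census 2021 margins and
    # variances estimated with a pseudo-population bootstrap, as in the talk.
    nonprob["pseudo_weight"] = (1 - p) / p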

Natalie Shlomo is Professor of Social Statistics at the University of Manchester and publishes widely in the area of survey statistics, including small area estimation, adaptive survey designs, non-probability sampling, confidentiality and privacy, data linkage and integration. She has over 70 publications and refereed book chapters and a track record of generating external funding for her research. She is an elected member of the International Statistical Institute (ISI), a fellow of the Royal Statistical Society, a fellow of the Academy of Social Sciences and President 2023-2025 of the International Association of Survey Statisticians. She also serves on national and international Methodology Advisory Boards at National Statistical Institutes.

Homepage: https://www.research.manchester.ac.uk/portal/natalie.shlomo.html

MPSDS JPSM Seminar Series - Investigating the quality of digital trace and data donation (October 25, 2023 12:00pm) https://events.umich.edu/event/114041 Event Begins: Wednesday, October 25, 2023 12:00pm
Location: Off Campus Location
Organized By: Michigan Program in Survey and Data Science

MPSDS JPSM Seminar Series
October 25, 2023
12:00 - 1:00 pm EDT

In person, room 1070 Institute for Social Research, and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

Investigating the quality of digital trace and data donation

Challenges to traditional survey data collection, such as increased costs and decreasing response rates, are leading survey researchers to explore new forms of data. Recently, two types of data have received increased focus as possible replacements for or enhancements of surveys: digital trace data and data donation. Digital trace data refers to data produced while individuals interact with digital platforms, such as apps and websites. Data donation, on the other hand, refers to the acquisition of data from online platforms, such as Facebook or Google, directly from users. In a recent study we use an experimental design in a non-probability panel in Germany to explore non-response bias in data donated from Facebook as well as measurement error in digital trace data from PCs and mobile phones.

Alexandru Cernat is an associate professor in the social statistics department at the University of Manchester. He has a PhD in survey methodology from the University of Essex and was a post-doc at the National Centre for Research Methods and the Cathie Marsh Institute. His research and teaching focus on survey methodology, longitudinal data, measurement error, latent variable modelling, new forms of data, and missing data. You can find out more about him and his research at www.alexcernat.com.

MPSDS JPSM Seminar Series - Flexible Formal Privacy for Public Data Curation (November 1, 2023 12:00pm) https://events.umich.edu/event/114344 Event Begins: Wednesday, November 1, 2023 12:00pm
Location: Off Campus Location
Organized By: Michigan Program in Survey and Data Science

MPSDS JPSM Seminar Series
November 1, 2023
12:00 - 1:00 pm EDT

In person, room 1070 Institute for Social Research, and via Zoom.
The Zoom call will be locked 10 minutes after the start of the presentation.

Flexible Formal Privacy for Public Data Curation

Researchers rely extensively on public datasets disseminated by official statistics agencies, universities, non-governmental organizations, and other data curators. With the increasing availability of data and computing power come increased threats to privacy, as published statistics can more easily be used to reconstruct sensitive personal data. Formal privacy (FP) methods, like differential privacy (DP), provably limit such information leakage by injecting carefully chosen randomized noise into published statistics. However, the way DP accounts for privacy degradation requires that this noise be injected into every statistic dependent on the confidential dataset. This fails to reflect data curators' needs, social, legal, or ethical requirements, and complex dependency structures between public and confidential datasets. In this talk, I'll discuss statistical methodology that addresses these problems. We propose an FP framework with novel characterizations of disclosure risk when assessing collections of statistics in which only some statistics are published with DP guarantees. We demonstrate the FP properties maintained by our proposed framework, propose data release mechanisms that satisfy our proposed definition, and prove optimality properties of downstream statistical estimators based on these mechanism outputs. I'll also discuss a few end-to-end data analysis examples in public health and surveys, showing how theoretical trade-offs between privacy, utility, and computation time manifest in practice when assessing disclosure risks and statistical utility. I'll conclude with a discussion of the implications of this work for survey researchers, focusing on opportunities to incorporate privacy by design in survey planning, experimental design, and other data collection operations.
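The flexible framework described in the abstract is not reproduced here, but the basic differential-privacy building block it generalizes, adding calibrated random noise to a published statistic, can be illustrated with the standard Laplace mechanism. The Python sketch below uses made-up numbers and is not the speaker's proposed method.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng):
        # Release a statistic with epsilon-differential privacy by adding
        # Laplace noise with scale sensitivity / epsilon.
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    rng = np.random.default_rng(3)
    true_count = 1234   # hypothetical confidential cell count
    sensitivity = 1     # adding or removing one person changes the count by at most 1
    for epsilon in (0.1, 1.0, 10.0):   # smaller epsilon: stronger privacy, more noise
        noisy = laplace_mechanism(true_count, sensitivity, epsilon, rng)
        print(f"epsilon={epsilon}: released count = {noisy:.1f}")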

Jeremy Seeman is a Michigan Data Science Fellow at the Michigan Institute for Data Science (MIDAS) and MPSDS. He recently graduated with his PhD in statistics from Penn State University. Jeremy's research focuses on statistical data privacy, quantitative methods in the social sciences, and social values in data governance. He is the recipient of the U.S. Census Bureau Dissertation Fellowship and the ASA Pride Scholarship. Prior to joining Penn State, Jeremy completed his BS in Physics and MS in Statistics at the University of Chicago, where he was a research fellow at the Center for Data Science and Public Policy.

MPSDS JPSM Seminar Series - 2020 California Neighborhoods Count: A validation of U.S. Census Population Counts and Housing Characteristic Estimates within California (November 8, 2023 12:00pm) https://events.umich.edu/event/114648 Event Begins: Wednesday, November 8, 2023 12:00pm
Location: Off Campus Location
Organized By: Michigan Program in Survey and Data Science

MPSDS JPSM Seminar Series
November 8, 2023
12:00 - 1:00 pm EST
The seminar will be locked 10 minutes after the start of the presentation.

2020 California Neighborhoods Count: A validation of U.S. Census Population Counts and Housing Characteristic Estimates within California

In response to long-standing concerns about the accuracy of census data and about a possible undercount, we conducted the California Neighborhoods Count (CNC) study, the first-ever independent, survey-based enumeration designed to directly evaluate the accuracy of the U.S. Census Bureau's population totals for a subset of California census blocks. This 2020 research was intended to produce parallel estimates of the 2020 Census population and housing unit totals at the census block level, employing the same survey items as the census, using enhanced data collection strategies, and exploring imputation methods. The CNC block-level population estimates were sensitive to the imputation method used to account for non-responding households, likely in part due to limited availability of administrative data to assist the imputations. CNC identified more housing units than the Census (23,929 versus 22,668), which may be due to CNC’s in-person address canvassing. Despite advancements in geospatial imaging software, as well as many other approaches used by the U.S. Census Bureau to assess coverage and validate addresses, in-field address verification might yield a more complete accounting of inhabited housing units than conducting address canvassing partially with in-office approaches.

Lane Burgette is a Senior Statistician at the RAND Corporation. Dr. Burgette’s applied research is primarily focused on health policy, especially Medicare’s physician payment policies. Other recent research projects include an evaluation of the 2020 Census in California, gun policy research, and recidivism risk estimation for employer background checks. Dr. Burgette’s methodological research focuses on causal inference, methods for missing data, and Bayesian modeling. Prior to RAND, he earned his Ph.D. in Statistics at the University of Wisconsin, and was a post-doctoral researcher in the Department of Statistical Science at Duke University.

MPSDS JPSM Seminar Series - Using Synergies Between Survey Statistics and Causal Inference to Improve Transportability of Clinical Trials (January 17, 2024 12:00pm) https://events.umich.edu/event/108977 Event Begins: Wednesday, January 17, 2024 12:00pm
Location: Off Campus Location
Organized By: Michigan Program in Survey and Data Science

MPSDS Seminar Series
January 17, 2024
12:00 - 1:00 pm
In person, room 1070 Institute for Social Research, and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

Using Synergies Between Survey Statistics and Causal Inference to Improve Transportability of Clinical Trials

Medical researchers have understood for many years that treatment effect estimates obtained from a randomized clinical trial (RCT), termed "efficacy," can differ from those obtained in a general population, termed "effectiveness." Only in the past decade has extensive work begun in the statistical literature to bridge this gap using formal quantitative methods. As Rod Little noted in a letter to the editor of the New Yorker, "...randomization in randomized clinical trials concerns the allocation of the treatment, not the selection of individuals for the study. The latter can have an important impact on the average size of a treatment effect," with RCT samples often designed, sometimes explicitly, to be more likely to include individuals for whom the treatment may be more effective.

This issue has been variously termed "generalizability" or "transportability." Why do we care about transportability? In RCTs we are in the happy situation where treatment assignment is randomized, so confounding due to either observed or unobserved (pre-treatment) covariates is not an issue. But while randomization of treatment eliminates the effect of unobserved confounders, at least net of non-compliance, it does not eliminate the effect of unobserved effect modifiers, which can impact the causal effect of treatment in a population that differs from the RCT sample population. The impact of these interactions on the marginal effect of treatment thus can differ between the RCT population and the final population of interest.

Concurrent with research into transportability has been research into making population inference from non-probability samples. There is a close overlap between these two approaches, particularly with respect to the non-probability inference methods that rely on information from a relevant probability sample of the target population to reduce selection bias effects. When there are relevant censuses or probability samples of the target patient population of interest, these methods can be adapted to transport information from the RCT to the patient population. Because the RCT setting focuses on causal inference, this adaptation involves extensions to estimate counterfactuals. Thus approaches that treat population inference as a missing data problem are a natural fit to connect these two strands of methodological innovation.

In particular, we propose to extend a "pseudo-weighting" methodology from other non-probability settings to a "doubly robust" estimator that treats sampling probabilities or weights as regression covariates to achieve consistent estimation of population quantities. We explore our proposed approach in a simulation study to assess its effectiveness under differing degrees of selection bias and model misspecification, and compare it with results obtained using the RCT data only and with standard existing methods that use inverse probability weights. We apply it to a study of pulmonary artery catheterization in critically ill patients, where we believe differences between the trial sample and the larger population might impact overall estimates of treatment effects.
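The proposed doubly robust pseudo-weighting estimator is not reproduced here; as a simpler baseline that this line of work builds on, the Python sketch below transports an RCT treatment effect to a target population using inverse-odds-of-participation weights estimated from a stacked RCT plus population sample. All variable names and simulated data are invented for illustration.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)

    # Hypothetical stacked data: s = 1 for RCT participants, s = 0 for a
    # probability sample of the target population; x is an effect modifier.
    rct = pd.DataFrame({"x": rng.normal(1.0, 1.0, 500), "s": 1})
    pop = pd.DataFrame({"x": rng.normal(0.0, 1.0, 2000), "s": 0})
    rct["a"] = rng.integers(0, 2, len(rct))      # randomized treatment
    rct["y"] = 1.0 + 0.5 * rct["a"] + 0.5 * rct["a"] * rct["x"] + rng.normal(size=len(rct))

    # Model trial participation given x, then form inverse-odds weights that
    # up-weight trial participants who resemble the target population.
    stacked = pd.concat([rct[["x", "s"]], pop[["x", "s"]]], ignore_index=True)
    p = LogisticRegression().fit(stacked[["x"]], stacked["s"]).predict_proba(rct[["x"]])[:, 1]
    rct["w"] = (1 - p) / p

    # Weighted difference in means estimates the population average treatment effect.
    t = rct["a"] == 1
    ate_rct = rct.loc[t, "y"].mean() - rct.loc[~t, "y"].mean()
    ate_pop = (np.average(rct.loc[t, "y"], weights=rct.loc[t, "w"])
               - np.average(rct.loc[~t, "y"], weights=rct.loc[~t, "w"]))
    print(f"RCT-only effect: {ate_rct:.2f}, transported effect: {ate_pop:.2f}")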

MPSDS JPSM Seminar Series - A Novel Methodology for Improving Applications of Modern Predictive Modeling Tools to Linked Data Sets Subject to Mismatch Error (January 24, 2024 12:00pm) https://events.umich.edu/event/116026 Event Begins: Wednesday, January 24, 2024 12:00pm
Location: Off Campus Location
Organized By: Michigan Program in Survey and Data Science

MPSDS JPSM Seminar Series
January 24, 2024
12:00 - 1:00 EST

In person, Room 1070, Institute for Social Research and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

A Novel Methodology for Improving Applications of Modern Predictive Modeling Tools to Linked Data Sets Subject to Mismatch Error

In recent years, the rise of social media platforms such as Twitter/X has provided social scientists with a wealth of user-content data, and there has been renewed interest in the utility of administrative records for increasing survey efficiency. Combining social media data, administrative records, and survey data has the potential to produce a comprehensive source of information for social research. These data are often collected from multiple sources and combined by probabilistic record linkage. For the analysis of these linked data files, advanced machine learning techniques, such as random forests, boosting, and related ensemble methods, have become essential tools for survey methodologists and data scientists. There is, however, a potential pitfall in the widespread application of these techniques to linked data sets that needs more attention. Linkage errors such as mismatch and missed-match errors can distort the true relationships between variables and adversely alter the performance metrics routinely output by predictive modeling techniques, such as variable importance, confusion matrices, RMSE, etc. Thus, the actual predictive performance of these machine-learning techniques may not be realized. In this presentation, I will describe a new general methodology designed to adjust modern predictive modeling techniques for the presence of mismatch errors in linked data sets. The proposed approach, based on mixture modeling, is general enough to accommodate various predictive modeling techniques in a unified fashion. I evaluate the performance of the new methodology with simulations implemented in R. I will conclude with recommendations for future work in this area.
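The presenter's general methodology is not reproduced here, but the core mixture idea, that each linked outcome comes from the correct match with some probability and from an unrelated record otherwise, can be sketched for a simple linear regression with an EM-style update. The Python sketch below is a simplified illustration with an assumed known mismatch rate and invented data, not the proposed method itself.

    import numpy as np

    def fit_regression_with_mismatch(x, y, lam=0.9, n_iter=50):
        # EM-style fit of y = a + b*x when a fraction (1 - lam) of linked y values
        # are mismatches, modeled here as draws from the marginal distribution of y.
        b, a = np.polyfit(x, y, 1)                 # start from the naive OLS fit
        sigma2 = np.mean((y - a - b * x) ** 2)
        mu_y, var_y = y.mean(), y.var()
        for _ in range(n_iter):
            # E-step: probability that each record is correctly matched
            # (common normalizing constants cancel in the ratio).
            d_match = np.exp(-0.5 * (y - a - b * x) ** 2 / sigma2) / np.sqrt(sigma2)
            d_mis = np.exp(-0.5 * (y - mu_y) ** 2 / var_y) / np.sqrt(var_y)
            r = lam * d_match / (lam * d_match + (1 - lam) * d_mis)
            # M-step: weighted least squares using the match probabilities as weights.
            xbar, ybar = np.average(x, weights=r), np.average(y, weights=r)
            b = np.sum(r * (x - xbar) * (y - ybar)) / np.sum(r * (x - xbar) ** 2)
            a = ybar - b * xbar
            sigma2 = np.average((y - a - b * x) ** 2, weights=r)
        return a, b

    # Hypothetical linked file: 10% of y values are attached to the wrong record.
    rng = np.random.default_rng(5)
    n = 2000
    x = rng.normal(size=n)
    y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=n)
    mismatched = rng.random(n) < 0.10
    y[mismatched] = rng.permutation(y)[mismatched]   # shuffle some outcomes

    print("naive OLS slope:", np.polyfit(x, y, 1)[0])
    print("mixture-adjusted slope:", fit_regression_with_mismatch(x, y, lam=0.9)[1])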

Brady T. West is a Research Professor in the Survey Methodology Program, located within the Survey Research Center at the Institute for Social Research on the University of Michigan-Ann Arbor (U-M) campus. He earned his PhD from the Michigan Program in Survey and Data Science in 2011. Before that, he received an MA in Applied Statistics from the U-M Statistics Department in 2002, being recognized as an Outstanding First-year Applied Masters student, and a BS in Statistics with Highest Honors and Highest Distinction from the U-M Statistics Department in 2001. His current research interests include the implications of measurement error in auxiliary variables and survey paradata for survey estimation, selection bias in surveys, responsive/adaptive survey design, interviewer effects, and multilevel regression models for clustered and longitudinal data. He is the lead author of a book comparing different statistical software packages in terms of their mixed-effects modeling procedures (Linear Mixed Models: A Practical Guide using Statistical Software, Third Edition, Chapman Hall/CRC Press, 2022), and he is a co-author of a second book entitled Applied Survey Data Analysis (with Steven Heeringa and Pat Berglund), the second edition of which was published by CRC Press in June 2017. He was elected as a Fellow of the American Statistical Association in 2022.

Sixth Annual Likert Symposium - Meeting Respondents Where They Are (February 19, 2024 10:00am) https://events.umich.edu/event/119057 Event Begins: Monday, February 19, 2024 10:00am
Location: Institute For Social Research
Organized By: Michigan Program in Survey and Data Science

SIXTH ANNUAL LIKERT SYMPOSIUM
Meeting Respondents Where They Are

March 8, 2024
10:00 am - 2:00 pm EST

Survey measurement is complicated. We strive to ask questions that respondents understand and can answer as we intend them to. But often there is a gulf between the ideal respondents we design questions for and the real respondents who answer them. The Sixth Annual Likert Symposium features four presentations about the speakers’ experiences incorporating respondents’ perspectives and circumstances into the measurement process and its design. The speakers present the methods they pioneered to do this and the positive impact this has had on the quality of survey data. Please join us for this hybrid event on March 8th.

REGISTRATION IS REQUIRED
Registration is required for onsite attendance and attendance via Zoom. Details given upon registration.

ZOOM ATTENDANCE: Open registration deadline.

ONSITE ATTENDANCE: The registration deadline is March 1st. A luncheon will be provided at noon to those attending onsite in the Institute for Social Research, Room 1430.

SPEAKERS

Laura Wilson and Emma Dickinson, ONS
Respondent Centred Surveys: Putting Respondents at the Heart of Survey Design

Tammy Chang, University of Michigan
MyVoice: Elevating Youth Voice to Impact Policy and Practice

Chris Antoun, University of Maryland
Developing a Modular Survey App using Co-Design Principles

Emily Geisen, Qualtrics
Improving Web Surveys through Visual Design

MPSDS JPSM Seminar Series - Recent Developments and Open Problems in Post-Linkage Data Analysis (February 21, 2024 12:00pm) https://events.umich.edu/event/118499 Event Begins: Wednesday, February 21, 2024 12:00pm
Location: Institute For Social Research
Organized By: Michigan Program in Survey and Data Science

MPSDS JPSM Seminar Series
February 21, 2024
12:00 - 1:00

In person, room 1070 Institute for Social Research, and via Zoom.
The Zoom call will be locked 10 minutes after the start of the presentation.

Recent Developments and Open Problems in Post-Linkage Data Analysis

Record linkage and subsequent data analysis of the linked file with suitable propagation of uncertainty can be performed if the analyst also happens to be the linker or at least has comprehensive information about how the data were linked. However, it is rather common that the two processes are considered in a separate fashion, with the analyst being handed a linked file that is possibly subject to substantial linkage error (false matches and missed matches). Ignoring such error can render statistical analysis invalid. At the same time, accounting for linkage error with limited information about the linkage process poses a variety of challenges. This talk will outline a framework based on a mixture model for addressing mismatch error in the secondary analysis of linked files. Its use will be demonstrated in several case studies. Finally, we will present recent extensions, future directions and open problems.

Martin Slawski is an Associate Professor in the Department of Statistics at George Mason University. His research on data analysis after record linkage is currently supported by NSF. His research interests concern topics in computational statistics and applications in various domains. He serves as an associate editor of the Electronic Journal of Statistics. He received his Ph.D. in Computer Science from Saarland University, Germany, and was a postdoctoral associate in Statistics and Computer Science at Rutgers University prior to joining his current institution.

MPSDS JPSM Seminar Series - When “representative” surveys fail: Can a non-ignorable missingness mechanism explain bias in estimates of COVID-19 vaccine uptake? (March 13, 2024 12:00pm) https://events.umich.edu/event/118936 Event Begins: Wednesday, March 13, 2024 12:00pm
Location: Off Campus Location
Organized By: Summer Institute in Survey Research Techniques

MPSDS JPSM Seminar Series
March 13, 2024
12:00 - 1:00 EDT

In person, room 1070, Institute for Social Research, and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

When “representative” surveys fail: Can a non-ignorable missingness mechanism explain bias in estimates of COVID-19 vaccine uptake?

Recently, attention was drawn to the failure of two very large internet-based probability surveys to correctly estimate COVID-19 vaccine uptake in the U.S. in early 2021. Both the Delphi-Facebook COVID-19 Trends and Impact Survey (CTIS) and the Census Household Pulse Survey (HPS) overestimated vaccine uptake substantially (by 14 and 17 percentage points in May 2021) compared to retroactively available CDC benchmark data. These surveys had large numbers of respondents but very low response rates (<10%), and thus non-ignorable nonresponse could have substantially impacted estimates. Specifically, it is plausible that “anti-vaccine” individuals were less likely to complete a survey about COVID-19; we might also hypothesize that “anti-vaccine” individuals could be suspicious of the government and thus less likely to respond to an official government-sponsored survey. In this talk we use proxy pattern-mixture models (PPMMs) to retrospectively estimate the proportion of adults (18+) who received at least one dose of a COVID-19 vaccine, using data from the CTIS and HPS, under a non-ignorable nonresponse assumption. We compare these estimates to the true benchmark uptake numbers and show that the PPMM could have detected the direction of the bias and provided meaningful bias bounds. We also use the PPMM to estimate vaccine hesitancy, a measure without a benchmark truth, and compare to the direct survey estimates. We conclude with a discussion of how the PPMM could be used prospectively as part of an assessment of nonresponse and/or selection bias, factors that would facilitate such analyses in the future, and ongoing work to extend the PPMM to novel areas.
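For intuition, the Python sketch below implements a simplified continuous-outcome version of the proxy pattern-mixture idea, in which a sensitivity parameter phi indexes how strongly selection is assumed to depend on the outcome itself rather than on an observed proxy. The binary-outcome machinery actually used for vaccine uptake, and the exact PPMM estimators, are not shown; all data and variable names are invented.

    import numpy as np

    def ppmm_mean(y_resp, x_resp, x_nonresp, phi):
        # Pattern-mixture sensitivity estimate of the overall mean of y:
        # phi = 0 assumes selection on the proxy x only; phi = 1 assumes
        # selection on y only. Simplified sketch of the PPMM idea.
        rho = np.corrcoef(x_resp, y_resp)[0, 1]
        g = (phi + (1 - phi) * rho) / (phi * rho + (1 - phi))
        x_all_mean = np.mean(np.concatenate([x_resp, x_nonresp]))
        return y_resp.mean() + g * (y_resp.std() / x_resp.std()) * (x_all_mean - x_resp.mean())

    # Hypothetical population: proxy x observed for everyone, y only for
    # respondents, and people with low y respond less often (non-ignorable).
    rng = np.random.default_rng(6)
    n = 50_000
    x = rng.normal(size=n)
    y = 0.6 * x + 0.8 * rng.normal(size=n)
    respond = rng.random(n) < 1 / (1 + np.exp(-0.5 * y))

    for phi in (0.0, 0.5, 1.0):
        est = ppmm_mean(y[respond], x[respond], x[~respond], phi)
        print(f"phi={phi}: estimated mean = {est:.3f} (truth = {y.mean():.3f})")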

Rebecca Andridge is an Associate Professor of Biostatistics at The Ohio State University College of Public Health. She conducts methodologic work on imputation methods for missing data, primarily in large-scale probability samples, and on measures of selection bias for nonprobability samples. In particular, she works on methods for imputing data when missingness is driven by the missing values themselves (missing not at random). She teaches introductory graduate and undergraduate biostatistics, won the College's Outstanding Teaching Award in 2011, and is a Fellow of the American Statistical Association.
