I will describe a construction of algebras of observables associated with local subregions in quantum gravity in the small G_N limit. This algebra consists of operators dressed to a semiclassical observer degree of freedom which serves as an anchor defining the subregion. I will argue that properly implementing the gravitational constraints on this algebra results in a type II von Neumann algebra, which possesses a well-defined notion of entropy. Up to a state-independent constant, this entropy agrees with the UV-finite generalized entropy of the subregion, consisting of a Bekenstein-Hawking area term and a bulk entropy term. This gives an algebraic explanation for the finiteness of the generalized entropy, and provides a number of tools for investigating aspects of semiclassical gravitational entropy, including the generalized second law, the quantum focusing conjecture, and the quantum extremal surface prescription in holography.

While there is an abundance of experiments searching for axion dark matter (DM) via its electromagnetic coupling, there are fewer utilizing its derivative coupling to electrons and nucleons. This direct coupling generates dynamical effects through the fermion spin, making spin-polarized targets a natural choice. We find that spin-polarized or magnetized analogs of layered dielectric haloscopes can be powerful probes at both radio frequencies, with sensitivity to currently unexplored parameter space, and optical frequencies, with sensitivity comparable to current astrophysical bounds.

Families of conformal field theories are naturally endowed with a Riemannian geometry which is locally encoded by correlation functions of exactly marginal operators. We show that the curvature of such conformal manifolds can be computed using Euclidean and Lorentzian inversion formulae, which combine the operator content of the conformal field theory into an analytic function. Analogously, operators of fixed dimension define bundles over the conformal manifold whose curvatures can also be computed using inversion formulae. These results relate curvatures to integrated four-point correlation functions which are sensitive only to the behavior of the theory at separated points. We apply these inversion formulae to derive convergent sum rules expressing the curvature in terms of the spectrum of local operators and their three-point function coefficients. We further show that the curvature can smoothly diverge only if a conserved current appears in the spectrum, or if the theory develops a continuum. We verify our results explicitly in 2d examples.

Obtaining the low-energy EFT of a given large-N confining gauge theory is in general a very difficult problem. Instead, one can proceed by carving out the space of allowed EFTs using the constraints on scattering amplitudes that follow e.g. from unitarity and crossing symmetry. In this talk I will review how to do this in the context of pion physics, with large-N QCD as our target. I will discuss what bounds this imposes on the chiral Lagrangian, and what theories saturate the bounds. I will end by discussing how a mixed system of pions and photons allows us to input symmetries and anomalies into the bootstrap, paving the way for bootstrapping large-N QCD.

MPSDS SEMINAR SERIES

September 27, 2023

12:00 - 1:00 pm

IN PERSON AND VIA ZOOM

- In person, room 1070 Institute for Social Research.

- Via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

EVERYTHING YOU NEED TO KNOW WHEN UTILIZING PROBABILITY PANELS: BEST PRACTICES IN PLANNING, FIELDING, AND ANALYSIS

Speakers: David Dutwin & Ipek Bilgen

Probability-based panel survey research is more widespread than ever, as the continuing decline in survey response rates makes cross-sectional sample surveys less and less viable, both in terms of fit-for-purpose data quality and cost. The attraction of probability panels is their ability to attain, depending on their recruiting methods, response rates comparable to cross-sectional polls, but at lower cost and with more expeditious execution. Panels are a unique type of survey research platform: unlike cross-sections, panels recruit respondents specifically for future participation in surveys. In return, panelists are financially compensated, typically to join the panel in the first place, and then for each survey in which they participate.

These differences from cross-sectional surveys have a range of potential implications. How do the method and effort of recruiting impact who joins, and as a consequence, what is best practice? What do panels do to retain panelists over time, and which strategies are more successful than others? How much of a concern is panel conditioning, that is, the impact of persons repetitively taking surveys over time, and what are the implications for how frequently panelists should take surveys? How do panels, which exclusively request that panelists take surveys on the Internet, deal with people who do not have or are not comfortable using the Internet? What is the impact of panelist attrition, and what are the best strategies to replace retired panelists? How successful are panels at executing true longitudinal surveys? And, given the additional layers of complexity, how are panel surveys properly weighted and estimated?

This seminar is meant to serve two purposes. First, it will serve as a guide for consumers of probability-based panels to understand what, in short, they are working with: what questions to ask and what features to understand about probability panels in evaluating their use for data collections, and how best to use probability-based panel data. Second, it will serve as an exploration of best practices for survey practitioners, raising issues of data quality, cost, and execution.

Learning Objectives:

1. For consumers of panel data: Understand the key features of panels; know the important questions to ask panel vendors when assessing their fit for the purpose of your research.

2. For researchers and practitioners: To understand the many dimensions and decision points in the building, maintenance, deployment, and delivery of multi-client panels and panel data.

Bios:

David Dutwin, PhD, is Senior Vice President for Strategic Initiatives, Business Ventures and Initiatives and Chief Scientist of AmeriSpeak at NORC at the University of Chicago. David provides scientific and programmatic thought leadership in support of NORC’s ongoing innovations. In addition to identifying new business opportunities, he lends expertise on research design conceptualization, methodological innovation, and product development. He leads the panel operations and the statistics and methods divisions of AmeriSpeak. David assists in NORC’s strategic vision and strategy, project acquisition, and the management of advanced research methods. Prior research has focused on election methodology, surveying of low-incidence populations, the use of big data in survey research, and data quality in survey panels. He is a senior fellow of the Program for Opinion Research and Election Studies at the University of Pennsylvania. An avid member of the AAPOR community, David served as president from 2018-2019. He previously served on AAPOR’s Executive Council as conference chair and has served full terms on several committees. For over twenty years, he has taught courses in survey research and design, political polling, research methods, rhetorical theory, media effects, and other courses as an adjunct professor at the University of Pennsylvania, the University of Arizona, and West Chester University.

Ipek Bilgen, PhD, is a Principal Research Methodologist in the Methodology and Quantitative Social Sciences Department at NORC at the University of Chicago. Ipek is the Deputy Director of NORC’s Center for Panel Survey Sciences. Additionally, she oversees AmeriSpeak’s methodological research and innovations. As part of her role within AmeriSpeak, she also provides survey design expertise, questionnaire development and review support, and leads cognitive interview and usability testing efforts for client studies. Ipek received both her Ph.D. and M.S. from the Survey Research and Methodology (SRAM) Program at the University of Nebraska-Lincoln. She has published and co-authored articles in Journal of Official Statistics, Public Opinion Quarterly, Journal of Survey Statistics and Methodology, Survey Practice, Social Currents, Social Science Computer Review, Field Methods, Journal of Quantitative Methods, SAGE Research Methods, and Quality and Quantity on issues related to interviewing methodology, web surveys, online panels, internet sampling and recruitment approaches, nonresponse and measurement issues in surveys. In the past, she has served on AAPOR’s and MAPOR’s Executive Councils. Ipek is currently teaching at the Irving B. Harris Graduate School of Public Policy Studies at the University of Chicago and serving as Associate Editor of Public Opinion Quarterly (POQ).

**Special Seminar** Please note time and location

In the summer of 1925, Heisenberg wrote the paper Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen, which laid the foundations of quantum mechanics. For a long time, this paper was considered to be inscrutable. This talk will show how one can make sense both of Heisenberg's formal manipulations and of his philosophical rhetoric, in particular by studying the letters he wrote in the months leading up to his breakthrough. A particular emphasis will be placed on how different the theory that Heisenberg originally aimed to construct was from modern quantum mechanics.

Little is known about non-perturbative quantum gravity in de Sitter spacetimes. As a useful low-dimensional model, we consider de Sitter Jackiw-Teitelboim (dS JT) gravity and solve it non-perturbatively in the genus expansion. This amounts to the first non-perturbatively solvable model of de Sitter cosmology. We find that dS JT gravity has an effective string coupling which is pure imaginary, rendering the S-matrix genus expansion Borel resummable. We further establish that dS JT gravity is dual to a formal matrix integral with a negative number of degrees of freedom. More broadly, our analysis unveils new ingredients in the de Sitter holographic dictionary, which may be applicable in more general contexts.

MPSDS JPSM Seminar Series

October 4, 2023

12:00 - 1:00 pm EDT

In person, room 1070 Institute for Social Research, and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

Using Partially Synthetic Frames to Evaluate Alternative Sample Designs for Estimating a Rare Business Characteristic

Katherine Jenny Thompson, U.S. Census Bureau

Hang Joon Kim (University of Cincinnati)

Stephen Kaputa (U.S. Census Bureau)

In the "traditional" finite population sampling framework, the sample designer has a complete list (frame) of eligible units with classification information and auxiliary variables related to surveyed characteristics. In our setting, the frame auxiliary variables are weakly related to the survey characteristic, which is not present for most units. Hence, using frame auxiliary variables to assess survey design efficacy can be misleading. Instead, we propose generating multiple partially synthetic frames, modeling characteristic values for each unit on the frame, then drawing repeated samples from each synthetic frame using the candidate sample design(s) to assess finite sample performance for each design within and between the synthetic frames. Focusing on establishment survey data, we illustrate our proposed approach on a subset of industries surveyed annually by the Business Enterprise Research and Development Survey.
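As a rough illustration of the idea (all frame sizes, incidence rates, and models below are invented for this sketch, not taken from the talk), one can generate several partially synthetic frames in which the rare characteristic is modeled for every unit, then draw repeated samples under a candidate design and summarize its finite-sample performance within and between frames:

```python
import random
import statistics

random.seed(0)

N = 2000     # frame size (hypothetical)
M = 5        # number of partially synthetic frames
REPS = 200   # repeated samples drawn from each frame
n = 200      # sample size for the candidate design

def make_synthetic_frame():
    """Model the rare characteristic y for every unit on the frame.
    y is positive for only a few percent of units and is only weakly
    related to the frame auxiliary x (a hypothetical size measure)."""
    frame = []
    for _ in range(N):
        x = random.lognormvariate(0.0, 1.0)
        p_active = 0.03 + (0.02 if x > 2.0 else 0.0)  # weak relation to x
        y = random.lognormvariate(3.0, 1.0) if random.random() < p_active else 0.0
        frame.append((x, y))
    return frame

def srs_total(frame):
    """Candidate design: simple random sample + expansion estimator."""
    drawn = random.sample(frame, n)
    return N / n * sum(y for _, y in drawn)

rel_bias = []
for _ in range(M):
    frame = make_synthetic_frame()
    true_total = sum(y for _, y in frame)
    estimates = [srs_total(frame) for _ in range(REPS)]
    rel_bias.append((statistics.mean(estimates) - true_total) / true_total)

# Performance is summarized both within each synthetic frame (across the
# REPS repeated samples) and between frames (across the M realizations).
print("relative bias by frame:", [round(b, 3) for b in rel_bias])
```

In practice one would compare several candidate designs (e.g. stratified or probability-proportional-to-size alternatives) on the same synthetic frames, not just the simple random sample shown here.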

Katherine Jenny Thompson is the Senior Mathematical Statistician in the Economic Directorate of the Census Bureau. Jenny holds a master of science degree in Applied Statistics from the George Washington University and a bachelor of arts degree in Mathematics from Oberlin College. She is an American Statistical Association (ASA) Fellow, an elected member of the International Statistics Institute, and the Vice President Elect of the ASA. She is the Survey Statistics Editor-in-Chief of the Journal of Survey Statistics and Methodology and an Associate Editor for the Journal of Official Statistics. She has published papers on a variety of topics related to complex surveys in several journals, including the Journal of Official Statistics, Journal of the Royal Statistical Society (Series A), Survey Methodology, Annals of Applied Statistics, International Statistical Review, Journal of Survey Statistics and Methodology, and Public Opinion Quarterly.

MPSDS JPSM Seminar Series

October 11, 2023

12:00 - 1:00 pm EDT

In person, room 1070 Institute for Social Research, and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

New data, new questions, old problems? Online behavioral data in social science research

Records of individuals’ online activities obtained from devices like personal computers and smartphones have received a lot of interest in the social sciences in recent years. Many have praised such data for allowing fine-grained observations of individuals’ online activities which would be impossible with more traditional data sources such as surveys. Recent work, however, warns that many data quality aspects of these novel data are so far poorly understood. As the number of observations can quickly reach several millions, researchers seem tempted to treat online behavioral data as a gold standard, ignoring what their data may be missing and which other systematic biases may be present. In this talk, I present both applied and methodological work using online behavioral data in a typical social science setting. First, using within-between random effects models, I show how online behavioral data combined with a panel survey allow us to understand the effects of news media consumption from populist alternative news platforms on individuals’ political attitudes. Second, I show that online behavioral data, although containing detailed records of individuals’ social media use, are far from complete. Using hidden Markov models applied to combined online behavioral data, survey records, and donated social media data, I show that the online behavioral data seem to fail entirely in capturing social media use for about one third of the sample. I emphasize the need for researchers to navigate the complexities of online behavioral data, highlighting potentials and limitations.

Ruben Bach is a Research Fellow at the Mannheim Centre for European Social Research, University of Mannheim, Germany. His research is concerned with data quality in social science data products and applied computational social science (media consumption, political attitudes, socially responsible AI). In the fall of 2023, he is a visitor with the Department of Statistics and Actuarial Science, University of Waterloo, Ontario.

MPSDS JPSM Seminar Series

October 11, 2023

12:00 - 1:00 pm EDT

In person, room 1070 Institute for Social Research and via Zoom.

The Zoom call will be locked 10 minutes after the start of the presentation.

Implementing and Adjusting a Non-probability Web Survey: Experiences of EVENS (Survey on the Impact of COVID-19 on Ethnic Minorities in the United Kingdom)

Natalie Shlomo

Professor of Social Statistics, University of Manchester

This is joint work with Andrea Aparicio-Castro, Daniel Ellingworth, Angelo Moretti, Harry Taylor, Nissa Finney and James Nazroo

We discuss the challenges of implementing and adjusting a large-scale non-probability web survey. For the application, we focus on the 2021 Evidence for Equality National Survey (EVENS) which was led by the Centre on Dynamics of Ethnicity (CoDE) at the University of Manchester in the United Kingdom, in partnership with Ipsos-MORI. The aim was to understand the impact of the COVID-19 pandemic on ethnic and religious minority groups in the UK. Standard probability-based surveys, even with ethnic minority group boosts, do not have the sample sizes required to obtain reliable estimates for small group statistics. We therefore designed a non-probability web survey of ethnic minority groups to overcome these limitations. We formed partnerships with community organizations and used innovative recruitment strategies, including digital and social media. Daily monitoring of the data collection against desired sample sizes and R-indicator calculations allowed the team to focus attention on the recruitment of specific groups in a responsive data collection mode. We also supplemented the sample with existing members in both established non-probability and probability-based panels in the UK. We describe the measures applied to improve the quality of the collected data and the statistical adjustments to correct for selection and coverage biases based on estimating the probability of participation in the non-probability sample using combined probability reference samples followed by calibration to auxiliary information from the UK Census 2021. We demonstrate how a pseudo-population bootstrap approach can be designed to obtain bootstrap weights to allow for statistical analyses and inference.
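A much-simplified sketch of the pseudo-population bootstrap idea is shown below. All data are invented, the weights are taken as given rather than estimated via reference samples and calibration, and the bootstrap redraws units with equal probability; a full implementation would redraw mimicking the estimated participation mechanism.

```python
import random
import statistics

random.seed(1)

# A hypothetical weighted sample: y is the survey outcome and w is the
# unit's adjusted weight (inverse estimated participation probability
# after calibration). All values are invented for this sketch.
n = 300
sample = []
for _ in range(n):
    w = random.choice([2.0, 3.0, 5.0])
    y = random.gauss(10.0 + w, 2.0)   # outcome correlated with the weight
    sample.append((y, w))

estimate = sum(y * w for y, w in sample) / sum(w for _, w in sample)

# Step 1: build the pseudo-population by replicating each sampled unit
# round(w) times, so its composition reflects the weighted sample.
pseudo = [y for y, w in sample for _ in range(round(w))]

# Step 2: redraw bootstrap samples of size n from the pseudo-population
# and recompute the estimator; their spread estimates its variance.
B = 500
boot = [statistics.mean(random.choices(pseudo, k=n)) for _ in range(B)]
print(f"point estimate {estimate:.2f}, bootstrap SE {statistics.stdev(boot):.2f}")
```

The same resampling machinery yields per-replicate bootstrap weights, which is what allows downstream analysts to run weighted analyses with valid inference.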

Natalie Shlomo is Professor of Social Statistics at the University of Manchester and publishes widely in the area of survey statistics, including small area estimation, adaptive survey designs, non-probability sampling, confidentiality and privacy, data linkage and integration. She has over 70 publications and refereed book chapters and a track record of generating external funding for her research. She is an elected member of the International Statistical Institute (ISI), a fellow of the Royal Statistical Society, a fellow of the Academy of Social Sciences and President 2023-2025 of the International Association of Survey Statisticians. She also serves on national and international Methodology Advisory Boards at National Statistical Institutes.

Homepage: https://www.research.manchester.ac.uk/portal/natalie.shlomo.html

We consider asymptotic observables in quantum field theories in which the S-matrix makes sense. We argue that in addition to scattering amplitudes, a whole compendium of inclusive observables exists where the time ordering is relaxed. These include expectation values of electromagnetic or gravitational radiation fields as well as out-of-time-order amplitudes. We explain how to calculate them in different ways: by relating them to amplitudes and products of amplitudes and by using a generalization of the LSZ reduction formula. Finally, we discuss how to relate them to one another through new versions of crossing symmetry. As an application, we discuss one-loop contributions to gravitational radiation in the post-Minkowski expansion, emphasizing the role of classical cut contributions and highlighting the infrared physics of in-in observables.

In the last few years, a remarkable link has been established between the soft theorems and asymptotic symmetries of quantum field theories: soft theorems are Ward identities of the asymptotic symmetry generators. In quantum electrodynamics, Weinberg's soft photon theorem is nothing but the Ward identity of a gauge transformation whose parameter is non-trivial at infinity. Likewise, Low's tree-level subleading soft photon theorem is the Ward identity of a gauge transformation whose parameter diverges linearly at infinity. More recently, it has been shown that Low's theorem receives loop corrections that are logarithmic in soft photon energy. Then, it is natural to ask whether such corrections are associated with some asymptotic symmetry of the S-matrix. There have been proposals for conserved charges whose Ward identities yield the loop-corrected soft theorems, but a clear symmetry interpretation remains elusive. We explore this question in the context of scalar QED, in hopes of shedding light on the connection between asymptotic symmetries and loop-corrected soft theorems.

MPSDS JPSM Seminar Series

November 1, 2023

12:00 - 1:00 pm EDT

In person, room 1070 Institute for Social Research, and via Zoom.

The Zoom call will be locked 10 minutes after the start of the presentation.

Flexible Formal Privacy for Public Data Curation

Researchers rely extensively on public datasets disseminated by official statistics agencies, universities, non-governmental organizations, and other data curators. With the increasing availability of data and computing power comes increased threats to privacy, as published statistics can more easily be used to reconstruct sensitive personal data. Formal privacy (FP) methods, like differential privacy (DP), provably limit such information leakage by injecting carefully chosen randomized noise into published statistics. However, the way DP accounts for privacy degradation requires this noise be injected into every statistic dependent on the confidential dataset. This fails to reflect data curator needs, social, legal or ethical requirements, and complex dependency structures between public and confidential datasets. In this talk, I'll discuss statistical methodology that addresses these problems. We propose a FP framework with novel characterizations of disclosure risk when assessing collections of statistics wherein only some statistics are published with DP guarantees. We demonstrate FP properties maintained by our proposed framework, propose data release mechanisms which satisfy our proposed definition, and prove the optimality properties of downstream statistical estimators based on these mechanism outputs. For this talk, I'll discuss a few end-to-end data analysis examples in public health and surveys, showing how theoretical trade-offs between privacy, utility, and computation time manifest in practice when assessing disclosure risks and statistical utility. I'll conclude with a discussion on the implications of this work for survey researchers, focusing on opportunities to incorporate privacy by design in survey planning, experimental design, and other data collection operations.
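For readers unfamiliar with the mechanics, here is a minimal sketch of the classic Laplace mechanism and the budget accounting that motivates the talk (this is textbook DP, not the framework proposed in the talk; all numbers are invented):

```python
import random

random.seed(2)

def dp_count(true_count, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    suffices. Laplace noise is drawn here as the difference of two
    independent exponentials."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Under basic sequential composition, every statistic computed from the
# confidential data spends privacy budget: three releases at epsilon = 1
# consume a total budget of 3. This per-statistic accounting is exactly
# the rigidity that a more flexible FP framework seeks to relax.
releases = [dp_count(1234, epsilon=1.0) for _ in range(3)]
print([round(r, 1) for r in releases], "total epsilon spent: 3.0")
```

With scale 1/epsilon = 1, the released counts stay within a few units of the true value, yet each release still draws down the overall budget.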

Jeremy Seeman is a Michigan Data Science Fellow at the Michigan Institute for Data Science (MIDAS) and MPSDS. He recently graduated with his PhD in statistics from Penn State University. Jeremy's research focuses on statistical data privacy, quantitative methods in the social sciences, and social values in data governance. He is the recipient of the U.S. Census Bureau Dissertation Fellowship and the ASA Pride Scholarship. Prior to joining Penn State, Jeremy completed his BS in Physics and MS in Statistics at the University of Chicago, where he was a research fellow at the Center for Data Science and Public Policy.

A generic pathology one encounters when computing the thermal entropy of a black hole is that it becomes negatively divergent as the temperature goes to zero; only those black holes whose extremal limit preserves some supersymmetry yield a sensible low-temperature entropy. The physics relevant to these phenomena is all captured by Jackiw-Teitelboim theories of gravity, which have been rather explicitly shown to be dual to various matrix ensembles. The issues and features mentioned above can all be precisely understood from this perspective: traditional gravitational calculations are computing annealed quantities, which give inherently wrong approximations near extremality.

We use the matrix integral formulation to show how quenched quantities do in fact behave sensibly and yield non-negative entropies at all temperatures. By using a suitable replica trick, this is done for a completely general matrix ensemble, thus settling the question for any black hole whose near-extremal spectrum is captured by such ensembles. Crucially, this result only requires working perturbatively to leading order in the size of the matrices, which hints at the possibility of an analogous semiclassical gravitational computation where one just needs to account for wormhole contributions appropriately (and not for doubly non-perturbative effects in 1/G).

MPSDS JPSM Seminar Series

November 8, 2023

12:00 - 1:00 pm EST

The seminar will be locked 10 minutes after the start of the presentation.

2020 California Neighborhoods Count: A validation of U.S. Census Population Counts and Housing Characteristic Estimates within California

In response to long-standing concerns about the accuracy of census data and about a possible undercount, we conducted the California Neighborhoods Count (CNC) study — the first-ever independent, survey-based enumeration to directly evaluate the accuracy of the U.S. Census Bureau's population totals for a subset of California census blocks. This 2020 research was intended to produce parallel estimates of the 2020 Census population and housing unit totals at the census block level, employing the same survey items as the census and using enhanced data collection strategies and exploration of imputation methods. The CNC block-level population estimates were sensitive to the imputation method used to account for non-responding households, likely in part due to limited availability of administrative data to assist the imputations. CNC identified more housing units than the Census (23,929 versus 22,668), which may be due to CNC’s in-person address canvassing. Despite advancements in geospatial imaging software, as well as many other approaches used by the U.S. Census Bureau to assess coverage and validate addresses, in-field address verification might yield a more complete accounting of inhabited housing units than address canvassing conducted partly with in-office approaches.

Lane Burgette is a Senior Statistician at the RAND Corporation. Dr. Burgette’s applied research is primarily focused on health policy, especially Medicare’s physician payment policies. Other recent research projects include an evaluation of the 2020 Census in California, gun policy research, and recidivism risk estimation for employer background checks. Dr. Burgette’s methodological research focuses on causal inference, methods for missing data, and Bayesian modeling. Prior to RAND, he earned his Ph.D. in Statistics at the University of Wisconsin, and was a post-doctoral researcher in the Department of Statistical Science at Duke University.

It is now widely believed that black holes should be described by ordinary (though complicated) quantum systems. This can be made precise for supersymmetric (BPS) black holes in Anti-de Sitter space, where the AdS/CFT correspondence may be used to reliably count black hole microstates. We will review this proposal for 4d superconformal field theories dual to AdS5 black holes and explain the challenges in characterizing these microstates directly in terms of gravitational variables. Surprisingly, a gravitational path integral calculation predicts certain universal features of the spectrum, including a large number of exactly degenerate states and a "mass gap" between the BPS and non-BPS states.

If BPS black holes are described by ordinary quantum systems, we should be able to act with operators which probe the microstates. We find that one such probe is a certain generalization of the supersymmetric Wilson line in 4d N=4 SYM, holographically dual to a D-brane which wraps the horizon, and we further demonstrate a matching of these descriptions when the spacetime description is valid. In addition to detecting the familiar deconfinement transition in conformal gauge theories, this provides an example of a system in which a black hole interacts with other degrees of freedom but has an exact microscopic description.

The production of electromagnetically interacting particles in the early Universe is a generic prediction of many extensions of both the standard models of particle physics and cosmology. In this talk, I will give an overview of recent progress in understanding how injected particles deposit their energy into regular matter, and highlight some novel signatures of new physics that are well within current and near-future experimental reach.

With the direct detection of gravitational waves (GWs) from LIGO in 2016, and recent evidence from the NANOGrav collaboration for a stochastic GW background, GW astronomy is becoming an important tool for understanding the universe. Recently it has been shown that axion dark matter (DM) experiments can extend the search for GWs to much higher frequencies, kHz < f < GHz. In this talk we'll discuss how light DM detectors utilizing single phonon excitations in crystal targets, previously shown to be sensitive to a wide variety of DM candidates, are also sensitive to GWs in the frequency range THz < f < 100 THz, corresponding to the range of optical phonon energies, meV < ω < 100 meV. We'll discuss the mechanism by which high-frequency GWs can generate single phonons, and consider the detector sensitivity of different target materials. Lastly, we'll discuss how these high-frequency GWs may be produced in processes such as black hole inspirals and superradiance.

MPSDS Seminar Series

January 17, 2024

12:00 - 1:00 pm

In person, room 1070 Institute for Social Research, and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

Using Synergies Between Survey Statistics and Causal Inference to Improve Transportability of Clinical Trials

Medical researchers have understood for many years that treatment effect estimates obtained from a randomized clinical trial (RCT) -- termed "efficacy" -- can differ from those obtained in a general population -- termed "effectiveness". Only in the past decade has extensive work begun in the statistical literature to bridge this gap using formal quantitative methods. As noted by Rod Little in a letter to the editor in the New Yorker, "...randomization in randomized clinical trials concerns the allocation of the treatment, not the selection of individuals for the study. The latter can have an important impact on the average size of a treatment effect," with RCT samples often designed, sometimes explicitly, to be more likely to include individuals for whom the treatment may be more effective.

This issue has been variously termed "generalizability" or "transportability." Why do we care about transportability? In RCTs we are in the happy situation where treatment assignment is randomized, so confounding due to either observed or unobserved (pre-treatment) covariates is not an issue. But while randomization of treatment eliminates the effect of unobserved confounders, at least net of non-compliance, it does not eliminate the effect of unobserved effect modifiers, which can impact the causal effect of treatment in a population that differs from the RCT sample population. The impact of these interactions on the marginal effect of treatment thus can differ between the RCT population and the final population of interest.

Concurrent with research into transportability has been research into making population inference from non-probability samples. There is a close overlap between these two approaches, particularly with respect to the non-probability inference methods that rely on information from a relevant probability sample of the target population to reduce selection bias effects. When there are relevant censuses or probability samples of the target patient population of interest, these methods can be adapted to transport information from the RCT to the patient population. Because the RCT setting focuses on causal inference, this adaptation involves extensions to estimate counterfactuals. Thus approaches that treat population inference as a missing data problem are a natural fit to connect these two strands of methodological innovation.

In particular, we propose to extend a "pseudo-weighting" methodology from other non-probability settings to a "doubly robust" estimator that treats sampling probabilities or weights as regression covariates to achieve consistent estimation of population quantities. We explore our proposed approach in a simulation study to assess its effectiveness under differing degrees of selection bias and model misspecification, and compare it with results obtained using the RCT data only and with existing standard methods that use inverse probability weights. We apply it to a study of pulmonary artery catheterization in critically ill patients, where we believe differences between the trial sample and the larger population might impact overall estimates of treatment effects.
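A toy simulation makes the core mechanism concrete. Everything below is invented for illustration: individual effects are deterministic in a covariate x, and the trial participation probability is known by construction, whereas the methods in the talk estimate it (and add an outcome model for double robustness). The point is only to show why an unweighted RCT estimate misleads and how inverse participation weights transport it back to the population:

```python
import random
import statistics

random.seed(3)

# Hypothetical population with effect modification: the individual
# treatment effect grows with a covariate x, and trial participation
# is also more likely at high x.
POP = 100_000
pop_x = [random.random() for _ in range(POP)]

def effect(x):
    return 1.0 + 2.0 * x        # individual treatment effect

pate = statistics.mean(effect(x) for x in pop_x)   # population effect ~ 2.0

# Trial participation probability (known here by construction; in
# practice it is estimated against a reference probability sample).
def p_trial(x):
    return 0.1 + 0.8 * x

trial = [x for x in pop_x if random.random() < p_trial(x)]

naive = statistics.mean(effect(x) for x in trial)   # biased toward high x
weights = [1.0 / p_trial(x) for x in trial]         # pseudo-weights
transported = (sum(w * effect(x) for w, x in zip(weights, trial))
               / sum(weights))

print(f"PATE {pate:.2f}, naive RCT {naive:.2f}, transported {transported:.2f}")
```

The naive trial average overstates the population effect because high-x (high-benefit) individuals are overrepresented; the weighted estimate recovers it. A doubly robust version would remain consistent even if either the participation model or the outcome model were misspecified, but not both.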

MPSDS JPSM Seminar Series

January 24, 2024

12:00 - 1:00 EST

In person, Room 1070, Institute for Social Research and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

A Novel Methodology for Improving Applications of Modern Predictive Modeling Tools to Linked Data Sets Subject to Mismatch Error

In recent years, the rise of social media platforms such as Twitter/X has provided social scientists with a wealth of user-content data, and there has been renewed interest in the utility of administrative records for increasing survey efficiency. Combining social media data, administrative records, and survey data has the potential to produce a comprehensive source of information for social research. These data are often collected from multiple sources and combined by probabilistic record linkage. For the analysis of these linked data files, advanced machine learning techniques, such as random forests, boosting, and related ensemble methods, have become essential tools for survey methodologists and data scientists. There is, however, a potential pitfall in the widespread application of these techniques to linked data sets that needs more attention. Linkage errors such as mismatch and missed-match errors can distort the true relationships between variables and adversely affect the performance metrics routinely output by predictive modeling techniques, such as variable importance, confusion matrices, and RMSE. Thus, the actual predictive performance of these machine-learning techniques may not be realized. In this presentation, I will describe a new general methodology designed to adjust modern predictive modeling techniques for the presence of mismatch errors in linked data sets. The proposed approach, based on mixture modeling, is general enough to accommodate various predictive modeling techniques in a unified fashion. I evaluate the performance of the new methodology with simulations implemented in R. I will conclude with recommendations for future work in this area.
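The distortion that mismatch error induces can be demonstrated with a minimal simulation (ours, not from the talk) using a regression slope as a simple stand-in for the performance metrics mentioned above: permuting even a modest fraction of links attenuates the estimated relationship.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linked file: predictor x from one source, outcome y from another.
n = 20_000
x = rng.normal(size=n)
y = 3.0 * x + rng.normal(size=n)           # true slope is 3.0

def fitted_slope(a, b):
    return np.mean((a - a.mean()) * (b - b.mean())) / a.var()

slope_clean = fitted_slope(x, y)           # correct links recover ~3.0

# Probabilistic linkage with a 20% mismatch rate: a random subset of records
# is paired with the wrong partner (modeled here by permuting their y values).
mismatch = rng.random(n) < 0.20
y_linked = y.copy()
y_linked[mismatch] = rng.permutation(y[mismatch])

# Mismatches carry no x-y signal, so the slope is attenuated roughly toward
# (1 - mismatch rate) * true slope; error metrics degrade accordingly.
slope_linked = fitted_slope(x, y_linked)

print(round(slope_clean, 2), round(slope_linked, 2))
```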

Brady T. West is a Research Professor in the Survey Methodology Program, located within the Survey Research Center at the Institute for Social Research on the University of Michigan-Ann Arbor (U-M) campus. He earned his PhD from the Michigan Program in Survey and Data Science in 2011. Before that, he received an MA in Applied Statistics from the U-M Statistics Department in 2002, being recognized as an Outstanding First-year Applied Masters student, and a BS in Statistics with Highest Honors and Highest Distinction from the U-M Statistics Department in 2001. His current research interests include the implications of measurement error in auxiliary variables and survey paradata for survey estimation, selection bias in surveys, responsive/adaptive survey design, interviewer effects, and multilevel regression models for clustered and longitudinal data. He is the lead author of a book comparing different statistical software packages in terms of their mixed-effects modeling procedures (Linear Mixed Models: A Practical Guide using Statistical Software, Third Edition, Chapman Hall/CRC Press, 2022), and he is a co-author of a second book entitled Applied Survey Data Analysis (with Steven Heeringa and Pat Berglund), the second edition of which was published by CRC Press in June 2017. He was elected as a Fellow of the American Statistical Association in 2022.

When do two different-looking quantum field theories describe the same physics? This is essentially asking when the quantum field theories are isomorphic. In the case of topological quantum field theories, there is sometimes a way to determine this via topological invariants. For a superconformal field theory, what would be the minimal set of “invariants” needed to determine when two theories are isomorphic? I will discuss some approaches to this question in the context of superconformal field theories in four and six dimensions. Utilizing 4d class S theories that also admit 6d (1,0) SCFT origins, I will explain how a certain class of 4d N=2 SCFTs, which a priori look like distinct theories, can be shown to describe the same physics. I will further explain how the 6d (1,0) origin sheds light on the 3d duality.

Locality and unitarity force scattering amplitudes to factorize when taking the momentum of one of the external particles to zero. This factorization has proven very useful for recursion relations for amplitudes at high multiplicities. The recursion can break down, however, when the amplitude contains a pole at infinity. In this talk we will make a modest step towards a prescription for “unitarity at infinity”. We do this by studying on-shell diagrams, which are gauge-invariant on-shell objects that appear as cuts of loop integrands in the context of generalized unitarity and serve as building blocks for amplitudes in recursion relations. In the dual formulation, they are associated with cells of the positive Grassmannian. We will describe on-shell diagrams in N<4 supersymmetric Yang-Mills (SYM) theory and show that there exists a diagrammatic operation that corresponds to sending one of the momenta to infinity.

Given that axions are both a promising candidate to solve problems in the Standard Model and ubiquitous in quantum gravity, it is crucial to accurately determine their signatures. In this talk, we discuss how the axion's compact field space leads to interesting interactions with topological defects, specifically monopoles and strings. In the case of monopoles, we show that, due to the Witten effect, axions interacting with abelian gauge fields acquire a potential from loops of magnetic monopoles, and discuss a simple phenomenological example where this potential is the dominant contribution to the axion mass. In the case of strings, we discuss superconductivity from massless chiral excitations along the string. We show that bulk fermions do not need to become massless in the core of the string for there to be trapped massless excitations, and explore the counterintuitive phase structure of these zero modes, which become less localized to the string as the mass is increased.
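The Witten effect invoked here can be stated compactly (standard result; conventions, including charge normalizations, are ours):

```latex
% In a theta vacuum, a monopole of unit magnetic charge carries electric charge
q_n = e\left(n - \frac{\theta}{2\pi}\right), \qquad n \in \mathbb{Z}.
% With a dynamical axion, \theta \to a/f_a, the monopole's Coulomb self-energy
% depends on the axion field, so loops of magnetic monopoles can generate a
% potential V(a) for the axion.
```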

MPSDS JPSM Seminar Series

February 7, 2024

12:00 - 1:00

In person, room 1070 Institute for Social Research, and via Zoom.

The Zoom call will be locked 10 minutes after the start of the presentation.

Recent Developments and Open Problems in Post-Linkage Data Analysis

Record linkage and subsequent data analysis of the linked file with suitable propagation of uncertainty can be performed if the analyst also happens to be the linker or at least has comprehensive information about how the data were linked. However, it is rather common that the two processes are considered in a separate fashion, with the analyst being handed a linked file that is possibly subject to substantial linkage error (false matches and missed matches). Ignoring such error can render statistical analysis invalid. At the same time, accounting for linkage error with limited information about the linkage process poses a variety of challenges. This talk will outline a framework based on a mixture model for addressing mismatch error in the secondary analysis of linked files. Its use will be demonstrated in several case studies. Finally, we will present recent extensions, future directions and open problems.
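A minimal sketch of the mixture-model idea (our simplified illustration, not the speaker's actual framework): treat each linked pair as a true match with unknown probability, and a false match otherwise, and estimate the regression by EM with the mismatch component following the outcome's marginal distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linked pairs: with probability 0.7 the pair is a true match
# (y = 2x + noise); otherwise it is a false match (y unrelated to x).
n = 10_000
x = rng.normal(size=n)
true_match = rng.random(n) < 0.7
y = np.where(true_match, 2.0 * x + rng.normal(size=n),
             rng.normal(scale=2.0, size=n))

# Two-component mixture: "match" component y ~ N(b*x, 1);
# "mismatch" component approximated by N(0, s2), the marginal of y.
b, pi, s2 = 0.5, 0.5, y.var()              # crude starting values; s2 held fixed
for _ in range(50):
    # E-step: posterior probability that each pair is a true match
    # (the shared 1/sqrt(2*pi) normal constant cancels in the ratio).
    d_match = pi * np.exp(-0.5 * (y - b * x) ** 2)
    d_mis = (1 - pi) * np.exp(-0.5 * y ** 2 / s2) / np.sqrt(s2)
    g = d_match / (d_match + d_mis)
    # M-step: weighted least-squares slope and mixing proportion.
    b = np.sum(g * x * y) / np.sum(g * x ** 2)
    pi = g.mean()

print(round(b, 2), round(pi, 2))           # slope recovered near 2.0
```

A naive regression on all pairs would attenuate the slope toward 0.7 × 2.0; the mixture adjustment largely undoes that.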

Martin Slawski is an Associate Professor in the Department of Statistics at George Mason University. His research on data analysis after record linkage is currently supported by NSF. His research interests concern topics in computational statistics and applications in various domains. He serves as an associate editor of the Electronic Journal of Statistics. He received his Ph.D. in Computer Science from Saarland University, Germany, and was a postdoctoral associate in Statistics and Computer Science at Rutgers University prior to joining his current institution.

I will describe ongoing work on the thermodynamics of quantum fields in far-from-equilibrium states. The key tool is modular flow, a nonstandard time-evolution map defined relative to a choice of state, which makes that state "look thermal." Famously, the modular flow for the Minkowski vacuum in the Rindler wedge is a geometric boost, which is one way of stating the Unruh effect. In this talk, I will outline a characterization of the settings in which modular flow is geometrically local, i.e., a complete list of "generalized Unruh effects" in arbitrary spacetimes and for arbitrary quantum field theories. The arguments involve analytic manipulations of position-space correlators, which may be of independent interest to those of you working on amplitudes.
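The canonical case referenced above, the Rindler wedge, can be written explicitly (the standard Bisognano-Wichmann result; conventions are ours):

```latex
% Modular Hamiltonian of the right Rindler wedge x > |t| for the Minkowski
% vacuum: the boost generator, weighted by distance from the entangling surface,
K = 2\pi \int_{x>0} d^{d-1}x \; x \, T_{00}(t=0, x, \mathbf{x}_\perp),
% so modular flow acts geometrically as a boost of rapidity 2\pi s,
\sigma_s\!\left(\phi(X)\right) = \phi\!\left(\Lambda_{2\pi s} X\right),
% and a uniformly accelerated observer detects the Unruh temperature
T = \frac{a}{2\pi}.
```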

MPSDS JPSM Seminar Series

March 13, 2024

12:00 - 1:00 EDT

In person, room 1070, Institute for Social Research, and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

When “representative” surveys fail: Can a non-ignorable missingness mechanism explain bias in estimates of COVID-19 vaccine uptake?

Recently, attention was drawn to the failure of two very large internet-based probability surveys to correctly estimate COVID-19 vaccine uptake in the U.S. in early 2021. Both the Delphi-Facebook COVID-19 Trends and Impact Survey (CTIS) and the Census Household Pulse Survey (HPS) overestimated vaccine uptake substantially (by 14 and 17 points in May 2021) compared to retroactively available CDC benchmark data. These surveys had large numbers of respondents but very low response rates (<10%), and thus non-ignorable nonresponse could have substantially impacted estimates. Specifically, it is plausible that “anti-vaccine” individuals were less likely to complete a survey about COVID-19; we might also hypothesize that “anti-vaccine” individuals could be suspicious of the government and thus less likely to respond to an official government-sponsored survey. In this talk we use proxy pattern-mixture models (PPMMs) to retrospectively estimate the proportion of adults (18+) who received at least one dose of a COVID-19 vaccine, using data from the CTIS and HPS, under a non-ignorable nonresponse assumption. We compare these estimates to the true benchmark uptake numbers and show that the PPMM could have detected the direction of the bias and provided meaningful bias bounds. We also use the PPMM to estimate vaccine hesitancy, a measure without a benchmark truth, and compare to the direct survey estimates. We conclude with a discussion of how the PPMM could be used prospectively as part of an assessment of nonresponse and/or selection bias, factors that would facilitate such analyses in the future, and ongoing work to extend the PPMM to novel areas.
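The logic of a PPMM-style sensitivity analysis can be sketched in simplified form (this follows the spirit of proxy pattern-mixture analysis with a continuous outcome and proxy; the actual PPMM parameterization differs, and all numbers are simulated):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population: continuous outcome y with a strong auxiliary
# proxy x observed for everyone; the true population mean of y is 0.
N = 200_000
x = rng.normal(size=N)
y = 0.8 * x + 0.6 * rng.normal(size=N)

# Non-ignorable nonresponse: the propensity to respond depends on y itself.
respond = rng.random(N) < 1 / (1 + np.exp(-y))
xr, yr = x[respond], y[respond]

naive = yr.mean()                          # direct survey estimate, biased upward

# Two endpoints of the sensitivity analysis:
#   lambda = 0: selection depends on the proxy only (MAR, regression adjustment)
#   lambda = 1: selection depends on the outcome (inverse-regression adjustment)
rho = np.corrcoef(xr, yr)[0, 1]
shift = x.mean() - xr.mean()               # population minus respondent proxy mean
est_mar = yr.mean() + (rho * yr.std() / xr.std()) * shift
est_mnar = yr.mean() + (yr.std() / (rho * xr.std())) * shift

# The two endpoints bracket plausible values and reveal the direction of bias.
print(round(naive, 2), round(est_mar, 2), round(est_mnar, 2))
```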

Rebecca Andridge is an Associate Professor of Biostatistics at The Ohio State University College of Public Health. She conducts methodologic work in imputation methods for missing data, primarily in large-scale probability samples, and measures of selection bias for nonprobability samples. In particular, she works on methods for imputing data when missingness is driven by the missing values themselves (missing not at random). She teaches introductory graduate and undergraduate biostatistics and won the College's Outstanding Teaching Award in 2011 and is a Fellow of the American Statistical Association.

In the last two years, Coon amplitudes have received a burst of renewed interest in the context of the modern S-matrix bootstrap program. The four-point Coon amplitude was first discovered in 1969 by D.D. Coon and is a deformation of string theory’s famous Veneziano amplitude with a free deformation parameter q. At q = 1, Coon amplitudes become tree-level open string amplitudes. Recently, several groups have studied the low-energy expansion of Coon amplitudes, the unitarity properties of Coon amplitudes, various extensions and generalizations of Coon amplitudes, and possible physical models realizing the Coon spectrum. In this seminar, I will survey these recent developments along with some new results on N-point Coon amplitudes. By studying the N-point Coon amplitude, we will discover a particular limit that reproduces the tree-level amplitudes of a field theory with an infinite set of non-derivative single-trace interaction terms. This correspondence is the first definitive realization of the Coon amplitude (in any limit) from a field theory described by an explicit Lagrangian.
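For orientation, the q-deformed spectrum underlying these amplitudes can be written as follows (up to an overall scale and additive offset that depend on conventions):

```latex
% Spectrum of the Coon amplitude: masses proportional to the q-deformed integers
m_n^2 \;\propto\; [n]_q \;=\; \frac{1 - q^n}{1 - q}, \qquad n = 0, 1, 2, \ldots
% For 0 < q < 1 the masses accumulate at the finite value 1/(1-q);
% as q -> 1 one recovers the linear Regge spectrum of the Veneziano amplitude,
\lim_{q \to 1} [n]_q = n.
```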

The near equality of the dark matter and baryon energy densities is a remarkable coincidence, especially when one realizes that the baryon mass is exponentially sensitive to UV parameters in the form of dimensional transmutations. We explore a new dynamical mechanism where, in the presence of an arbitrary number density of baryons and dark matter, a scalar adjusts the masses of dark matter and baryons until the two energy densities are comparable. In this manner, the coincidence is explained regardless of the microscopic identity of dark matter and how it was produced. This new scalar causes a variety of experimental effects, such as a new force and a (dark) matter density dependent proton mass.

MPSDS JPSM Seminar Series

March 27, 2024

12:00 - 1:00 pm EDT

In person, room 1070 Institute for Social Research and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

Leveraging AI for Survey Research

This presentation scrutinizes the transformative potential of Large Language Models (LLMs) in survey research, focusing on three critical areas: questionnaire design, synthetic data creation, and the role of LLMs as qualitative interviewers. In the domain of questionnaire design, the lecture examines whether and how LLMs can construct contextually accurate and highly effective survey items. However, there are valid concerns about the models' understanding and potential biases, which we will critically evaluate. The talk also discusses LLMs' ability to generate synthetic data, preserving core statistical properties while ensuring privacy. Here too, the ethical implications and the potential for misuse of this capability pose challenges that need to be addressed. Lastly, the lecture explores how LLMs, with their human-like conversational ability, can act as qualitative interviewers, allowing in-depth information gathering at scale. Yet questions remain about their ability to fully capture the complexity and subtleties of human interaction and response. The underlying theme of this talk is how research in this space should be structured.

Frauke Kreuter is a professor in the Joint Program in Survey Methodology (JPSM), Co-director of the Social Data Science Center (SoDa) at the University of Maryland, and chair of Statistics and Data Science at LMU Munich. She is an elected fellow of the American Statistical Association, and received the Warren Mitofsky Innovators Award of the American Association for Public Opinion Research in 2020. In addition to her academic work, Professor Kreuter is the Founder of the International Program for Survey and Data Science (IPSDS), developed in response to the increasing demand from researchers and practitioners for the appropriate methods and right tools to face a changing data environment; Co-Founder of the Coleridge Initiative, whose goal is to accelerate data-driven research and policy around human beings and their interactions for program management, policy development, and scholarly purposes by enabling efficient, effective, and secure access to sensitive data about society and the economy.

Our new Lunchtime Learning Sessions feature the latest in technology topics from our community at Michigan IT. Grab your lunch and sign in to Zoom to catch up on the latest IT news around campus. Please register for the event at Sessions: https://sessions.studentlife.umich.edu/track/event/session/76898

Curious about our campus cellular technology upgrades? Learn from the experts in Cellular at ITS Infrastructure.

This month's featured topic is about cellular technology on campus with Michael Leach and John Simpkins from ITS Infrastructure.

MPSDS JPSM Seminar Series

April 3, 2024

12:00-1:00 pm

In person, Room 1070 Institute for Social Research, and via Zoom. The Zoom call will be locked 10 minutes after the start of the presentation.

Texting in Mixed-Mode Studies: Results from Recent Research Experimentation

Using multiple modes of contact has been found to increase participation in surveys over a single contact mode. Text messaging has emerged as a new mode to contact survey participants in mixed-mode survey designs, especially for surveys that include web and/or phone data collection. However, it has been unclear how to best combine text messages with mailings and other outreach contacts to improve response rates and data quality. At NORC, we launched a program to explore the effectiveness of using text messaging as a contact mode and embedded experiments in multiple NORC studies to better understand the impact and benefits of texting. In this seminar, we highlight results from our recent experimental research, including the effectiveness of text invitations and text reminders at different points in the contact sequence, the time-of-day text messages are sent, and whether text reminders are more effective than postcard reminders. Objectives for this seminar include: to understand how texting is used for contacting potential survey respondents at NORC, share examples from texting on specific NORC projects, and discuss results from recent studies on how best to use texting.
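Arm comparisons of the kind described here (e.g., text reminder versus postcard reminder) are often summarized with a two-proportion z-test; a minimal sketch with purely illustrative numbers (not from the NORC studies):

```python
from math import sqrt, erf

def two_prop_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test using the pooled standard error."""
    pa, pb = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (pa - pb) / se
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Illustrative arms: 21% vs 18% response with 2,000 cases per arm.
z, p = two_prop_z(420, 2000, 360, 2000)
print(round(z, 2), round(p, 3))
```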

Leah Christian is Senior Vice President directing the Methodological and Quantitative Social Sciences department. Prior to joining NORC, Christian worked at Nielsen and the Pew Research Center. Christian brings over 20 years of experience in survey methodology and panel research, including work with federal, academic, commercial, and nonprofit organizations. She is an expert in data collection modes, mixed-mode survey and panel designs, and questionnaire design and measurement error. Her research also focuses on evaluating big data for use in social science research, integrating survey and big data, and using survey data to correct for errors in big data. Christian is co-author of Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, has published 15 articles on research methodology in a variety of journals, and has presented more than 50 conference talks, webinars, and short courses on research methodology and data science. Christian was a recipient of AAPOR’s Warren J. Mitofsky Innovators Award in 2017 for her work with a research team developing web-push data collection methodologies. Christian holds a PhD in sociology from Washington State University.

Christopher Hansen is a Research Methodologist with over 10 years of experience in applied research. During his time at NORC, he has worked in the capacity of methodologist on numerous projects in the areas of survey and questionnaire design and cognitive and usability testing. These projects include the National Domestic Workers Alliance (NDWA) Labs Methodology Review, the Project on Human Development in Chicago Neighborhoods with Harvard University, the CDC’s Survey of Today’s Adolescent Relationships and Transitions, and the CDC’s COVID Experiences Longitudinal Surveys. Hansen teaches courses in research methodology at Loyola University Chicago and DePaul University and has presented on topics related to survey design and measurement at national conferences, including the American Association for Public Opinion Research and the American Association for Public Policy Analysis and Management. Hansen holds a master’s degree in sociology from the University of Chicago.

Martha McRoy is a Senior Research Methodologist who specializes in survey sample design, questionnaire development, fieldwork monitoring and implementation, quality control, and data weighting for web, mail, telephone, face-to-face, and mixed-mode surveys. McRoy provides rigor in all stages of survey research with project work covering a breadth of topics including public opinion, religion, health, justice, and transportation. Her experimental work focuses on mode shifts and bridge studies, predicting response propensities and survey outcome dispositions of sampled respondents, increasing response rates for hard-to-reach populations, and developing tools to monitor data collection activity. McRoy brings over ten years of experience in survey statistics and methodology, including working at Abt Associates, Westat, the Pew Research Center, and the Organization of Economic Cooperation and Development. She holds a master’s degree in survey methodology from the University of Michigan.

Zoe Slowinski is a Research Methodologist with over 12 years of applied research experience in the U.S. and internationally, including work with federal, academic, and nonprofit organizations. Her work focuses on survey design, measurement error, and qualitative data collection. At NORC, she develops questionnaires, conducts cognitive and usability interviews, plans and moderates focus groups, and analyzes data for clients such as the Bureau of Labor Statistics, National Science Foundation, and Federal Deposit Insurance Corporation. Slowinski's experimental work focuses on the use of texting and email as survey contact modes. She holds a master’s degree in public policy from George Mason University.

The classical formulation of the weak cosmic censorship conjecture (WCCC) – the statement that singularities resulting from gravitational collapse are generically hidden behind event horizons – is most probably false. However, I will argue that there is compelling evidence that some version of it should be true in quantum gravity. Working towards a quantum gravitational formulation of the WCCC, I will prove “Cryptographic Censorship”, a theorem that provides a general condition for the formation of event horizons in AdS/CFT: sufficiently (pseudo)random boundary dynamics. I will also provide a classification of sizes of singularities, and show that “large”, “classical” singularities – the ones that the WCCC should rule out – are compatible with sufficiently (pseudo)random dynamics. Thus, if such singularities are indeed described by (pseudo)random dynamics, then they cannot exist in the absence of event horizons.

A clear outcome of Snowmass 2021 and now the US P5 report was the community support for R&D towards a future muon collider. In this talk we will discuss the general physics program that becomes available to the community during the construction and completion of the future collider. We will review not only the main challenges and advantages of such a collider compared to other possibilities, but also the projected reach of several specific models. Additionally, we consider the physics possibilities at the necessary demonstrator facilities along the way. For example, a beam dump would be an economical and effective way to increase the discovery potential of the collider complex in a complementary regime.

Determining the long-range phase of matter from its microscopic description has long been one of the central topics in physics. Simple microscopic systems often become strongly coupled at long range, where we usually rely on clever approximations. Bootstrap is an alternative approach that uses positivity and equations of motion to make predictions in quantum many-body systems without making approximations. In this talk, I will show my recent work on using bootstrap to compute the ground state energy, local observables, and gaps of a quantum many-body system in the thermodynamic limit with rigorous error bars.

Join the Michigan IT community for a review of two case studies in which researchers have been using Generative AI as a tool for artistic inquiry, presented by John Thiels, a Storage Engineer in LSA High-Performance Computing.

https://umich.zoom.us/j/91790876875?pwd=NHZjTDJxdEFxMzArbC9KM1ordytjQT09