Though the University of Michigan aspires to cultivate a climate that is welcoming to the members of its diverse student, faculty, and staff bodies, we know that the lived experiences of many in our communities don’t always align with these aspirations. Join the CRLT Players for "Cuts: Responding to Student Climate Concerns," which invites participants to think together about the many forces that can shape campus climate, both positively and negatively. Composed of a series of vignettes following a Muslim student over a year as she encounters multiple incidents of bias, the sketch depicts how such incidents build up over time to create a negative climate for targeted students. Discussion focuses on exploring the issues, as well as potential responses to them.

This workshop is open to all staff, faculty, and graduate students of the Physics/Applied Physics Department. Please RSVP using the link below by Wednesday, February 6.

Thermodynamics is a remarkably successful theoretical framework, with wide-ranging applications across the natural sciences. Unfortunately, thermodynamics is limited to equilibrium or near-equilibrium situations, whereas most of the natural world, especially life, operates very far from thermodynamic equilibrium. Without a robust nonequilibrium thermodynamics, we cannot address a whole host of pressing research questions regarding the energetic requirements of operating outside of equilibrium, such as the energetic cost to form a pattern, replicate an organism, or sense an environment, to name a few. Research in nonequilibrium statistical thermodynamics is beginning to shed light on these questions. In this talk, I will present two such recent predictions. The first is a novel linear-response-like bound that quantifies how dissipation shapes fluctuations far from equilibrium. Besides its intrinsic allure as a universal relation, I will discuss how it can be used to probe the energetic efficiency of molecular motors, offer energetic constraints on chemical clocks, and bound the dissipation in complex materials, both biological and synthetic, allowing us to gain insight into the fundamental energetic requirements of operating out of thermodynamic equilibrium. The second is an extended second law of thermodynamics with information that quantifies the precise energetic costs to process information, make a measurement, and implement feedback.
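The bound itself is not stated in the abstract; as a reference point, one widely studied dissipation-fluctuation trade-off of this kind is the thermodynamic uncertainty relation, which for a current J accumulated in a nonequilibrium steady state bounds the relative fluctuations by the total entropy production:

```latex
\frac{\operatorname{Var}(J)}{\langle J \rangle^{2}} \;\ge\; \frac{2 k_{B}}{\Delta S_{\mathrm{tot}}}
```

Whether this is the specific relation presented in the talk is an assumption; it illustrates the flavor of such results: higher precision (smaller relative variance) demands more dissipation.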

Details to be announced

Recent rapid advances in science and technology have brought an extraordinary amount of data that cannot be analyzed by traditional statistical or machine learning approaches and algorithms. These advances provide unprecedented opportunities and challenges for tackling much larger and more complicated data in academia and industry. To overcome these difficulties, massive computing frameworks such as MapReduce and Spark are becoming increasingly important. However, the statistical challenges in implementing these frameworks have received little attention. Recently, we have proposed using sufficient statistics instead of the whole data in the analysis. We have investigated the concept of sufficient statistics under a variety of statistical approaches, including linear regression and generalized linear models. The current talk will focus on linear regression problems and will briefly mention the extension to generalized linear models.
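The idea for linear regression can be sketched concretely: for ordinary least squares, the matrices X'X and X'y are sufficient statistics, so each worker in a MapReduce- or Spark-style framework can reduce its partition of rows to these small summaries, and only the summaries need to be combined. This is a minimal illustration of that idea, not the speaker's implementation:

```python
import numpy as np

def partial_stats(X, y):
    """Per-partition sufficient statistics for least squares."""
    return X.T @ X, X.T @ y

def combine_and_solve(stats):
    """Sum the per-partition statistics and solve the normal equations."""
    XtX = sum(s[0] for s in stats)
    Xty = sum(s[1] for s in stats)
    return np.linalg.solve(XtX, Xty)

# Toy check: splitting the data must give the same fit as using it whole.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(scale=0.1, size=100)

parts = [(X[:50], y[:50]), (X[50:], y[50:])]
beta_dist = combine_and_solve([partial_stats(Xp, yp) for Xp, yp in parts])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
```

Because both routes solve the same normal equations, the distributed fit matches the full-data fit exactly, while each worker ships only a (p x p) matrix and a length-p vector.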

The AdS/CFT correspondence relates a theory of gravity in anti-de Sitter space to a CFT on the boundary. A natural question is how local fields in AdS can be expressed in terms of the CFT. In the 1/N expansion this can be done by (i) identifying suitable building blocks - free bulk fields - in the CFT, (ii) assembling the building blocks to make interacting bulk fields. I'll present an approach where the first step is carried out using modular flow in the CFT and the second step is driven by requiring bulk causality.

"Nepali Sign Language (NSL) has primarily been represented in print through pictorial images of signing persons. This talk draws on long-term ethnographic research with Nepali signers to explore the affordances of drawings for representing and generating linguistic form, reference, connotation, and entanglement with other modes of semiosis. I focus specifically on post-Maoist Civil War changes in visual representations of the figures of personhood portrayed performing signs in NSL texts; the role of both drawings and the act of drawing in recent initiatives to include previously marginal elderly novice signers into deaf life; and my own efforts to follow deaf artists in incorporating drawings into my toolkit for recording, analyzing, and sharing representations of signing practices. Across these contexts, how does the production and interpretation of pictorial images function as a resource for creating indexical icons that can performatively call forth new conditions? In addition to analyzing social change among deaf networks in Nepal, this talk shows that ethnographic attention to drawing can contribute to conversations about how linguistic anthropology can forge connections with visual anthropology in order to help our research processes and products embody our commitment to analyzing multimodal total semiotic facts."

The Michigan Anthropology Colloquia Series presents speakers on current topics in the field of anthropology.

The Dark Energy Survey (DES) is the state-of-the-art imaging survey for dark energy. Since its first observing campaign in 2013, DES has produced many exciting results, including the most precise cosmological measurements from weak gravitational lensing of 400M galaxies, the first-ever observation of the optical transient associated with a gravitational-wave-emitting astrophysical event (the binary neutron star merger GW170817), and the first-ever measurement of the rate of expansion of the universe using a dark gravitational wave standard siren (the binary black hole merger GW170814). After six years of data taking, DES completed its main survey observations on January 9, 2019. The collaboration now focuses on obtaining the most precise cosmological measurements and on preparing for target-of-opportunity observations of upcoming gravitational wave events. In this talk, I present an overview of the most exciting science produced by DES so far and discuss the prospects for the next few years before the start of the next-generation survey with the upcoming LSST instrument.

Atomic detectors for sensing and measurement of AC electric fields show certain advantages over traditional dipole antennas, such as the capability to measure absolute electric field strengths and a higher spatial resolution. Here I will present experimental detection of incident RF fields using electromagnetically induced transparency (EIT) spectroscopy on Rydberg states within an atomic vapor. The small (5.5 x 5.5 mm cross-section) rubidium vapor cell is used to image the near-field of a microwave horn to a spatial resolution of lambda/10, covering a field-amplitude range from 50 to 350 V/m. Results are compared to finite-element field simulations, and further experiments demonstrating the ability to record absolute field amplitude and frequency values will be discussed.

A long-standing theme of atomic physics is a continual striving to gain ever greater control over single quantum objects, starting with their internal degrees of freedom and now extending to their external degrees of freedom. Having learned to exert nearly complete control over single atoms, what are the new frontiers? One direction is to now exert similar levels of control over the interactions and correlations between atoms, with examples including quantum computing with trapped ions, quantum many-body simulations in degenerate atomic gases, and the deterministic assembly of molecules. Our lab has been asking the question: is it also possible to exploit atom-atom correlations and entanglement to advance the field of precision measurement beyond the single-atom paradigm that dominates the field? Using laser-cooled and trapped rubidium and strontium atoms inside of high finesse optical cavities, we have explored this question along two fronts by surpassing the standard quantum limit on quantum phase estimation by a factor of 60 and overcoming thermal limits on laser frequency stability. If time permits, I will also discuss the emergence of spin-exchange interactions between atoms mediated by the optical cavity. Possible future impacts include robust millihertz linewidth optical lasers, advanced optical lattice clocks, and searches for new physics.

*To Be Confirmed*

Mixed effects models are used routinely to share information across groups and to account for data dependence. The statistical properties of such models are often quite good on average across groups, but may be poor for any specific group. For example, commonly used confidence interval procedures may maintain a target coverage rate on average across groups, but have near-zero coverage for a group that differs substantially from the others. In this talk, we review some basic mixed effects modeling tools, discuss their group-specific properties, and present some new tools for multiple testing and inference problems that permit information sharing across groups while controlling group-specific frequentist error rates.
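A toy illustration of why group-specific coverage can fail, using a simple empirical-Bayes-style shrinkage estimator rather than the speaker's actual methods (all numbers below are invented): the estimate for a group that differs substantially from the others is pulled toward the grand mean, so an interval centered on it can miss that group's true mean.

```python
import numpy as np

# Hypothetical per-group sample means; the last group is far from the rest.
group_means = np.array([0.1, -0.2, 0.3, 0.0, 5.0])
sigma2 = 1.0                                 # assumed variance of each group mean
grand = group_means.mean()
tau2 = max(group_means.var() - sigma2, 1e-6) # crude between-group variance estimate

B = sigma2 / (sigma2 + tau2)                 # shrinkage weight toward the grand mean
shrunk = (1 - B) * group_means + B * grand   # shrunken group-specific estimates
```

The shrunken estimate for the atypical group lies strictly between its raw mean (5.0) and the grand mean, so a short interval centered on it can badly undercover that one group even while average coverage across groups looks fine.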

"This talk reflects on the controversy on the Navajo Nation of changing the name of Kit Carson Drive to the Navajo place name Tséhootsooí. I outline the structure and use of traditional Navajo place names and then show that Navajo place names have had a renaissance in signage for shopping centers and elsewhere on the Navajo Nation. I then detail the controversy over a proposal to change a street name in Fort Defiance. Place names are not neutral, but fully implicated in concerns about who has and does not have the right (and power) to name. In debates about linguistic relativity, questions of the inequalities of language need to be engaged. This, I argue, is linguistic relativity with an attitude--taken out of the free-floating ahistorical itemizable lexical unit and put back--where it has always been--in the lives of people."


Abstract: Classical online learning techniques enforce a prior distribution on the objective to be optimized in order to induce model sparsity. Such prior distributions are chosen with mathematical convenience in mind, but are not necessarily the best priors. The Minimum Description Length (MDL) principle is usually applied with two-pass strategies: one pass for feature selection, and a second for optimization with the selected features.

An approach inspired by the Minimum Description Length principle is proposed for adaptively selecting and regularizing features during online learning based on their usefulness in improving the objective. The approach eliminates noisy or useless features from the optimization process, leading to improved loss. By utilizing the MDL principle, this approach enables an optimizer to reduce the problem dimensionality to the subspace of the feature space for which the smallest loss is obtained. The approach can be tuned for trading off between model sparsity and accuracy. Empirical results on large scale practical real-world systems demonstrate how it improves such tradeoffs. Huge model size reductions can be achieved with no loss in performance relative to standard techniques, while moderate loss improvements (which can translate to large regret improvements) are achieved with moderate size reductions. The results also demonstrate that overfitting is mitigated by this approach. Analysis shows that the approach can achieve the loss of optimizing with the best feature subset.
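The talk's MDL-based criterion is not spelled out in the abstract; in a similar spirit, a generic online learner can prune useless features by soft-thresholding weights after each gradient step. This is a truncated-gradient sketch with invented parameters, not the proposed algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 10
X = rng.normal(size=(n, d))
y = X[:, 0] * 1.0            # only feature 0 is informative; the rest are noise

w = np.zeros(d)
eta, lam = 0.05, 0.02        # learning rate and per-step L1 strength (hypothetical)
for xi, yi in zip(X, y):
    w -= eta * (w @ xi - yi) * xi                            # squared-loss SGD step
    w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)  # soft-threshold weights
```

After one pass, the informative weight sits near 1 (slightly biased down by the L1 term) while the noise weights hover at or near zero, which is the sparsity-accuracy trade-off the abstract describes.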

Bio: Gil Shamir received the B.Sc. (Cum Laude) and M.Sc. degrees from the Technion – Israel Institute of Technology, Haifa, Israel, in 1990 and 1997, respectively, and the Ph.D. degree from the University of Notre Dame, Notre Dame, IN, U.S.A., in 2000, all in electrical engineering.

From 1990 to 1995 he participated in research and development of signal processing and communication systems. From 1995 to 1997 he was with the Electrical Engineering Department at the Technion – Israel Institute of Technology as a graduate student and teaching assistant. From 1997 to 2000 he was a Ph.D. student and research assistant in the Electrical Engineering Department at the University of Notre Dame, and then a post-doctoral fellow there until 2001. During his time at Notre Dame he was a fellow of the university's Center for Applied Mathematics. Between 2001 and 2008 he was with the Electrical and Computer Engineering Department at the University of Utah, and between 2008 and 2009 with Seagate Research. Since 2009 he has been with Google. His main research interests include information theory, machine learning, and coding and communication theory. Dr. Shamir received an NSF CAREER award in 2003.

For more information on MIDAS or the Seminar Series, please contact midas-contact@umich.edu. MIDAS gratefully acknowledges Wacker Chemie AG for its generous support of the MIDAS Seminar Series.

I will discuss, using my research in physics education, how research can be used as a guide to develop curricula and pedagogies that reduce student difficulties. My research has focused on improving student understanding of introductory and advanced concepts, for example, in learning quantum mechanics. We are developing research-based learning tools, such as tutorials and peer-instruction tools, that actively engage students in the learning process. I will discuss how we evaluate their effectiveness using a variety of methodologies. I will also discuss our research studies that provide guidelines for making physics more inclusive.

The Système International d'Unités (SI) is being redefined, in an interesting way. This redefinition has fundamental implications for electrical standards, including standards of current based on the charge of the electron. One operating mode of semiconducting single-electron pumps is the single-gate ratchet mode, based on the concept of a Brownian motor – this makes the mode quite subtle in operation. We show experimentally that, in the same devices, we can demonstrate multiple two-gate pumping modes but not the single-gate mode. We propose three mechanisms to explain the lack of plateaus in the single-gate ratchet mode. For educators and textbook writers: I will also discuss a proposal on how to introduce the new SI to students.

Statistical change-point analysis is concerned with detecting and localizing abrupt changes in the data-generating distribution of a time series. A long-studied subject with a rich literature, change-point analysis has produced a host of well-established methods for statistical inference available to practitioners. These techniques are widely used in diverse applications to address important real-life problems, such as security monitoring, neuroimaging, ecological statistics and climate change, medical condition monitoring, sensor networks, risk assessment for disease outbreak, genetics, and many others. The current frameworks for statistical analysis of change-point problems often rely on traditional modeling assumptions of a parametric nature that are inadequate to capture the inherent complexity of modern, high-dimensional datasets. In this talk I will introduce three high-dimensional change-point localization problems assuming independent observations: for univariate means, covariances, and sparse network models. In each case, I will describe a phase transition in the space of the model parameters that sharply separates parameter combinations for which the localization task is possible from the ones for which no consistent estimator of the change-points exists. I will then present the corresponding algorithms for localization, which yield nearly minimax optimal rates in all cases. I will finally discuss ongoing work on a fully non-parametric change-point problem.
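As a point of reference for the localization problem, here is the textbook univariate-mean version: scan all split points and take the one maximizing the standardized difference of means (a CUSUM-type statistic). The talk's high-dimensional methods and minimax analysis go well beyond this sketch.

```python
import numpy as np

def cusum_locate(y):
    """Return the split point maximizing the standardized mean difference."""
    n = len(y)
    best_t, best_stat = None, -np.inf
    for t in range(1, n):
        # Weight sqrt(t(n-t)/n) standardizes the two-sample mean difference.
        stat = np.sqrt(t * (n - t) / n) * abs(y[:t].mean() - y[t:].mean())
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t

# Noiseless toy series with a mean shift at index 100.
y = np.concatenate([np.zeros(100), np.ones(100)])
est = cusum_locate(y)
```

For this clean step function the statistic is uniquely maximized at the true change point; with noise, the same scan localizes the change up to an error governed by the signal-to-noise ratio.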

Joint work with Daren Wang, Yi Yu and Oscar Hernan Madrid Padilla.

Abstract: Matrix completion is an active area of research in itself, and a natural tool to apply to network data, since many real networks are observed incompletely and/or with noise. However, developing matrix completion algorithms for networks requires taking into account the network structure. This talk will discuss three examples of matrix completion used for network tasks. First, we discuss the use of matrix completion for cross-validation or non-parametric bootstrap on network data, a long-standing problem in network analysis. Two other examples focus on reconstructing incompletely observed networks, with structured missingness resulting from network sampling mechanisms. One scenario we consider is egocentric sampling, where a set of nodes is selected first and then their connections to the entire network are observed. Another scenario focuses on data from surveys, where people are asked to name a given number of friends. We show that matrix completion can generally be very helpful in solving network problems, as long as the network structure is taken into account. This talk is based on joint work with Elizaveta Levina, Tianxi Li and Yun-Jhong Wu.
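To make the matrix-completion framing concrete, here is a minimal "hard-impute" sketch on a toy two-community adjacency matrix: alternately fill missing entries from a low-rank SVD fit and reset the observed entries. This generic procedure does not account for network sampling structure the way the methods in the talk do.

```python
import numpy as np

def hard_impute(A_obs, mask, rank, iters=100):
    """Iteratively fill missing entries with a rank-constrained SVD fit."""
    A = np.where(mask, A_obs, 0.0)        # initialize missing entries at 0
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        A = np.where(mask, A_obs, low)    # keep observed entries fixed
    return A

# Toy two-community adjacency (exactly rank 2), with a few entries held out.
u = np.repeat([1.0, 0.0], 5)
M = np.outer(u, u) + np.outer(1 - u, 1 - u)
mask = np.ones_like(M, dtype=bool)
for i, j in [(0, 1), (1, 0), (5, 6), (6, 5), (2, 7), (7, 2)]:
    mask[i, j] = False

A_hat = hard_impute(M, mask, rank=2)
```

Because the underlying matrix is exactly low rank and only a few entries are hidden, the iteration recovers the missing links; real networks require the structure-aware variants discussed in the talk.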

Bio: Ji Zhu is a Professor in the Department of Statistics at the University of Michigan, Ann Arbor. He received his B.Sc. in Physics from Peking University, China in 1996 and M.Sc. and Ph.D. in Statistics from Stanford University in 2000 and 2003, respectively. His primary research interests include statistical machine learning, high-dimensional data modeling, statistical network analysis and their applications to health sciences. He received an NSF CAREER Award in 2008; and he was elected as a Fellow of the American Statistical Association in 2013 and a Fellow of the Institute of Mathematical Statistics in 2015.

A central feature of many oscillatory networks is their ability to display phase-locked solutions, where the constituent elements fall into a well-defined pattern in which the phase difference between pairs of oscillators can be determined. Often the networks contain an identifiable pacemaker or external forcing. In these cases, the network is said to be entrained, because the pacemaker determines the overall network period and phasing. In this talk, we consider entrainment as it arises in circadian systems. Such networks are subject to an external, pacemaking 24-hour light-dark drive in which the intensity and total hours of light within the 24-hour cycle are important parameters. We will introduce a new computational tool, a 1-dimensional entrainment map, to assess whether and at what phase a circadian oscillator entrains to periodic light-dark (LD) forcing. We have applied the map to a variety of circadian oscillators, ranging from the Novak-Tyson model for protein-mRNA interactions to the Kronauer model of the human circadian rhythm. Using the entrainment map, we systematically investigate how various intrinsic properties of the circadian oscillator interact with properties of the LD forcing to produce stable circadian rhythms. We will focus on how to use the map to study the reentrainment process due to long-distance travel, addressing the so-called east-west asymmetry of jet lag. Further, we show that individuals can experience jet lag after purely north-south travel. The mathematical and computational methods used to study these problems should be of wide interest to members of the mathematics community.
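A minimal numerical illustration of a 1-dimensional entrainment map, with a hypothetical sinusoidal light response standing in for the Novak-Tyson or Kronauer models: the map sends the oscillator's phase at one lights-on to its phase at the next, and entrainment corresponds to a stable fixed point. All parameter values below are invented for illustration.

```python
import numpy as np

T_INT = 23.5   # hypothetical intrinsic period of the oscillator (hours)
AMP = 1.0      # hypothetical strength of the light response (hours)

def entrainment_map(phi):
    """Phase at lights-on, one 24-hour light-dark forcing cycle later."""
    return (phi + 24.0 - T_INT + AMP * np.sin(2 * np.pi * phi / 24.0)) % 24.0

# Iterating the map from an arbitrary initial phase converges to the
# stable fixed point, i.e., a locked phase relative to the LD cycle.
phi = 5.0
for _ in range(300):
    phi = entrainment_map(phi)
```

With these parameters the map has a stable fixed point at phase 14 (where the sinusoidal term exactly cancels the period mismatch and the map's slope is below 1), so the iterates settle there regardless of most starting phases.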

Considerable effort in cosmology today is focused on understanding the statistical nature and evolution of the (dark matter) density field that underlies the observed large-scale structure. Information about this field is mostly phrased in terms of two-point statistics, such as the power spectrum of galaxies or weak lensing, essentially approximating the large-scale structure as a Gaussian random field. However, the Universe is far more complex than that: gravitational collapse turns the simple initial conditions into the cosmic web of halos, filaments, and large voids we see today. In my talk, I will show how we can use the abundance of galaxy clusters residing in the 'knots' of the cosmic web to probe the non-Gaussian shape of the density field. This gives us insights into the physics of structure formation, and provides at the same time a new method to search for deviations from the cosmological standard model.

Can quantum materials be built out of light? In the hope of doing just that, we have developed a system for turning optical photons into cavity Rydberg polaritons: quasiparticles which inherit their spatial waveforms from the modes of an optical cavity and gain strong interactions from Rydberg excitations of an atomic gas. In a single cavity mode, the strong interactions between polaritons manifest as transport blockade, in which an individual photon in the cavity prevents any other photons from entering. To go beyond blockade, we use Floquet engineering to enable the polaritons to move around and self-organize among multiple transverse modes of the cavity. Finally, I will discuss our preliminary experiments on building photonic Laughlin states, the ground states of a fractional quantum Hall system.

The existence of antimatter was first predicted by Dirac in 1928. The antielectron (now called the positron) and the antiproton were discovered experimentally in 1932 and 1955, respectively. It then took, however, more than half a century before physicists were able to create and control the atomic form of antimatter, the antihydrogen atom, in sufficient quantity to be able to study its properties.

The hydrogen atom, the simplest atomic system, has played a central role in the development of modern physics. By studying antihydrogen, an antiproton orbited by an antielectron, we wish to precisely probe the fundamental symmetries between matter and antimatter. In particular, CPT (charge, parity, time-reversal) symmetry underpins relativistic quantum field theory, and the Equivalence Principle is a key assumption in Einstein’s General Relativity. A violation of these symmetries, even at a very minute level, would force a radical change in the way we understand subatomic physics at its deepest level. In this talk, I will discuss how we produce, control, and perform precision measurements on antihydrogen atoms that are "bottled" in the ALPHA antihydrogen trap at CERN.

Matrix completion is an active area of research in itself, and a natural tool to apply to network data, since many real networks are observed incompletely and/or with noise. However, developing matrix completion algorithms for networks requires taking into account the network structure. This talk will discuss three examples of matrix completion used for network tasks. First, we discuss the use of matrix completion for cross-validation or non-parametric bootstrap on network data, a long-standing problem in network analysis. Two other examples focus on reconstructing incompletely observed networks, with structured missingness resulting from network sampling mechanisms. One scenario we consider is egocentric sampling, where a set of nodes is selected first and then their connections to the entire network are observed. Another scenario focuses on data from surveys, where people are asked to name a given number of friends. We show that matrix completion can generally be very helpful in solving network problems, as long as the network structure is taken into account.

This talk is based on joint work with Elizaveta Levina, Tianxi Li and Yun-Jhong Wu.

Dan will discuss his work at STATS, a leading sports analytics company, as well as his preparation for a career path in industry.

Dark matter may have its own dark forces and interactions that are distinct from the Standard Model and unrelated to the weak scale. To test this idea, galaxies and clusters of galaxies serve as cosmic colliders for measuring self-scattering among dark matter particles. Present constraints imply that if self-interactions are to solve the infamous core-cusp problem in dwarf galaxies, the scattering cross section must fall with energy/velocity to avoid cluster limits. To test this velocity dependence, I present new constraints on dark matter self-interactions at an intermediate scale with groups of galaxies. I also describe using mock observations from N-body simulations of self-interacting dark matter with baryons as a test of our methods. Lastly, I describe some recent work toward strongly-coupled theories of self-interacting dark matter, using tools borrowed from lattice QCD to compute its properties nonperturbatively.

Abstract: Standard deep-learning algorithms are based on a function-fitting approach that does not exploit any domain knowledge or constraints. This makes them unsuitable in applications that have limited data or require safety or stability guarantees, such as robotics. By infusing structure and physics into deep-learning algorithms, we can overcome these limitations. There are several ways to do this. For instance, we use tensorized neural networks to encode multidimensional data and higher-order correlations. We combine symbolic expressions with numerical data to learn a domain of functions and obtain strong generalization. We combine baseline controllers with learned residual dynamics to improve landing of quadrotor drones. These instances demonstrate that building structure into ML algorithms can lead to significant gains.

Bio: Anima Anandkumar is a Bren Professor in the CMS department at Caltech and a director of machine learning research at NVIDIA. Her research spans both theoretical and practical aspects of large-scale machine learning. In particular, she has spearheaded research in tensor-algebraic methods, non-convex optimization, probabilistic models and deep learning.

Anima is the recipient of several awards and honors, such as the Bren named chair professorship at Caltech, the Alfred P. Sloan Fellowship, Young Investigator awards from the Air Force and Army research offices, faculty fellowships from Microsoft, Google, and Adobe, and several best paper awards. She is a member of the World Economic Forum's Expert Network, consisting of leading experts from academia, business, government, and the media. She has been featured in documentaries by PBS and KPCC, in Wired magazine, and in articles by MIT Technology Review, Forbes, YourStory, O’Reilly Media, and others.

Anima received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, a visiting researcher at Microsoft Research New England in 2012 and 2014, an assistant professor at U.C. Irvine between 2010 and 2016, an associate professor at U.C. Irvine between 2016 and 2017 and a principal scientist at Amazon Web Services between 2016 and 2018.

Neutrino flavour oscillations are now quite well established, having been probed at neutrino energies from a few MeV to tens of GeV. In this talk I will focus on the high-energy part, highlighting the role of large-volume neutrino telescopes in the measurement of flavour oscillation parameters. I will comment on the results of a recent global fit, mentioning the expected contribution of future neutrino telescopes such as ORCA and PINGU. Finally, I will briefly comment on the role of large-volume neutrino telescopes in probing new physics scenarios through flavour oscillations.

In this talk, I will describe our experiments studying spin-dependent quantum transport, chemistry, and interferometry in a Bose-Einstein condensate (BEC) of ultracold (87Rb) atoms subject to optically generated “synthetic” spin-orbit coupling (SOC). We demonstrate spin-resolved atomic beam splitters and two-pathway interferometers based on tunable Landau-Zener transitions in energy-momentum space (synthetic band structures generated by the SOC as well as by Floquet engineering) [1]. We also demonstrate a new approach to quantum control of (photo)chemical reactions (photoassociation of molecules from atoms), a “quantum chemistry interferometry”, by preparing reactants in (spin) quantum superposition states and interfering multiple reaction pathways [2]. By performing a “quantum quench” in a SOC BEC, we induce head-on collisions between two spinor BECs and study spin transport and how it is affected by SOC, revealing rich phenomena arising from the interplay between quantum interference and many-body interactions [3]. Time permitting, I may discuss our recent realization of a (bosonic) topological state with band crossings protected by nonsymmorphic symmetry [4], created on a “synthetic” cylinder combining physical and synthetic dimensions with a synthetic radial magnetic flux, where the BEC acquires an emergent crystalline order and exhibits quantum transport (Bloch oscillations) mimicking motion on a Möbius strip in energy-momentum space (band structure). Our experimental system can be a rich playground for physics of interest to AMO physics, quantum chemistry, condensed matter physics, and even high-energy physics.

Refs:

[1] A. Olson et al., “Tunable Landau-Zener transitions in a spin-orbit coupled Bose-Einstein condensate”, Phys. Rev. A. 90, 013616 (2014); “Stueckelberg interferometry using periodically driven spin-orbit-coupled Bose-Einstein condensates”, Phys. Rev. A. 95, 043623 (2017)

[2] D. Blasing et al., “Observation of Quantum Interference and Coherent Control in a Photo-Chemical Reaction”, Phys. Rev. Lett. 121, 073202 (2018)

[3] C. Li et al., “Spin Current Generation and Relaxation in a Quenched Spin-Orbit Coupled Bose-Einstein Condensate”, Nature Communications 10, 375 (2019)

[4] C. Li et al., “A Bose-Einstein Condensate on a Synthetic Hall Cylinder”, arXiv: 1809.02122

Want to learn more about Organizational Studies?

Join us to hear more about this interdisciplinary major based in social sciences where students customize their own education. Enjoy a small community of dedicated and ambitious students with access to top-notch faculty and an engaged alumni network. You'll have the opportunity to hear from the Program Director, Major Advisor, Prospective Student Advisors, and a diverse panel of OS students!

Visit our website in the meantime for more information on the curriculum and application, or to sign up for a prospective student advising meeting.

Follow us on Facebook to engage with our community and stay up-to-date with OS happenings!

Regression is a widely used statistical tool for discovering associations between variables. The estimated relationship can then be used to predict new observations, but obtaining reliable predictions is challenging. When building a regression model, difficulties such as high dimensionality in the predictors, non-linearity of the associations, and unreliable results caused by outliers can deteriorate performance. Furthermore, the prediction error increases if newly acquired data are not processed carefully. In this dissertation, we aim to improve prediction performance by enhancing model robustness at the training stage and properly handling query data at the testing stage. We propose two methods for building robust models. One adopts a parsimonious model to limit the number of parameters and a refinement technique to enhance model robustness; we design the procedure to run on parallel systems and extend its ability to handle complex and large-scale datasets. The other restricts the parameter space to avoid singularity issues and uses trimming techniques to limit the influence of outlying observations. Both approaches are built on the mixture-modeling principle to accommodate data heterogeneity without uncontrollably increasing model complexity, and both improve prediction performance over existing approaches in applications such as magnetic resonance vascular fingerprinting and source separation in single-channel polyphonic music, among others. To evaluate model robustness, we develop an efficient approach to generating adversarial samples, which can induce large prediction errors yet are difficult to detect visually. Finally, we propose a preprocessing system that detects and repairs different kinds of abnormal testing samples, whether corrupted or adversarially perturbed, to preserve prediction efficacy.
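The trimming idea described above can be sketched in a few lines. This is a generic illustration of trimmed least squares, not the dissertation's actual procedure; the function name, trimming fraction, and iteration count are assumptions:

```python
import numpy as np

def trimmed_least_squares(X, y, trim_frac=0.1, n_iter=10):
    """Iteratively fit OLS, then refit on the observations with the
    smallest residuals, limiting the influence of outliers."""
    n = len(y)
    keep = np.arange(n)  # start from all observations
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = np.abs(y - X @ beta)
        keep = np.argsort(resid)[: int(n * (1 - trim_frac))]  # drop worst fits
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)
y[:20] += 15.0  # contaminate 10% of the responses with large outliers
beta_hat = trimmed_least_squares(X, y)  # close to (1.0, 2.0) despite outliers
```

Because the contaminated points produce much larger residuals than the noise level, they are excluded after the first refit, and the final coefficients are essentially those of an OLS fit on the clean subset.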

Reaction-diffusion waves describe diverse natural phenomena from crystal growth in physics to range expansions in biology. Two classes of waves are known: pulled, driven by the leading edge, and pushed, driven by the bulk of the wave. Recently, we examined how demographic fluctuations change as the density-dependence of growth or dispersal dynamics is tuned to transition from pulled to pushed waves. We found three regimes with the variance of the fluctuations decreasing inversely with the population size, as a power law, or logarithmically. These scalings reflect distinct genealogical structures of the expanding population, which change from the Kingman coalescent in pushed waves to the Bolthausen-Sznitman coalescent in pulled waves. The genealogies and the scaling exponents are model-independent and are fully determined by the ratio of the wave velocity to the geometric mean of dispersal and growth rates at the leading edge. Our theory predicts that positive density dependence in growth or dispersal could dramatically alter evolution in expanding populations even when its contribution to the expansion velocity is small. On a technical side, our work highlights potential pitfalls in the commonly-used method to approximate stochastic dynamics and shows how to avoid them.

I will discuss the problem of statistical estimation with contaminated data. In the first part of the talk, I will discuss depth-based approaches that achieve minimax rates in various problems. In general, the minimax rate of a given problem with contamination consists of two terms: the statistical complexity without contamination, and the contamination effect in the form of a modulus of continuity. In the second part of the talk, I will discuss the computational challenges of these depth-based estimators. An interesting relation between statistical depth functions and a general f-learning framework will be discussed, which leads to a computation strategy via minimax optimization in the framework of generative adversarial nets (GANs). Finally, I will address the problem of adaptive estimation under the contamination model. It turns out that adaptive estimation becomes a much harder task with contamination: besides the classical logarithmic cost of adaptation in some cases, it can be shown that in certain situations adaptation is completely impossible at any rate.
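Schematically, the two-term minimax rate mentioned above can be written as follows (the notation here is assumed for illustration, under an ε-contamination model):

```latex
% Huber ε-contamination: data drawn from P = (1 - ε) P_θ + ε Q for arbitrary Q.
% The minimax rate splits into the clean-data rate plus a contamination term:
\[
  \inf_{\hat{\theta}} \sup_{\theta \in \Theta,\; Q}
    \mathbb{E}\, L(\hat{\theta}, \theta)
  \;\asymp\;
  \mathcal{R}(n, \Theta) \;+\; \omega(\epsilon, \Theta),
\]
% where R(n, Θ) is the minimax rate without contamination, and the modulus of
% continuity
\[
  \omega(\epsilon, \Theta)
  = \sup\bigl\{ L(\theta_1, \theta_2) :
      \mathrm{TV}(P_{\theta_1}, P_{\theta_2}) \le \tfrac{\epsilon}{1-\epsilon},\;
      \theta_1, \theta_2 \in \Theta \bigr\}
\]
% captures the irreducible effect of the contamination.
```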

Complex life above a certain size would not be possible without a circulatory system. Both plants and animals have developed vascular systems of striking complexity to solve the problem of nutrient delivery, waste removal, and information exchange. Vascular networks are intimately linked to the fitness of organisms. Despite their importance, the principles that govern the structure, topology, function, development and evolution of biological flow networks are not well understood.

In this talk we present how a biological transport network can utilise principles of self-organization to develop and function. We first discuss how a hierarchically organized vascular system can develop under constant or variable flow, and show how time-dependent flow can stabilize anastomoses and lead to a topology dominated by cycles. Next, inspired by haemodynamic fluctuations in the brain, we examine how networks can produce self-sustained oscillations in the flow even in the absence of varying external input. We discuss how these spontaneously emerging, self-organized fluctuations depend on the network topology, and how they can be modified by a controlled external input.

The discovery of a Higgs boson at Europe's Large Hadron Collider (LHC) in 2012 was a major achievement in particle physics. Experimental efforts are ongoing to confirm that it is the Higgs boson predicted by the Standard Model of elementary particle physics and to probe for signs of new physics beyond the Standard Model. Moreover, physicists wish to build powerful new particle colliders to fully study this unusual particle, probe its nature, and attempt to discover new physics. The colloquium will present initiatives for new, large-scale accelerators, including the International Linear Collider (ILC) in Japan, the Compact Linear Collider (CLIC) in Europe, the Circular Electron Positron Collider (CEPC) in China, and the Future Circular Collider (FCC) in Europe, all designed to study the Higgs boson in detail or to go beyond the current LHC in energy and pursue particle physics at the energy frontier.

Atom interferometers exploit the quantum mechanical, wavelike nature of massive particles to make a broad range of highly precise measurements. Recent technological advances have opened a path for atom interferometers to contribute to two areas at the forefront of modern physics: gravitational wave astronomy and the search for dark matter. In this seminar, I will describe a new experiment, MAGIS-100, that will use a 100-meter-tall atom interferometer to pursue these directions. MAGIS-100 will serve as a prototype gravitational wave detector in the mid-band frequency range 0.1 Hz to 10 Hz, which is complementary to the frequency bands addressed by laser interferometers such as LIGO and the planned LISA experiment. I will discuss the scientific motivation for gravitational wave detection in the mid-band. In addition, I will explain how MAGIS-100 can look for ultralight dark matter, a well-motivated class of dark matter candidates that behave as coherently oscillating fields.

One generic scenario for the dark matter of our universe is that it resides in a hidden sector: it talks to other dark fields more strongly than it talks to the Standard Model. I'll discuss some simple, WIMP-y models of this kind of hidden sector dark matter, paying particular attention to what we can learn from the cosmic history of the dark sector. In particular, the need to populate the dark sector in the early universe can control the observability of dark matter today. Some results of interest include new cosmological lower bounds on direct detection cross-sections and simple models of dark matter with parametrically novel behavior.

Particle physics in Michigan, Indiana, Kentucky, Illinois, and Ohio.

This meeting will bring together the local community of midwest phenomenologists to share their work and discuss recent advances in high energy particle physics, particle astrophysics, dark matter, and cosmology.

The 7th Annual LCTP Spring Symposium will focus on Neutrino Physics, with numerous speakers covering the field.

The event program can be found in the links below.

Single-cell RNA-sequencing (scRNA-seq) involves the measurement of gene expression from isolated single cells, with the potential to illuminate cellular heterogeneity within complex tissue samples. However, scRNA-seq data are subject to a large number of technical effects. In this work, we present two approaches that can be applied to remove technical effects from scRNA-seq data in downstream analyses. The first part introduces different concepts of stably expressed genes with respect to true biological expression. Different classes of stably expressed genes may capture different technical effects, assisting in their removal and increasing the biological interpretability of later results. We find that genes associated with the cytosolic ribosomal structure of cells are enriched among genes that are stably expressed in proportion to the total RNA content of a cell. These cytosolic ribosomal genes can serve as the foundation for a gene set incorporated into normalization procedures to remove some technical effects associated with cell size. The second part describes a procedure to determine which genes are captured with more accuracy in scRNA-seq experiments. The number of reads captured for each unique molecular identifier (UMI) indicates how well a specific gene is captured within a dataset. Reliably detectable genes can then be used in downstream analyses, correcting for additional technical effects. Together, these two projects provide complementary approaches to identifying genes that capture technical effects in scRNA-seq data, which can be incorporated into later normalization procedures.
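As a rough sketch of how a stable gene set could enter a normalization procedure (the function and the toy data here are illustrative assumptions, not the method from the dissertation):

```python
import numpy as np

def normalize_by_stable_genes(counts, stable_idx):
    """Scale each cell (row) by a size factor computed from a set of
    stably expressed genes, e.g. cytosolic ribosomal genes, to remove
    technical effects associated with cell size / total RNA content."""
    size = counts[:, stable_idx].sum(axis=1).astype(float)
    size /= np.median(size)  # center the size factors around 1
    return counts / size[:, None]

# two cells with identical composition but a 2x difference in depth
counts = np.array([[10, 20, 5, 5],
                   [20, 40, 10, 10]])
norm = normalize_by_stable_genes(counts, stable_idx=[2, 3])
# after normalization the two rows agree, since the depth difference
# is fully explained by the stable-gene size factors
```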

"Only a short distance offshore from Troy, the Bronze Age settlement on the islet of Koukonissi, Lemnos offers important evidence for the local production and consumption of Mycenaean pottery during the 14th century BCE, a time ostensibly of little contact of the North Aegean with the Mycenaean world, with the best evidence for Mountjoy’s “Upper Interface” being represented by Troy (phase VI late). This paper presents new evidence produced by integrated petrographic, chemical and stylistic ceramic analysis for Koukonissi as an outpost of the Southern Aegean, and contrasts this with its neighbor Troy on the Asia Minor coast.

At Troy during LH IIIA2, the bulk of the Mycenaean pottery seems to have been imported, mainly from the Argolid/NE Peloponnese, with assumed local pattern-painted wares comprising only a small part of the total assemblage and standard Mycenaean wares (fine plain) being rare. In contrast, typical Mycenaean shapes were commonly imitated at Troy in local fabrics (grey and tan wares).

At Koukonissi, standard Mycenaean pottery, such as fine plain wares, is locally produced and well represented. Most importantly, the common local ware (Red Slipped pottery) seems relatively unaffected by the Mycenaean repertoire. This lies in contrast to other parts of the Eastern Aegean and Troy, where hybrid shapes and decorations are present.

This new identification of previously undocumented, substantial production of Mycenaean pottery on Lemnos has far-reaching implications, as some of the Eastern Aegean Mycenaean chemical compositional groups may have been produced on the island, something quite unexpected. The evidence from Koukonissi, therefore, offers the potential to alter our view of the interface between Mycenaean and other cultures. It suggests the existence of important differences at a social, economic and cultural level between Troy and Koukonissi, and a diversity of interaction with the southern Aegean and Mycenaean Greece between different sites in the North Aegean."

Mini-Bio:

Peter Day teaches and researches in the Department of Archaeology at the University of Sheffield, running a research group on ceramics which has close ties with the National Centre for Scientific Research ‘Demokritos’ in Greece and the University of Barcelona.

He gained his BA in Archaeology at the University of Southampton under Colin Renfrew and Peter Ucko as Heads of Department. Having trained in Ceramic Petrography with David Peacock, he worked as Research Fellow in Ceramic Petrology at the Fitch Laboratory, British School at Athens from 1984 to 1986. He subsequently carried out doctoral research in the Department of Archaeology, University of Cambridge, under the supervision of Sander van der Leeuw, on ceramic production in East Crete during the Neopalatial period of the Bronze Age and the twentieth century. He held a Postdoctoral Fellowship at Cambridge before a two-year postdoctoral position at NCSR ‘Demokritos’ from 1991 to 1993.

Since 1994 he has been based in Sheffield, working on analytical approaches to ceramics, both in terms of provenance and especially the reconstruction of ceramic technologies. From 1998 to 2002 he was Co-ordinator of the GEOPRO European Training Network, and he has been involved in a succession of other major collaborative projects funded by the European Union. His research usually has a Mediterranean focus, though he has also been involved in a range of ceramic-based projects in Asia, Africa and the Americas. Although basically an anthropological archaeologist and prehistorian, Peter has been gradually civilized by a number of postgraduates and postdoctoral researchers that he has had the privilege of working with.

The Michigan Anthropology Colloquia Series presents speakers on current topics in the field of anthropology

The Deep Underground Neutrino Experiment (DUNE), which is being actively developed by an international collaboration of 1,000+ researchers from 30+ countries, will be a multi-decade physics program. The experiment will carry out precision oscillation measurements in the neutrino beam created at Fermilab, look for nucleon decay, and be ready to capture a burst of neutrinos from a galactic core-collapse supernova. Theoretical input is essential for many aspects of this program. In the first part of the talk, I will discuss some relevant aspects of neutrino-nucleus cross section physics and related questions about measuring neutrino energy at DUNE. In the second part, I will focus on the astrophysical capabilities of the far detector. I will show that the supernova burst signal carries in it the signatures of physical processes taking place close to the collapsed core. If we know what to look for, we can learn about the conditions in the neutrino-driven wind and the impact of collective flavor oscillations.

Ultralong-range Rydberg molecules (ULRRMs) provide a sensitive and versatile in situ probe of quantum statistics and spatial correlations in quantum gases. In ULRRMs, one or more ground-state atoms are bound to an atom in a highly excited Rydberg state through atom-electron scattering. Background atoms experience a potential given by the shape of the Rydberg-electron probability distribution, and the photo-excitation rate is proportional to the probability of finding atoms in the original ultracold gas in appropriate atomic configurations. In the low-density, few-body regime, ULRRMs can be created with well-defined internuclear spacing, set by the radius of the outer lobe of the Rydberg-electron wavefunction. For the most deeply bound dimer molecular state in particular, the excitation rate is proportional to the pair-correlation function, g^(2)(R), of the initial sample, and R can be scanned by varying the principal quantum number of the target Rydberg state. We demonstrate this with ultracold, non-degenerate strontium gases and pair-separation length scales from R = 1000-3000 a_0, which is on the order of the thermal de Broglie wavelength for temperatures around 1 μK. Quantum statistics results in bunching for a single-component Bose gas of 84Sr and Pauli exclusion for a polarized Fermi gas of 87Sr. In the many-body regime the Rydberg atom is dressed by many background atoms, and for fermions the shape of the excitation spectrum can be explained in terms of Pauli blocking in the filled molecular orbitals of the final state. ULRRM excitation can be nearly non-destructive, and the time scale for molecule formation (~1 μs) is much faster than the inverse chemical potential or Fermi energy in quantum gases, potentially making this a valuable new probe of spatial correlations in many-body systems.

Research supported by the AFOSR (FA9550-14-1-0007) and the Robert A. Welch Foundation (C-1844).

Collaborators

F. B. Dunning, R. Ding, S. K. Kanungo, J. D. Whalen, H. Y. Rathore, S. Yoshida, J. Burgdorfer, John Sous, H. R. Sadeghpour, E. Demler, M. Wagner, and Richard Schmidt

High precision technology offers a powerful new approach for particle physics. Discovering the answers to open questions such as the hierarchy problem and the nature of dark matter may in fact require a new, high precision approach instead of conventional techniques. Examples of such new physics include the axion, a solution to the strong CP problem and a strongly motivated dark matter candidate. Technologies such as atom interferometry, nuclear magnetic resonance, and high precision magnetometry allow novel, highly sensitive experiments for direct detection of dark matter and new fundamental interactions. These approaches are similar in many respects to gravitational wave detectors. I will discuss several new detectors for dark matter and/or gravitational waves. Such precision experiments will open new avenues for probing the origin and composition of the universe.

The current causal inference literature on blocking has two main branches: one on larger blocks, with multiple treatment and control units in each block, and the other on matched pairs, with a single treatment and a single control unit in each block. For larger blocks, variance estimation is relatively straightforward. For matched pairs, however, because one cannot directly estimate the variance within a block, we have to use estimators that look at variation across the blocks. These alternative estimators have been evaluated under different assumptions than those found in the large-block literature. As a result, the two literatures do not handle experiments with blocks of varying sizes, some of which contain singleton treatment or control units; this has also created confusion about the benefits of blocking in general. In this talk, we reconcile the literatures by carefully examining the performance of different estimators of the treatment effect and of its variance under several different frameworks. We also provide variance estimators for experiments with many small blocks of different sizes and for experiments with mixtures of large and small blocks. Finally, we discuss in which situations blocking is, or is not, guaranteed to reduce the variance of our estimator.

Nicole Pashley & Luke Miratrix
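To make the matched-pairs issue concrete, here is a minimal sketch (the function name and simulation are illustrative, not the authors' estimators): with one treated and one control unit per pair, the within-pair variance is not identifiable, so the variance of the estimated effect must be computed from variation across pairs.

```python
import numpy as np

def matched_pairs_ate(y_treat, y_ctrl):
    """ATE estimate and variance estimate for a matched-pairs design.
    Each pair contributes one treated-minus-control difference; the
    variance is estimated across pairs, which is conservative when
    pair-level treatment effects are heterogeneous."""
    d = np.asarray(y_treat) - np.asarray(y_ctrl)  # one difference per pair
    k = len(d)
    return d.mean(), d.var(ddof=1) / k

rng = np.random.default_rng(1)
pair_means = rng.normal(size=500)                  # shared pair-level effects
y_c = pair_means + rng.normal(scale=0.1, size=500)
y_t = y_c + 2.0 + rng.normal(scale=0.5, size=500)  # true effect of 2 plus noise
ate, var_hat = matched_pairs_ate(y_t, y_c)         # ate near 2.0
```

Note that the shared pair-level effects cancel in the within-pair differences, which is exactly the benefit of pairing; only the idiosyncratic noise contributes to the estimated variance.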

Behind the unconventional behavior of many strongly interacting quantum systems is an intrinsically complex phase diagram exhibiting a variety of orders. These may not only compete but also cooperate with each other, describing phases with a common origin that are intertwined. Holographic techniques provide a theoretical laboratory to probe such strongly correlated systems, offering a new window into their dynamics.

In this talk I will discuss a holographic model of a striped superconductor, which provides a concrete realization of intertwined orders. I will also examine the formation and structure of Fermi surfaces in various holographic systems with broken translational invariance. In particular, we will see that sufficiently strong lattice effects generically cause the Fermi surface to dissolve, leaving behind disconnected segments. This segmentation process is reminiscent of the puzzling Fermi arc phenomenon observed in the high temperature superconductors.

Abstract: In this talk Dr. Leow will share her reflections, as both a computational researcher and a practicing psychiatrist, on the current landscape of psychiatric neuroimaging research and where we go from here.

To this end, she argues that recent advances in data science and information technology will revolutionize the way we conceptualize psychiatric disorders and enable us to objectively quantify their symptomatology, which traditionally has been based primarily on self-reports.

To illustrate, she will highlight two lines of ongoing research that apply data science approaches to the assessment of mood and cognition. In the first example, she will propose how EEG connectomics coupled with manifold learning and dimensionality reduction may allow us to measure the ‘speed of thinking’ on a sub-second time scale. In the second example, she will introduce her recent joint work with Dr. Melvin McInnis that seeks to unobtrusively turn smartphones into ‘stethoscopes’ of the brain, in real time and in the wild.

Bio: Dr. Alex Leow is an Associate Professor in the Departments of Psychiatry, Bioengineering, and Computer Science at the University of Illinois at Chicago (UIC) and an attending physician at the University of Illinois Hospital. With Dr. Olu Ajilore, Alex founded the Collaborative Neuroimaging Environment for Connectomics (CoNECt) at UIC. CoNECt is an inter-departmental research team devoted to the study of the human brain using multidisciplinary approaches of brain imaging, non-invasive brain stimulation, Big Data analytics, virtual-reality immersive visualization, and more recently mobile technologies.

Most relevant to this talk, Alex is honored to be the project lead of the BiAffect project. BiAffect is the first scientific study that seeks to turn smartphones into “brain fitness trackers” by unobtrusively inferring neuropsychological functioning using entirely passively collected typing kinematics metadata (i.e., not what you type but how you type it) from a smartphone’s virtual keyboard. The iOS BiAffect study app now powers the first-ever crowd-sourced research study to unobtrusively measure mood and cognition in real time using iPhones and Apple’s ResearchKit framework.

The CoNECt team’s research has been extensively featured in the news, including recently in the Chicago Tribune, Chicago Tonight, Forbes, the Wall Street Journal, the Associated Press, and Rolling Stone.

(Re)Making Memory in Southeast Asia is a graduate student conference and exhibition highlighting new interdisciplinary research and artistic projects focusing on issues of memory and forgetting in Southeast Asia. The one-day event culminates with a presentation by keynote speaker Professor Eric Tagliacozzo, Cornell University, Department of History.

8:00 - 9:00 Breakfast and registration

9:00 - 9:15 Opening remarks, UM CSEAS Director Christi-Anne Castro

9:15 - 10:30 Panel 1: Constructing Identity

“Post-conflict Construction of Memory Through Mainstream Media: The Case of the Tak Bai Incident”

Ornwara Tritrakarn, Cornell University, Department of Asian Studies

“Old stories, new heroes: Memories of masculinity in Ambon” Michael Kirkpatrick Miller, Cornell University, Department of History

“The Royal Gift of Thai: What the Wild Boar Incident Teaches Us”

Tyler Esch, University of Hawai’i Mānoa, Department of Southeast Asian Studies

Moniek van Rheenen, Department of Anthropology, University of Michigan, Discussant

10:30 - 11:45 Panel 2: Counter Narratives and Modes of Silence

“From "Asia as Method" to "Tây Sơn as Method"? Postwar historiography and the rise of counter-memories from the margins in the Vietnamese diaspora” Vinh Nguyen, Department of East Asian Languages and Civilizations, Harvard University

“Gender Identity and Marginalization of Vietnamese Women's Roles: The case study of Hát Chèo, a folk theatre in the eighteenth and nineteenth centuries” Huong Nguyen, Department of World Languages, Literature, and Culture, Arkansas University

“Glimmers of "Pen Gan Eng": State-Sponsored Craft Fairs in Bangkok and the Aesthetics of Precarity among Silk Vendors from Surin, Thailand”

Alexandra Dalferro, Department of Anthropology, Cornell University

Chao Ren, Department of History, University of Michigan, Discussant

11:45-12:45 Lunch

12:45 - 1:00 Film Screening: “Big Durian Big Apple” Azalia P. Muchransyah, SUNY Buffalo

1:15 - 2:15 Panel 3: Embodied Memory

“Temporal Emplacements Among Migrant Domestic Workers in Hong Kong”

Lai Wo, Department of Anthropology, University of Michigan

“What does it mean to remember? Cultural Memory and the Embodiment of the Ati in the Sadsad Phenomenon”

Jemuel Jr. B. Garcia, Department of Critical Dance Studies, University of California, Riverside

Cheryl Yin, Department of Anthropology, University of Michigan, Discussant

2:30 - 3:30 Artist Talks: Photovoice Exhibition and Performance “Nostalgia, for 30-note hand crank music box.”

Can Bilir, Department of Music, Cornell University

“If age is only a number, then gender is only a word.” Understanding the circumstances of youth navigating non-traditional sexuality and gender expression in rural areas of Northern Thailand.

Colleen Towler, School of Social Work, University of Michigan

3:30 - 5:00 Keynote: Eric Tagliacozzo, Department of History, Cornell University

---

If you are a person with a disability who requires an accommodation to attend this event, please reach out to us at least 2 weeks in advance of this event. Please be aware that advance notice is necessary as some accommodations may require more time for the university to arrange. Contact: alibyrne@umich.edu

Synchronization of neural activity in the brain is involved in a variety of brain functions including perception, cognition, memory, and motor behavior. Excessively strong, weak, or otherwise improperly organized patterns of synchronous oscillatory activity may contribute to the generation of symptoms of different neurological and psychiatric diseases. However, neuronal synchrony is frequently not perfect, but rather exhibits intermittent dynamics. The same synchrony strength may be achieved with markedly different temporal patterns of activity. I will discuss methods to describe these phenomena and will present the application of this analysis to the neurophysiological data in healthy brain, Parkinson’s disease, and drug addiction disorders. I will finally discuss potential cellular mechanisms and functional advantages of some of the observed temporal patterning of neural synchrony.

In high energy proton-proton collisions at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), high energy sprays of particles, called jets, are one of the most copiously produced final states. Jets form when a quark or gluon is produced in a scattering process, and because of confinement, these quarks and gluons nonperturbatively form a collimated spray of hadrons. While jets are one of the most frequently used objects in physics analyses at RHIC and the LHC, it was only recently realized that the structure of jets can probe a wide variety of physics at collider facilities. In this talk I will discuss the breadth of physics that can be probed by studying the constituents of jets, with a focus on recent results from the LHCb experiment.

Student Poster Exhibition

Where: 337/340 West Hall

When: 2:30-3:50pm

Tuesday, April 9 and Thursday, April 11

Talk with students about their research!

Light refreshments served.

Neutrons make up half of all matter but become unstable when freed from the nucleus. The precise value of the neutron lifetime plays an important role in nuclear physics, particle physics, and cosmology. Professor Liu will describe the latest measurement, which traps ultracold neutrons by levitating them with a large array of permanent magnets. The lifetime measured this way appears to differ from that measured with a beam of neutrons, leading some to conjecture that neutrons disappear into an undetectable state.

Abstract: This talk will discuss my current views about the basic science and the practical applications of phenomena in the computational universe of simple programs. I'll talk about my current ideas about modeling, abstraction, and mining the computational universe for technology. I'll also talk about implications for AI, SETI, and basic questions about the role of humans in the computational universe.

About: Stephen Wolfram is the creator of Mathematica, Wolfram|Alpha and the Wolfram Language; the author of A New Kind of Science; and the founder and CEO of Wolfram Research. In recognition of his early work in physics and computing, in 1981 Wolfram became the youngest recipient of a MacArthur Fellowship. Following his scientific work on complex systems research, in 1986 Wolfram founded the first journal in the field, Complex Systems.

Latent class models for learning in online and e-learning settings are introduced. A real-data example of an intervention for learning rotational skills in spatial reasoning is used to illustrate restricted latent class models for item responses, known as cognitive diagnosis models, coupled with transition models for learning. An array of possibilities is considered, including models with explanatory variables, such as intervention and practice effects, as well as more parsimonious first-order Markov models. In addition, we consider a higher-order continuous variable that may be thought of as general learning ability. The value of response times in assessing learning is considered, and the concept of fluency, in which learned attributes are applied more and more easily, is introduced. Extensions of the models that include parameters for the instructional value of individual items are given, and MCMC methods for fitting the models are discussed along with results from numerical studies. An R package that includes the spatial reasoning dataset and tools for fitting learning models is reviewed, and future directions and new possibilities for applying learning models in e-learning environments are discussed.

The Twin Higgs model is an attractive solution to the little hierarchy problem, with top partners that are neutral under SM gauge charges. The framework is consistent with the null results of LHC colored top partner searches while offering many alternative discovery channels. Depending on model details, the phenomenology looks very different: either spectacular long-lived particle signals at colliders, or a plethora of unusual cosmological and astrophysical signatures via the existence of a predictive hidden sector. I will examine the latter possibility, and describe how the asymmetrically reheated Mirror Twin Higgs provides a predictive framework for a highly motivated and highly non-trivial interacting dark sector, with correlated signals in the CMB, large-scale structure, and direct detection searches, as well as Higgs precision measurements at colliders. This provides a vivid example of collider-cosmology complementarity, and motivates a variety of new astrophysical searches rooted in the hierarchy problem, including the search for X-ray point sources from Mirror Stars.


Abstract: The Data Science group at The New York Times develops and deploys machine learning solutions to newsroom and business problems. Re-framing real-world questions as machine learning tasks requires not only adapting and extending models and algorithms to new or special cases but also sufficient breadth to know the right method for the right challenge. I’ll first outline how unsupervised, supervised, and reinforcement learning methods are increasingly used in human applications for description, prediction, and prescription, respectively. I’ll then focus on the ‘prescriptive’ cases, showing how methods from the reinforcement learning and causal inference literatures can be of direct impact in engineering, business, and decision-making more generally.

Bio: At Columbia, Chris is a founding member of the executive committee of the Data Science Institute and of the Department of Systems Biology, and is affiliated faculty in Statistics. He is a co-founder and co-organizer of hackNY (http://hackNY.org), a nonprofit which since 2010 has organized once-a-semester student hackathons and the hackNY Fellows Program, a structured summer internship at NYC startups. Prior to joining the faculty at Columbia, he was a Courant Instructor at NYU (1998-2001) and earned his PhD at Princeton University (1993-1998) in theoretical physics. He is a Fellow of the American Physical Society and a recipient of Columbia’s Avanessians Diversity Award.

The MiniBooNE short-baseline neutrino experiment at Fermilab observes a significant excess of electron-like events. From 2.4 × 10^21 protons on target in neutrino and antineutrino modes, a total electron-neutrino charged-current quasi-elastic excess of 460.5 ± 99.0 events (4.7 sigma) is observed in the neutrino energy range 200-1250 MeV. If interpreted in a standard two-neutrino oscillation model, the best oscillation fit to the excess has a probability of 21.1%, while the background-only fit has a chi-square probability of 6 × 10^-7 relative to the best fit. The MiniBooNE data are consistent in energy and magnitude with the excess of events reported by the Liquid Scintillator Neutrino Detector (LSND), and the significance of the combined LSND and MiniBooNE excesses is 6 sigma. All of the major backgrounds are constrained by in-situ event measurements, so non-oscillation explanations would need to invoke new anomalous background processes.
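For context, the standard two-neutrino oscillation model referenced above gives an appearance probability P = sin^2(2θ) · sin^2(1.27 Δm² L/E), with Δm² in eV², the baseline L in km, and the neutrino energy E in GeV. A minimal sketch, using illustrative parameter values rather than the MiniBooNE best-fit point:

```python
import math

def p_oscillation(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-neutrino appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2 [eV^2] * L [km] / E [GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Illustrative values only: a ~0.5 km baseline, a 600 MeV neutrino,
# dm2 = 1 eV^2, sin^2(2*theta) = 0.01.
p = p_oscillation(0.01, 1.0, 0.541, 0.600)
```

The short baseline and sub-GeV energies place L/E in the range where an eV-scale Δm² produces an order-one oscillation phase, which is why MiniBooNE is sensitive to the LSND-preferred region.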

Relativistic Dirac and Weyl fermions have been extensively studied in quantum field theory. Recently they emerged in the nonrelativistic condensed-matter setting as gapless quasiparticle states in some types of crystals. Notable examples of 2D systems include graphene and surface states in topological insulators such as Bi_2Se_3. Their 3D implementations are Dirac and Weyl semimetals. Most research has focused on their topological properties and electron transport, but their optical properties are no less exciting. Optical phenomena provide valuable insight into the fascinating physics of these materials: optical spectroscopy can offer a cleaner and more straightforward way of studying the topological properties of electron states than transport measurements, and the unusual optical properties of these materials could be utilized in future optoelectronic devices. I will discuss several examples illustrating these points, including bulk and surface polaritons in Weyl semimetals, magneto-optics of Dirac and Weyl semimetals, and the nonlocal nonlinear optical response of graphene and topological insulators.

My lab is interested in the role that neuronal firing patterns play in the encoding, storage, transfer, and retrieval of information by the brain. To study this question, we focus on in vivo extracellular recordings and computational analyses of spike trains from up to 100 neurons in the hippocampus and cortex during activity and sleep, combined with optogenetic and chemogenetic manipulations. In this talk, I will discuss the distinct patterns that we see in these spike trains on multiple timescales and how they change during and across different brain network states. Because these activities are found in the hippocampus, I will discuss their relationship to memory. Finally, I will describe current efforts to evaluate the temporal structure in neuronal spike trains using unsupervised machine learning with hidden Markov models.
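As a sketch of the hidden-Markov-model approach mentioned at the end, the code below evaluates the likelihood of a binned spike-count sequence under a two-state HMM with Poisson emissions (a common modeling choice for spike data, e.g. "low" vs "high" firing states). The rates, transition matrix, and example sequence are invented for illustration and are not fitted to any recording.

```python
import math

RATES = [1.0, 8.0]                     # mean spikes/bin in each hidden state
TRANS = [[0.95, 0.05], [0.10, 0.90]]   # sticky state-transition probabilities
INIT = [0.5, 0.5]                      # initial state distribution

def poisson_pmf(k, lam):
    """P(count = k) under a Poisson with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def forward_loglik(counts):
    """Scaled forward algorithm: log P(counts) under the HMM above."""
    alpha = [INIT[s] * poisson_pmf(counts[0], RATES[s]) for s in (0, 1)]
    loglik = 0.0
    for k in counts[1:]:
        norm = sum(alpha)
        loglik += math.log(norm)
        alpha = [a / norm for a in alpha]            # filtered state probs
        alpha = [sum(alpha[s] * TRANS[s][t] for s in (0, 1))
                 * poisson_pmf(k, RATES[t]) for t in (0, 1)]
    return loglik + math.log(sum(alpha))

# A sequence with a quiet stretch, a burst, then quiet again.
ll = forward_loglik([0, 1, 0, 7, 9, 8, 1, 0])
```

In practice one would fit the rates and transition matrix with EM (Baum-Welch) and use the inferred state sequence to segment network states; the forward pass above is the likelihood computation at the core of that procedure.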

In a crystal with only one atom per unit cell, all atoms play the same role in producing the solid's global response to external perturbations. Disordered materials are not similarly constrained, and a new principle emerges: independence of bond-level response. This allows one to drive the system to different regimes of behavior by successively removing individual bonds. We can thus exploit disorder to achieve unique, varied, textured, and tunable global response, or long-range interactions inspired by allosteric behavior in proteins. While this approach is successful for systems with only a few degrees of freedom, it is difficult to scale up the number of elements to be controlled or scale down the size of the individual components. However, because a material retains a memory of the conditions under which it has been aged, we can direct the aging using Nature's greedy algorithms to achieve a variety of mechanical functionalities.

Cellular reprogramming is a phenomenon whereby mature, specialized cells can be reprogrammed to immature cells capable of developing into all tissues of the body. Do individual cells differ in their ability to reprogram? We address this question using lineage tracing based on cellular barcoding, and demonstrate that reprogramming dynamics in large "interacting" populations are dominated by “elite” clones [1]. This work highlights the importance of cellular interactions and/or epigenetic heterogeneity in fate programming outcomes. In contrast, tissue regeneration in animals exhibits neutral dynamics among the underlying population of stem cells [2]. Taken together, we show that viewing cell fate transitions through an eco-evolutionary lens sheds light on the underlying biology.

Neuroscientists often use functional magnetic resonance imaging (fMRI) to infer effects of treatments on neural activity in brain regions. In a typical fMRI experiment, each subject is observed at several hundred time points. At each point, the blood oxygenation level dependent (BOLD) response is measured at 100,000 or more locations (voxels). Typically, these responses are modeled treating each voxel separately, and no rationale for interpreting associations as effects is given. Building on Sobel and Lindquist (2014), who used potential outcomes to define unit and average effects at each voxel and time point, we define and estimate both “point” and “cumulated” effects for brain regions. Second, we construct a multi-subject, multi-voxel, multi-run whole-brain causal model with explicit parameters for regions. We justify estimation using BOLD responses averaged over voxels within regions, making feasible estimation for all regions simultaneously, and facilitating inference about association between effects in different regions. We apply the model to a study of pain, finding effects in standard pain regions; we also observe more cerebellar activity than observed in previous studies using prevailing methods. We visualize results using whole-brain maps of effects and spatio-temporal correlation plots that illustrate temporally lagged relationships between brain regions.

By Michael E. Sobel (Columbia University) and Martin A. Lindquist (Johns Hopkins University)

The High Luminosity Large Hadron Collider (HL-LHC) is an upgrade to the Large Hadron Collider at CERN, which will extend the accelerator's potential for new discoveries in physics. This upgrade will increase the rate of collisions by a factor of five beyond the original design value and the total number of collisions created by a factor of ten. To meet the challenging conditions of the HL-LHC, the Compact Muon Solenoid (CMS) detector is undergoing an extensive Phase 2 Upgrade program. In particular, a new precision timing detector with hermetic coverage up to a pseudo-rapidity of |η|=3 will measure minimum ionizing particles (MIPs) with a time resolution of 30-40 ps. This measurement of the time coordinate will reduce the effects of the high levels of pile-up expected at the HL-LHC and bring new capabilities to the CMS detector. In this seminar, I will discuss the impact on the HL-LHC physics program as well as the design and technology of this new detector.

Catalin Florea (Applied Physics PhD, 2002) will share notes on his (non-academic) early- and mid-career path, from landing his first job deep in the Midwest to now working in R&D for a Fortune 100 company. Achievements and setbacks will be discussed, and an informal Q&A session will provide an opportunity to connect with the speaker.

We normally think of large accelerators and massive detectors when we consider the frontiers of elementary particle physics, pushing to understand the universe at higher and higher energy scales. However, several tabletop low-energy experiments are positioned to discover a wide range of new physics beyond the Standard Model, where feeble interactions require precision measurements rather than high energies. In high vacuum, optically levitated dielectric nanospheres achieve excellent decoupling from their environment, making force sensing at the zeptonewton level (10^{-21} N) achievable. In this talk I will describe our progress towards using these sensors for tests of the Newtonian gravitational inverse square law at micron length scales. Optically levitated dielectric objects show promise for a variety of other applications, including searches for gravitational waves. Finally, I will discuss the Axion Resonant InterAction Detection Experiment (ARIADNE), a precision magnetometry experiment using laser-polarized 3-He gas to search for a notable dark-matter candidate: the QCD axion.

Two of the biggest open questions in the Standard Model of particle physics are: Is the neutrino its own antiparticle, i.e., a Majorana particle? And is Peccei-Quinn symmetry, with the resulting axion, the solution to the strong CP problem? The answers to these questions are a portal to new physics and to the even bigger questions of the generation of the matter-antimatter asymmetry and the nature of dark matter. My group works to address these questions with searches for neutrinoless double-beta decay and ultra-light axions. In this talk, I will review the physics that connects these two efforts, the current status of the fields, and our R&D efforts towards the next-generation experiments.

Dynamic treatment regimes, also called adaptive interventions, guide sequential treatment decision-making in a variety of fields, including healthcare and education. Dynamic treatment regimes accommodate differences between individuals and changes in individuals over time. Sequential randomized trials are a specific type of trial design useful for developing high-quality dynamic treatment regimes. Sequential randomized trials utilize re-randomization of individuals over time in order to discover how to sequence, time, and personalize treatments. Two of the most commonly used sequential randomized trial designs are sequential multiple assignment randomized trials and micro-randomized trials.

In this thesis, we contribute to both the design and analysis of sequential randomized trials. We describe design considerations for sequential randomized trials in online education. We present the design and analysis for a sequential randomized trial developed to reduce dropout in a massively open online course. We also develop statistical methodology and sample size formulae for sequential multiple assignment randomized trial designs which include cluster-level randomization. The techniques are inspired by a trial aiming to develop high-quality dynamic treatment regimes for mental health clinics. Lastly, we illustrate the design, describe the analysis, and present results of a large micro-randomized trial aiming to develop mobile health interventions for improving medical interns' mental health.

With technology advances in recent years, sensing and media storage capabilities have enabled the generation of enormous amounts of information, often in the form of large data sets in different scientific fields such as biology, marketing and medicine. As this vast amount of data has opened a wealth of opportunities for data analysis, computationally scalable methods become increasingly important for statistical modeling. This thesis focuses on developing scalable classification methods and their applications to automotive dealerships and healthcare problems.

The first project studies parameter estimation of customers' and dealerships' consumption preferences for the automotive market, which determine the manufacturers' profits. Most existing methods assume that dealerships are rational and hence aim to maximize profits, which conflicts with observations. We propose a structural Bayesian model for customers' and dealerships' preferences in which a flexible utility function is maximized. Further, we develop an MCMC algorithm utilizing parallel computing to estimate model parameters. The model is calibrated to data from a manufacturer, and the estimates are used in a simulation model to design optimal financial incentive offers to maximize profits.

The second project focuses on the two-class classification problem based on the area under the receiver operating characteristic curve (AUC), which is often considered a more comprehensive measure of classifier performance than the misclassification error. Maximizing the empirical AUC directly, however, is computationally challenging: naive computation of the AUC requires quadratic time, while computing the misclassification error requires only linear time; further, the optimization involves indicator functions and is NP-hard. In this project, we propose a non-convex differentiable surrogate function for the AUC and develop a scalable algorithm to optimize this surrogate loss. The proposed algorithm takes advantage of the selection tree data structure and uses a truncated Newton strategy, so that the computational complexity of the optimization is quasilinear. In the setting of linear classification, we also show that the estimated coefficients enjoy asymptotic consistency. Finally, we evaluate the performance of the proposed method using both simulation studies and two data sets, one for normal/abnormal vertebral column classification and the other for behaving/not-behaving network visit classification, and show that the proposed method outperforms the support vector machine (SVM) in terms of the AUC.
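To illustrate the computational issue and the smoothing idea (though not the thesis's specific surrogate or its selection-tree algorithm), the sketch below computes the naive O(n+ · n-) empirical AUC over all positive/negative pairs, and a sigmoid-smoothed surrogate that replaces the non-differentiable indicator with a differentiable stand-in; the `beta` sharpness parameter is an assumption of this sketch.

```python
import math

def empirical_auc(scores_pos, scores_neg):
    """Exact empirical AUC: fraction of (positive, negative) score pairs
    ranked correctly, counting ties as 1/2. Naive O(n+ * n-) loop."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def sigmoid_surrogate_auc(scores_pos, scores_neg, beta=5.0):
    """Differentiable surrogate: replace the indicator 1{p > n} with a
    sigmoid of the score difference, sharpened by beta. This is only a
    generic smoothing stand-in for the thesis's non-convex surrogate."""
    s = sum(1.0 / (1.0 + math.exp(-beta * (p - n)))
            for p in scores_pos for n in scores_neg)
    return s / (len(scores_pos) * len(scores_neg))

pos, neg = [2.0, 1.5, 0.9], [0.1, 0.4, 1.2]   # toy classifier scores
```

The surrogate approaches the true AUC as `beta` grows, but the quadratic pairwise sum is exactly the cost the selection-tree and truncated-Newton machinery described above is designed to avoid.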

The last project is motivated by the problem of predicting midterm mortality of patients using International Classification of Diseases, Ninth Revision (ICD-9) codes, which is relevant for healthcare and clinical research. ICD-9 contains a list of standard alphanumeric codes recording useful clinical information, including patient diagnoses and procedures. However, the number of ICD-9 codes in a specific study is often large, on the order of thousands or tens of thousands, and the dependence structure among the codes is complicated, which poses statistical challenges for using them. To address these challenges, we develop a supervised embedding method that combines an unsupervised criterion for learning latent representations of ICD-9 codes with a Deep Set neural network model for classification, which is invariant with respect to the ordering of the codes. The proposed supervised embedding method has the advantage of simultaneously modeling the inter-relationships within ICD-9 codes and the nonlinear relationship between codes and the outcome variable, and it can be naturally extended to the semi-supervised learning setting. The model is trained using stochastic gradient descent (SGD), which allows the database to be stored across multiple computing nodes and hence makes the method suitable for analyzing large data sets. We have applied the proposed method to 1-year mortality prediction using the Medical Information Mart for Intensive Care III (MIMIC-III) database and achieved superior performance in comparison with several benchmark models.
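The key property of the Deep Set architecture mentioned above, invariance to the ordering of the ICD-9 codes, follows from sum-pooling per-code embeddings before a readout network. A minimal sketch, with tiny made-up embeddings and a linear readout standing in for the learned networks in the thesis:

```python
# Placeholder 2-d embeddings for three ICD-9 codes (values are invented;
# in the method these are learned by the unsupervised criterion).
EMBED = {"4280": [1.0, 0.0], "5849": [0.0, 1.0], "99592": [0.5, 0.5]}

def phi(code):
    """Per-code embedding network (here a lookup table)."""
    return EMBED[code]

def deep_set_score(codes):
    """Deep Set form rho(sum(phi(x))): sum-pool the embeddings, then
    apply a readout (here a fixed linear map standing in for rho)."""
    pooled = [sum(v) for v in zip(*(phi(c) for c in codes))]  # sum-pool
    return 0.3 * pooled[0] + 0.7 * pooled[1]

s1 = deep_set_score(["4280", "5849", "99592"])
s2 = deep_set_score(["99592", "4280", "5849"])   # same set, reordered
```

Because summation is commutative, any permutation of a patient's code list yields exactly the same pooled vector and hence the same prediction, which is the invariance property the classifier needs.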

Excitons, electron-hole pairs bound by the Coulomb interaction, provide compelling opportunities for applications in optoelectronics, information storage, and non-volatile logic. However, the small exciton binding energy in conventional semiconductors limits their integration and potential in modern optoelectronic schemes. In the past decade, a new class of two-dimensional semiconductors, mainly the transition metal dichalcogenides (TMDs), has attracted tremendous interest owing to much larger exciton binding energies. Stable excitonic effects up to room temperature thus give rise to extremely strong light-matter interaction. Together with their ultra-light weight and other emerging properties, such strong excitonic interactions in 2D TMDs open up the possibility of optically controlling the properties of monolayer semiconductors in suspended structures.

In this talk, I will first review this new type of 2D semiconductors and interesting device physics by employing the structure of nanoelectromechanical systems (NEMS). Then I’ll present our study of exciton-induced nonlinearities in suspended TMD monolayers, where we achieved a robust optical bistability near the exciton resonance. Our results also demonstrate a helicity-dependent optical switching that enables control of light not only by light intensity but also by its polarization using monolayer materials. Additionally, I will discuss our recent results on dynamically manipulating the mechanical motion of a suspended 2D semiconductor through its exciton resonance, without an optical cavity structure.

Network data represent connectivity relationships between individuals of interest and are common in many scientific fields, including biology, sociology, medicine and healthcare. Often, additional node features are also available together with the data on relationships. Both types of data contain important information about individual characteristics and the population structure. This thesis focuses on developing statistical machine learning methods and theory for network data with node features.

We first study the problem of community detection for networks with node features using a model-based approach. Most existing models make strong conditional independence assumptions between the network, features and community memberships, which limits the applicability of the model. In our work, we develop a general statistical framework to describe the dependence structure between the link structure, node features and communities. Further, we propose two families of models that are the most general under this framework with the least conditional independence assumptions between the three components. We have established mild conditions for model identifiability and developed variational EM algorithms to estimate model parameters and community memberships. Extensive simulation studies and application to a food web and a lawyer friendship network indicate that the proposed methods work well.

The second project focuses on the problem of node classification using both individual features and the network. In a classical setting, data points are assumed independent and identically distributed, and a data point is classified using only its own features. When a network between the data points is available, it often contains additional information about class memberships and can be utilized to improve classification performance. In this work, we develop a general statistical framework for network augmented classification. Under this framework, we derive the optimal Bayes classifiers for two general families of distributions incorporating node features and networks. Further, we establish asymptotic consistency results for plug-in classifiers with respect to the optimal ones under the two families. We have also applied these general approaches to specific models and developed effective classifiers for practical use. The proposed methods have been evaluated using both simulation studies and a teenage friendship network, and show promising results.

The final contribution of this thesis is on link prediction for incomplete network data. Most existing link prediction methods require at least partial observation of connections for every node. In real-world networks, however, there often exist nodes that do not have any link information, and it is of interest to make link predictions for them using only their node features. We consider a general setup in which a network consists of three types of nodes: nodes having only feature information, nodes having only link information, and nodes having both. Our goal is to make link predictions for nodes having only feature information. Under this setting, we have proposed a family of generative models for incomplete networks with node features, and we have developed a variational auto-encoder algorithm for model estimation and link prediction and investigated different encoder structures. We have also designed a cross-validation scheme under the problem setting for model selection. The proposed method has been evaluated on an online social network and two citation networks and achieves superior performance compared with existing methods.

Direct detection experiments have delivered impressive limits on the interaction strength of dark matter with nuclei. A large experimental program is underway to extend the sensitivity of direct detection experiments; however, such experiments are becoming increasingly difficult and costly. Recently, we proposed paleo-detectors as an alternative approach to the direct detection of dark matter: instead of searching for dark matter induced nuclear recoils in a real-time laboratory experiment, we propose to search for the traces of dark matter interactions recorded in ancient minerals over geological time-scales. In this talk I will discuss this proposal, including ways to mitigate backgrounds and methods to read out tracks from ancient minerals. I will also briefly discuss some preliminary results for applications of paleo-detectors beyond dark matter, e.g. for searching for neutrinos from core collapse supernovae.

Spectral methods are a staple of modern statistics. For statistical learning tasks such as clustering or classification, one can featurize the data with spectral methods and then perform the task on the features. Despite the success and wide use of spectral methods, certain theoretical properties of spectral methods are not well understood. In this oral defense, we investigate the uniform convergence (as opposed to convergence in an "average" sense) of the spectral embeddings obtained from the leading eigenvectors of certain similarity matrices to their population counterparts. We will first introduce necessary preliminaries and review existing results on this topic, and then explain the motivations and benefits of studying convergence in a uniform sense. After that, we present the two main results of our work. The first is a general perturbation result for orthonormal bases of invariant subspaces that can serve as a general recipe for establishing uniform-consistency-type results. The second is an application of the first to normalized spectral clustering: by tapping into the rich literature on Sobolev spaces and exploiting some concentration results in Hilbert spaces, we prove a finite-sample error bound on the uniform consistency error of the spectral embeddings in normalized spectral clustering.
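As a concrete instance of the objects studied here, the sketch below forms the spectral embedding from the leading eigenvectors of a normalized graph Laplacian built from a toy two-block similarity matrix; the sign pattern of the second eigenvector recovers the two blocks. This is standard normalized spectral clustering on synthetic data, not the uniform-consistency machinery of the defense itself.

```python
import numpy as np

# Toy similarity matrix: two well-separated blocks of 10 nodes each.
n = 20
W = np.full((n, n), 0.05)          # weak between-block similarity
W[:10, :10] = W[10:, 10:] = 0.9    # strong within-block similarity
np.fill_diagonal(W, 0.0)

# Normalized Laplacian L_sym = I - D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
L_sym = np.eye(n) - W / np.sqrt(np.outer(d, d))

# Spectral embedding: eigenvectors for the smallest eigenvalues.
vals, vecs = np.linalg.eigh(L_sym)   # eigh returns ascending eigenvalues
embedding = vecs[:, :2]

# The sign of the second (Fiedler) eigenvector separates the blocks.
labels = embedding[:, 1] > 0
```

The population-level question in the defense is how uniformly (entrywise, over all nodes) such finite-sample embeddings track their continuum counterparts, rather than merely in an averaged norm.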

The material in this oral defense is based on the first chapter of Ruofei Zhao's thesis "Convergence and Consistency Results in Spectral Clustering and Gaussian Mixture Models".

The Graduation Reception for Statistics Graduate Students will be held on May 3, 2019 in 340 West Hall from 2:30pm - 4:30pm. Doors will open at 2:15pm.

More details TBA

The use and development of mobile interventions are experiencing rapid growth. Ideally, mobile devices can be used to provide treatment/support whenever needed and to adapt treatment to the context of the user. Just-in-time Adaptive Interventions (JITAIs) are composed of decision rules that map a user’s context (e.g., user's behaviors, location, current time, social activity, stress and urges to smoke) to a treatment that is delivered to the user via the mobile device in near real-time. Advancements in mobile health engineering and technology (e.g., passive stress sensing) continue to bring us closer to being able to provide interventions in this way. However, a number of important gaps in data science must be addressed before mobile devices can be used to deliver on the promise of JITAIs. First, there is a need for experimental designs to collect data that can be used to assess the effectiveness of the sequence of treatments delivered by a mobile device on health outcomes in order to support the development of JITAIs. Second, there is a need for data-driven methods to inform the construction of efficacious JITAIs. In the vast majority of currently deployed JITAIs, the decision rules underpinning JITAIs are formulated using domain expertise and clinical experience, with very limited use of data evidence.
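A JITAI decision rule of the kind described above is simply a function from sensed context to a treatment action (or no action). The rule below is a toy example: the context variables, thresholds, and message names are all invented for illustration and are exactly the sort of hand-crafted logic that the data-driven methods discussed here aim to replace.

```python
def jitai_decision_rule(context):
    """Toy JITAI decision rule: map a user's sensed context to a
    treatment delivered via the mobile device, or None (no treatment).
    All variable names and thresholds are illustrative assumptions."""
    if context["driving"]:
        return None                       # suppress treatment while driving
    if context["stress"] > 0.7 and context["urge_to_smoke"] > 0.5:
        return "send_stress_management_prompt"
    if context["steps_last_hour"] < 50:
        return "send_activity_reminder"
    return None                           # otherwise, do nothing

action = jitai_decision_rule(
    {"driving": False, "stress": 0.9, "urge_to_smoke": 0.8,
     "steps_last_hour": 120})             # -> "send_stress_management_prompt"
```

Micro-randomized trials replace the deterministic branches above with randomized treatment assignment at each decision point, which is what makes the proximal causal effects of each branch estimable from data.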

In this dissertation, we make several contributions by tackling the above-mentioned data science barriers to effective JITAI development in mobile health. First, we propose a micro-randomized trial (MRT) design and develop the primary analysis for assessing the proximal causal effect of treatments. In addition, we develop stratified micro-randomized trials for the setting where there is a time-varying, discrete variable and the primary analysis focuses on how the effectiveness of interventions changes with this variable. We also develop a novel algorithm to design the randomization scheme for this setting when there is an average constraint on the number of interventions that should be sent in a certain time interval. Second, we develop a semi-parametric model to estimate the long-term average of health outcomes that would accrue should a given JITAI be followed. We derive asymptotic theory for the consistency and asymptotic normality of the proposed estimator. Third, we develop an online learning algorithm that continuously learns and improves the JITAI as data are collected from the user. The proposed algorithm introduces a proxy for future outcomes based on a dosage variable to capture the delayed effect of sending interventions due to treatment burden.

Dark matter (DM) is a long-standing puzzle in fundamental physics and the goal of a diverse research program. In underground experiments such as LZ we search for DM directly using the lowest possible energy thresholds, at the LHC we seek to produce dark matter at the very highest energies, and with telescopes we look for telltale signatures in the cosmos. All these detection methods probe different parts of the possible parameter space with complementary strengths. I will present current DM searches, their connections, and how an interdisciplinary program bridging different experimental frontiers can achieve optimal sensitivity. Finally, I will highlight recent theoretical and experimental developments and the near-term discovery prospects of upcoming experiments.

Scientific research can be a slow and laborious process. The final step in the process is to communicate your exciting scientific findings to other scientists both in and outside of your field, yet it is sometimes at this final step that the least amount of time is spent. In this interactive 90-minute workshop, I will give a basic introduction to making scientific figures using Adobe Illustrator and Blender3D. I will go over the basics of these software packages, how they treat objects, and useful hotkeys for speeding up workflow. In the first hour, I will introduce Illustrator and cover topics including workflow; importing external plots/figures; creating patterns (e.g., schematic atomic lattices); and creating 3D structures. In the last half-hour, I will give a brief introduction to Blender, a powerful (and free) open-source program for rendering 3D objects, and go over the basics of how Blender treats objects/structures, lighting, and rendering a scene.

**All are welcome, but it is strongly recommended that participants bring laptops with Adobe Illustrator CC (or at least CS6) and Blender3D pre-installed so that you can follow along with the demos.**

Neuroimaging data on functional connections in the brain are frequently represented by weighted networks. These networks share the same set of labeled nodes corresponding to a fixed atlas of the brain, while each subject’s network has their own edge weights. This thesis focuses on developing statistical tools for analyzing samples of weighted networks with applications to neuroimaging.

We first propose a method for modeling such brain networks via linear mixed effects models, which takes advantage of the community structure, or functional regions, known to be present in the brain. The model allows for comparing two populations, such as patients and healthy controls, globally, at functional systems level, and at individual edge level, with systems-level inference in particular allowing for a biologically meaningful interpretation. We incorporate correlation between edge weights into the model by allowing for a general variance structure, and show this leads to much more accurate inference. A thorough study comparing schizophrenics to healthy controls illustrates the full potential of our methods, and obtains results consistent with the medical literature on schizophrenia.

While we focus on networks as the main object of analysis, auxiliary information about subjects is frequently available. The subject's age is a particularly important covariate, since studying how the brain changes over time can lead to insights about brain development in children and adolescents and the effects of aging in older subjects. A typical neuroimaging study, however, is cross-sectional rather than longitudinal, meaning we measure subjects of many different ages, but each only once. We develop two methods for analyzing such samples of multiple, time-stamped networks. One is a parametric approach utilizing a linear mixed effects model with age included as a covariate; the other is a nonparametric method which can be viewed as a network version of principal component analysis, where we look for components that explain age-related trends and vary smoothly with age. Both approaches take network community structure into account and allow for a concise and interpretable representation of the data by obtaining developmental curves for functional regions of the brain that vary smoothly with age. We apply the methods to fMRI data from subjects aged 8 to 22 years and extract developmental curves consistent with the current understanding of brain maturation in neuroscience.

Clustering is of special interest in neuroimaging studies of mental illness, because psychiatrists believe that many psychiatric conditions present in multiple distinct and not yet identified subtypes. Clustering brain connectivity networks of patients with a certain disorder can lead to discovering these subtypes, and ideally identifying the differences in connectivity patterns that distinguish between subtypes. Clustering with a large number of features is challenging in itself, and the network nature of the observations presents additional difficulties. Our goal is to develop a clustering method that respects the network nature of the data, allows for feature selection, and scales well to high dimensions. One general method for clustering and feature selection in high dimensions is sparse K-means, which performs feature selection by minimizing the K-means objective function plus a lasso penalty. Here we develop network-aware sparse K-means, using a network-induced penalty for simultaneously clustering weighted networks and performing feature selection. We also develop a Gaussian mixture model version of the algorithm, particularly useful when features are highly correlated, which is the case in neuroimaging. We illustrate the method on simulated networks and an fMRI dataset of youth.
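The sparse K-means idea that the network-aware method builds on is concrete enough to sketch: alternate ordinary K-means on weighted features with a soft-thresholding update that sets each feature's weight from its between-cluster sum of squares, driving uninformative features exactly to zero. The sketch below is a simplified illustration with a fixed threshold and deterministic initialization (the published sparse K-means of Witten and Tibshirani instead tunes the threshold to satisfy a lasso constraint); it is not the network-aware method of this work, and all function names and toy data are ours.

```python
import math

def weighted_kmeans(points, k, w, iters=20):
    """Lloyd's algorithm with per-feature weights in the squared distance,
    using deterministic farthest-point initialization."""
    d = len(points[0])
    dist = lambda p, c: sum(w[j] * (p[j] - c[j]) ** 2 for j in range(d))
    centers = [list(points[0])]
    while len(centers) < k:
        centers.append(list(max(points, key=lambda p: min(dist(p, c) for c in centers))))
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(m[j] for m in members) / len(members) for j in range(d)]
    return labels

def sparse_kmeans(points, k, threshold, outer=5):
    """Alternate clustering with a feature-weight update: each weight is the
    soft-thresholded between-cluster sum of squares of its feature,
    renormalized, so uninformative features are set exactly to zero."""
    d = len(points[0])
    w = [1 / math.sqrt(d)] * d
    for _ in range(outer):
        labels = weighted_kmeans(points, k, w)
        overall = [sum(p[j] for p in points) / len(points) for j in range(d)]
        b = [0.0] * d
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if not members:
                continue
            mean_c = [sum(m[j] for m in members) / len(members) for j in range(d)]
            for j in range(d):
                b[j] += len(members) * (mean_c[j] - overall[j]) ** 2
        soft = [max(bj - threshold, 0.0) for bj in b]
        norm = math.sqrt(sum(s * s for s in soft)) or 1.0
        w = [s / norm for s in soft]
    return labels, w
```

On toy data where only the first feature separates two groups, the weight vector collapses onto that feature and the clustering is driven by it alone; the network-induced penalty of this work replaces the plain lasso step with one that respects edge structure.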

In metals, orbital motions of conduction electrons on the Fermi surface are quantized in magnetic fields, which is manifested by quantum oscillations in electrical resistivity. This Landau quantization is generally absent in insulators. Here we report a notable exception in an insulator: ytterbium dodecaboride (YbB12). The resistivity of YbB12 exhibits distinct quantum oscillations despite having a much larger magnitude than in metals [1]. This unconventional oscillation is shown to arise from the insulating bulk, even though the temperature dependence of the oscillation amplitude follows the conventional Fermi liquid theory of metals. The large effective masses indicate the presence of a Fermi surface consisting of strongly correlated electrons. Quantum oscillations are also observed in the magnetization of YbB12 [1]. Our result reveals a mysterious dual nature of the ground state in YbB12: it is both a charge insulator and a strongly correlated metal.

[1] Z. Xiang et al., Science 362, 65 (2018).

Tidal stellar streams have gained a great deal of attention in astrophysics. These orbit-like structures, formed by the tidal disruption of a globular cluster or a satellite galaxy in the potential of its host galaxy, serve as “fossils” that encode information about the accretion history of our Galaxy. Recently, it has also been realized that analysis of the morphology and dynamics of stellar streams provides a powerful means to constrain the Milky Way’s gravitational potential and its dark matter distribution, and can also be useful in probing the very nature of the dark matter particle itself.

The talk is intended to provide a short introduction to “stellar stream” systems and their importance in various scientific studies. The other highlight of the talk will be the STREAMFINDER algorithm (an algorithm designed to detect stellar streams in astrophysical catalogues) and the new panoramic sky map of the stellar streams of the Milky Way halo that we obtained by analyzing ESA/Gaia DR2. Towards the end, I will also mention some of the recent studies I have been involved in that also employ stellar streams.

In the field of high-dimensional statistics, it is commonly assumed that only a small subset of the variables are relevant and sparse estimators are pursued to exploit this assumption. Sparse estimation methodologies are often straightforward to construct, and indeed there is a full spectrum of sparse algorithms covering almost all statistical learning problems. In contrast, theoretical developments are more limited and often focus on asymptotic theories. In applications, non-asymptotic results may be more relevant.

The goal of this work is to show how non-asymptotic statistical theory can be developed for sparse estimation problems that assume group sparsity. We discuss three different problems: principal component analysis (PCA), sliced inverse regression (SIR) and multivariate regression. For PCA, we study a two-stage thresholding algorithm and provide theories that go beyond the common spiked-covariance model. SIR is then related to PCA in some special settings, and it is shown that the theory of sparse PCA can be modified to work for SIR. Regression represents another important research direction in high-dimensional analysis. We study a linear regression model in which both the response and predictors are grouped, as an extension of group Lasso.
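The group-lasso penalty underlying the grouped regression acts through its proximal operator, which shrinks each group's coefficient vector by a common factor and drops whole groups at once; this is what makes groupwise support recovery possible. A minimal sketch, assuming the common unweighted form λ Σ_g ||β_g||_2 (the function name is ours; many formulations also scale λ by the square root of the group size):

```python
import math

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||beta_g||_2:
    each group of coefficients is shrunk toward zero by a common factor and
    dropped entirely once its Euclidean norm falls below lam."""
    out = list(beta)
    for idx in groups:
        norm = math.sqrt(sum(beta[j] ** 2 for j in idx))
        scale = max(1.0 - lam / norm, 0.0) if norm > 0 else 0.0
        for j in idx:
            out[j] = beta[j] * scale
    return out
```

For example, with two groups of coefficients (3, 4) and (0.3, 0.4) and λ = 1, the first group (norm 5) is shrunk by the factor 0.8 while the second (norm 0.5 < λ) is zeroed out entirely.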

Despite the distinctions in these problems, the proofs of consistency and support recovery share some common elements: concentration inequalities and union probability bounds, which are also the foundation of most existing sparse estimation theories. The proofs are presented in modules in order to clearly reveal how most sparse estimators can be theoretically justified. Moreover, we identify those modules that are possibly not optimized to show the limitation of the existing proof techniques and how they could be extended.

Advances in computational hardware have greatly expanded our power to collect and store data. With larger data sets come greater challenges in estimation, warranting further analysis and novel methods. This is true in particular for threshold estimation, the estimation of discontinuities. To this end, we present work on thresholding problems in long data sequences and in data with growing dimension. In the former setting, more commonly known as the change point problem, we introduce and analyze a method which estimates change points with greater computational efficiency than existing procedures, without compromising the accuracy of the estimators. For the latter, also known as the change plane problem, we study the case where the dimension of the problem grows with the sample size, a setting not well studied in the existing literature, and lay the groundwork with asymptotic results.
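For orientation, the classical single change-in-mean estimator scans every candidate split point and maximizes a scaled CUSUM statistic; methods like the one presented here aim to beat the cost of such exhaustive scans. The sketch below is this textbook baseline, not the speaker's method:

```python
import math

def cusum_change_point(x):
    """Estimate a single change in mean: return the split k maximizing the
    scaled two-sample statistic sqrt(k*(n-k)/n) * |mean(x[:k]) - mean(x[k:])|,
    computed in one O(n) pass using a running prefix sum."""
    n, total = len(x), sum(x)
    best_k, best_stat, prefix = 1, -1.0, 0.0
    for k in range(1, n):
        prefix += x[k - 1]
        stat = math.sqrt(k * (n - k) / n) * abs(prefix / k - (total - prefix) / (n - k))
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat
```

A single scan is already linear time thanks to prefix sums; the cost grows when the scan is repeated, e.g. in binary segmentation for multiple change points, which is the kind of setting where computational savings matter.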

The KOTO experiment at the J-PARC research facility in Tokai, Japan aims to observe and measure the rare decay of the neutral kaon, K_L→π^0νν. This decay has a very small Standard Model predicted branching ratio of 3 x 10^{-11}, which is why it has never been experimentally observed. While this decay is extremely rare, it is one of the best decays for studying charge-parity violation, which can tell us about the matter-antimatter asymmetry that we see in the universe today. In this talk, I will explain the details of how KOTO searches for this rare decay and present new results from the collaboration published in January 2019, as well as preliminary results from the current analysis.

LOC Chair Dragan Huterer

An e^+e^- collider covering the center-of-mass energy range of 2-7 GeV with a luminosity of 10^{35} cm^{-2} s^{-1} could produce billions of charmonia, charmed baryon pairs, and tau lepton pairs right at their production thresholds. These data would be unique for systematically studying the physics of the charm quark and the tau lepton, in particular hadron structure, searches for exotic hadrons such as glueballs, hybrids, and multi-quark states, and searches for physics beyond the Standard Model through high-precision measurements. This presentation will briefly introduce the Super Tau-Charm Facility (STCF): its physics motivation, the conceptual design of the machine and detector, and the current status of project promotion around the world.

Advances in data collection and computation tools have popularized localized modeling of temporal and spatial data. Similar to the connection between derivatives and smooth functions, one approach to studying the local structure of a random field is to look at its tangent field, a stochastic random field obtained as the limit of suitably normalized increments of the random field at a fixed location. This thesis develops theory for tangent fields of any order and new statistical tools for their inference.
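Concretely, in the standard formulation (our notation; the normalization is problem-dependent), the tangent field of X at a location x is the distributional limit

```latex
T_x(u) \;=\; \lim_{h \to 0^+} \frac{X(x + h u) - X(x)}{c_x(h)}
```

for a suitable normalization c_x(h). For multifractional Brownian motion, c_x(h) = h^{H(x)} and the tangent field at x is a fractional Brownian motion with Hurst index H(x), which is why tangent fields capture local structure.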

Our first project focuses on various properties of tangent fields. In particular, we show that tangent fields are self-similar and intrinsically stationary. These two properties, along with the assumption of mean-square continuity, allow us to fully characterize a tangent field via a spectral representation, which provides a systematic way to obtain useful models. Our extension of the spectral theory to abstract spaces, including function spaces, may be of interest in its own right. We also connect our theory with common models in spatial statistics, including the Matérn model and its variations. Preliminary inference methods are proposed along with simulation studies.

An important example of a random field with a tangent field is the multifractional Brownian motion, which has been studied extensively. Our second project focuses on a wide range of issues concerning the estimation of the Hurst function of a multifractional Brownian motion observed on a regular grid. A theoretical lower bound for the minimax risk of this inference problem is established for a wide class of smooth Hurst functions. We also propose nonparametric estimators and show they are rate optimal. Implementation issues, including how to handle a nuisance parameter and how to choose the tuning parameter from data, are also addressed. An extensive numerical study is conducted to compare our approach with others. Some explorations of non-grid observations and non-constant variances are also included.
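The simplest grid-based estimator of this kind compares mean squared increments at two lags: for fractional Brownian motion with constant Hurst index H, the lag-2 to lag-1 ratio equals 2^{2H}. A sketch under the simplifying assumptions of constant H and unit grid spacing (the thesis's local, rate-optimal estimators are considerably more refined):

```python
import math

def hurst_from_increments(path):
    """Estimate a constant Hurst exponent H from a regularly spaced path:
    for fractional Brownian motion, E|X(t+2d)-X(t)|^2 / E|X(t+d)-X(t)|^2
    equals 2^(2H), so H is half the base-2 log of the empirical ratio."""
    def msq(lag):
        diffs = [(path[i + lag] - path[i]) ** 2 for i in range(len(path) - lag)]
        return sum(diffs) / len(diffs)
    return 0.5 * math.log2(msq(2) / msq(1))
```

On ordinary Brownian motion (H = 1/2) the estimate lands near 0.5; applying the same ratio inside a sliding window gives a crude pointwise estimate of a varying H(t).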

Recently, more than a dozen new stellar streams in the Milky Way were discovered in the southern hemisphere with the Dark Energy Survey (DES). In this talk, I will present an ongoing spectroscopic program, S5, which maps these southern streams with the 2dF/AAOmega spectrograph on the Anglo-Australian Telescope. S5 is the first systematic program pursuing a complete census of known streams in the southern hemisphere. The radial velocities and stellar metallicities from S5, together with the proper motions from Gaia DR2, provide a unique sample with which to understand the Milky Way halo populations, the progenitors and formation of the streams, and the mass and shape of the Milky Way potential, and to test the characteristics of dark matter. So far, the S5 program has obtained 6D+1 (metallicity) phase-space information for 10 streams in the DES footprint, all first-time measurements for these southern streams, and we are expanding our program beyond the DES footprint to cover more southern streams. I will give an overview of the S5 program, including target selection, observation, and data analysis, and I will end with a discussion of the implications of the preliminary results from S5.

Photosynthesis is a vital process that forms the basis of most life and energy sources on the planet. Knowledge of the underlying mechanisms of charge and energy transfer involved in this process can be used to develop artificial light-harvesting systems and biofuels, helping us to meet our own energy needs. In this talk, I will discuss how we use fluorescence-detected two-dimensional electronic spectroscopy (F-2DES) to study energy transfer in light-harvesting complexes (LH2, in particular) present in photosynthetic purple bacteria. Due to long acquisition times, photobleaching effects during the 2D measurements can distort the features of the acquired spectra. Motivated by the desire to reduce these effects without sacrificing the signal-to-noise ratio (SNR), we have adapted a rapid-scanning approach to record the linear spectra of the complexes in question. I will discuss the technique and the results obtained with it. Extending this rapid-scanning technique to F-2DES promises reduced acquisition times and improved SNR for the 2D spectra.

Subgroup analysis is frequently used to account for treatment effect heterogeneity in clinical trials. When a promising subgroup is selected from existing trial data, a decision must be made on whether an additional confirmatory trial for the selected subgroup is worth pursuing. Unfortunately, the usual statistical analysis, applied as if the subgroup were identified independently of the data, often leads to overly optimistic evaluations. Any statistical analysis that ignores how the subgroup is selected tends to suffer from subgroup selection bias. In this dissertation, we propose two new statistical tools to evaluate the selected subgroup. The first is a risk index which can be used as a simple screening tool to reduce the risk of over-optimism in naive subgroup analysis; the second is debiased inference to answer the question of how good the selected subgroup really is. The proposed tools are model-free, easy to implement, and adjust appropriately for the subgroup selection bias. We demonstrate the merit of the proposed tools by re-analyzing the MONET1 trial. An extension of the debiased inference method is also discussed for observational studies with potentially many confounders.

Decomposing a network into communities (a partition of the vertices such that there is a significantly higher density of connections within groups than between groups) has been a subject of great interest in the network science community due to its numerous applications in data compression and machine learning. For many real networks, however, we do not know the "true" community labels, and so one way of assessing whether a community detection algorithm works well or not is to frame the task as an inference problem: there is a set of nodes with artificially assigned "ground truth" community labels, from which a network is created through some probabilistic generative process, and the goal is to recover this structure using only the network and the algorithm of interest. Intuitively, if a graph is too sparsely connected or it is generated from a noisy process, we should fail to recover partitions that are correlated with our artificial ground truth. In this talk I discuss an interesting phenomenon in which it suddenly (in terms of a control parameter) becomes impossible to recover the true communities in a graph, even when they are explicitly planted in its topology! This abrupt qualitative change in the difficulty of the community detection problem is characterized by a phase transition analogous to that in a generalized Potts model in statistical mechanics, which can be derived from a statistical physics perspective using a free energy approximation and the cavity method. I will also discuss future work in this area and its implications for nonconvex optimization.
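The location of this transition is explicit for the symmetric planted-partition model: by the Kesten-Stigum condition derived via the cavity method (Decelle, Krzakala, Moore, and Zdeborová), with q equal-sized groups and average within/between connection degrees c_in and c_out, belief propagation detects the planted communities only when |c_in - c_out| > q√c, with c the average degree. A sketch of the check (the function name is ours):

```python
import math

def detectable(c_in, c_out, q=2):
    """Kesten-Stigum condition for the planted-partition model with q
    equal-sized groups: community detection is possible (e.g. by belief
    propagation) when |c_in - c_out| > q * sqrt(c), where
    c = (c_in + (q - 1) * c_out) / q is the average degree."""
    c = (c_in + (q - 1) * c_out) / q
    return abs(c_in - c_out) > q * math.sqrt(c)
```

For instance, with two groups, c_in = 8 and c_out = 2 lie above the threshold (gap 6 versus 2√5 ≈ 4.47), while c_in = 5.5 and c_out = 4.5 lie below it even though the planted structure is still present.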

Neutrinos are one half of the leptons included in the Standard Model of particle physics, yet some of their properties are among the most poorly constrained aspects of the model. Neutrinos are also important in the cosmological standard model due to their suppression of the growth of structure at small angular scales and their influence on the evolution of the early universe. The cosmic microwave background (CMB) is one of the best probes we have for observing the effects of neutrinos on the growth of large-scale structure, and by observing those effects we can in turn place tight constraints on two elusive properties of neutrinos: the sum of their masses and the number of distinct species. In my talk I’ll introduce both properties of the neutrino and the CMB, the effects neutrinos have on large-scale structure that leave imprints on the CMB, current and future missions to observe those effects, and my experimental contributions to those missions.

Electron spin has great potential for use in electronic device applications. To that end, our research group focuses on using optical pump-probe techniques to study electron spin dynamics in semiconductor materials. My current project began with an observation of an unexpected dependence of electron spin polarization in gallium arsenide on external magnetic field history. In this talk, I will recount this mystery and how we have set out to solve it. Join me as we search for clues and interrogate the prime suspect, dynamic nuclear polarization. Along the way, I will introduce the key concepts vital to understanding our experiments. Together, we will unravel the mystery of an unexpected spin phenomenon in gallium arsenide as I present a tale of intrigue and spin dynamics.

GPS navigation is commonly used in many applications, including defense, autonomous vehicles, and robotics. However, absolute dependence on GPS is unreliable due to its limited reachability and susceptibility to interference. For example, a jammer, or even a simple and cheap device, can be used to spoof the GPS signal. As a result, navigation of high-end vehicles, such as those used in defense and the military, can’t rely entirely on GPS. To make navigation more secure and reliable, inertial sensors are used when the GPS signal is unavailable. Inertial sensors consist primarily of three accelerometers and three gyroscopes along three perpendicular axes, measuring acceleration (and hence velocity and position) and rotation rate (and hence angle), respectively. Gyroscopes measure the rotation rate and angle of rotation with high precision. Commercial gyroscopes, used in commercial flights as well as space missions, are very precise in their measurements. However, their large size, high cost, and power requirements limit their use in many applications.

MEMS, or microelectromechanical systems, comprise a range of mechanical structures that can be used in various applications. They have the inherent advantage of low cost (C), weight (W), size (S), and power (P), or low CWSaP. They are, however, limited in performance by large noise. This is a major hurdle that has kept MEMS inertial sensors out of navigation-grade applications. Our research is focused on bridging this gap and making an ultra-low-noise MEMS gyroscope using microfabrication technologies.

In this talk, I will discuss the design and fabrication of miniaturized 3D shell resonators for gyroscopes. These resonators have exhibited quality factors as high as 10 million, leading to very low-noise gyroscopes despite their small size. The achieved performance metrics would enable the use of MEMS sensors as navigation-grade gyroscopes at a cost several orders of magnitude lower than that of existing commercial gyroscopes. Only then could each of us own a self-driving car and autonomous robots in our homes!

This thesis proposes a method to measure fine-grained spatial differences in prices using retail barcode scanner datasets. To avoid conflating spatial price differences with differences in consumer preferences for products sold in each area, it extends the framework proposed by Redding and Weinstein to adjust price indices for product turnover, from the temporal to the spatial context. In this extension, differences in spatial product availability are considered analogous to differences in product availability across time. It describes a method to estimate these "spatial UPI" indices, and the uncertainty associated with these estimates. It then applies this method to compare the food cost of living between different counties within the state of Michigan based on the Nielsen retail scanner database.

Frequency combs, or pulsed lasers capable of emitting many narrow and closely spaced spectral lines (teeth) with fixed phase relationships between adjacent teeth, are an essential tool in precision metrology and spectroscopy. Their usefulness comes from the fact that their entire spectrum can be controlled by adjusting just the time between pulses and the pulse-to-pulse phase slip of the electric field. This means that, using relatively simple control schemes, frequency combs enable the most precise measurements of time and frequency possible, among a plethora of other applications. Typically, however, these light sources are roughly the size of a kitchen table and require the high stability of a lab environment to maintain the controllability of their output. Miniaturized combs exist, in the form of microscopic ring resonators, but these light sources are not very tunable, typically require large and powerful pump lasers to operate, and are expensive to manufacture. These drawbacks are all showstoppers when it comes to allowing frequency-comb-enabled precision measurement and spectroscopy to leave the lab. We have demonstrated a new, extremely cheap, simple, and low-power laser-diode-based frequency comb which is roughly the size of a grain of rice. This laser can be battery powered, and its spectrum is highly controllable, making it an ideal light source for bringing advanced precision measurement and spectroscopy out of the lab. In my talk, I will give a brief overview of frequency-comb-based measurements, demonstrate the stability and tunability of our new sources, and outline their prospects for future ground- and space-based applications.
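The controllability claimed above rests on the comb equation f_n = f_ceo + n·f_rep: locking the repetition rate f_rep and the carrier-envelope offset f_ceo fixes every tooth at once. A one-line illustration (the 10 GHz repetition rate and 1 GHz offset below are hypothetical values, not specifications of the device in the talk):

```python
def comb_tooth(n, f_rep, f_ceo):
    """Frequency of the n-th comb tooth: f_n = f_ceo + n * f_rep, so locking
    f_rep and f_ceo pins down the entire spectrum at once."""
    return f_ceo + n * f_rep

# A tooth of a hypothetical 10 GHz comb with a 1 GHz offset, near 1550 nm:
print(comb_tooth(19300, f_rep=10e9, f_ceo=1e9) / 1e12)  # prints 193.001
```

Two radio-frequency locks thus control an optical frequency of hundreds of terahertz, which is what makes comb-based metrology so precise.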

Physics Professor and Chair David Gerdes will welcome faculty, staff, and students to the 2019-20 school year and talk about different aspects of the Physics Department.

Quantum materials research aims to uncover exotic physics and new approaches toward applied technologies. Two-dimensional crystals consisting of individual layers of van der Waals materials provide an exciting platform to study correlated and topological electronic states. These same crystals can be flexibly restacked into van der Waals heterostructures, which enable clean interfaces between heterogeneous materials. Such heterostructures enable the isolation and protection of air-sensitive 2D materials as well as provide new degrees of freedom for tailoring electronic structure and interactions. In this talk, I will present experimental work studying electronic transport in monolayer WTe_2. First, undoped monolayer WTe_2 exhibits behaviors characteristic of a 2D topological insulator, including edge-mode transport approaching the quantum of conductance up to nearly 100 kelvin. Second, we have discovered that the same monolayers display superconductivity at low carrier densities accessible by local field-effect gating through a low-κ dielectric. The concurrence of electrostatically accessible superconducting and topological insulator phases in the same 2D crystal allows us to envision a new model of gate-configurable topological electronic devices.

To Be Announced

The Department of Statistics at the University of Michigan is inviting all stats faculty and graduate students to join us for an informal ice cream social to kick off the fall semester. We hope to see you there!

Intergalactic Medium (IGM)-based cosmology established itself as a solid cosmological probe with the wide success of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). With the Dark Energy Spectroscopic Instrument (DESI) survey starting imminently, we are taking a look at the accomplishments of SDSS-III with regards to IGM-based cosmology and discussing exciting science and new statistical challenges in the era of DESI.

Department Colloquium

To Be Announced

I will present a machine-learning approach for estimating galaxy cluster masses from Chandra X-ray mock observations. I will describe how a convolutional neural network (CNN) -- a deep machine learning tool commonly used in image recognition tasks -- can be used to infer cluster masses from these images, reducing scatter in the mass estimates by up to 50%. I will also show an interpretation tool, inspired by Google DeepDream, that can be used to gain some physical insight into what the CNN sees.

Department Colloquium

The Michigan Anthropology Colloquia Series presents speakers on current topics in the field of anthropology

HEP - Astro Seminar

The quantum time crystal is an intriguing many-body “time” state that has received much attention and debate since its early prediction. In this talk, I will first construct a class of concrete “clean” Floquet models to answer the open question of the role of disorder and many-body localization. Second, by observing the equivalent roles of space and imaginary time in the path-integral formalism, I will present the finding that hard-core bosons coupled to a thermal bath may exhibit “imaginary spacetime crystal” order.

Department Colloquium

To Be Announced

Department Colloquium

To Be Announced

Department Colloquium

To Be Announced

Department Colloquium

To Be Announced

"For Egypt's Coptic Orthodox, image theology is central to mediating human-divine relations. From the Arab uprisings to Sisi's military coup, varying theologies of material imagination have enabled communal critique and minoritarian identification. This talk navigates the social life of theology to understand how visual images organize relations between Christians and Muslims toward national and sectarian ends. In doing so, it considers the communicative aesthetics of religion and the creative making of religious difference within the terms of national unity."

The Michigan Anthropology Colloquia Series presents speakers on current topics in the field of anthropology

Despite being a remarkably simple theoretical model, the Higgs mechanism is the only known theory connected to some of the most profound mysteries in modern physics: dark energy, dark matter, and the missing antimatter. Measurements of Higgs boson decays may shed light on those open questions. In this talk, I will present a few selected results from the ATLAS experiment on Higgs boson decays: the first observation of the Higgs boson decay to a pair of b-quarks, which eluded us for many years despite being the most probable Higgs decay channel; novel techniques to search for potential new physics using hadronically decaying Higgs bosons; and a first search for a singly produced long-lived neutral particle that may be realized via a Higgs portal. The talk will mainly focus on general descriptions of the measurements without too much technical detail, so that the content is accessible to non-experimental particle physicists.

Department Colloquium

To Be Announced

To Be Announced

The Michigan Anthropology Colloquia Series presents speakers on current topics in the field of anthropology

With knowledge of thousands of exoplanet systems from the NASA Kepler Mission, we are closer than ever to understanding how planets form. Patterns in exoplanet populations, compositions, and planetary system architectures are already revealing the most common outcomes of planet formation. I will discuss how I use exoplanet systems as laboratories to test theories of planet formation. My work ranges from characterizing broad patterns across many planetary systems to studying individual systems through their transits, transit timing variations, and radial velocities. In the next ten years, we will measure exoplanet multiplicities, orbital periods, masses, radii, eccentricities, inclinations, obliquities, dynamical interactions, atmospheric compositions, and host star properties using a combination of ground-based and space telescopes. These detailed observations of our exoplanet laboratories will allow us to place the solar system in its galactic context.

CM-AMO Seminar

CM Theory Seminar

To Be Announced

To Be Announced

Department Colloquium

To Be Announced

Department Colloquium

The Michigan Anthropology Colloquia Series presents speakers on current topics in the field of anthropology
