BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//UM//UM*Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Detroit
TZURL:http://tzurl.org/zoneinfo/America/Detroit
X-LIC-LOCATION:America/Detroit
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20070311T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20071104T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241114T153541Z
DTSTART;TZID=America/Detroit:20241121T150000
DTEND;TZID=America/Detroit:20241121T160000
SUMMARY:Workshop / Seminar: IOE 899: Optimization methods for compressing large neural networks
DESCRIPTION:About the speaker: Rahul Mazumder is the NTU Associate Professor of Operations Research and Statistics at the MIT Sloan School of Management. He is affiliated with the MIT Operations Research Center and the MIT Center for Statistics. His research interests are at the intersection of statistics\, machine learning\, and mathematical programming (large-scale convex and mixed-integer optimization)\, and their applications to industry\, government\, and the sciences. He is a recipient of the Leo Breiman Junior Award from the American Statistical Association\, the International Indian Statistical Association Early Career Award in Statistics and Data Science\, the INFORMS Donald P. Gaver\, Jr. Early Career Award for Excellence in Operations Research\, the INFORMS Optimization Society Young Researchers Prize\, the Office of Naval Research Young Investigator Award\, and the INFORMS ICS Prize (Honorable Mention). He is currently serving as an Associate/Action Editor of the Annals of Statistics\, Bernoulli\, Operations Research\, and the Journal of Machine Learning Research.\n\nAbstract: Foundation models have achieved remarkable performance across various domains\, but their large model sizes lead to high computational costs (storage\, inference latency\, memory\, etc.). Neural network pruning\, roughly categorized as unstructured and structured\, aims to reduce these costs by removing less-important parameters while retaining model utility as much as possible. Structured pruning is a practical way to improve inference latency on standard hardware\, in contrast to unstructured pruning\, which requires specialized hardware and software. In this talk\, I will discuss discrete optimization methods to address such problems. Interestingly\, algorithms from sparse regression and high-dimensional statistics can be useful here. I will also discuss how model compression tools can aid interpretability in black-box decision tree ensembles\; and how our investigations in large model pruning motivate new algorithms to accelerate branch-and-bound (integer programming) solvers with GPUs.
UID:129069-21862123@events.umich.edu
URL:https://events.umich.edu/event/129069
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:899 Seminar Series,Industrial And Operations Engineering,Michigan Engineering,seminar,Talk
LOCATION:Industrial and Operations Engineering Building - 1680
END:VEVENT
BEGIN:VEVENT
DTSTAMP:20241108T095114Z
DTSTART;TZID=America/Detroit:20241121T150000
DTEND;TZID=America/Detroit:20241121T160000
SUMMARY:Workshop / Seminar: Networking Afternoon Social Event
DESCRIPTION:Enjoy snacks and spend time practicing networking with peers!
UID:128925-21861905@events.umich.edu
URL:https://events.umich.edu/event/128925
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Social,Statistics
LOCATION:West Hall - 274
END:VEVENT
END:VCALENDAR