The Joint CARTE (University of Toronto) and University of Seoul Applied AI seminar series welcomes Professor Nicolas Papernot.
Registration: Register for this event.
Abstract: Machine learning has been perhaps this decade’s most significant technological development, with the prospect of becoming a general-purpose technology. Applications range from autonomous driving to assisting with court decisions. In many of these settings, the worst-case performance of machine learning is critical. Yet the predictions of machine learning models often appear fragile, offer no hint as to the reasoning behind them, and may be dangerously wrong. This situation is in large part due to the absence of security considerations in the design of machine learning algorithms. This is unacceptable: society must be able to trust machine learning and hold it accountable. One direction proposed for developing more trustworthy ML algorithms is the introduction of randomization. In this keynote, we contrast the success of randomized algorithms for privacy-preserving learning with failed applications of randomization to develop more robust machine learning models. From this comparison, we identify best practices for the research community as it continues to investigate the role of randomization in trustworthy machine learning.
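For readers unfamiliar with the randomized privacy-preserving algorithms the abstract contrasts with robustness applications, the sketch below illustrates the core randomized step of DP-SGD (Abadi et al., 2016), a canonical example of randomization used for privacy in learning: per-example gradients are clipped and Gaussian noise calibrated to the clipping bound is added. This is an illustrative sketch only; the function name and the clip_norm and noise_multiplier values are placeholders, not material from the talk.

```python
import numpy as np

def privatize_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each example's gradient to clip_norm, sum, add Gaussian noise, and average.

    Illustrative sketch of the noisy-gradient step in DP-SGD; parameter
    values are placeholders chosen for the example.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound,
        # so each example's influence on the update is bounded.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise with standard deviation proportional to the clipping
    # bound masks any single example's contribution.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Toy usage: three per-example gradients for a 2-parameter model.
grads = [np.array([0.5, -2.0]), np.array([3.0, 1.0]), np.array([-0.2, 0.4])]
print(privatize_gradients(grads))
```

Because the noise scale is tied to the clipping bound rather than to the data, the randomness can be analyzed to give formal differential-privacy guarantees, which is why randomization has succeeded for privacy in a way the abstract contrasts with its use for robustness.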
Bio: Nicolas Papernot is an Assistant Professor of Computer Engineering and Computer Science at the University of Toronto. He also holds a Canada CIFAR AI Chair at the Vector Institute and is a faculty affiliate at the Schwartz Reisman Institute. His research interests span the security and privacy of machine learning. Some of his group’s recent projects include proof-of-learning, collaborative learning beyond federation, dataset inference, and machine unlearning. Nicolas is an Alfred P. Sloan Research Fellow in Computer Science. His work on differentially private machine learning received an Outstanding Paper Award at ICLR 2022 and a Best Paper Award at ICLR 2017. He serves as an associate chair of the IEEE Symposium on Security and Privacy (Oakland) and as an area chair of NeurIPS. He co-created and will co-chair the first IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) in 2023. Nicolas earned his Ph.D. at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, he spent a year at Google Brain, where he still spends some of his time.