Eleni Triantafillou

Eleni Triantafillou is a Research Scientist at Google DeepMind. Previously, she obtained her PhD in Computer Science from the University of Toronto. Her research is centered on techniques for adapting and updating models in both directions: inserting “knowledge” (transfer learning, domain adaptation, few-shot learning) and removing outdated, harmful, or unwanted “knowledge”. She was the lead author of a widely adopted benchmark for few-shot learning and has expertise in devising evaluation protocols. She also co-authored two papers on unlearning this year and is involved in several projects on this topic.

Fabian Pedregosa

Fabian Pedregosa is a Senior Research Scientist at Google DeepMind. His main area of expertise is optimization; his previous work includes the analysis and development of parallel stochastic methods, more practical methods for optimizing non-smooth functions, and new methods for smooth hyperparameter optimization. Fabian is also a co-founder of the popular scikit-learn machine learning library.

Isabelle Guyon

Isabelle Guyon recently joined Google as a director of research. She is also a chaired professor in artificial intelligence at the Université Paris-Saclay, specialised in statistical data analysis, pattern recognition, and machine learning. She is co-founder and president of ChaLearn, a non-profit organisation dedicated to challenges in machine learning. Her areas of expertise include computer vision and bioinformatics. Prior to joining Paris-Saclay, she worked as an independent consultant and was a researcher at AT&T Bell Laboratories, where she pioneered applications of neural networks to pen computer interfaces (with collaborators including Yann LeCun and Yoshua Bengio) and, with Bernhard Boser and Vladimir Vapnik, co-invented Support Vector Machines (SVMs), which became a textbook machine learning method. She worked on early applications of Convolutional Neural Networks (CNNs) to handwriting recognition in the 1990s. She is also the primary inventor of SVM-RFE, a variable selection technique based on SVM. The SVM-RFE paper has thousands of citations and is often used as a reference method against which new feature selection methods are benchmarked. She also authored a seminal paper on feature selection that has received thousands of citations. She has organised many machine learning challenges since 2003, supported by the EU network Pascal2, NSF, and DARPA, with prizes sponsored by Microsoft, Google, Facebook, Amazon, Disney Research, and Texas Instruments. She was a leader in the organisation of the AutoML and AutoDL challenge series https://autodl.chalearn.org/ and the meta-learning challenge series https://metalearning.chalearn.org/.

Sergio Escalera

Sergio Escalera is a full professor at the Universitat de Barcelona (UB), head of the informatics degree programme, and coordinator of the Universitat de Barcelona part of the Catalan Master in Computer Vision. He is a fellow of ELLIS, IAPR, and AAIA. He leads the Computer Vision group at the University of Barcelona and the Computer Vision Center, and he is also vice-president of ChaLearn Challenges in Machine Learning. He works on inclusive and transparent analysis of humans from visual and multi-modal data. He has organized several challenges and associated benchmarks, workshops, and special issues, and has collaborated on the development of the CodaLab open-source platform for challenge organization. He is interested in the privacy aspects of membership inference attacks (MIAs) and associated mechanisms for membership detection and unlearning.

Julio C. S. Jacques Junior

Julio C. S. Jacques Junior is an adjunct professor and research assistant at the University of Barcelona. Previously, he worked as a postdoctoral researcher at the Computer Vision Center and the Universitat Oberta de Catalunya. He is a member of the HUPBA group and ChaLearn LAP. He was part of the organizing team of several workshops and challenges held at high-impact conferences (i.e., at WACV’23, ECCV’22, ICCV’21, CVPR’21, ECCV’20, FG’20, NeurIPS’19, and CVPR’17). He has strong experience with CodaLab, a powerful open-source framework for running competitions. His interests include trustworthy computer vision, person perception, fairness, and bias mitigation methods.

Gintare Karolina Dziugaite

Gintare Karolina Dziugaite is a Senior Research Scientist at Google DeepMind, based in Toronto, an adjunct professor in the McGill University School of Computer Science, and an associate industry member of Mila, the Quebec AI Institute. Prior to joining Google, she led the Trustworthy AI program at Element AI / ServiceNow. Dr. Dziugaite’s research combines theoretical and empirical approaches to understanding deep learning, with a focus on generalization and on network and data compression. The unlearning challenge relates to her work on memorization, fairness, privacy and generalization in deep learning. She was one of the main organizers of a NeurIPS 2019 workshop on “ML with Guarantees”, one of the largest workshops in 2019. She also co-organized the 2022 Eastern European Machine Learning summer school, the 2022 Mila-Google Brain scientific workshop, and the NeurIPS 2020 Generalization Measure competition.

Peter Triantafillou

Peter Triantafillou is a Professor of Data Systems and Head of the Data Sciences Research Theme at the Department of Computer Science at the University of Warwick and a Fellow of the Alan Turing Institute. Peter is a member of the Advisory Board of PVLDB and has served on the Advisory Board of the Huawei Ireland Research Centre and the Advisory Board of the Urban Big Data Research Centre (a national infrastructure for urban data services and analytics). Prior to that, Peter held professorial positions at the University of Glasgow, the University of Patras, the Technical University of Crete, and Simon Fraser University, and held visiting professorships at the Max Planck Institute for Informatics in Germany. Peter’s papers have won numerous awards, including the most influential paper award at ACM DEBS 2019, the best paper award at the ACM SIGIR 2016 Conference, the best paper award at the ACM CIKM 2006 Conference, and the best student paper award at the IEEE Big Data 2018 Conference. Peter has served on the Technical Program Committees of more than 140 international conferences and has been the PC Chair or Vice-chair/Associate Editor for several prestigious conferences (including ACM SIGMOD, IEEE ICDE, PVLDB Reproducibility, IEEE DSAA, ACM Middleware, and WISE), and will serve as the General co-Chair of VLDB 2025. His recent research has focused on ML for Data Systems and, most recently, on the impact of data updates (such as data insertions and deletions) on ML models learned from data for downstream database tasks (such as approximate query processing, cardinality estimation, and query optimization). The data-deletion thread inevitably led him to work on machine unlearning.

Vincent Dumoulin

Vincent Dumoulin is a Senior Research Scientist at Google DeepMind working on learning in non-i.i.d. settings. He was involved in the creation of the Meta-Dataset benchmark, and he led a follow-up effort on bridging the gap between transfer learning and meta-learning evaluation protocols for the purpose of measuring the few-shot learning capabilities of image classifiers. Previously, he completed his PhD at Université de Montréal under the supervision of Yoshua Bengio and Aaron Courville.

Ioannis Mitliagkas

Ioannis Mitliagkas is an associate professor in the department of Computer Science and Operations Research (DIRO) at the Université de Montréal. He is also a core member of Mila and a Canada CIFAR AI Chair holder. He holds a part-time staff research scientist position at Google DeepMind in Montreal. Previously, he was a postdoctoral scholar in the departments of Statistics and Computer Science at Stanford University and obtained his Ph.D. from the department of Electrical and Computer Engineering at The University of Texas at Austin. His research includes topics in statistical learning and inference, focusing on optimization, efficient large-scale and distributed algorithms, and statistical learning theory. His recent work includes methods for efficient and adaptive optimization, and studies of the interaction between optimization and the dynamics of large-scale learning systems, as well as the dynamics of games.

Lisheng Sun Hosoya

Lisheng Sun Hosoya is a postdoctoral researcher at LISN, Université Paris-Saclay. Her research interests include AutoML, meta-learning, frugal machine learning, reinforcement learning, and their real-world applications. After obtaining her Ph.D. in 2019 from LRI, Université Paris-Sud, under the supervision of Prof. Isabelle Guyon and Prof. Michèle Sebag, she spent three years at Crédit Agricole S.A. working as a research referent and data scientist, before returning to academia in her current role.

Meghdad Kurmanji

Meghdad Kurmanji is a PhD candidate at the University of Warwick, researching machine learning for Data Systems. Meghdad is primarily interested in studying the effects of database operations, such as INSERT and DELETE, on ML models developed for database optimization. Meghdad’s previous research focused on INSERT operations and efficient methods for updating ML models in the face of out-of-distribution (OOD) data insertions. Currently, he is researching DELETE operations and their impact on ML models for downstream database tasks like Cardinality Estimation and Approximate Query Processing, which has led him to explore the problem of Machine Unlearning.

Kairan Zhao

Kairan Zhao is a PhD candidate at the University of Warwick, working with Prof. Peter Triantafillou on machine unlearning and its implications for privacy. Her research interests are primarily centered on security, privacy, and optimization in various systems, such as Cyber-Physical Systems and Data Systems. Her current work focuses on studying the impact of machine unlearning methods on ML models, as well as evaluating different privacy attacks against them.

Jun Wan

Jun Wan received the BS degree from the China University of Geosciences, Beijing, China, in 2008, and the PhD degree from the Institute of Information Science, Beijing Jiaotong University, Beijing, China, in 2015. Since January 2015, he has been a faculty member with the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China, where he currently serves as an Associate Professor.

Peter Kairouz

Peter Kairouz is a Research Scientist at Google, where he focuses on researching and building federated learning, differential privacy, and other privacy-preserving technologies. Before joining Google, he was a Postdoctoral Research Fellow at Stanford University. He received his Ph.D. in electrical and computer engineering from the University of Illinois at Urbana-Champaign (UIUC). He is the recipient of the 2012 Roberto Padovani Scholarship from Qualcomm’s Research Center, the 2015 ACM SIGMETRICS Best Paper Award, the 2015 Qualcomm Innovation Fellowship Finalist Award, the 2016 Harold L. Olesen Award for Excellence in Undergraduate Teaching from UIUC, and the 2021 ACM Conference on Computer and Communications Security (CCS) Best Paper Award. Dr. Kairouz has organized several Google-hosted workshops on Federated Learning and Analytics (2018, 2019, and 2020) as well as an upcoming ICML 2023 workshop, and he has given two tutorials on the same topic, one at NeurIPS 2020 and another at AAAI 2021. He has served as the associate editor-in-chief for the IEEE JSAC Series on Machine Learning in Communications and Networks, and as an area chair of AISTATS 2023. He has also led a 58-author effort that produced one of the most influential manuscripts on the advances and open problems in federated learning, published in Foundations and Trends in Machine Learning.