Plenary Speakers
Keynote: Computing in the Foundation Model Era
Monday, June 19, 2023
Kunle Olukotun, Stanford University
Kunle Olukotun is the Cadence Design Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is a pioneer in multicore processor design and the leader of the Stanford Hydra chip multiprocessor (CMP) research project. He founded Afara Websystems to develop high-throughput, low-power multicore processors for server systems. Afara was acquired by Sun Microsystems, and its multicore processor, called Niagara, now powers Oracle's SPARC-based servers. In 2017, Olukotun co-founded SambaNova Systems, a machine learning and artificial intelligence company, and continues to lead as their Chief Technologist. Olukotun is the Director of the Pervasive Parallelism Lab and a member of the Data Analytics for What's Next (DAWN) Lab, developing infrastructure for usable machine learning. He is a member of the National Academy of Engineering, an ACM Fellow, and an IEEE Fellow, recognized for contributions to multiprocessor-on-a-chip design and the commercialization of this technology. He also received the Harry H. Goode Memorial Award. Olukotun received his Ph.D. in Computer Engineering from the University of Michigan.
Abstract
Generative AI applications, with their ability to produce natural language, computer code, and images, are transforming all aspects of society. These applications are powered by huge foundation models such as GPT-3, which are trained on massive unlabeled datasets. Foundation models have tens of billions of parameters and have obtained state-of-the-art quality in natural language processing, vision, and speech applications. These models are computationally challenging because they require hundreds of petaFLOPS of computing capacity for training and inference. Future foundation models will have even greater capabilities provided by more complex model architectures with longer sequence lengths, irregular data access (sparsity), and irregular control flow. In this talk I will describe how the evolving characteristics of foundation models will impact the design of the optimized computing systems required for training and serving these models. A key element of improving the performance and lowering the cost of deploying future foundation models will be optimizing the data movement within the model using specialized hardware. In contrast to human-in-the-loop applications such as conversational AI, an emerging application of foundation models is in continuous batch processing applications that operate without human supervision. I will describe how continuous batch processing and real-time machine learning can be used to create an intelligent network data plane.
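To make the scale concrete, here is a back-of-envelope sketch (an illustration, not from the talk) of the training compute behind a GPT-3-class model, using the widely cited approximation of roughly 6 FLOPs per parameter per training token; the cluster peak and utilization figures are assumptions.

# Rough training-compute estimate for a GPT-3-class foundation model.
# The ~6 * N * D FLOPs rule of thumb and the 175B/300B figures are
# public estimates; cluster peak and utilization are assumed values.
N = 175e9                     # parameters
D = 300e9                     # training tokens
total_flops = 6 * N * D       # ~3.15e23 FLOPs

peak = 100e15                 # a "hundreds of petaFLOPS" system: 100 PFLOP/s
utilization = 0.40            # assumed sustained fraction of peak
seconds = total_flops / (peak * utilization)

print(f"total training compute: {total_flops:.2e} FLOPs")
print(f"days at 100 PFLOP/s, 40% utilization: {seconds / 86400:.0f}")

Even under these optimistic assumptions, a single training run occupies such a system for roughly three months, which is why data movement and hardware specialization matter so much.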
Keynote: Taking on the World's Challenges: The Role of Computing Research and Innovation
Tuesday, June 20, 2023
Margaret Martonosi, US National Science Foundation
Margaret Martonosi is the US National Science Foundation's (NSF) Assistant Director for Computer and Information Science and Engineering (CISE). With an annual budget of more than $1B, the CISE directorate at NSF has the mission to uphold the Nation's leadership in scientific discovery and engineering innovation through its support of fundamental research and education in computer and information science and engineering, as well as transformative advances in research cyberinfrastructure. While at NSF, Dr. Martonosi is on leave from Princeton University, where she is an endowed professor of Computer Science. Dr. Martonosi's research interests are in computer architecture and hardware-software interface issues in both classical and quantum computing systems. Dr. Martonosi is a member of the National Academy of Engineering and a Fellow of the ACM and IEEE.
Abstract
Throughout human history, society has faced great opportunities and challenges, and has used its available toolkit to navigate them. Today, many of the global opportunities and challenges we face will require the full engagement of the computing research and innovation community. Resiliently navigating climate trends will require computing techniques and systems to model the future, as well as innovative techniques to mitigate carbon footprint by employing telepresence, optimizing logistics, and more. Another grand challenge of our era is the ability for us as individuals and as groups to communicate with each other in a way that upholds accuracy, integrity, privacy, and trust. The computing research and innovation ecosystem has the power to help. This talk will discuss how the different elements of this ecosystem (academia, industry, professional organizations, and governments) can work together to meet these challenges. It will be a call to action on how we can best navigate the next decade and beyond.
Keynote: Constructing and Deconstructing Trust: Employing Cryptographic Recipes in the ML Domain
Wednesday, June 21, 2023
Shafi Goldwasser
Director, Simons Institute for the Theory of Computing, and C. Lester Hogan Professor of Electrical Engineering and Computer Sciences, University of California, Berkeley
Shafi Goldwasser is Director of the Simons Institute for the Theory of Computing and Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Goldwasser holds a B.S. in Applied Mathematics from Carnegie Mellon University (1979), and an M.S. (1981) and Ph.D. (1984) in Computer Science from the University of California, Berkeley. Goldwasser's pioneering contributions include the introduction of probabilistic encryption and signatures, zero-knowledge protocols, elliptic curve primality testing, multi-prover interactive proofs, hardness-of-approximation proofs for combinatorial problems, graph property testing, and pseudo-deterministic algorithms and proofs. Goldwasser was the recipient of the ACM Turing Award in 2012, the Gödel Prize in 1993 and 2001, the ACM Grace Murray Hopper Award in 1996, the RSA Award in Mathematics in 1998, the ACM Athena Award for Women in Computer Science in 2008, the Benjamin Franklin Medal in 2010, the IEEE Emanuel R. Piore Award in 2011, the Simons Foundation Investigator Award in 2012, the BBVA Foundation Frontiers of Knowledge Award in 2018, the L'Oréal-UNESCO For Women in Science Award in 2021, and the FOCS 2021 and STOC 2021 Test of Time Awards. Goldwasser is a member of the NAS, NAE, AAAS, the Russian Academy of Sciences, the Israel Academy of Sciences, and the London Mathematical Society. Goldwasser holds honorary degrees from Ben-Gurion University, Bar-Ilan University, Carnegie Mellon University, Haifa University, Tel Aviv University, Oxford University, and the University of Waterloo, and has received the UC Berkeley Distinguished Alumnus Award and the Barnard College Medal of Distinction.
Abstract
For decades, cryptographic tools and models have been developed to transform platforms controlled by worst-case adversaries into trustworthy platforms. In this talk I will describe how to use a general cryptographic recipe and specific cryptographic tools to build trust in various phases of the machine learning pipeline, or to prove that at times such trust is impossible to achieve. We will touch on achieving verification, robustness, and privacy. If time permits, we will show how cryptographic tools can be brought to bear to build trust in the legal domain.
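As one toy illustration of the kind of building block such a recipe draws on (a sketch, not a construction from the talk), a hash-based commitment lets a model provider commit to trained weights up front and later prove that the very same model was used, a minimal form of verification in an ML pipeline:

import hashlib
import os

def commit(weights: bytes) -> tuple[bytes, bytes]:
    # Publish H(salt || weights); keep the salt secret until reveal time.
    salt = os.urandom(32)
    return hashlib.sha256(salt + weights).digest(), salt

def verify(commitment: bytes, salt: bytes, weights: bytes) -> bool:
    # Anyone can recompute the hash once salt and weights are revealed.
    return hashlib.sha256(salt + weights).digest() == commitment

model = b"serialized model parameters"    # stand-in for real weights
c, salt = commit(model)
assert verify(c, salt, model)                  # honest reveal passes
assert not verify(c, salt, model + b"tamper")  # any change is detected

Real systems would combine such commitments with zero-knowledge proofs, so that properties of the committed model can be verified without revealing the weights themselves.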
Keynote: The Quantum Internet: Recent Advances and Challenges
Thursday, June 22, 2023
Don Towsley, University of Massachusetts
Don Towsley is a Distinguished Professor in the College of Information & Computer Sciences at the University of Massachusetts. He has made seminal contributions to the design, analysis, optimization, and control of networks. More recently, he and his colleagues have pioneered the information theory of covert communications and the design, analysis, optimization, and control of quantum networks. Towsley was co-founder and Co-Editor-in-Chief of the ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS) and has served as Editor-in-Chief of IEEE/ACM Transactions on Networking. He is a corresponding member of the Brazilian Academy of Sciences and has received several achievement awards, including the 2007 IEEE Koji Kobayashi Computers and Communications Award, the 2007 ACM SIGMETRICS and 2008 ACM SIGCOMM Achievement Awards, and the 2011 INFOCOM Achievement Award. He has also received numerous Test of Time and Best Paper Awards, and he is a Fellow of the ACM and IEEE.
Abstract
Quantum information processing is at the threshold of having significant impact on technology and society, in the form of providing unbreakable security, ultra-high-precision distributed sensing with applications to metrology and scientific discovery (e.g., LIGO), and polynomial speed-ups on search with implications for big data. Most of these applications are enabled by high-rate distributed shared entanglement between pairs and groups of users. A critical missing component that prevents crossing this threshold is a distributed infrastructure in the form of a world-wide "Quantum Internet". This motivates the study of quantum networks: what is the right architecture, and how should it operate (e.g., dynamic fair allocation of resources)? Moreover, the architecture and network operation must account for operation in harsh, noisy environments. This talk will introduce two proposed quantum network architectures, referred to as one-way and two-way architectures. The latter envisions networks generating and distributing entangled quantum states to pairs or groups of users; it currently receives the most attention and will be the focus of most of the talk. The talk will present recent results, opportunities, and challenges for such networks, focusing on similarities to, and differences from, classical networks.
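To see why noisy environments constrain these architectures, consider how entanglement quality degrades as elementary links are chained by entanglement swapping. A small sketch (an illustration using the standard fidelity-composition formula for Werner states; the per-link fidelity is an assumed value) shows end-to-end fidelity dropping with each repeater hop:

def swap_fidelity(f1: float, f2: float) -> float:
    # Fidelity after entanglement swapping of two Werner states:
    # F = F1*F2 + (1 - F1)*(1 - F2)/3.
    return f1 * f2 + (1 - f1) * (1 - f2) / 3

LINK_FIDELITY = 0.95   # assumed fidelity of each elementary link

f = LINK_FIDELITY
print(f"1 link: end-to-end fidelity {f:.3f}")
for links in range(2, 6):
    f = swap_fidelity(f, LINK_FIDELITY)
    print(f"{links} links: end-to-end fidelity {f:.3f}")

With five 0.95-fidelity links the end-to-end fidelity falls below 0.8, which is one reason two-way architectures rely on entanglement distillation and error management rather than raw swapping alone.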
Keynote: Scalable and Efficient AI: From Supercomputers to Smartphones
Friday, June 23, 2023
Torsten Hoefler, ETH Zurich
Torsten Hoefler is a Professor of Computer Science at ETH Zurich, a member of Academia Europaea, and a Fellow of the ACM and IEEE. His research interests revolve around the central topic of "Performance-centric System Design" and include scalable networks, parallel programming techniques, and performance modeling. Torsten won Best Paper Awards at the ACM/IEEE Supercomputing Conference (SC10, SC13, SC14, SC19, SC22), EuroMPI'13, HPDC'15, HPDC'16, IPDPS'15, and other conferences. He has published numerous peer-reviewed scientific conference and journal articles and authored chapters of the MPI-2.2 and MPI-3.0 standards. He received the IEEE CS Sidney Fernbach Award, the ACM Gordon Bell Prize, the Latsis Prize of ETH Zurich, as well as both ERC Starting and Consolidator Grants. Additional information about Torsten can be found on his homepage at htor.inf.ethz.ch.
Abstract
Billion-parameter artificial intelligence models have proven to show exceptional performance in a large variety of tasks ranging from natural language processing, computer vision, and image generation to mathematical reasoning and algorithm generation. Those models usually require large parallel computing systems, often called "AI Supercomputers", to be trained initially. We will outline several techniques ranging from data ingestion, parallelization, to accelerator optimization that improve the efficiency of such training systems. Yet, training large models is only a small fraction of practical artificial intelligence computations. Efficient inference is even more challenging - models with hundreds-of-billions of parameters are expensive to use. We continue by discussing model compression and optimization techniques such as fine-grained sparsity as well as quantization to reduce model size and significantly improve efficiency during inference. These techniques may eventually enable inference with powerful models on hand-held devices.
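As a concrete flavor of these ideas, the following sketch (an illustration, not the talk's specific methods) applies post-training symmetric int8 quantization and magnitude pruning to a toy weight tensor, showing the 4x memory reduction and small reconstruction error that make such techniques attractive for inference:

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # toy fp32 weights

# Symmetric per-tensor int8 quantization: map max|w| to 127.
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale                     # dequantized

# Fine-grained (unstructured) sparsity: keep the top 10% of magnitudes.
mask = np.abs(w) >= np.quantile(np.abs(w), 0.90)

print(f"memory: {w.nbytes} B fp32 -> {q.nbytes} B int8 (4x smaller)")
print(f"mean abs quantization error: {np.abs(w - w_hat).mean():.2e}")
print(f"weights kept after pruning: {mask.mean():.0%}")

Production systems tune the scale per channel and retrain or calibrate around pruning, but even this simple per-tensor scheme illustrates why compressed models are far cheaper to serve.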