Friday, November 15, 2019 • 5:00pm - 5:30pm PST
Next-Generation Frameworks for Large-Scale Machine Learning


As the deep-learning revolution matures, there is ever-growing demand for bigger datasets, larger models, and more compute infrastructure. What is the role of algorithmic design in this? I will show several ways to infuse structure into deep networks to overcome these limitations, viz., through tensors, graphs, physical laws, and simulations. Tensorized neural networks achieve high compression rates while improving generalization and robustness. To speed up multi-node model training, I will demonstrate how simple gradient compression (SignSGD) yields communication savings while preserving accuracy. Thus, with better algorithmic design, it is possible to obtain "free lunches" and better efficiency in ML.
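The gradient-compression idea behind SignSGD can be sketched in a few lines. Below is a hypothetical minimal illustration of sign-based compression with majority-vote aggregation, run in a single process for clarity; the function name and setup are illustrative, not the distributed implementation described in the talk:

```python
import numpy as np

def signsgd_majority_vote(params, worker_grads, lr=0.01):
    """One SignSGD step with majority vote across workers (minimal sketch)."""
    # Each worker transmits only sign(gradient): 1 bit per coordinate
    # instead of a 32-bit float, cutting communication cost by ~32x.
    signs = np.sign(np.stack(worker_grads))  # shape: (n_workers, n_params)
    # The server aggregates by elementwise majority vote over worker signs
    # and broadcasts the single-bit result back to all workers.
    vote = np.sign(signs.sum(axis=0))
    # Every worker then applies the same fixed-magnitude update.
    return params - lr * vote
```

Because only signs cross the network, the per-step communication volume is independent of gradient magnitude precision, which is where the savings come from.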

Speakers

Anima Anandkumar

Professor, Caltech
Anima Anandkumar holds dual positions in academia and industry. She is a Bren Professor in the CMS department at Caltech and a director of machine learning research at NVIDIA, where she leads the research group that develops next-generation AI algorithms.

