Memory Optimizations in Machine Learning

Tue Sep 17 | 3:05pm
Location: Cypress
Abstract

As Machine Learning continues to forge its way into diverse industries and applications, optimizing computational resources, particularly memory, has become a critical aspect of effective model deployment. This session, "Memory Optimizations in Machine Learning," offers an in-depth look at the specific memory requirements of Machine Learning tasks and at the cutting-edge strategies used to minimize memory consumption efficiently.

We'll begin by demystifying the memory footprint of typical Machine Learning data structures and algorithms, and by clarifying how memory is allocated and deallocated during model training. The talk will then focus on memory-saving techniques such as data quantization, model pruning, and efficient mini-batch selection, which conserve memory without significant degradation in model performance. We'll also look at how memory usage can be optimized across hardware setups ranging from CPUs and GPUs to custom ML accelerators.
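As a rough illustration of the first of those techniques, the sketch below (a hypothetical example, not material from the session) applies a simple affine quantization scheme to a float32 weight matrix and compares the memory needed for the float32 and int8 representations.

```python
import numpy as np

# Hypothetical example: affine (scale + zero-point) quantization of a
# float32 weight matrix to int8, roughly a 4x reduction in memory.
weights = np.random.randn(1024, 1024).astype(np.float32)

w_min, w_max = weights.min(), weights.max()
scale = (w_max - w_min) / 255.0                 # map the float range onto 256 int8 levels
zero_point = np.round(-w_min / scale) - 128.0   # shift so w_min lands near -128

q_weights = np.clip(np.round(weights / scale + zero_point), -128, 127).astype(np.int8)

# Dequantize when the values are needed for computation.
deq_weights = (q_weights.astype(np.float32) - zero_point) * scale

print(f"float32 size:  {weights.nbytes / 1e6:.1f} MB")    # ~4.2 MB
print(f"int8 size:     {q_weights.nbytes / 1e6:.1f} MB")  # ~1.0 MB
print(f"max abs error: {np.abs(weights - deq_weights).max():.4f}")
```

The trade-off is exactly the one the abstract describes: a 4x smaller weight tensor at the cost of a small, bounded quantization error (at most half the scale per element).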

Learning Objectives

Learn about different optimization techniques used in Machine Learning models
Learn how Machine Learning models use memory
Learn about exciting areas of future research in memory optimization

---

Tejas Chopra
Netflix, Inc.