Gradient compression for distributed training of machine learning models


Project Description

Modern supervised machine learning models are trained on enormous amounts of data, which calls for distributed computing systems. The training data is partitioned across the memory of the nodes of the system, and in each step of the training process the updates computed by all nodes from their local data must be aggregated. This aggregation step requires communicating a large tensor, and this communication is the bottleneck limiting the efficiency of the training method.
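To make the aggregation step concrete, here is a minimal toy simulation of one distributed training step in NumPy. All names and values (number of workers, step size, random gradients) are illustrative assumptions, not part of any real distributed framework; in practice the averaging would be an all-reduce over the network.

```python
import numpy as np

# Toy sketch of one distributed training step, assuming n_workers nodes
# each holding a local data shard (illustrative only).
rng = np.random.default_rng(0)
n_workers, dim = 4, 10

# Each worker computes a gradient from its local data (here: random).
local_grads = [rng.standard_normal(dim) for _ in range(n_workers)]

# Aggregation step: average the workers' gradients. In a real system
# this is an all-reduce over the network, and the tensor being summed
# can have millions of entries -- hence the communication bottleneck.
avg_grad = np.mean(local_grads, axis=0)

# Model update with an arbitrary step size of 0.1 (for this sketch).
model = np.zeros(dim)
model -= 0.1 * avg_grad
```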

To mitigate this issue, various compression schemes (e.g., sparsification, quantization, dithering) have recently been proposed in the literature. However, many theoretical, system-level, and practical questions remain open. In this project the intern will aim to advance the state of the art in some aspect of this field. As this is a fast-moving field, the details of the project will be finalized together with the successful applicant. Background reading based on research on this topic done in my group:
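The two most common families of compression operators mentioned above can be sketched in a few lines of NumPy. The function names `topk_sparsify` and `random_dither` are my own, and the dithering sketch follows the general QSGD-style recipe (scale by the norm, then round stochastically so the operator is unbiased); it is an illustration, not the specific method studied in the group's papers.

```python
import numpy as np

def topk_sparsify(g, k):
    """Sparsification sketch: keep only the k largest-magnitude
    entries of the gradient g and zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def random_dither(g, levels=4, rng=None):
    """Quantization/dithering sketch (QSGD-style, assumed): scale by
    the norm, then randomly round each entry to one of `levels`
    quantization levels so that the result is unbiased."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(g)
    if norm == 0:
        return g.copy()
    scaled = np.abs(g) / norm * levels
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part,
    # which makes the quantizer unbiased in expectation.
    rounded = lower + (rng.random(g.shape) < scaled - lower)
    return np.sign(g) * norm * rounded / levels
```

Both operators shrink the message each worker must send, trading off communication against added variance in the aggregated gradient.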

Program - Computer Science
Division - Computer, Electrical and Mathematical Sciences and Engineering
Field of Study - computer science, mathematics, machine learning

About the Faculty

Peter Richtarik

Desired Project Deliverables

Ideally author or coauthor a research paper, and submit it to a premier conference in the field (e.g., ICML, AISTATS, NeurIPS, ICLR).