Model- vs. Data-Parallelism for Training of Deep Neural Networks


Project Description

Training of very large Deep Neural Networks is typically performed on large-scale distributed systems using the so-called data-parallelism approach. However, the scalability of this approach is limited by the convergence properties of the training algorithms. In this project, we will study a less common approach, called model-parallelism, which has the potential to overcome the convergence limitations. We will deploy and experimentally evaluate both approaches in order to understand their trade-offs. We will then design a hybrid method that attempts to combine the benefits of the existing approaches.
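To make the two decompositions concrete, below is a minimal single-process sketch in PyTorch (a framework choice of ours; the project description does not prescribe one). It simulates two "workers" inside one process on a toy two-layer network; the layer sizes, the two-way split, and the explicit gradient-averaging loop are illustrative stand-ins for a real multi-device torch.distributed deployment.

import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
batch = torch.randn(8, 32)    # one toy mini-batch
target = torch.randn(8, 10)
loss_fn = nn.MSELoss()

# Data-parallelism: every worker holds a full copy of the model and
# processes a shard of the batch; gradients are then averaged (all-reduce)
# so all replicas apply an identical update.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
replicas = [model, copy.deepcopy(model)]        # one replica per "worker"
for rep, x, y in zip(replicas, batch.chunk(2), target.chunk(2)):
    loss_fn(rep(x), y).backward()
for params in zip(*(r.parameters() for r in replicas)):
    avg_grad = torch.stack([p.grad for p in params]).mean(dim=0)
    for p in params:                             # simulated all-reduce
        p.grad = avg_grad.clone()

# Model-parallelism: the layers themselves are partitioned across workers,
# so each worker holds only part of the parameters; activations (forward)
# and activation gradients (backward) cross the partition boundary.
stage1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # would live on worker 0
stage2 = nn.Linear(64, 10)                            # would live on worker 1
hidden = stage1(batch)             # activations "sent" to the next stage
loss_fn(stage2(hidden), target).backward()            # gradients flow back

Note the trade-off the sketch exposes: adding data-parallel workers grows the effective batch size, which is what strains convergence, whereas model-parallelism keeps the batch fixed but pays in communication of activations across stages.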
Program - Computer Science
Division - Computer, Electrical and Mathematical Sciences and Engineering
Field of Study - Computer Science / Machine Learning

About the Researcher

Panagiotis Kalnis

Professor, Computer Science

Professor Kalnis's research interests are in databases and information management. Specifically, he is interested in: database outsourcing and cloud computing, mobile computing, peer-to-peer systems, OLAP, data warehouses, spatio-temporal and high-dimensional databases, GIS, and security, privacy, and anonymity.

Desired Project Deliverables

1. Experimental evaluation of model- versus data-parallelism.
2. Design and implementation of a hybrid approach.