Scalable Deep Learning over Parallel and Distributed Infrastructures
Full paper deadline: 2019-01-25
Conference date: 2019-05-24
CCF rank: none
Location: Rio de Janeiro
Overview
This workshop solicits research papers on distributed deep learning, aiming at efficiency and scalability for deep learning jobs over distributed and parallel systems. Papers on both algorithms and systems are welcome. We invite authors to submit papers on topics including, but not limited to:
Deep learning on HPC systems
Deep learning for edge devices
Model-parallel and data-parallel techniques
Asynchronous SGD for training DNNs
Communication-efficient training of DNNs
Model/data/gradient compression
Learning in resource-constrained environments
Coding techniques for straggler mitigation
Elasticity for deep learning jobs / spot-market enablement
Hyperparameter tuning for deep learning jobs
Hardware acceleration for deep learning
Scalability of deep learning jobs to large numbers of nodes
Deep learning on heterogeneous infrastructure
Efficient and scalable inference
Data storage/access in shared networks for deep learning jobs