Special Issue on Hardware Acceleration for Machine Learning
Abstract deadline:
Full-paper deadline: 2019-01-15
Impact factor: 1.214
Journal difficulty:
CCF classification: Class C
CAS JCR partition:
• Major category: Engineering & Technology - Tier 3
• Subcategory: Computer Science, Hardware & Architecture - Tier 4
• Subcategory: Engineering, Electrical & Electronic - Tier 4
Overview
Many machine learning (ML) workloads, especially those involving deep neural networks, are both computation- and memory-intensive. Hardware accelerators are essential to ensure that such ML applications meet not only performance and throughput targets but also power and energy-efficiency requirements. In this special issue of Integration, the VLSI Journal, we call for the most advanced research results on hardware acceleration of machine learning, covering both training and inference. Topics of interest include (but are not limited to) the following:
Software/Compilers/Tools for mapping ML workloads to accelerators
New design methodologies for ML-centric or ML-aware hardware accelerators
New microarchitecture designs of hardware accelerators for ML
ML workload acceleration on existing accelerators such as GPU, FPGA, CGRA, or ASIC
Accelerators for emerging ML algorithms such as adversarial learning and transfer learning
ML hardware acceleration for edge computing and IoT
ML hardware acceleration for cloud computing
Hardware-friendly ML modeling, optimization, quantization, and compression
Comparison studies of different acceleration architectures (GPUs, TPUs, ASICs, FPGAs, etc.)
Survey and tutorial studies of ML hardware acceleration
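To make the "hardware-friendly quantization and compression" topic above concrete, the sketch below shows symmetric per-tensor int8 post-training quantization, a common technique for fitting models onto fixed-point accelerators. This is an illustrative example only; the function names and the NumPy-based formulation are our own, not drawn from the call.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float weights onto [-127, 127].

    Returns the int8 tensor plus the float scale needed to recover
    approximate real values (a typical accelerator stores both).
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 tensor and scale."""
    return q.astype(np.float32) * scale

# Quantize a small random weight matrix and check the reconstruction error,
# which is bounded by half the quantization step (scale / 2).
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = np.max(np.abs(w - w_hat))
```

Per-tensor symmetric scaling keeps the hardware simple (one multiplier per tensor, no zero-point offsets); per-channel scales trade a little extra bookkeeping for noticeably better accuracy on convolutional weights.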