Accelerating Neural Network Training with Processing-in-Memory GPU

Abstract

Processing-in-memory (PIM) architectures are promising for accelerating deep neural network (DNN) training because they enable low-latency, energy-efficient data movement between computation units and memory. This paper explores a novel GPU-PIM architecture for DNN training, in which GPU streaming multiprocessors are integrated into the logic layers of 3D memory stacks, and multiple such stacks are connected to form a PIM network. Two corresponding optimization strategies are proposed. The first increases the computational parallelism of data-parallel training by exploiting the large memory capacity, high bandwidth, and fast network transmission of GPU-PIM. The second further applies optimized model-parallel training to significantly reduce communication overhead: a mapping scheme decides the proper parallelization strategy for each DNN layer on the proposed architecture. Experiments show that the proposed architecture outperforms the baseline GPU by 35.5% and 59.9%, and reduces energy consumption by 28.2% and 27.8%, on the two benchmarks evaluated.
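The paper's mapping scheme is not detailed in the abstract. As a hypothetical illustration of the trade-off such a scheme navigates, the Python sketch below picks data- or model-parallel execution per layer by comparing the network traffic each mode would generate: data-parallel training synchronizes gradients (traffic grows with the layer's parameter count), while model-parallel training exchanges activations (traffic grows with the layer's activation volume and batch size). The `Layer` fields, the cost model, and the example sizes are all assumptions for illustration, not the authors' algorithm.

```python
# Hypothetical sketch of a per-layer parallelization chooser for a
# multi-stack PIM network; this is NOT the paper's mapping scheme.
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    param_count: int       # weights + biases in this layer (assumed known)
    activation_count: int  # output activations per sample (assumed known)


def choose_parallelism(layers, batch_size, num_stacks):
    """Pick a parallelization mode per layer by comparing the per-step
    traffic each mode would place on the inter-stack network, under a
    naive all-to-all cost model (each stack talks to num_stacks - 1 peers)."""
    plan = {}
    for layer in layers:
        # Data-parallel: every stack must exchange a full gradient copy.
        dp_traffic = layer.param_count * (num_stacks - 1)
        # Model-parallel: stacks exchange partial activations instead.
        mp_traffic = layer.activation_count * batch_size * (num_stacks - 1)
        plan[layer.name] = "data" if dp_traffic <= mp_traffic else "model"
    return plan


if __name__ == "__main__":
    # Example sizes are illustrative only.
    net = [
        Layer("conv1", param_count=35_000, activation_count=800_000),
        Layer("fc1", param_count=40_000_000, activation_count=4_096),
    ]
    print(choose_parallelism(net, batch_size=128, num_stacks=4))
    # Convolutional layers (few weights, large activations) tend toward
    # data-parallel; fully connected layers (many weights, small
    # activations) tend toward model-parallel.
```

Under these assumptions, the chooser reproduces the familiar heuristic that convolutional layers favor data parallelism and fully connected layers favor model parallelism; a real scheme would also account for the PIM network topology and link bandwidths.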

Publication
In 2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid)