High Performance Simulation of Spiking Neural Network on GPGPUs

Abstract

The spiking neural network (SNN) is the computational model most commonly used by the neuroscience and neuromorphic computing communities. It offers greater biological realism and has the potential to achieve high computational power and energy efficiency. Because existing SNN simulation frameworks on general-purpose graphics processing units (GPGPUs) do not fully account for the biologically oriented properties of SNNs, such as spike-driven computation and activity sparsity, they suffer from insufficient parallelism exploitation, irregular memory access, and load imbalance. In this article, we propose specific optimization methods to speed up SNN simulation on GPGPUs. First, we propose a fine-grained network representation as a flexible and compact intermediate representation (IR) for SNNs. Second, we propose cross-population/-projection parallelism exploration to make full use of GPGPU resources. Third, we propose sparsity-aware load balancing to deal with activity sparsity. Finally, we provide dedicated optimizations to support multiple GPGPUs. Accordingly, we also propose BSim, a code generation framework for high-performance simulation of SNNs on GPGPUs. Tests show that, compared to GeNN, a state-of-the-art GPU-based SNN simulator, BSim achieves 1.41× - 9.33× speedup for SNNs with different configurations; it outperforms other simulators by even larger margins.
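To illustrate the activity sparsity that the abstract refers to, the sketch below contrasts a dense synaptic-current update with a spike-driven (event-driven) one that only visits neurons that actually fired. This is a minimal NumPy illustration under assumed parameters (population sizes, a ~2% firing rate), not the paper's implementation or BSim's generated code.

```python
import numpy as np

# Illustrative sizes and firing rate; these are assumptions, not from the paper.
N_PRE, N_POST = 1000, 1000
rng = np.random.default_rng(0)
weights = rng.standard_normal((N_PRE, N_POST)).astype(np.float32)

def dense_update(spikes, weights):
    # Dense update: every presynaptic neuron is touched every time step,
    # even though most contribute nothing (spike flag is 0).
    return spikes @ weights

def sparse_update(spike_ids, weights):
    # Spike-driven update: only the rows of neurons that fired are read,
    # so work scales with the number of spikes, not the population size.
    return weights[spike_ids].sum(axis=0)

# Typical SNN activity is sparse: only a few percent of neurons fire per step.
spikes = (rng.random(N_PRE) < 0.02).astype(np.float32)
spike_ids = np.flatnonzero(spikes)

dense = dense_update(spikes, weights)
sparse = sparse_update(spike_ids, weights)
assert np.allclose(dense, sparse, atol=1e-4)  # same currents, far less work
```

On a GPU, the irregular row gather in the sparse path is exactly what causes the irregular memory access and load imbalance the abstract mentions, which is why dedicated data layouts and balancing strategies are needed.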

Publication
In IEEE Transactions on Parallel and Distributed Systems, November 2020