ENLARGE: An Efficient SNN Simulation Framework on GPU Clusters

Abstract

Spiking Neural Networks (SNNs) are currently the most widely used computing model in the neuroscience community. There is also growing research interest in exploring the potential of SNNs in brain-inspired computing, artificial intelligence, and other areas. Because SNNs possess distinctive characteristics rooted in biological fidelity, they require dedicated simulation frameworks to achieve both usability and efficiency. However, there is no widely used, easily accessible, high-performance SNN simulation framework for GPU clusters. In this paper, we propose ENLARGE, an efficient SNN simulation framework for GPU clusters. ENLARGE provides a multi-level architecture that handles computation, communication, and synchronization hierarchically. We also propose an efficient communication method based on an all-to-all communication pattern. To handle the delay of spike delivery, the most distinctive characteristic of SNNs, several delay-aware optimization methods are also proposed. We further propose a multi-level workload management method. Various experiments demonstrate the performance and scalability of the framework, as well as the effects of the optimization methods. Test results show that ENLARGE achieves 3.17×∼28.12× speedup compared with the most widely used NEST simulator and 3.26×∼13.57× speedup compared with the widely used NEST GPU simulator for GPU clusters.
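For readers unfamiliar with the all-to-all spike exchange mentioned above, the sketch below is a rough, generic illustration of how such an exchange between GPU-hosting MPI ranks could look; it is not ENLARGE's actual implementation, and the buffer layout, variable names, and the single-step exchange granularity are illustrative assumptions.

```cpp
// Minimal sketch (not from the paper) of an all-to-all spike exchange:
// each rank sends the IDs of neurons that fired to the ranks hosting
// their postsynaptic targets. All names and layouts are assumptions.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Hypothetical per-destination spike lists gathered after the GPU
    // kernels for this simulation step (or block of steps) finished.
    std::vector<std::vector<int>> out(nranks);
    out[(rank + 1) % nranks].push_back(rank * 1000);  // fake spiking neuron ID

    // Flatten into one send buffer with per-destination counts and offsets.
    std::vector<int> send_counts(nranks), send_displs(nranks), send_buf;
    for (int dst = 0; dst < nranks; ++dst) {
        send_counts[dst] = static_cast<int>(out[dst].size());
        send_displs[dst] = static_cast<int>(send_buf.size());
        send_buf.insert(send_buf.end(), out[dst].begin(), out[dst].end());
    }

    // 1) All-to-all exchange of spike counts so receivers can size buffers.
    std::vector<int> recv_counts(nranks);
    MPI_Alltoall(send_counts.data(), 1, MPI_INT,
                 recv_counts.data(), 1, MPI_INT, MPI_COMM_WORLD);

    // 2) All-to-all exchange of the spike IDs themselves.
    std::vector<int> recv_displs(nranks, 0);
    for (int i = 1; i < nranks; ++i)
        recv_displs[i] = recv_displs[i - 1] + recv_counts[i - 1];
    std::vector<int> recv_buf(recv_displs[nranks - 1] + recv_counts[nranks - 1]);
    MPI_Alltoallv(send_buf.data(), send_counts.data(), send_displs.data(), MPI_INT,
                  recv_buf.data(), recv_counts.data(), recv_displs.data(), MPI_INT,
                  MPI_COMM_WORLD);

    // Received spikes would then be queued per synaptic delay and delivered
    // to local neurons on the GPU once their delays elapse.
    std::printf("rank %d received %zu spikes\n", rank, recv_buf.size());
    MPI_Finalize();
    return 0;
}
```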

Publication
In IEEE Transactions on Parallel and Distributed Systems 2024
渠鹏
Assistant Researcher