FABLE: A Development and Computing Framework for Brain-inspired Learning Algorithms

Abstract

Spiking neural networks (SNNs) have received extensive attention across disciplines due to their rich spatiotemporal dynamics and their potential for low processing latency and high energy efficiency on neuromorphic hardware. Research on SNN learning algorithms is active and diverse, and many of these algorithms differ significantly from DNN algorithms in their computation models, features, and weight-adjustment mechanisms. This paper proposes FABLE, a multi-level framework for building and running SNN learning algorithms efficiently. Its kernel is an adaptable computation model based on synchronous dataflow, which expresses the spatiotemporal parallelism of SNNs well and can therefore organize and schedule the underlying SNN-specific tensor operators (OPs) to construct optimized computing procedures. FABLE also provides a flexible programming interface for users to design or customize their learning algorithms. In addition, the implementation of FABLE is highly compatible with existing tools: it extends PyTorch's OP library, scheduler, and APIs to leverage PyTorch's ecosystem and usability. To demonstrate the framework's flexibility, we have ported five different learning algorithms, each requiring less programming than its original implementation. Further experiments demonstrate that FABLE outperforms all of them (by up to 2.61×) in computing performance, whether the original implementations are built on PyTorch, on third-party tools that use PyTorch, or directly on a GPGPU runtime.

Publication
In 2023 International Joint Conference on Neural Networks (IJCNN)