Neural network transformation under hardware constraints

Abstract

There are a number of mature methods for training artificial neural networks (ANNs), including algorithms based on back propagation (BP). These training procedures are usually carried out on GPU-enabled machines, with 16- or 32-bit floating-point numbers used as the network parameters and with no limitation on the maximum fan-in/fan-out of a single neuron or on the type of activation function. In contrast, neuromorphic chips [1][2][3] impose quite a few hardware-specific constraints: the fan-in/fan-out of a single neuron is limited, the range of synaptic weights is restricted, and the hardware neuron models or activation functions are usually simpler than their software counterparts. These constraints make programming such chips difficult.
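To make the weight-range constraint concrete, the following is a minimal sketch of per-tensor weight quantization onto a limited signed integer range. The 8-bit width, the symmetric rounding scheme, and the function name quantize_weights are illustrative assumptions for this sketch, not details taken from the paper or from any specific neuromorphic chip.

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Clip and round floating-point weights onto a limited signed
    integer grid, mimicking a restricted synaptic-weight range.
    Bit width and quantization scheme are illustrative assumptions."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8-bit weights
    scale = np.abs(w).max() / qmax              # per-tensor scale factor
    if scale == 0:
        scale = 1.0                             # all-zero tensor: any scale works
    q = np.clip(np.round(w / scale), -qmax, qmax)  # integer code for each weight
    return q * scale                            # dequantized approximation

# Example: a trained float32 weight matrix squeezed into 8-bit levels.
w = np.random.randn(4, 4).astype(np.float32)
w_hw = quantize_weights(w, num_bits=8)
print(np.abs(w - w_hw).max())                   # worst-case quantization error
```

A transformation like this is lossy, which is why mapping a software-trained network onto constrained hardware generally requires more than naive rounding.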

Publication
In 2016 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES)