This paper introduces a dedicated neural network engine developed for resource-constrained embedded devices such as hearing aids. It implements a novel dynamic two-step scaling technique for quantizing activations in order to minimize word size and thereby memory traffic. This technique requires neither computing a scaling factor during training nor expensive hardware for on-the-fly quantization. Memory traffic is further reduced by a 12-element vectorized multiply-accumulate datapath that supports data reuse. Using a keyword-spotting neural network as a benchmark, the performance of the neural network engine is compared with an implementation on a typical audio digital signal processor used by Demant in some of its hearing instruments. The neural network engine offers both small area and low power: it outperforms the digital signal processor, reducing, among other metrics, power by 5×, memory accesses by 5.5×, and memory requirements by 3×. Furthermore, the two-step scaling ensures that the engine always executes in a deterministic number of clock cycles for a given neural network.
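The general idea behind dynamic, training-free activation scaling can be illustrated with a short sketch. This is an illustrative interpretation only, not the paper's exact hardware algorithm: the two steps here are (1) a coarse power-of-two shift chosen at run time from the block's maximum magnitude, so no trained scale factor is needed, and (2) saturation to the target word size. All function and parameter names are hypothetical.

```python
import numpy as np

def two_step_quantize(acc, word_bits=8):
    """Illustrative two-step dynamic scaling of wide accumulator values.

    Step 1: pick a power-of-two shift from the block's runtime maximum
    magnitude (cheap in hardware, no training-time scale factor).
    Step 2: saturate the shifted values to the signed target word size.
    """
    max_mag = int(np.max(np.abs(acc)))
    if max_mag == 0:
        return np.zeros_like(acc), 0
    # Step 1: shift so the largest magnitude fits in word_bits - 1 bits
    shift = max(0, int(np.ceil(np.log2(max_mag + 1))) - (word_bits - 1))
    scaled = acc >> shift  # arithmetic right shift for signed integers
    # Step 2: saturate to the signed word range
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(scaled, lo, hi), shift
```

Because the shift is derived from a simple maximum over the block, the number of operations is independent of the data values, which is consistent with the deterministic cycle-count property the abstract claims for the engine.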
Title of host publication: Proceedings of the 54th Asilomar Conference on Signals, Systems, and Computers
Published: 2020
Event: 54th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, United States
Duration: 1 Nov 2020 → 4 Nov 2020