A depthwise separable convolutional neural network for keyword spotting on an embedded system

Peter Mølgaard Sørensen, Bastian Epp, Tobias May*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

A keyword spotting algorithm implemented on an embedded system using a depthwise separable convolutional neural network classifier is reported. The proposed system was derived from a high-complexity system with the goal of reducing complexity and increasing efficiency. In order to meet the requirements set by hardware resource constraints, a limited hyper-parameter grid search was performed, which showed that network complexity could be drastically reduced with little effect on classification accuracy. It was furthermore found that quantization of pre-trained networks using mixed and dynamic fixed point principles could reduce the memory footprint and computational requirements without lowering classification accuracy. Data augmentation techniques were used to increase network robustness to unseen acoustic conditions by mixing training data with realistic noise recordings. Finally, the system’s ability to detect keywords in a continuous audio stream was successfully demonstrated.
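
A depthwise separable classifier of this kind factorizes each convolution into a per-channel (depthwise) filter followed by a 1×1 (pointwise) projection, which is what keeps the parameter and operation counts small enough for an embedded target. The following is a minimal sketch of such a block in PyTorch; the channel counts, number of blocks, and input feature shape are illustrative assumptions, not the configuration reported in the article.

```python
# Sketch of a depthwise separable convolution block and a small DS-CNN
# keyword-spotting classifier. Sizes are placeholders, not the paper's setup.
import torch
import torch.nn as nn


class DepthwiseSeparableBlock(nn.Module):
    """Depthwise 3x3 conv (one filter per channel) + 1x1 pointwise conv."""

    def __init__(self, in_channels: int, out_channels: int) -> None:
        super().__init__()
        # Depthwise: groups=in_channels gives one independent filter per channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        # Pointwise: 1x1 conv mixes channels; most parameters live here.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))


class DSCNNKeywordSpotter(nn.Module):
    """Small DS-CNN over a (1, time, frequency) feature map, e.g. MFCCs."""

    def __init__(self, num_keywords: int = 12, channels: int = 64) -> None:
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(
            *[DepthwiseSeparableBlock(channels, channels) for _ in range(4)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(channels, num_keywords)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.blocks(self.stem(x))).flatten(1)
        return self.classifier(x)


if __name__ == "__main__":
    # Example: batch of 8 feature maps of 49 time frames x 10 MFCC bins
    # (a common keyword-spotting front-end shape, assumed here).
    model = DSCNNKeywordSpotter()
    logits = model(torch.randn(8, 1, 49, 10))
    print(logits.shape)  # torch.Size([8, 12])
```

Compared with a standard 3×3 convolution over the same channel counts, this factorization cuts multiply-accumulate operations by roughly the ratio of the kernel size, which is the property the quantized, resource-constrained deployment described above relies on.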
Original language: English
Article number: 10
Journal: EURASIP Journal on Audio, Speech, and Music Processing
Volume: 2020
Issue number: 1
Number of pages: 14
ISSN: 1687-4714
DOIs
Publication status: Published - 2020

Keywords

  • Keyword spotting
  • Speech recognition
  • Embedded software
  • Deep learning
  • Convolutional neural networks
  • Quantization
