PeakRNN and StatsRNN: Dynamic Pruning in Recurrent Neural Networks

Zuzana Jelcicova, Rasmus Thomas Jones, David Thorn Blix, Marian Verhelst, Jens Sparsø

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



This paper introduces two dynamic real-time pruning techniques, PeakRNN and StatsRNN, for reducing costly multiplications and memory accesses in recurrent neural networks. The methods are demonstrated on a gated recurrent unit in a multi-layer network, solving a single-channel speech enhancement task across a wide variety of real-world acoustic environments and speakers. Performance is compared against the baseline gated recurrent unit and the DeltaRNN method. Compared to the unprocessed speech, the SNR and the Perceptual Evaluation of Speech Quality (PESQ) were improved on average by 8.11 dB and 0.43 MOS-LQO, respectively. Additionally, the two proposed methods outperformed DeltaRNN by 0.7 dB and 0.11 MOS-LQO in the two objective measures, while using the same computational budget per timestep and reducing the original operations by 88%. Furthermore, PeakRNN is fully deterministic, i.e. it is always known in advance how many computations will be executed. Such worst-case guarantees are crucial for real-time acoustic applications.
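The abstract's key idea, that PeakRNN bounds work per timestep by processing only a fixed number of the largest input changes, can be sketched as follows. This is a minimal illustrative example of deterministic top-k delta pruning in that spirit, not the paper's actual implementation; all names, shapes, and the caching scheme are assumptions.

```python
import numpy as np

def peak_prune_step(x_t, x_prev, W, h_accum, k):
    """Hedged sketch of a deterministic top-k delta update.

    Rather than recomputing W @ x_t from scratch, refresh a cached
    pre-activation `h_accum` using only the k largest-magnitude input
    changes, so exactly k columns of W are touched per timestep.
    """
    delta = x_t - x_prev
    # Deterministic budget: select exactly k indices every timestep,
    # so the worst-case cost is known in advance.
    idx = np.argsort(np.abs(delta))[-k:]
    # Sparse rank-k update of the cached pre-activation.
    h_accum = h_accum + W[:, idx] @ delta[idx]
    # Only the selected entries of the stored input are refreshed;
    # unselected deltas keep accumulating until they grow large enough.
    x_kept = x_prev.copy()
    x_kept[idx] = x_t[idx]
    return h_accum, x_kept
```

With k equal to the full input size the update is exact (it reduces to the dense matrix-vector product); smaller k trades accuracy for a fixed, predictable reduction in multiplications and weight fetches.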
Original language: English
Title of host publication: Proceedings of 29th European Signal Processing Conference
Number of pages: 5
Publication date: 2022
ISBN (Print): 978-1-6654-0900-1
Publication status: Published - 2022
Event: 29th European Signal Processing Conference - Virtual event, Dublin, Ireland
Duration: 23 Aug 2021 – 27 Aug 2021
Conference number: 29


Conference: 29th European Signal Processing Conference
Location: Virtual event


Keywords:
  • RNN
  • Determinism
  • Statistics
  • Peaks
  • Threshold
  • Single-channel speech enhancement
  • Hearing instruments


