Scalable Speech Enhancement with Dynamic Channel Pruning

Riccardo Miccini, Clément Laroche, Tobias Piechowiak, Luca Pezzarossa

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Speech Enhancement (SE) is essential for improving productivity in remote collaborative environments. Although deep learning models are highly effective at SE, their computational demands make them impractical for embedded systems. Furthermore, acoustic conditions can vary significantly in difficulty, whereas neural networks usually perform a fixed amount of computation regardless of the input. To this end, we introduce Dynamic Channel Pruning (DynCP) to the audio domain for the first time and apply it to a custom convolutional architecture for SE. Our approach identifies unnecessary convolutional channels at runtime and saves computational resources by skipping both the computation of their activations and the retrieval of their filters. When trained to use only 25% of channels, we save 29.6% of MACs while causing only a 0.75% drop in PESQ. Thus, DynCP offers a promising path toward deploying larger and more powerful SE solutions on resource-constrained devices.
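The core idea in the abstract — deciding at runtime which convolutional channels to compute and skipping the rest — can be sketched in a few lines. The snippet below is a toy illustration, not the paper's method: the gating scores here are a hand-crafted stand-in (the paper learns them end-to-end), and all names (`dynamic_channel_pruning`, `keep_ratio`) are assumptions for illustration.

```python
import numpy as np

def dynamic_channel_pruning(x, filters, keep_ratio=0.25):
    """Toy dynamic channel pruning for a 1-D convolution.

    x:       input feature map, shape (C_in, T)
    filters: conv weights, shape (C_out, C_in, K)
    Only a keep_ratio fraction of output channels is computed; the rest
    stay zero and cost no MACs and no filter fetches.
    """
    c_out, c_in, k = filters.shape
    # Stand-in gating score: filter magnitude scaled by input energy.
    # (The paper learns a gating module instead; this is illustrative.)
    scores = np.abs(filters).sum(axis=(1, 2)) * np.abs(x).mean()
    n_keep = max(1, int(round(keep_ratio * c_out)))
    keep = np.argsort(scores)[-n_keep:]  # channels selected at runtime

    t_out = x.shape[1] - k + 1
    y = np.zeros((c_out, t_out))
    macs = 0
    for c in keep:  # skipped channels: no activations, no filter retrieval
        for t in range(t_out):
            y[c, t] = np.sum(filters[c] * x[:, t:t + k])
            macs += c_in * k
    full_macs = c_out * c_in * k * t_out
    return y, macs, full_macs

rng = np.random.default_rng(0)
y, macs, full = dynamic_channel_pruning(rng.standard_normal((8, 32)),
                                        rng.standard_normal((16, 8, 5)),
                                        keep_ratio=0.25)
print(macs / full)  # fraction of MACs actually spent
```

With `keep_ratio=0.25` and 16 output channels, only 4 channels are computed, so the MAC count drops to a quarter of the dense convolution's; the paper's reported 29.6% savings at 25% channel usage reflects that gating itself and the unprunable parts of the network add some overhead.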
Original language: English
Title of host publication: Proceedings of the 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2025)
Number of pages: 5
Publisher: IEEE
Publication status: Accepted/In press - 2025
Event: 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing - Hyderabad, India
Duration: 6 Apr 2025 → 11 Apr 2025

Conference

Conference: 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing
Country/Territory: India
City: Hyderabad
Period: 06/04/2025 → 11/04/2025

Keywords

  • Speech Enhancement
  • Dynamic Neural Networks
  • Edge AI
