Towards a tailored mixed-precision sub-8-bit quantization scheme for Gated Recurrent Units using Genetic Algorithms

Riccardo Miccini, Alessandro Cerioli, Clément Laroche, Tobias Piechowiak, Jens Sparsø, Luca Pezzarossa

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



Despite recent advances in model compression techniques for deep neural networks, deploying such models on ultra-low-power embedded devices still proves challenging. In particular, quantization schemes for Gated Recurrent Units (GRU) are difficult to tune due to their dependence on an internal state, preventing them from fully benefiting from sub-8-bit quantization. In this work, we propose a modular integer quantization scheme for GRUs where the bit width of each operator can be selected independently. We then employ Genetic Algorithms (GA) to explore the vast search space of possible bit widths, simultaneously optimizing for model size and accuracy. We evaluate our methods on four different sequential tasks and demonstrate that mixed-precision solutions exceed homogeneous-precision ones in terms of Pareto efficiency. Our results show a model size reduction between 25% and 55% while maintaining an accuracy comparable with the 8-bit homogeneous equivalent.
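The search described in the abstract can be sketched as a simple multi-objective genetic algorithm: each chromosome assigns a bit width to every quantized operator, and fitness is the pair (model size, accuracy). The sketch below is illustrative only and does not reproduce the paper's method: the operator names, parameter counts, GA hyperparameters, and the `accuracy_proxy` function are all hypothetical stand-ins (in the actual work, accuracy would come from evaluating the quantized GRU on a validation set).

```python
import random

# Hypothetical GRU operators and their parameter counts (assumptions,
# not taken from the paper).
OPERATORS = {"W_update": 4096, "W_reset": 4096, "W_new": 4096,
             "state": 128, "activations": 128}
BIT_CHOICES = [2, 3, 4, 5, 6, 7, 8]  # sub-8-bit search space per operator

def model_size(bits):
    # Total size in bits: each operator's parameter count times its bit width.
    return sum(b * n for b, n in zip(bits, OPERATORS.values()))

def accuracy_proxy(bits):
    # Stand-in for task accuracy: in practice this would evaluate the
    # quantized model on a validation set. Here, lower bit widths cost more.
    return 1.0 - sum((8 - b) ** 2 for b in bits) / (36.0 * len(bits))

def dominates(a, b):
    # a dominates b if it is no worse in both objectives and strictly
    # better in at least one (minimize size, maximize accuracy).
    return a[0] <= b[0] and a[1] >= b[1] and (a[0] < b[0] or a[1] > b[1])

def pareto_front(population):
    # Keep only non-dominated (size, accuracy, individual) triples.
    scored = [(model_size(ind), accuracy_proxy(ind), ind) for ind in population]
    return [s for s in scored
            if not any(dominates(o[:2], s[:2]) for o in scored if o is not s)]

def evolve(pop_size=40, generations=30, seed=0):
    rng = random.Random(seed)
    n = len(OPERATORS)
    pop = [[rng.choice(BIT_CHOICES) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = rng.sample(pop, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if rng.random() < 0.2:               # per-child mutation
                child[rng.randrange(n)] = rng.choice(BIT_CHOICES)
            offspring.append(child)
        # Elitist survival: carry the non-dominated set forward, then
        # refill (or truncate) to the fixed population size.
        front = [ind for _, _, ind in pareto_front(pop + offspring)]
        if len(front) < pop_size:
            pop = front + offspring[:pop_size - len(front)]
        else:
            pop = front[:pop_size]
    return pareto_front(pop)
```

A caller would pick a final configuration from the returned front according to its size budget, e.g. `min((s for s in evolve()), key=lambda s: s[0])` for the smallest model. The per-operator chromosome is what makes the scheme "modular": adding or removing a quantized operator only changes the chromosome length.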
Original language: English
Title of host publication: Proceedings of tinyML Research Symposium'24
Number of pages: 7
Publication status: Accepted/In press - 2024
Event: tinyML Research Symposium'24 - San Francisco, United States
Duration: 22 Apr 2024 - 22 Apr 2024


Conference: tinyML Research Symposium'24
Country/Territory: United States
City: San Francisco


  • Neural networks
  • Quantization
  • Neural architecture search


