Tunable Floating-Point for Artificial Neural Networks

Marta Franceschi, Alberto Nannarelli, Maurizio Valle

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



Approximate computing has emerged as a promising approach to energy-efficient design of digital systems in domains such as digital signal processing, robotics, and machine learning. Numerous studies report that employing different data formats in Deep Neural Networks (DNNs), the dominant machine-learning approach, can yield substantial improvements in power efficiency while keeping result quality acceptable. In this work, the application of Tunable Floating-Point (TFP) precision to DNNs is presented. In TFP, different precisions can be set for different operations by selecting a specific number of bits for the significand and the exponent in the floating-point representation. Flexibility in tuning the precision of individual layers of the neural network may result in more power-efficient computation.
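To illustrate the idea of selecting significand and exponent widths per operation, the following is a minimal software sketch of quantizing a value to a reduced floating-point format. The function `tfp_quantize` and its rounding/underflow policy are assumptions for illustration, not the paper's actual TFP hardware unit.

```python
import math

def tfp_quantize(x, sig_bits, exp_bits):
    """Round x to a hypothetical reduced-precision float with
    sig_bits fractional significand bits and exp_bits exponent bits.
    Illustrative sketch only (round-to-nearest, flush-to-zero underflow)."""
    if x == 0.0:
        return 0.0
    # Decompose x = m * 2**e with m in [0.5, 1)
    m, e = math.frexp(x)
    # Representable exponent range for exp_bits (symmetric, simplified)
    e_max = 2 ** (exp_bits - 1)
    e_min = -e_max + 1
    if e > e_max:
        return math.copysign(math.inf, x)   # overflow saturates to infinity
    if e < e_min:
        return 0.0                          # underflow flushed to zero
    # Round the significand to sig_bits fractional bits
    scale = 2 ** sig_bits
    m_q = round(m * scale) / scale
    return math.ldexp(m_q, e)

# Example: 0.1 is representable only approximately with 8 significand bits
print(tfp_quantize(0.1, sig_bits=8, exp_bits=5))
```

Per-layer tuning would then amount to choosing a different `(sig_bits, exp_bits)` pair for each layer's multiply-accumulate operations, trading accuracy for power.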
Original language: English
Title of host publication: Proceedings of 25th IEEE International Conference on Electronics, Circuits and Systems
Publication date: 2018
ISBN (Print): 9781538695623
Publication status: Published - 2018
Event: 25th IEEE International Conference on Electronics, Circuits and Systems - Palais des Congrès, Bordeaux, France
Duration: 9 Dec 2018 - 12 Dec 2018


Conference: 25th IEEE International Conference on Electronics, Circuits and Systems
Location: Palais des Congrès


Keywords:
  • Floating-point
  • Power efficiency
  • Neural networks
