Abstract
Approximate computing has emerged as a promising approach to the energy-efficient design of digital systems in many domains such as digital signal processing, robotics, and machine learning. Numerous studies report that employing different data formats in Deep Neural Networks (DNNs), the dominant Machine Learning approach, can yield substantial improvements in power efficiency while preserving acceptable result quality. In this work, the application of Tunable Floating-Point (TFP) precision to DNNs is presented. In TFP, different precisions can be set for different operations by selecting a specific number of bits for the significand and exponent of the floating-point representation. Flexibility in tuning the precision of individual layers of the neural network may result in a more power-efficient computation.
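The abstract gives no implementation details, so purely as an illustration: the minimal Python sketch below emulates what "selecting a specific number of bits for significand and exponent" could mean numerically, by rounding a value to a simulated format with a chosen bit budget. The function name `tfp_round`, the saturating overflow, and the flush-to-zero underflow handling are assumptions made for this sketch, not the paper's actual TFP definition.

```python
import math

def tfp_round(x: float, sig_bits: int, exp_bits: int) -> float:
    """Round x to a simulated floating-point format with `sig_bits`
    stored significand bits (the leading 1 is implicit, as in IEEE 754)
    and an `exp_bits`-wide biased exponent field.

    Hypothetical helper for illustration; overflow saturates to the
    largest finite value and underflow flushes to zero (no subnormals).
    """
    if x == 0.0 or math.isnan(x) or math.isinf(x):
        return x
    sign = math.copysign(1.0, x)
    # Decompose |x| = m * 2**e with m in [0.5, 1).
    m, e = math.frexp(abs(x))
    # Round m to sig_bits + 1 bits (implicit bit + stored bits).
    scale = 1 << (sig_bits + 1)
    m = round(m * scale) / scale
    if m >= 1.0:  # rounding overflowed the significand; renormalize
        m = 0.5
        e += 1
    # IEEE-style exponent limits: E = e - 1, bias = 2**(exp_bits - 1) - 1.
    bias = (1 << (exp_bits - 1)) - 1
    if e - 1 > bias:            # overflow: saturate to largest finite value
        m, e = 1.0 - 1.0 / scale, bias + 1
    elif e - 1 < 1 - bias:      # underflow: flush to zero
        return sign * 0.0
    return sign * math.ldexp(m, e)

# Example parameterizations (assumed, for illustration only):
print(tfp_round(0.1, 7, 8))    # bfloat16-like format: 0.10009765625
print(tfp_round(0.1, 10, 5))   # IEEE half-like format: 0.0999755859375
```

Applying such a rounding with a different (sig_bits, exp_bits) pair per layer or per operation is one way to picture the per-layer precision tuning the abstract describes.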
Original language | English |
---|---|
Title of host publication | Proceedings of the 25th IEEE International Conference on Electronics, Circuits and Systems |
Publisher | IEEE |
Publication date | 2018 |
Pages | 289-292 |
ISBN (Print) | 9781538695623 |
Publication status | Published - 2018 |
Event | 2018 IEEE 25th International Conference on Electronics, Circuits and Systems, Palais des Congrès, Bordeaux, France. Duration: 9 Dec 2018 → 12 Dec 2018. Conference number: 25. https://ieeexplore.ieee.org/xpl/conhome/8599658/proceeding |
Conference
Conference | 2018 IEEE 25th International Conference on Electronics, Circuits and Systems |
---|---|
Number | 25 |
Location | Palais des Congrès |
Country/Territory | France |
City | Bordeaux |
Period | 09/12/2018 → 12/12/2018 |
Internet address | https://ieeexplore.ieee.org/xpl/conhome/8599658/proceeding |
Keywords
- Floating-point
- Power efficiency
- Neural networks