Tunable Floating-Point for Energy Efficient Accelerators

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

In this work, we address the design of an on-chip accelerator for Machine Learning and other computationally demanding applications with Tunable Floating-Point (TFP) precision. The precision can be chosen per operation by selecting a specific number of bits for the significand and exponent in the floating-point representation. By tuning a given algorithm to the minimum precision that achieves an acceptable target error, we can make the computation more power efficient. We focus on floating-point multiplication, the most power-demanding arithmetic operation.
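The paper itself describes a hardware design, but the core idea of the abstract can be emulated in software. Below is a minimal Python sketch (not from the paper) that mimics a tunable significand width by rounding each operand and the product to a chosen number of significand bits; the names quantize_significand and tfp_mul are hypothetical, round-to-nearest is assumed, and the exponent is left at double-precision range for simplicity.

```python
import math

def quantize_significand(x: float, sig_bits: int) -> float:
    """Round x to a significand of sig_bits bits (round to nearest).

    Software emulation of a reduced-precision significand on top of
    Python floats; the exponent range is not restricted here.
    """
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    scale = 1 << sig_bits
    m = round(m * scale) / scale    # keep sig_bits significand bits
    return math.ldexp(m, e)

def tfp_mul(a: float, b: float, sig_bits: int) -> float:
    """Multiply with inputs and product rounded to sig_bits significand bits."""
    p = quantize_significand(a, sig_bits) * quantize_significand(b, sig_bits)
    return quantize_significand(p, sig_bits)

# Example: error of an 8-bit-significand product vs. double precision.
a, b = math.pi, math.e
exact = a * b
approx = tfp_mul(a, b, 8)
print(f"exact={exact:.10f}  tfp={approx:.10f}  rel.err={(approx - exact) / exact:.3e}")
```

Sweeping sig_bits downward with such a model is one way to find the minimum per-operation precision that still meets a target error, which is the tuning step the abstract describes.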
Original language: English
Title of host publication: Proceedings of 2018 IEEE 25th Symposium on Computer Arithmetic
Publisher: IEEE
Publication date: 2018
Pages: 29-36
ISBN (Print): 9781538626139
DOIs
Publication status: Published - 2018
Event: 2018 IEEE 25th Symposium on Computer Arithmetic, Amherst, United States
Duration: 25 Jun 2018 - 27 Jun 2018

Conference

Conference: 2018 IEEE 25th Symposium on Computer Arithmetic
Country: United States
City: Amherst
Period: 25/06/2018 - 27/06/2018

