Tunable Floating-Point for Energy Efficient Accelerators

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

In this work, we address the design of an on-chip accelerator for Machine Learning and other computation-demanding applications with Tunable Floating-Point (TFP) precision. The precision can be chosen per operation by selecting the number of bits for the significand and exponent in the floating-point representation. By tuning the precision of a given algorithm down to the minimum that achieves an acceptable target error, we make the computation more power efficient. We focus on floating-point multiplication, the most power-demanding arithmetic operation.
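The paper evaluates hardware multipliers; as a rough software illustration of what "selecting a specific number of bits for significand and exponent" means, the following hypothetical Python sketch quantizes a standard double to a reduced TFP format. The function name and the overflow/underflow policy (saturate to infinity, flush to zero) are my assumptions for the sketch, not taken from the paper:

```python
import math

def to_tfp(x: float, sig_bits: int, exp_bits: int) -> float:
    """Quantize a float to a tunable floating-point (TFP) format with
    `sig_bits` fraction bits (hidden bit excluded) and `exp_bits`
    exponent bits. Overflow saturates to +/-infinity; values below the
    representable range are flushed to zero."""
    if x == 0.0 or math.isnan(x) or math.isinf(x):
        return x
    # Split x into a significand in [0.5, 1) and an integer exponent.
    frac, exp = math.frexp(x)
    # Round the significand to sig_bits + 1 bits (hidden bit included).
    scale = 1 << (sig_bits + 1)
    frac = round(frac * scale) / scale
    if abs(frac) == 1.0:       # rounding carried out of the significand
        frac /= 2.0
        exp += 1
    # Clamp the exponent to the range a biased exp_bits field can hold.
    emax = 1 << (exp_bits - 1)
    emin = 2 - emax
    if exp > emax:
        return math.copysign(math.inf, x)
    if exp < emin:
        return 0.0
    return math.ldexp(frac, exp)
```

For example, with 10 significand bits and a 5-bit exponent, `to_tfp(3.14159, 10, 5)` stays within about 2^-10 relative error of the input, while very large inputs saturate; shrinking `sig_bits` further trades accuracy for a narrower (and, in hardware, cheaper) multiplier.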
Original language: English
Title of host publication: Proceedings of 2018 IEEE 25th Symposium on Computer Arithmetic
Publisher: IEEE
Publication date: 2018
Pages: 29-36
ISBN (Print): 9781538626139
DOI: 10.1109/ARITH.2018.8464797
Publication status: Published - 2018
Event: 2018 IEEE 25th Symposium on Computer Arithmetic - Amherst, United States
Duration: 25 Jun 2018 - 27 Jun 2018

Conference

Conference: 2018 IEEE 25th Symposium on Computer Arithmetic
Country: United States
City: Amherst
Period: 25/06/2018 - 27/06/2018

Cite this

Nannarelli, A. (2018). Tunable Floating-Point for Energy Efficient Accelerators. In Proceedings of 2018 IEEE 25th Symposium on Computer Arithmetic (pp. 29-36). IEEE. https://doi.org/10.1109/ARITH.2018.8464797
