In this work, we address the design of an on-chip accelerator with Tunable Floating-Point (TFP) precision for Machine Learning and other computationally demanding applications. The precision can be chosen per operation by selecting a specific number of bits for the significand and exponent in the floating-point representation. By tuning a given algorithm to the minimum precision that achieves an acceptable target error, we can make the computation more power efficient. We focus on floating-point multiplication, which is the most power-demanding arithmetic operation.
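The abstract describes choosing a per-operation significand width and rounding to it. As a minimal software sketch of that idea (not the paper's hardware design; `quantize` and `tfp_mul` are illustrative names, and exponent-width tuning is not modeled), one can emulate a reduced-precision multiply by rounding both operands and the product to a chosen number of significand bits:

```python
import math

def quantize(x, sig_bits):
    """Round x to sig_bits significand bits: a software stand-in for a
    reduced-precision floating-point representation."""
    if x == 0.0:
        return 0.0
    f, e = math.frexp(x)            # x = f * 2**e, with 0.5 <= |f| < 1
    scale = 1 << sig_bits
    return math.ldexp(round(f * scale) / scale, e)

def tfp_mul(a, b, sig_bits):
    """Multiply with operands and result rounded to sig_bits significand bits."""
    return quantize(quantize(a, sig_bits) * quantize(b, sig_bits), sig_bits)
```

For example, `quantize(math.pi, 8)` yields 3.140625, and lowering `sig_bits` trades accuracy for the narrower datapath that motivates the power savings discussed in the paper.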
|Title of host publication||Proceedings of 2018 IEEE 25th Symposium on Computer Arithmetic|
|Publication status||Published - 2018|
|Event||2018 IEEE 25th Symposium on Computer Arithmetic - Amherst, United States|
Duration: 25 Jun 2018 → 27 Jun 2018
|Conference||2018 IEEE 25th Symposium on Computer Arithmetic|
|Period||25/06/2018 → 27/06/2018|