A pseudo-softmax function for hardware-based high speed image classification

Gian Carlo Cardarilli, Luca Di Nunzio, Rocco Fazzolari, Daniele Giardino, Alberto Nannarelli, Marco Re, Sergio Spanò*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

This work presents a novel architecture, named pseudo-softmax, that computes an approximated form of the softmax function. The architecture can be used in the last layer of Neural Networks and Convolutional Neural Networks for classification tasks, and in Reinforcement Learning hardware accelerators to compute the Boltzmann action-selection policy. The proposed pseudo-softmax design, intended for efficient hardware implementation, exploits the integer quantization typical of hardware-based Neural Networks to obtain an accurate approximation of the result. The paper gives a detailed description of the architecture and an extensive analysis of the approximation error, performed using both custom stimuli and real-world Convolutional Neural Network inputs. Implementation results based on CMOS standard-cell technology show reduced approximation errors compared to state-of-the-art architectures.
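The abstract does not spell out the approximation itself; a common hardware-friendly simplification of softmax, consistent with integer-quantized inputs, is to replace the base-e exponential with a base-2 one, since 2^x on an integer reduces to a bit shift in fixed-point logic. The sketch below illustrates that idea only; the function name and datapath are hypothetical and not taken from the paper.

```python
import numpy as np

def pseudo_softmax(logits):
    """Softmax-like normalization on integer logits using base-2 exponentials.

    Hypothetical sketch: e^x is replaced by 2^x, which a fixed-point
    datapath can realize as a shift of the integer input. The exact
    architecture in the paper may differ.
    """
    z = np.asarray(logits, dtype=np.int64)
    z = z - z.max()                     # shift so the largest exponent is 0
    p = np.exp2(z.astype(np.float64))   # 2^z: a bit shift in integer hardware
    return p / p.sum()                  # normalize so outputs sum to 1

probs = pseudo_softmax([3, 1, 0])
```

Like the exact softmax, this base-2 variant preserves the ordering of the inputs (and hence the argmax used for classification); only the relative magnitudes of the output probabilities differ.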
Original language: English
Article number: 15307
Journal: Scientific Reports
Volume: 11
Issue number: 1
Number of pages: 10
ISSN: 2045-2322
DOIs
Publication status: Published - 2021

