Training deep neural networks is often constrained by available memory and computational power; even on platforms equipped with multiple GPUs, training can run for weeks. To speed up training and reduce the memory footprint, the paper presents an approach that uses reduced-precision (8-bit) floating-point numbers to train the handwritten-character classifier LeNet-5, achieving 97.10% (Top-1 and Top-5) accuracy while reducing the overall memory footprint by 75% compared with a model using single-precision floating-point numbers.
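As a rough illustration of the idea, the sketch below quantizes float32 values onto an 8-bit floating-point grid. The 1-4-3 bit layout (sign/exponent/mantissa), the `quantize_fp8` helper, and the NumPy round-to-nearest scheme are all assumptions for illustration; the report does not specify the exact format it uses.

```python
# Minimal sketch of 8-bit floating-point quantization, assuming a
# hypothetical 1-4-3 layout (1 sign, 4 exponent, 3 mantissa bits).
import numpy as np

EXP_BITS, MAN_BITS = 4, 3
BIAS = 2 ** (EXP_BITS - 1) - 1          # exponent bias = 7
MAX_EXP = 2 ** EXP_BITS - 2 - BIAS      # largest normal exponent = 7
MIN_EXP = 1 - BIAS                      # smallest normal exponent = -6

def quantize_fp8(x: np.ndarray) -> np.ndarray:
    """Round float32 values to the nearest representable 8-bit float."""
    x = np.asarray(x, dtype=np.float32)
    sign = np.sign(x)
    mag = np.abs(x)
    # Decompose magnitude as mag = m * 2**e with m in [0.5, 1).
    m, e = np.frexp(mag)
    e = np.clip(e - 1, MIN_EXP, MAX_EXP)  # exponent of the leading bit
    scale = np.exp2(e - MAN_BITS)         # spacing of the target grid
    q = np.round(mag / scale) * scale     # round-to-nearest on that grid
    # Saturate to the largest finite 8-bit value instead of overflowing.
    max_val = (2 - 2.0 ** -MAN_BITS) * 2.0 ** MAX_EXP
    return (sign * np.minimum(q, max_val)).astype(np.float32)

# Example: weights stay in a float32 master copy; the quantized copies
# are what a reduced-precision trainer would use in its passes.
w = np.array([0.1234, -3.7, 250.0], dtype=np.float32)
print(quantize_fp8(w))   # -> [ 0.125 -3.75 240. ]
```

Keeping a full-precision master copy of the weights and quantizing on the fly, as the example assumes, is a common pattern in reduced-precision training; it limits the accumulated rounding error across many small gradient updates.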
Number of pages: 4
Publication status: Published - 2018
Series: DTU Compute Technical Report-2018
Keywords:
- Approximate computing
- Deep learning