Measuring Arithmetic Extrapolation Performance

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

The Neural Arithmetic Logic Unit (NALU) is a neural network layer that can learn exact arithmetic operations between the elements of a hidden state. The goal of the NALU is to learn perfect extrapolation, which requires learning the exact underlying logic of an unknown arithmetic problem. Evaluating the performance of the NALU is non-trivial, as one arithmetic problem might have many solutions. As a consequence, single-instance MSE has been used to evaluate and compare performance between models. However, it can be hard to interpret what magnitude of MSE represents a correct solution, or how sensitive a model is to its initialization. We propose using a success-criterion to measure if and when a model converges. With a success-criterion, we can summarize the success-rate over many initialization seeds and calculate confidence intervals. We contribute a generalized version of the previous arithmetic benchmark to measure a model's sensitivity under different conditions. This is, to our knowledge, the first extensive evaluation with respect to convergence of the NALU and its sub-units. Using a success-criterion to summarize 4800 experiments, we find that consistently learning arithmetic extrapolation is challenging, in particular for multiplication.
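As a rough illustration of the evaluation the abstract describes, the sketch below applies a threshold-based success-criterion to per-seed extrapolation errors and summarizes them as a success-rate with a binomial confidence interval. The threshold value, the Clopper-Pearson interval, and all names are illustrative assumptions, not necessarily the exact criterion or interval used in the paper.

import numpy as np
from scipy import stats

def success_rate_with_ci(extrapolation_mse, threshold, confidence=0.95):
    """Summarize per-seed results as a success-rate with a confidence interval.

    extrapolation_mse: one final extrapolation MSE per initialization seed.
    threshold: MSE below which a run is counted as a success
               (the success-criterion; illustrative value only).
    """
    mse = np.asarray(extrapolation_mse)
    n = mse.size
    successes = int(np.sum(mse < threshold))
    rate = successes / n
    # Clopper-Pearson (exact binomial) confidence interval on the rate.
    alpha = 1.0 - confidence
    lower = stats.beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = stats.beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return rate, (lower, upper)

# Example: 100 seeds of a hypothetical model; a tiny MSE counts as success.
rng = np.random.default_rng(0)
mse_per_seed = rng.exponential(scale=1e-4, size=100)
rate, (lo, hi) = success_rate_with_ci(mse_per_seed, threshold=1e-5)
print(f"success-rate: {rate:.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")

Reporting the interval alongside the rate makes explicit how much of an observed difference between models could be explained by the finite number of seeds, which a single MSE value cannot convey.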
Original language: English
Title of host publication: Science meets Engineering of Deep Learning at 33rd Conference on Neural Information Processing Systems
Number of pages: 5
Publication date: 2019
Publication status: Published - 2019
Event: Thirty-Third Annual Conference on Neural Information Processing Systems - Vancouver Convention Center, Vancouver, Canada
Duration: 8 Dec 2019 – 14 Dec 2019
https://nips.cc/Conferences/2019/

Conference

Conference: Thirty-Third Annual Conference on Neural Information Processing Systems
Location: Vancouver Convention Center
Country: Canada
City: Vancouver
Period: 08/12/2019 – 14/12/2019
Internet address: https://nips.cc/Conferences/2019/

Cite this

Johansen, A. R., & Madsen, A. (2019). Measuring Arithmetic Extrapolation Performance. In Science meets Engineering of Deep Learning at 33rd Conference on Neural Information Processing Systems.
Johansen, Alexander Rosenberg ; Madsen, Andreas. / Measuring Arithmetic Extrapolation Performance. Science meets Engineering of Deep Learning at 33rd Conference on Neural Information Processing Systems. 2019.
@inproceedings{8dfd3d11a133436693b3ca31f4ac909e,
title = "Measuring Arithmetic Extrapolation Performance",
author = "Johansen, {Alexander Rosenberg} and Andreas Madsen",
year = "2019",
language = "English",
booktitle = "Science meets Engineering of Deep Learning at 33rd Conference on Neural Information Processing Systems",

}

Johansen, AR & Madsen, A 2019, Measuring Arithmetic Extrapolation Performance. in Science meets Engineering of Deep Learning at 33rd Conference on Neural Information Processing Systems. Thirty-Third Annual Conference on Neural Information Processing Systems, Vancouver, Canada, 08/12/2019.

Measuring Arithmetic Extrapolation Performance. / Johansen, Alexander Rosenberg; Madsen, Andreas.

Science meets Engineering of Deep Learning at 33rd Conference on Neural Information Processing Systems. 2019.

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

TY - GEN

T1 - Measuring Arithmetic Extrapolation Performance

AU - Johansen, Alexander Rosenberg

AU - Madsen, Andreas

PY - 2019

Y1 - 2019

M3 - Article in proceedings

BT - Science meets Engineering of Deep Learning at 33rd Conference on Neural Information Processing Systems

ER -

Johansen AR, Madsen A. Measuring Arithmetic Extrapolation Performance. In Science meets Engineering of Deep Learning at 33rd Conference on Neural Information Processing Systems. 2019.