Attention: A Machine Learning Perspective

Publication: Research - peer-review · Article in proceedings – Annual report year: 2012

Standard

Attention: A Machine Learning Perspective. / Hansen, Lars Kai.

2012 3rd International Workshop on Cognitive Information Processing (CIP). IEEE, 2012.

Publication: Research - peer-review · Article in proceedings – Annual report year: 2012

Harvard

Hansen, LK 2012, 'Attention: A Machine Learning Perspective', in 2012 3rd International Workshop on Cognitive Information Processing (CIP), IEEE, doi: 10.1109/CIP.2012.6232894

APA

Hansen, L. K. (2012). Attention: A Machine Learning Perspective. In 2012 3rd International Workshop on Cognitive Information Processing (CIP). IEEE. https://doi.org/10.1109/CIP.2012.6232894

CBE

Hansen LK. 2012. Attention: A Machine Learning Perspective. In 2012 3rd International Workshop on Cognitive Information Processing (CIP). IEEE. Available from: 10.1109/CIP.2012.6232894

MLA

Hansen, Lars Kai "Attention: A Machine Learning Perspective". 2012 3rd International Workshop on Cognitive Information Processing (CIP). IEEE. 2012. Available: 10.1109/CIP.2012.6232894

Vancouver

Hansen LK. Attention: A Machine Learning Perspective. In 2012 3rd International Workshop on Cognitive Information Processing (CIP). IEEE. 2012. Available from: 10.1109/CIP.2012.6232894

Author

Hansen, Lars Kai / Attention: A Machine Learning Perspective.

2012 3rd International Workshop on Cognitive Information Processing (CIP). IEEE, 2012.

Publication: Research - peer-review · Article in proceedings – Annual report year: 2012

Bibtex

@inproceedings{06523535e4294862a3f8701037830b74,
title = "Attention: A Machine Learning Perspective",
publisher = "IEEE",
author = "Hansen, {Lars Kai}",
year = "2012",
doi = "10.1109/CIP.2012.6232894",
isbn = "978-1-4673-1877-8",
booktitle = "2012 3rd International Workshop on Cognitive Information Processing (CIP)",
}

RIS

TY - CONF

T1 - Attention: A Machine Learning Perspective

A1 - Hansen, Lars Kai

AU - Hansen, Lars Kai

PB - IEEE

PY - 2012

Y1 - 2012

N2 - We review a statistical machine learning model of top-down, task-driven attention based on the notion of ‘gist’. In this framework the task is represented as a classification problem with two sets of features: a gist of coarse-grained global features and a larger set of low-level local features. Attention is modeled as the choice process over the low-level features given the gist. The model takes its departure in a classical information-theoretic framework for experimental design, which requires evaluation of marginalized and conditional distributions. By implementing the classifier as a Gaussian-discrete mixture it is straightforward to marginalize and condition; hence we obtain a relatively simple expression for the feature-dependent information gain, the top-down saliency. As the top-down attention mechanism is modeled as a simple classification problem, the strategy can be evaluated simply by estimating error rates on a test data set. We illustrate the attention mechanism in a simple simulated visual domain in which attention selects among nine patches in order to classify a binary pattern. The classifier equipped with the attention mechanism performs almost as well as one with access to all low-level features, and clearly improves over a simple ‘random attention’ alternative.

AB - We review a statistical machine learning model of top-down, task-driven attention based on the notion of ‘gist’. In this framework the task is represented as a classification problem with two sets of features: a gist of coarse-grained global features and a larger set of low-level local features. Attention is modeled as the choice process over the low-level features given the gist. The model takes its departure in a classical information-theoretic framework for experimental design, which requires evaluation of marginalized and conditional distributions. By implementing the classifier as a Gaussian-discrete mixture it is straightforward to marginalize and condition; hence we obtain a relatively simple expression for the feature-dependent information gain, the top-down saliency. As the top-down attention mechanism is modeled as a simple classification problem, the strategy can be evaluated simply by estimating error rates on a test data set. We illustrate the attention mechanism in a simple simulated visual domain in which attention selects among nine patches in order to classify a binary pattern. The classifier equipped with the attention mechanism performs almost as well as one with access to all low-level features, and clearly improves over a simple ‘random attention’ alternative.

U2 - 10.1109/CIP.2012.6232894

DO - 10.1109/CIP.2012.6232894

SN - 978-1-4673-1877-8

BT - 2012 3rd International Workshop on Cognitive Information Processing (CIP)

T2 - 2012 3rd International Workshop on Cognitive Information Processing (CIP)

ER -
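
Worked note

The ‘top-down saliency’ in the abstract follows the classical expected-information-gain criterion for experimental design; the sketch below uses generic notation and is not necessarily the paper's own. Writing C for the class label, g for the gist features, and x_i for a candidate low-level feature, the feature-dependent information gain is the conditional mutual information

S(i \mid g) = H(C \mid g) - \mathbb{E}_{x_i \mid g}\left[ H(C \mid g, x_i) \right] = I(C ; x_i \mid g)

and attention selects the feature i^* = \arg\max_i S(i \mid g). The Gaussian-discrete mixture classifier keeps P(C \mid g) and P(C \mid g, x_i) available in closed form, which is what makes the marginalization and conditioning inside the expectation tractable.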