Top-down attention with features missing at random

Seliz Karadogan, Letizia Marchegiani, Jan Larsen, Lars Kai Hansen

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

    Abstract

    In this paper we present a top-down attention model designed for an environment in which features are missing completely at random. Following (Hansen et al., 2011), we model top-down attention as a sequential decision-making process driven by a task - modeled as a classification problem - in an environment where random subsets of features are missing, but where we have the possibility to gather additional features among the ones that are missing. Thus, the top-down attention problem is reduced to answering the question: what to measure next? Attention is based on the top-down saliency of the missing features, given as the estimated difference in classification confusion (entropy) with and without the given feature. The difference in confusion is computed conditioned on the available set of features. In this work, we make our attention model more realistic by also allowing the initial training phase to take place with incomplete data. Thus, we expand the model to include a missing-data technique in the learning process. The top-down attention mechanism is implemented in a Gaussian Discrete mixture model setting, where marginals and conditionals are relatively easy to compute. To illustrate the viability of the expanded model, we train the mixture model with two different datasets: a synthetic dataset and the well-known Yeast dataset from the UCI repository. We evaluate the new algorithm in environments characterized by different amounts of incompleteness and compare its performance with a system that selects the next feature to be measured at random. The proposed top-down mechanism clearly outperforms random choice of the next feature.
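    The selection rule described above can be sketched in miniature. The following is a hypothetical illustration, not the paper's Gaussian Discrete mixture implementation: a toy two-class naive-Bayes classifier with Gaussian class-conditional features, where the next feature to measure is the one with the largest expected reduction in posterior entropy (the difference in classification confusion with and without that feature). All parameters, feature indices, and data are invented for the sketch.

    ```python
    import math
    import random

    CLASSES = [0, 1]
    PRIOR = {0: 0.5, 1: 0.5}
    # Hypothetical class-conditional Gaussian parameters: mean/std per (class, feature).
    # Feature 0 separates the classes well; feature 1 is nearly uninformative.
    MU = {0: [0.0, 0.0], 1: [2.0, 0.1]}
    SD = {0: [1.0, 1.0], 1: [1.0, 1.0]}

    def gauss_pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    def posterior(observed):
        """Class posterior given a dict {feature_index: value} of observed features."""
        weights = []
        for c in CLASSES:
            p = PRIOR[c]
            for j, x in observed.items():
                p *= gauss_pdf(x, MU[c][j], SD[c][j])
            weights.append(p)
        z = sum(weights)
        return [w / z for w in weights]

    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)

    def expected_entropy_after(observed, j, n_samples=2000, rng=None):
        """Monte-Carlo estimate of the posterior entropy after measuring feature j,
        sampling j's value from its predictive mixture under the current posterior."""
        rng = rng or random.Random(0)
        post = posterior(observed)
        total = 0.0
        for _ in range(n_samples):
            c = 0 if rng.random() < post[0] else 1   # sample a class
            x = rng.gauss(MU[c][j], SD[c][j])        # sample feature j given that class
            total += entropy(posterior({**observed, j: x}))
        return total / n_samples

    def next_feature(observed, missing):
        """Top-down saliency: pick the missing feature whose measurement
        yields the largest expected entropy reduction."""
        h_now = entropy(posterior(observed))
        gains = {j: h_now - expected_entropy_after(observed, j) for j in missing}
        return max(gains, key=gains.get)

    # With nothing observed, the informative feature 0 should be measured first.
    print(next_feature(observed={}, missing=[0, 1]))  # prints 0
    ```

    In the paper's setting the expectation would be taken under the mixture model's conditionals rather than by naive-Bayes Monte-Carlo sampling, but the decision rule - maximize the expected drop in classification entropy - is the same.
    
    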
    Original language: English
    Title of host publication: 2011 IEEE International Workshop on Machine Learning for Signal Processing (MLSP)
    Publisher: IEEE
    Publication date: 2011
    ISBN (Print): 978-1-4577-1621-8
    ISBN (Electronic): 978-1-4577-1622-5
    Publication status: Published - 2011
    Event: 2011 IEEE International Workshop on Machine Learning for Signal Processing - Beijing, China
    Duration: 18 Sept 2011 - 21 Sept 2011
    Conference number: 21
    https://ieeexplore.ieee.org/xpl/conhome/6058570/proceeding

    Conference

    Conference: 2011 IEEE International Workshop on Machine Learning for Signal Processing
    Number: 21
    Country/Territory: China
    City: Beijing
    Period: 18/09/2011 - 21/09/2011
    Series: Machine Learning for Signal Processing
    ISSN: 1551-2541

    Keywords

    • Missing data techniques
    • Attention modeling
    • Machine learning
    • Entropy
