Structure learning by pruning in independent component analysis

Andreas Brinch Nielsen, Lars Kai Hansen

    Research output: Contribution to journal › Journal article › Research › peer-review

    Abstract

    We discuss pruning as a means of structure learning in independent component analysis (ICA). Learning the structure is attractive in both signal processing and in the analysis of abstract data, where it can assist model interpretation and generalizability and reduce computation. We derive the relevant saliency expressions and compare them with magnitude-based pruning and Bayesian sparsification. We show in simulations that pruning is able to identify underlying structures without prior knowledge of the dimensionality of the model. We find that, for ICA, magnitude-based pruning is as efficient as saliency-based and Bayesian methods, for both small and large samples. The Bayesian information criterion (BIC) seems to outperform both AIC and test sets as a tool for determining the optimal dimensionality.
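
    The abstract compares magnitude-based pruning with saliency-based and Bayesian criteria for choosing the ICA dimensionality. As a rough illustration only, and not the paper's derivation, the sketch below fits an over-specified ICA model with scikit-learn's FastICA, ranks components by the l2 norm of their estimated mixing columns (a magnitude criterion), and uses reconstruction error as a crude stand-in for the BIC/AIC/test-set comparison discussed in the article; the synthetic data, the library choice, and the error proxy are all assumptions introduced here.

    # Minimal sketch of magnitude-based pruning for ICA (illustrative only;
    # not the paper's saliency expressions or Bayesian criteria).
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)

    # Synthetic data: 3 super-Gaussian sources mixed into 8 observed channels.
    n_samples, n_true, n_obs = 2000, 3, 8
    S = rng.laplace(size=(n_samples, n_true))        # independent sources
    A = rng.normal(size=(n_obs, n_true))             # true mixing matrix
    X = S @ A.T + 0.01 * rng.normal(size=(n_samples, n_obs))

    # Fit an over-specified ICA model (more components than true sources).
    n_max = 6
    ica = FastICA(n_components=n_max, random_state=0)
    S_hat = ica.fit_transform(X)                     # estimated sources
    A_hat = ica.mixing_                              # estimated mixing, shape (n_obs, n_max)

    # Magnitude-based pruning: rank components by the l2 norm of their
    # mixing column and drop the weakest ones.
    magnitudes = np.linalg.norm(A_hat, axis=0)
    order = np.argsort(magnitudes)[::-1]
    print("component magnitudes (sorted):", np.round(magnitudes[order], 3))

    # Keep the k strongest components and measure reconstruction error for
    # each k, as a rough proxy for comparing candidate dimensionalities.
    for k in range(1, n_max + 1):
        keep = order[:k]
        X_rec = S_hat[:, keep] @ A_hat[:, keep].T + ica.mean_
        err = np.mean((X - X_rec) ** 2)
        print(f"k={k}: mean squared reconstruction error = {err:.4f}")

    With data generated from a few strong sources, the sorted magnitudes and the reconstruction-error curve typically flatten after the true number of components, which is the kind of structure identification the pruning approach targets.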
    Original language: English
    Journal: Neurocomputing
    Volume: 71
    Issue number: 10-12
    Pages (from-to): 2281-2290
    ISSN: 0925-2312
    DOIs
    Publication status: Published - 2008

    Keywords

    • Blind separation
    • Neural networks
    • Independent component analysis
    • Pruning
    • Structure
    • Sparsity
