Structure Learning by Pruning in Independent Component Analysis

Andreas Kjems, Lars Kai Hansen

    Research output: Book/Report › Report › Research › peer-review


    We discuss pruning as a means of structure learning in independent component analysis (ICA). Sparse models are attractive in both signal processing and in the analysis of abstract data: they can aid model interpretation, improve generalizability, and reduce computation. We derive the relevant saliency expressions and compare saliency-based pruning with magnitude-based pruning and Bayesian sparsification. We show in simulations that pruning is able to identify underlying sparse structures without prior knowledge of the degree of sparsity. We find that for ICA, magnitude-based pruning is as efficient as saliency-based and Bayesian methods, for both small and large samples. The Bayesian information criterion (BIC) appears to outperform both AIC and test sets as a tool for determining the optimal degree of sparsity.
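    The magnitude-based pruning compared in the abstract can be illustrated with a minimal NumPy sketch: entries of an estimated mixing matrix with the smallest absolute values are set to zero until a chosen sparsity level is reached. This is a hypothetical illustration of the general technique, not the report's implementation; the function name, the fraction-based threshold rule, and the example matrix are assumptions.

    ```python
    import numpy as np

    def magnitude_prune(A, sparsity):
        """Zero the smallest-magnitude entries of a mixing matrix A.

        sparsity: fraction of entries to set to zero (between 0 and 1).
        Illustrative sketch only; the report's actual procedure may differ.
        """
        A = np.asarray(A, dtype=float).copy()
        n_prune = int(round(sparsity * A.size))
        if n_prune == 0:
            return A
        # Flattened indices of the n_prune smallest |A| entries.
        idx = np.argsort(np.abs(A).ravel())[:n_prune]
        A.ravel()[idx] = 0.0
        return A

    # Example: prune half the entries of a random 4x4 mixing matrix.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    A_sparse = magnitude_prune(A, 0.5)
    print(np.count_nonzero(A_sparse))  # 8 of 16 entries remain
    ```

    In a full experiment one would re-estimate the remaining parameters after each pruning step and select the sparsity level with a criterion such as BIC, which the abstract reports as the most reliable choice.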
    Original language: English
    Publication status: Published - 2006
