Model sparsity and brain pattern interpretation of classification models in neuroimaging

Publication: Research - peer-review · Journal article – Annual report year: 2011

Standard

Model sparsity and brain pattern interpretation of classification models in neuroimaging. / Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Churchill, Nathan W; Hansen, Lars Kai; Strother, Stephen C.

In: Pattern Recognition, Vol. 45, No. 6, 2012, p. 2085-2100.



Author

Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Churchill, Nathan W; Hansen, Lars Kai; Strother, Stephen C / Model sparsity and brain pattern interpretation of classification models in neuroimaging.

In: Pattern Recognition, Vol. 45, No. 6, 2012, p. 2085-2100.


Bibtex

@article{02b6ddbf0e9f4665ae650487d15510d6,
  title     = "Model sparsity and brain pattern interpretation of classification models in neuroimaging",
  author    = "Rasmussen, {Peter Mondrup} and Madsen, {Kristoffer Hougaard} and Churchill, {Nathan W} and Hansen, {Lars Kai} and Strother, {Stephen C}",
  journal   = "Pattern Recognition",
  publisher = "Pergamon",
  year      = "2012",
  volume    = "45",
  number    = "6",
  pages     = "2085--2100",
  doi       = "10.1016/j.patcog.2011.09.011",
  issn      = "0031-3203",
}

RIS

TY - JOUR

T1 - Model sparsity and brain pattern interpretation of classification models in neuroimaging


AU - Rasmussen,Peter Mondrup

AU - Madsen,Kristoffer Hougaard

AU - Churchill,Nathan W

AU - Hansen,Lars Kai

AU - Strother,Stephen C

PB - Pergamon

PY - 2012

AB - Interest is increasing in applying discriminative multivariate analysis techniques to the analysis of functional neuroimaging data. Model interpretation is of great importance in the neuroimaging context, and is conventionally based on a ‘brain map’ derived from the classification model. In this study we focus on the relative influence of model regularization parameter choices on the model's generalization, the reliability of the spatial patterns extracted from the classification model, and the ability of the resulting model to identify relevant brain networks defining the underlying neural encoding of the experiment. For a support vector machine, logistic regression, and Fisher's discriminant analysis we demonstrate that the selection of model regularization parameters has a strong but consistent impact on the generalizability, reproducibility, and interpretable sparsity of the models under both ℓ2 and ℓ1 regularization. Importantly, we illustrate a trade-off between model spatial reproducibility and prediction accuracy. We show that known parts of brain networks can be overlooked when classification accuracy alone is maximized, under either ℓ2 or ℓ1 regularization. This supports the view that the quality of spatial patterns extracted from models cannot be assessed purely by focusing on prediction accuracy. Our results instead suggest that model regularization parameters must be carefully selected, so that the model and its visualization enhance our ability to interpret the brain.

KW - Neuroimaging

KW - NPAIRS resampling

KW - Classification

KW - Regularization

KW - Model interpretation

KW - Kernel methods

KW - Sparsity

KW - Pattern analysis

DO - 10.1016/j.patcog.2011.09.011

JF - Pattern Recognition

SN - 0031-3203

IS - 6

VL - 45

SP - 2085

EP - 2100

ER -
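
Code sketch

The abstract above describes sweeping a classifier's regularization parameter and scoring each setting on two axes at once: prediction accuracy and the split-half reproducibility of the extracted brain map (the NPAIRS-style trade-off). The following is a minimal illustrative sketch of such an evaluation loop, not the authors' implementation: it assumes synthetic data, scikit-learn's ℓ1-regularized logistic regression as one of the models named in the abstract, and Pearson correlation between half-split weight maps as the reproducibility score. All variable names, data shapes, and the grid of C values are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_voxels = 200, 500                 # hypothetical scans x voxels
X = rng.standard_normal((n_samples, n_voxels))
y = rng.integers(0, 2, n_samples)
X[y == 1, :20] += 0.5                          # plant a small discriminative "network"

def split_half_metrics(C, n_splits=20):
    """Mean test accuracy and weight-map correlation over random split halves."""
    accs, corrs = [], []
    for _ in range(n_splits):
        perm = rng.permutation(n_samples)
        half_a, half_b = perm[: n_samples // 2], perm[n_samples // 2 :]
        maps = []
        for train, test in ((half_a, half_b), (half_b, half_a)):
            clf = LogisticRegression(penalty="l1", C=C, solver="liblinear")
            clf.fit(X[train], y[train])
            accs.append(clf.score(X[test], y[test]))
            maps.append(clf.coef_.ravel())
        # Reproducibility: correlation between the two half-split weight maps
        if maps[0].any() and maps[1].any():    # guard: heavy sparsity can zero a map
            corrs.append(np.corrcoef(maps[0], maps[1])[0, 1])
    return np.mean(accs), (np.mean(corrs) if corrs else 0.0)

# Sweep the regularization strength; stronger penalties (small C) give sparser
# maps, while accuracy and reproducibility may peak at different settings --
# the trade-off the paper examines.
for C in (0.05, 0.5, 5.0):
    acc, rep = split_half_metrics(C)
    print(f"C={C:>4}: accuracy={acc:.2f}, pattern reproducibility={rep:.2f}")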