Archetypal analysis for machine learning and data mining

Publication: Research – peer-reviewed journal article – Annual report year: 2011


Archetypal analysis (AA), proposed by Cutler and Breiman (1994) [7], estimates the principal convex hull (PCH) of a data set. As such, AA favors features that constitute representative 'corners' of the data, i.e., distinct aspects or archetypes. We show that AA combines the interpretability of clustering (without being limited to hard assignment) with the uniqueness of SVD (without being limited to orthogonal representations). To make large-scale AA feasible, we derive an efficient algorithm based on projected gradient, as well as an initialization procedure we denote FurthestSum, inspired by the FurthestFirst approach widely used for k-means (Hochbaum and Shmoys, 1985 [14]). We generalize the AA procedure to kernel-AA in order to extract the principal convex hull in potentially infinite Hilbert spaces, and derive a relaxation of AA for the case where the archetypes cannot be represented as convex combinations of the observed data. We further demonstrate that the AA model is relevant for feature extraction and dimensionality reduction in a large variety of machine learning problems taken from computer vision, neuroimaging, chemistry, text mining and collaborative filtering, leading to highly interpretable representations of the dynamics in the data. Matlab code for the derived algorithms is available for download from
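To make the model concrete, the following NumPy sketch fits X ≈ XCS with the columns of C and S constrained to the probability simplex, using alternating projected-gradient steps and a FurthestSum-style initialization as outlined in the abstract. This is an illustrative reconstruction, not the authors' released Matlab code: all function names, the fixed step size `lr`, and the use of a sorting-based simplex projection are our own assumptions.

```python
import numpy as np

def furthest_sum(X, k, start=0):
    """Pick k columns of X, greedily maximizing the summed distance to the
    points chosen so far (a sketch of the FurthestSum idea; dropping the
    arbitrary seed at the end is our reading of the procedure)."""
    n = X.shape[1]
    chosen = [start]
    dist_sum = np.zeros(n)
    for _ in range(k):
        dist_sum += np.linalg.norm(X - X[:, [chosen[-1]]], axis=0)
        masked = dist_sum.copy()
        masked[chosen] = -np.inf           # never re-pick a chosen point
        chosen.append(int(np.argmax(masked)))
    return chosen[1:]                      # discard the arbitrary seed

def project_columns_to_simplex(V):
    """Euclidean projection of each column of V onto the probability
    simplex (standard sorting-based algorithm)."""
    k, m = V.shape
    U = np.sort(V, axis=0)[::-1]           # columns sorted descending
    css = np.cumsum(U, axis=0) - 1.0
    idx = np.arange(1, k + 1)[:, None]
    rho = (U - css / idx > 0).sum(axis=0)  # support size per column
    theta = css[rho - 1, np.arange(m)] / rho
    return np.maximum(V - theta, 0.0)

def archetypal_analysis(X, k, n_iter=500, lr=1e-2):
    """Fit X ~ X @ C @ S with simplex-constrained C (n x k) and S (k x n)
    by alternating projected-gradient steps; a minimal sketch, not the
    paper's tuned algorithm (the fixed step size is an assumption)."""
    n = X.shape[1]
    rng = np.random.default_rng(0)
    C = project_columns_to_simplex(rng.random((n, k)))
    # Bias the archetypes toward FurthestSum candidate points.
    C[furthest_sum(X, k), np.arange(k)] += 1.0
    C = project_columns_to_simplex(C)
    S = project_columns_to_simplex(rng.random((k, n)))
    for _ in range(n_iter):
        A = X @ C                          # current archetypes (d x k)
        R = A @ S - X                      # reconstruction residual
        S = project_columns_to_simplex(S - lr * (A.T @ R))
        R = X @ C @ S - X
        C = project_columns_to_simplex(C - lr * (X.T @ R @ S.T))
    return C, S
```

Because every update is followed by a projection, the columns of C and S remain convex-combination weights throughout, so the fitted archetypes XC stay inside the convex hull of the data, matching the principal-convex-hull interpretation above.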
Original language: English
Pages (from-to): 54–63
State: Published - 2012
Citations: Web of Science® Times Cited: 45


  • Non-negative matrix factorization, Principal convex hull, FurthestSum, FurthestFirst, Archetypal analysis, Clustering, Kernel methods

ID: 6339400