Hidden dimensions of the data: PCA vs autoencoders

Davide Cacciarelli, Murat Kulahci*

*Corresponding author for this work

Research output: Book/Report › Report › Research › peer-review


Abstract

Principal component analysis (PCA) has been a commonly used unsupervised learning method with broad applications in both descriptive and inferential analytics. It is widely used for representation learning, extracting key features from a dataset and visualizing them in a lower-dimensional space. With the growing adoption of neural network-based methods, autoencoders (AEs) have gained popularity for dimensionality reduction tasks. In this paper, we explore the intriguing relationship between PCA and AEs and demonstrate, through some examples, how these two approaches yield similar results in the case of the so-called linear AEs (LAEs). This study provides insights into the evolving landscape of unsupervised learning and highlights the relevance of both PCA and AEs in modern data analysis.
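The equivalence the abstract describes can be illustrated numerically. The following is a minimal NumPy sketch, not taken from the paper itself: it generates synthetic low-rank data, computes the PCA subspace via SVD, trains a bias-free linear autoencoder by plain gradient descent (all hyperparameters here are illustrative assumptions), and checks that the two methods recover the same subspace by comparing orthogonal projection matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 centered samples in 10 dims with rank-2 signal plus small noise
n, d, k = 500, 10, 2
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.05 * rng.normal(size=(n, d))
X -= X.mean(axis=0)  # center the data, as PCA assumes

# PCA via SVD: the top-k right singular vectors span the principal subspace
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:k].T                      # (d, k) principal directions

# Linear AE: minimize ||X W1 W2 - X||_F^2 with no biases and no activation
W1 = 0.1 * rng.normal(size=(d, k))   # encoder weights
W2 = 0.1 * rng.normal(size=(k, d))   # decoder weights
lr = 2e-6                            # illustrative step size, small enough for stability
for _ in range(20_000):
    R = X @ W1 @ W2 - X           # reconstruction residual
    G2 = (X @ W1).T @ R           # gradient of the loss w.r.t. the decoder
    G1 = X.T @ R @ W2.T           # gradient of the loss w.r.t. the encoder
    W1 -= lr * G1
    W2 -= lr * G2

# Individual weight vectors differ from the PCs, but the spanned subspaces agree,
# so we compare orthogonal projection matrices rather than the weights themselves.
P_pca = V @ V.T
Q, _ = np.linalg.qr(W2.T)         # orthonormal basis for the decoder's row space
P_ae = Q @ Q.T
print(np.linalg.norm(P_pca - P_ae))  # near zero when the subspaces coincide
```

Note that the LAE does not recover the principal components individually (its weights are only determined up to an invertible mixing within the subspace), which is why the comparison is made between projection matrices rather than between weight vectors and loadings.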

Original language: English
Publisher: Taylor & Francis
Volume: 35
Number of pages: 10
DOIs
Publication status: Published - 2023
Series: Quality Engineering
ISSN: 0898-2112

Keywords

  • Autoencoders
  • Deep learning
  • Dimensionality reduction
  • Principal component analysis
  • Unsupervised learning
