Multiple Kernel Representation Learning on Networks

Abdulkadir Celikkanat, Yanning Shen, Fragkiskos D. Malliaros

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Learning representations of nodes in a low-dimensional space is a crucial task with numerous interesting applications in network analysis, including link prediction, node classification, and visualization. Two popular approaches for this problem are matrix factorization and random walk-based models. In this paper, we aim to bring together the best of both worlds, towards learning node representations. In particular, we propose a weighted matrix factorization model that encodes random walk-based information about the nodes of the network. The benefit of this novel formulation is that it enables us to utilize kernel functions without realizing the exact proximity matrix, thereby enhancing the expressiveness of existing matrix decomposition methods with kernels while alleviating their computational complexity. We extend the approach with a multiple kernel learning formulation that provides the flexibility of learning the kernel as a linear combination of a dictionary of kernels in a data-driven fashion. We perform an empirical evaluation on real-world networks, showing that the proposed model outperforms baseline node embedding algorithms in downstream machine learning tasks.
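The core idea can be sketched in a few lines: approximate a (random walk-derived) node proximity matrix with a kernel evaluated between two sets of embeddings, and learn the mixture weights of a small kernel dictionary jointly with the embeddings. The sketch below is illustrative only, under assumptions not taken from the paper — a tiny explicitly materialized proximity matrix (the paper's point is precisely to avoid forming it), a dictionary of two Gaussian kernels, a simple binary-plus-constant weighting scheme, and plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy proximity matrix (e.g., normalized co-occurrence counts
# from random walks); materialized here only for illustration.
M = np.array([[0., 3., 2., 0.],
              [3., 0., 1., 0.],
              [2., 1., 0., 2.],
              [0., 0., 2., 0.]])
M /= M.max()
W = (M > 0).astype(float) + 0.1   # assumed weighting: observed pairs count more

n, d = M.shape[0], 2              # number of nodes, embedding dimension
U = 0.1 * rng.standard_normal((n, d))
V = 0.1 * rng.standard_normal((n, d))

sigmas = np.array([0.5, 2.0])     # bandwidths of the Gaussian kernel dictionary
c = np.array([0.5, 0.5])          # kernel mixture weights, learned below

def kernel_dict(U, V):
    """Stack of Gaussian kernels: K[s, i, j] = exp(-||u_i - v_j||^2 / (2 sigma_s^2))."""
    sq = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq[None, :, :] / (2 * sigmas[:, None, None] ** 2))

lr = 0.05
for _ in range(500):
    K = kernel_dict(U, V)                  # (num_kernels, n, n)
    Khat = np.tensordot(c, K, axes=1)      # combined kernel sum_s c_s K_s
    R = W * (Khat - M)                     # weighted residual of the fit
    # Gradients of 0.5 * sum_ij W_ij (Khat_ij - M_ij)^2:
    gc = (R[None] * K).sum(axis=(1, 2))    # w.r.t. mixture weights
    diff = V[None, :, :] - U[:, None, :]   # d k_s/d u_i is prop. to (v_j - u_i)
    coeff = (c[:, None, None] * K / sigmas[:, None, None] ** 2).sum(0)
    gU = ((R * coeff)[:, :, None] * diff).sum(axis=1)
    gV = -((R * coeff)[:, :, None] * diff).sum(axis=0)
    U -= lr * gU
    V -= lr * gV
    c = np.clip(c - lr * gc, 0.0, None)    # keep weights non-negative ...
    c /= c.sum()                           # ... and on the probability simplex

print("learned kernel weights:", np.round(c, 3))
```

The learned weights `c` indicate which bandwidth in the dictionary best explains the proximity structure, which is the data-driven kernel selection the abstract refers to; the actual model in the paper optimizes a different (random walk-based) objective without ever forming `M`.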
Original language: English
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 35
Issue number: 6
Pages (from-to): 6113 - 6125
ISSN: 1558-2191
DOIs
Publication status: Published - 2023

Keywords

  • Graph representation learning
  • Node embeddings
  • Kernel methods
  • Node classification
  • Link prediction

