Robust fake news detection over time and attack

Benjamin D. Horne, Jeppe Nørregaard, Sibel Adali

Research output: Contribution to journal › Journal article › Research › peer-review


In this study, we examine the impact of time on state-of-the-art news veracity classifiers. We show that, as time progresses, classification performance for both unreliable and hyper-partisan news classification slowly degrades. While this degradation does happen, it happens more slowly than expected, illustrating that hand-crafted, content-based features, such as style of writing, are fairly robust to changes in the news cycle. We show that this small degradation can be mitigated using online learning. Last, we examine the impact of adversarial content manipulation by malicious news producers. Specifically, we test three types of attacks based on changes in the input space and data availability. We show that static models are susceptible to content manipulation attacks, but online models can recover from such attacks.
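To make the online-learning idea concrete, the following is a minimal, illustrative sketch (not the authors' actual model or feature set, which uses hand-crafted content features): a toy logistic-regression text classifier with hashed bag-of-words features whose `partial_fit` method takes one stochastic-gradient step per newly labeled article, which is what lets an online model track a shifting news cycle.

```python
import math

class OnlineTextClassifier:
    """Toy online logistic regression over hashed bag-of-words features.

    Illustrative only: the paper's classifiers use hand-crafted,
    content-based features rather than this hashing scheme.
    """

    def __init__(self, n_buckets=2**16, lr=0.1):
        self.n = n_buckets
        self.w = [0.0] * n_buckets  # one weight per hash bucket
        self.b = 0.0                # bias term
        self.lr = lr                # SGD learning rate

    def _features(self, text):
        # Hashing trick: map each token to a fixed-size bucket index.
        return [hash(tok) % self.n for tok in text.lower().split()]

    def predict_proba(self, text):
        z = self.b + sum(self.w[i] for i in self._features(text))
        return 1.0 / (1.0 + math.exp(-z))

    def partial_fit(self, text, label):
        # One SGD step on the logistic loss. Calling this on each new
        # labeled article (instead of retraining a static model) is how
        # an online learner adapts to drift or recovers from an attack.
        err = self.predict_proba(text) - label
        for i in self._features(text):
            self.w[i] -= self.lr * err
        self.b -= self.lr * err

clf = OnlineTextClassifier()
for _ in range(20):  # hypothetical toy labels: 1 = unreliable, 0 = reliable
    clf.partial_fit("shocking secret they hide", 1)
    clf.partial_fit("officials report quarterly data", 0)
print(clf.predict_proba("shocking secret exposed") > 0.5)
```

A static model would freeze `w` after initial training; the online variant keeps calling `partial_fit`, so a burst of adversarially manipulated articles shifts the weights only until corrective labels arrive.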

Original language: English
Article number: 7
Journal: ACM Transactions on Intelligent Systems and Technology
Issue number: 1
Number of pages: 23
Publication status: Published - Dec 2019


  • Adversarial machine learning
  • Biased news
  • Concept drift
  • Disinformation
  • Fake news
  • Fake news detection
  • Misinformation
  • Misleading news
  • Robust machine learning