Detecting Post Editing of Multimedia Images using Transfer Learning and Fine Tuning

Simon Jonker, Malthe Jelstrup, Weizhi Meng*, Brooke Lampe

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

In the domain of general image forgery detection, a myriad of classification solutions have been developed to distinguish a “tampered” image from a “pristine” image. In this work, we develop a new method for binary image forgery detection. Our approach builds upon the extensive training that state-of-the-art image classification models have undergone on regular images from the ImageNet dataset, and transfers that knowledge to the image forgery detection space. By leveraging transfer learning and fine-tuning, we can fit state-of-the-art image classification models to the forgery detection task. We train the models on a diverse and evenly distributed image forgery dataset. With five models—EfficientNetB0, VGG16, Xception, ResNet50V2, and NASNet-Large—we transferred and adapted pre-trained knowledge from ImageNet to the forgery detection task. Each model was fitted, fine-tuned, and evaluated according to a set of performance metrics. Our evaluation demonstrated the efficacy of large-scale image classification models—paired with transfer learning and fine-tuning—at detecting image forgeries. When pitted against a previously unseen dataset, the best-performing model, EfficientNetB0, achieved an accuracy of nearly 89.7%.
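The two-phase recipe the abstract describes—attach a new binary head to an ImageNet-pretrained backbone, train the head with the backbone frozen, then unfreeze and fine-tune at a low learning rate—can be sketched in Keras. This is a minimal illustrative sketch, not the authors' exact configuration: the head architecture, dropout rate, and learning rates are assumptions.

```python
# Sketch only: two-phase transfer learning + fine-tuning for binary
# forgery detection, assuming an EfficientNetB0 backbone (one of the
# five models named in the abstract). Head and hyperparameters are
# illustrative assumptions, not the paper's reported settings.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_forgery_detector(input_shape=(224, 224, 3), weights="imagenet"):
    # Phase 1: freeze the pretrained backbone; only the new head trains.
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights=weights, input_shape=input_shape)
    base.trainable = False
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        # Single sigmoid unit: "tampered" vs. "pristine".
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

def unfreeze_for_fine_tuning(model, lr=1e-5):
    # Phase 2: unfreeze the backbone and recompile with a much lower
    # learning rate so fine-tuning does not destroy pretrained features.
    model.layers[0].trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

In use, one would call `model.fit(...)` on the forgery dataset after each phase; the other four backbones (VGG16, Xception, ResNet50V2, NASNet-Large) slot into the same pattern via their `tf.keras.applications` constructors.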
Original language: English
Journal: ACM Transactions on Multimedia Computing, Communications and Applications
Number of pages: 21
ISSN: 1551-6865
DOIs
Publication status: Accepted/In press - 2024

Keywords

  • Multimedia data integrity
  • Image forgery
  • Fake news
  • Post editing
  • Fine tuning
  • Transfer learning
