Abstract
The performance and inference capabilities of neural networks rely heavily on the training data they are exposed to. Generally, larger datasets yield more powerful models. This incentive to continuously extend the training sets of models can lead to data exploitation, where data is used to train neural networks against its owner's wishes. Even when such misuse is suspected, it is currently next to impossible to verify. This research explores the use of adversarial noise to manipulate the performance of neural networks and investigates how those findings can be used to infer whether a collection of data is a member of a training set. It proposes a novel approach to generate 'deepmarked' images containing adversarial noise that maximizes their detectability as training set members while remaining visually indistinguishable from the original data. The findings of this study demonstrate the feasibility of detecting and inferring the membership status of a data collection within a neural network's training set using the proposed technique, in a restricted black-box setting where the model output contains only the single highest-likelihood class.
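The abstract does not spell out the marking objective or the inference test, so the sketch below is only one plausible realization of the general idea, not the paper's method: a PGD-style, L-infinity-bounded perturbation computed on a surrogate model nudges each image toward an arbitrary target class before the data is released, and membership is later scored by how often a suspect model's top-1 prediction on the marked images matches that target class. The names `deepmark` and `membership_score`, the surrogate model, the epsilon budget, and the target-class objective are all assumptions introduced here for illustration.

```python
# Hedged sketch (not the published algorithm): embed a bounded adversarial
# "mark" into images before release, then probe a suspected model using only
# its top-1 labels. Surrogate model, eps, steps, and target_class are assumed.
import torch
import torch.nn.functional as F


def deepmark(images, surrogate, target_class, eps=4 / 255, steps=10):
    """L-inf-bounded, PGD-style perturbation that nudges images toward
    target_class on a surrogate model while staying visually indistinguishable."""
    marked = images.clone().detach()
    targets = torch.full((images.size(0),), target_class, dtype=torch.long)
    for _ in range(steps):
        marked.requires_grad_(True)
        loss = F.cross_entropy(surrogate(marked), targets)
        grad, = torch.autograd.grad(loss, marked)
        # Step *down* the loss so predictions drift toward the target class,
        # then project back into the eps-ball around the original images.
        marked = marked.detach() - (1.5 * eps / steps) * grad.sign()
        marked = (images + (marked - images).clamp(-eps, eps)).clamp(0, 1)
    return marked.detach()


def membership_score(marked, suspect_model, target_class):
    """Restricted black-box test: a model trained on the marked images is
    expected to over-predict target_class; only top-1 labels are consulted."""
    with torch.no_grad():
        top1 = suspect_model(marked).argmax(dim=1)
    return (top1 == target_class).float().mean().item()
```

Under these assumptions, a `membership_score` on the marked collection that is clearly higher than the score obtained on unmarked control images would be treated as evidence that the collection was part of the suspect model's training set.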
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 2024 IEEE International Conference on Computational Photography (ICCP) |
| Number of pages | 10 |
| Publisher | IEEE |
| Publication date | 2024 |
| ISBN (Print) | 979-8-3503-6156-8 |
| ISBN (Electronic) | 979-8-3503-6155-1 |
| DOIs | |
| Publication status | Published - 2024 |
| Event | 2024 IEEE International Conference on Computational Photography, Lausanne, Switzerland. Duration: 22 Jul 2024 → 24 Jul 2024 |
Conference
| Conference | 2024 IEEE International Conference on Computational Photography |
| --- | --- |
| Country/Territory | Switzerland |
| City | Lausanne |
| Period | 22/07/2024 → 24/07/2024 |
Keywords
- Computational Photography