Abstract
Machine learning, and specifically deep learning, techniques address many of the issues faced in visual object detection and classification tasks. However, they have the caveat of needing large amounts of annotated training data. In the maritime domain, one may encounter objects fairly infrequently, depending on weather and location, which makes data collection difficult. Areas such as harbors and channels see a lot of traffic, but the ships there belong to a narrow set of classes. Furthermore, the variability of buoys from region to region, and within regions, is difficult and expensive to sample. The amount and quality of available data is therefore severely lacking, and very few publicly available maritime datasets exist. In this work, we present a novel approach that detects possible "poor" training samples and automatically re-annotates them, based on the current state of the object detector. We show the applicability of our approach on real-life maritime data and show that the poor annotation quality of the datasets used can be mitigated. The performance gain with respect to a baseline approach is proportional to the amount of poorly annotated data in the dataset: when 25% of the data is poor, we achieve a 5.5%, 13.7%, and 8.0% increase in performance on three separate datasets compared to a baseline model; with 50% noise, we reach a 58.5%, 18.7%, and 94.2% increase, respectively. Our approach also allows for the iterative improvement of a given dataset by providing a set of pseudo-annotations to replace the current incorrect ones.
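The abstract does not specify the exact re-annotation criterion, but the general idea of replacing unsupported annotations with detector pseudo-labels can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the box format `(x1, y1, x2, y2)`, the IoU matching rule, and the `score_thr`/`iou_thr` thresholds are not taken from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def re_annotate(annotations, predictions, score_thr=0.8, iou_thr=0.5):
    """Hypothetical re-annotation step for one image.

    annotations: list of ground-truth boxes.
    predictions: list of (box, confidence) pairs from the current detector.
    Returns a new annotation list: annotations supported by a confident
    prediction are kept; unsupported ("poor") ones are dropped; confident
    detections with no matching annotation become pseudo-annotations.
    """
    confident = [box for box, score in predictions if score >= score_thr]
    kept = [gt for gt in annotations
            if any(iou(gt, p) >= iou_thr for p in confident)]
    pseudo = [p for p in confident
              if all(iou(p, gt) < iou_thr for gt in kept)]
    return kept + pseudo
```

In a training loop, such a step could be re-run every few epochs so the annotations improve as the detector does, which is one way the iterative dataset refinement described above could operate.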
| Field | Value |
|---|---|
| Original language | English |
| Article number | 100411 |
| Journal | Machine Learning with Applications |
| Volume | 10 |
| Number of pages | 9 |
| ISSN | 2666-8270 |
| DOIs | |
| Publication status | Published - 2022 |
Keywords
- Machine learning
- Deep learning
- Robust neural networks
- Pseudo-labels
- Self-learning
- Semi-supervised learning
Fingerprint
Dive into the research topics of 'Re-annotation of training samples for robust maritime object detection'.