Deep neural networks have proven highly effective at visual object detection and classification tasks, albeit with the caveat of requiring large amounts of annotated training data. Furthermore, neural networks risk overfitting to the biases and faults present in their respective datasets. In this work, methods for achieving robust neural networks, able to tolerate untrusted and possibly erroneous training data, are explored. The proposed method is shown to improve performance and help neural networks learn from untrusted data, provided a thoroughly annotated subset is available.