Abstract
Segmentation of spots in diffraction images is critical for accurate grain mapping in 3D. When grain mapping is performed by the recently established lab-based X-ray diffraction contrast tomography (LabDCT), diffraction spots with low signal-to-noise ratios pose a severe challenge to precise spot identification using conventional image filters, thereby hindering the detection of small grains and limiting the spatial resolution. To overcome this challenge, we have applied an automatic instance segmentation deep learning network based on Mask R-CNN (a two-stage region-based convolutional neural network) to find spots in LabDCT images. The training data for the neural network were synthesized by combining virtual noise-free images (obtained from a forward simulation model) with noise-only images (obtained by filtering out diffraction spots in experimental images). Based on the diffraction spots deduced by the forward simulation model, data labelling and annotation were thus performed in an unsupervised manner, without the need for tedious human labelling. By applying the network in Detectron2, a PyTorch-based framework, we show that the trained model performed significantly better than the conventional method in spot segmentation, subsequently resulting in a better grain reconstruction. The work illustrates the potential of deep learning for improving LabDCT and, in a broader sense, other grain mapping techniques.
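As an illustration of the kind of setup the abstract describes, the sketch below configures Detectron2's Mask R-CNN for a single "spot" class. It is not the authors' released code: the dataset name `labdct_synthetic_train` (assumed to be registered elsewhere with the synthesized images and their simulation-derived masks) and all solver values are hypothetical placeholders.

```python
# Minimal sketch: single-class Mask R-CNN training with Detectron2.
# Assumes a dataset "labdct_synthetic_train" (hypothetical name) has already been
# registered, containing synthesized LabDCT images (noise-free simulated spots
# added to experimental noise-only frames) with instance masks from the simulation.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
# Start from a standard COCO Mask R-CNN (ResNet-50 + FPN) configuration.
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
)
cfg.DATASETS.TRAIN = ("labdct_synthetic_train",)  # hypothetical registered dataset
cfg.DATASETS.TEST = ()
# Initialize from COCO-pretrained weights and adapt the head to one class ("spot").
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
# Illustrative solver settings only; the paper's actual hyperparameters may differ.
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 2.5e-4
cfg.SOLVER.MAX_ITER = 5000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```

After training, `detectron2.engine.DefaultPredictor(cfg)` can be used to obtain per-spot instance masks on held-out LabDCT projections.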
| Original language | English |
| --- | --- |
| Article number | 112983 |
| Journal | Materials Characterization |
| Volume | 201 |
| Number of pages | 11 |
| ISSN | 1044-5803 |
| DOIs | |
| Publication status | Published - 2023 |
Keywords
- Deep learning
- Grain mapping
- Instance segmentation
- Tomography
- X-ray diffraction