Graphical abstract: synth2real.jpg
Source data

Type: Paper
Cite as: Petsiuk A, Singh H, Dadhwal H, Pearce JM. Synthetic-to-Real Composite Semantic Segmentation in Additive Manufacturing. Journal of Manufacturing and Materials Processing. 2024; 8(2):66. https://doi.org/10.3390/jmmp8020066 (open access; also available via Academia and arXiv)
Project data

Authors: Aliaksei L. Petsiuk, H. Singh, H. Dadhwal, Joshua M. Pearce
Location: London, ON
Status: Designed, Modelled, Prototyped, Verified
Verified by: FAST
Completed: 2024
Made: Yes
Instance of: 3D printing, AI, computer vision, machine learning
OKH Manifest: Download

The application of computer vision and machine learning methods for semantic segmentation of the structural elements of 3D-printed products in additive manufacturing (AM) can improve real-time failure analysis systems and potentially reduce the number of defects by providing additional tools for in situ corrections. This work demonstrates the use of physics-based rendering to generate labeled image datasets, as well as image-to-image style transfer to improve the accuracy of real image segmentation for AM systems. Multi-class semantic segmentation experiments were carried out based on the U-Net model and the cycle generative adversarial network (CycleGAN). The test results demonstrated the capacity of this method to detect structural elements of 3D-printed parts such as the top (last printed) layer, infill, shell, and support. A basis for further enhancement of the segmentation system through image-to-image style transfer and domain adaptation was also considered. The results indicate that using style transfer as a precursor to domain adaptation can improve real 3D printing image segmentation in situations where a model trained on synthetic data is the only tool available. The mean intersection over union (mIoU) scores on synthetic test datasets were 94.90% for the entire 3D-printed part, 73.33% for the top layer, 78.93% for the infill, 55.31% for the shell, and 69.45% for supports.
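For reference, the mIoU metric quoted above measures, for each class, the overlap between the predicted and ground-truth masks (intersection divided by union), averaged over classes. Below is a minimal NumPy sketch of that computation; the class IDs, mask shapes, and function names are illustrative assumptions and do not reflect the paper's actual label encoding or code.

```python
import numpy as np

# Hypothetical class IDs for the segmentation masks; the paper's actual
# label encoding is not specified here.
CLASSES = {0: "background", 1: "part", 2: "top_layer",
           3: "infill", 4: "shell", 5: "support"}

def iou_per_class(pred, target, num_classes):
    """Compute intersection over union for each class.

    pred, target: integer label maps of identical shape (H, W).
    Returns one IoU per class (NaN if the class is absent from both
    the prediction and the ground truth).
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(np.nan if union == 0 else intersection / union)
    return ious

def mean_iou(pred, target, num_classes):
    """mIoU: average the per-class IoUs, ignoring absent classes."""
    return np.nanmean(iou_per_class(pred, target, num_classes))

# Example on random label maps (stand-ins for U-Net output and ground truth)
rng = np.random.default_rng(0)
pred = rng.integers(0, len(CLASSES), size=(256, 256))
target = rng.integers(0, len(CLASSES), size=(256, 256))
print(f"mIoU: {mean_iou(pred, target, len(CLASSES)):.4f}")
```

In practice, per-class intersections and unions are often accumulated over the whole test set before dividing, rather than averaging per-image IoUs; the paper's exact accumulation convention is not specified here.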

Keywords

3-D printing, additive manufacturing, g-code segmentation, sim-to-real, semantic segmentation, synthetic data, machine learning, open source software, open-source hardware, RepRap, computer vision, quality assurance, real-time monitoring, anomaly detection, Blender, synthetic images

See also
