Literature Review

Aliaksei Petsiuk (talk) 14:32, 20 May 2019 (PDT)

===MOST Papers===


1. ''Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M.'' '''[https://www.mdpi.com/2504-4494/1/1/2 Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views.]''' J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002. <ref>Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002.</ref>


'''Abstract''' Prosumer (producing consumer)-based desktop additive manufacturing has been enabled by the recent radical reduction in 3-D printer capital costs created by the open-source release of the self-replicating rapid prototype (RepRap). To continue this success, there have been some efforts to improve reliability, which are either too expensive or lacked automation. A promising method to improve reliability is to use computer vision, although the success rates are still too low for widespread use. To overcome these challenges an open source low-cost reliable real-time optimal monitoring platform for 3-D printing from double cameras is presented here. This error detection system is implemented with low-cost web cameras and covers 360 degrees around the printed object from three different perspectives. The algorithm is developed in Python and run on a Raspberry Pi3 mini-computer to reduce costs. For 3-D printing monitoring in three different perspectives, the systems are tested with four different 3-D object geometries for normal operation and failure modes. This system is tested with two different techniques in the image pre-processing step: SIFT and RANSAC rescale and rectification, and non-rescale and rectification. The error calculations were determined from the horizontal and vertical magnitude methods of 3-D reconstruction images. The non-rescale and rectification technique successfully detects the normal printing and failure state for all models with 100% accuracy, which is better than the single camera set up only. The computation time of the non-rescale and rectification technique is two times faster than the SIFT and RANSAC rescale and rectification technique.

'''Notes'''
*---
*---
*---

x. '''Delta 3D Printer''' (https://www.appropedia.org/Delta_Build_Overview:MOST)
The MOST Delta printer is a RepRap [..] derived from the Rostock printer [..]. Print resolution in the x-y plane is a function of distance from the apexes, so it changes with distance from the center of the build platform. Printer resolution in the z-direction is always equal to that of the carriages (100 steps/mm, where 10 μm is the smallest step) and does not depend on location. Because of the geometry, the error in Z is at most 5 μm; i.e., there are no planes spaced 10 μm apart with unreachable space in between; rather, the nozzle can reach only one specific point within that 10 μm range, depending on the location in X and Y. The MOST Delta (12-tooth T5 belt) operates at 53.33 steps/mm for a z-precision of about 19 microns.
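
The quoted resolution figures follow from the drive geometry. A minimal sketch that reproduces them, assuming a 200-step/rev motor with 16× microstepping (values not stated above, but consistent with the quoted 53.33 steps/mm):

<syntaxhighlight lang="python">
# Hedged sketch: reproduce the step-size figures quoted above from the
# drive geometry. Motor and microstepping values are assumptions that
# match the quoted 53.33 steps/mm.
FULL_STEPS_PER_REV = 200   # 1.8 deg/step stepper motor (assumed)
MICROSTEPS = 16            # microstepping factor (assumed)
BELT_PITCH_MM = 5.0        # T5 belt tooth pitch
PULLEY_TEETH = 12          # 12-tooth pulley, as stated above

mm_per_rev = BELT_PITCH_MM * PULLEY_TEETH          # 60 mm of travel per revolution
steps_per_mm = FULL_STEPS_PER_REV * MICROSTEPS / mm_per_rev
z_step_um = 1000.0 / steps_per_mm                  # smallest carriage move, in microns

print(f"{steps_per_mm:.2f} steps/mm -> {z_step_um:.2f} um z-step")
# prints: 53.33 steps/mm -> 18.75 um z-step (the ~19 micron figure above)
</syntaxhighlight>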
A delta-type FFF printer with a cylindrical working volume 250 mm in diameter and 240 mm high was used in our experiments. It fuses 1.75 mm polylactic acid (PLA) filament at a temperature of 210 °C through a nozzle with a 0.4 mm diameter. The printer is driven by a RAMPS 1.4 print controller with an integrated SD card reader. Its working area is under dual surveillance: the main camera provides a rectified top view, and the secondary camera captures a side view of the working zone.
A visual marker plate located on top of the print bed allows us to determine the spatial position of the working area relative to the cameras. The plate has a 9 cm² printing area; seven high-contrast square markers (1.5 cm² and 1 cm²) form a reference frame for the main camera, and four 1.5 cm² markers allow us to determine the relative position of the side camera.
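
The text above does not give the implementation, but recovering the plate-to-camera pose from known planar markers is a standard perspective-n-point (PnP) problem. A minimal OpenCV sketch; the marker coordinates, detected pixel positions, and camera intrinsics below are hypothetical placeholders:

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Corner positions of one plate marker in plate coordinates, in millimetres
# (hypothetical layout; z = 0 because the plate is planar).
object_pts = np.array([[0, 0, 0], [15, 0, 0], [15, 15, 0], [0, 15, 0]],
                      dtype=np.float32)

# The same corners detected in the camera frame, in pixels (placeholder
# values; in practice they come from a marker/corner detector).
image_pts = np.array([[412, 305], [538, 311], [533, 437], [407, 430]],
                     dtype=np.float32)

# Camera intrinsics from a prior calibration (placeholder focal length and
# principal point for a 1280x720 image).
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion in this sketch

# Planar pose: rotation and translation of the plate in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    print("plate origin in camera coordinates (mm):", tvec.ravel())
</syntaxhighlight>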
x. '''Athena II 3D Printer''' (https://www.appropedia.org/AthenaII)


----


===2019===
x. '''RepRap''' (https://reprap.org/wiki/RepRap)
x. '''Rostock (delta robot 3D printer)''' (https://www.thingiverse.com/thing:17175)


2. [https://dashcamtalk.com/cams/lk-7950-wd/Sony_IMX322.pdf '''SONY IMX322 Datasheet'''] (accessed on 16 May 2019). <ref>SONY IMX322 Datasheet. SONY, 2019 (accessed on 16 May 2019).</ref>
The main camera is based on the 1/2.9-inch (6.23 mm diagonal) Sony IMX322 CMOS image sensor [..]. The sensor has 2.24 M square pixels (2.8 × 2.8 μm), with 2000 pixels per horizontal line and 1121 pixels per vertical line. The IMX322 has a Bayer RGBG color filter pattern (50% green, 25% red, and 25% blue) with red-to-green and blue-to-green sensitivity ratios of 0.46–0.61 and 0.34–0.49, respectively. In operating mode, the camera captures 1280 × 720 pixel frames at 30 Hz.


3. [http://marlinfw.org/ '''Marlin Open-Source RepRap Firmware'''] (accessed on 16 May 2019). <ref>Marlin Open-Source RepRap Firmware (accessed on 16 May 2019).</ref>
The developed computer vision program is synchronized with the printer, which is driven by Marlin, an open-source firmware [..]. We developed a special “A-family” of G-code commands by modifying the process_parsed_command() function in the MarlinMain.cpp file. This allows us to process a special A0 code, which sends a “$” symbol to the computer vision program after each printed layer. The main computer program, in turn, sends a G-code injection after receiving each “$” sign. These G-code commands pause the print, move the extruder aside for a short period of time, and trigger the shutters of the cameras, which capture top- and side-view images of the completed layer without any visual obstacle. Moreover, after each layer, the analytical projective plane in the image shifts according to the layer number, so the rectified image frame remains perpendicular to the optical axis of the camera.
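
The host side of this handshake is not listed above; a minimal pyserial sketch of the described loop, where the port name, park moves, and dwell time are assumptions rather than the actual values used:

<syntaxhighlight lang="python">
import serial  # pyserial


def capture_top_and_side_views():
    """Stub: trigger both cameras (the actual capture code is hardware-specific)."""


# Assumed serial settings; the real port and baud rate depend on the setup.
PORT, BAUD = "/dev/ttyUSB0", 115200

# G-code injection sent after each layer: finish buffered moves, park the
# extruder away from the part, and dwell while the cameras fire. The park
# position, lift height, and dwell time are assumptions for this sketch.
LAYER_PAUSE_GCODE = [
    b"M400\n",              # wait for all buffered moves to finish
    b"G91\n",               # switch to relative positioning
    b"G1 Z5 F600\n",        # lift the nozzle 5 mm clear of the part
    b"G90\n",               # back to absolute positioning
    b"G1 X0 Y100 F6000\n",  # park the extruder at the bed edge
    b"G4 P1500\n",          # dwell 1.5 s while both cameras capture
]

with serial.Serial(PORT, BAUD, timeout=1) as printer:
    while True:
        line = printer.readline()
        if b"$" in line:                   # firmware's "layer finished" signal
            for cmd in LAYER_PAUSE_GCODE:  # inject the pause-and-park sequence
                printer.write(cmd)
            capture_top_and_side_views()   # shoot top and side views
</syntaxhighlight>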


4. [https://opencv.org/ '''OpenCV (Open Source Computer Vision Library)'''] (accessed on 20 May 2019). <ref>OpenCV (Open Source Computer Vision Library). Available online: https://opencv.org/ (accessed on 20 May 2019).</ref>
By utilizing a rich set of image processing techniques, it becomes possible to segment meaningful contour and texture regions with their exact three-dimensional spatial reference. At the end of the printing process, a layered set of images is available, providing additional data for further analysis of any cross-section of the printed object.
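
As an illustration of the kind of per-layer contour segmentation described, a minimal OpenCV sketch (the file name, blur kernel, and Canny thresholds are placeholders):

<syntaxhighlight lang="python">
import cv2

# Illustrative sketch: extract printed-trace contours from one rectified
# layer image. File name, blur kernel, and thresholds are placeholders.
layer = cv2.imread("layer_042.png", cv2.IMREAD_GRAYSCALE)
assert layer is not None, "placeholder image not found"

blurred = cv2.GaussianBlur(layer, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# External contours approximate the outline of this layer's cross-section.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep regions large enough to be extruded plastic rather than noise.
significant = [c for c in contours if cv2.contourArea(c) > 100.0]
print(f"{len(significant)} candidate regions in this layer")
</syntaxhighlight>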
x. '''Python PyQt''' (a Python binding for the cross-platform Qt C++ framework, used for GUI application development)
x. '''Python Numpy''' (a library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays)
x. '''Python Imaging Library''' (PIL)
x. '''Python Scikit-image''' (a collection of algorithms for image processing)
x. '''Python Matplotlib''' (a plotting library)


5. [https://visp.inria.fr/ '''ViSP (Visual Servoing Platform),'''] a modular cross-platform library for visual servoing tasks (accessed on 20 May 2019). <ref>ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks. Available online: https://visp.inria.fr/ (accessed on 20 May 2019).</ref>
----

===2018===


6. [http://wohlersassociates.com/2018report.htm '''Wohlers Report.'''] Annual worldwide progress report in 3D Printing, 2018. <ref>Wohlers Report. Annual worldwide progress report in 3D Printing, 2018.</ref>
According to the Wohlers Report [..] and EY’s Global 3D printing Report [..], additive manufacturing is one of the most disruptive technologies of our time, with applications in the automotive and aerospace fields, medical equipment development, and education. This technology can increase productivity, simplify fabrication processes, and minimize the limitations on geometric shape.


7. ''U. Delli, S. Chang.'' [https://www.sciencedirect.com/science/article/pii/S2351978918307820 '''Automated processes monitoring in 3D printing using supervised machine learning.'''] Procedia Manufacturing 26 (2018) 865–870. doi:10.1016/j.promfg.2018.07.111 <ref>U. Delli, S. Chang. Automated processes monitoring in 3D printing using supervised machine learning. Procedia Manufacturing 26 (2018) 865–870. doi:10.1016/j.promfg.2018.07.111</ref>
'''Abstract''' Quality monitoring is still a big challenge in additive manufacturing, popularly known as 3D printing. Detection of defects during the printing process will help eliminate the waste of material and time. Defect detection during the initial stages of printing may generate an alert to either pause or stop the printing process so that corrective measures can be taken to prevent the need to reprint the parts. This paper proposes a method to automatically assess the quality of 3D printed parts with the integration of a camera, image processing, and supervised machine learning. Images of semi-finished parts are taken at several critical stages of the printing process according to the part geometry. A machine learning method, support vector machine (SVM), is proposed to classify the parts into either ‘good’ or ‘defective’ category. Parts using ABS and PLA materials were printed to demonstrate the proposed framework. A numerical example is provided to demonstrate how the proposed method works.
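
A toy scikit-learn sketch of the classification step described in the abstract; the features and data are synthetic stand-ins, not the authors' pipeline:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in features; in the paper these would be descriptors
# computed from images of the semi-finished part at critical stages.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))        # placeholder feature vectors
y = (X[:, 2] > 0.5).astype(int)      # placeholder labels: 1 = "defective"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)  # good/defective classifier
print("held-out accuracy:", clf.score(X_test, y_test))
</syntaxhighlight>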


8. ''L. Scime, J. Beuth.'' [https://www.sciencedirect.com/science/article/pii/S221486041730180X '''Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm.'''] Additive Manufacturing 19 (2018) 114–126. doi:10.1016/j.addma.2017.11.009 <ref>L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing 19 (2018) 114–126. doi:10.1016/j.addma.2017.11.009</ref>


'''Abstract''' Despite the rapid adoption of laser powder bed fusion (LPBF) Additive Manufacturing by industry, current processes remain largely open-loop, with limited real-time monitoring capabilities. While some machines offer powder bed visualization during builds, they lack automated analysis capability. This work presents an approach for in-situ monitoring and analysis of powder bed images with the potential to become a component of a real-time control system in an LPBF machine. Specifically, a computer vision algorithm is used to automatically detect and classify anomalies that occur during the powder spreading stage of the process. Anomaly detection and classification are implemented using an unsupervised machine learning algorithm, operating on a moderately-sized training database of image patches. The performance of the final algorithm is evaluated, and its usefulness as a standalone software package is demonstrated with several case studies.
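
A toy sketch of the unsupervised patch-grouping idea; k-means and the outlier rule below are stand-ins for the paper's actual algorithm, and the data is synthetic:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for a database of powder-bed image patches.
rng = np.random.default_rng(1)
patches = rng.random((500, 16, 16))           # 500 grayscale 16x16 patches

features = patches.reshape(len(patches), -1)  # flatten each patch
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

# Patches far from their cluster centre are candidate spreading anomalies
# (an assumed outlier rule for this sketch).
dists = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
threshold = dists.mean() + 2 * dists.std()
print("candidate anomalies:", np.flatnonzero(dists > threshold))
</syntaxhighlight>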


----
===2017===


9. ''B. Wang, F. Zhong, X. Qin.'' [https://dl.acm.org/citation.cfm?id=3095172 '''Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking.'''] CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. doi:10.1145/3095140.3095172 <ref>B. Wang, F. Zhong, X. Qin, Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking, CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. doi:10.1145/3095140.3095172</ref>
'''Abstract''' This paper presents a monocular model-based 3D tracking approach for textureless objects. Instead of explicitly searching for 3D-2D correspondences as previous methods, which unavoidably generates individual outlier matches, we aim to minimize the holistic distance between the predicted object contour and the query image edges. We propose a method that can directly solve 3D pose parameters in unsegmented edge distance field. We derive the differentials of edge matching distance with respect to the pose parameters, and search the optimal 3D pose parameters using standard gradient-based non-linear optimization techniques. To avoid being trapped in local minima and to deal with potential large inter-frame motions, a particle filtering process with a first order autoregressive state dynamics is exploited. Occlusions are handled by a robust estimator. The effectiveness of our approach is demonstrated using comparative experiments on real image sequences with occlusions, large motions and cluttered backgrounds.
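
The central structure, an edge distance field, is straightforward to reproduce with OpenCV; in this sketch the image name and Canny thresholds are placeholders:

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Illustrative edge distance field: each pixel stores the distance to the
# nearest image edge. Image name and Canny thresholds are placeholders.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "placeholder image not found"

edges = cv2.Canny(img, 50, 150)                        # edge pixels become 255
dist_field = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

# The holistic matching cost of a projected model contour is then a lookup:
# sample (x, y) points of the predicted contour (placeholder coordinates).
contour_px = np.array([[100, 120], [101, 125], [103, 130]])
cost = dist_field[contour_px[:, 1], contour_px[:, 0]].sum()
print("edge-matching cost:", cost)
</syntaxhighlight>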


10. ''K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs.'' [https://arxiv.org/abs/1705.00960 '''Foundations of Intelligent Additive Manufacturing.'''] arXiv preprint arXiv:1705.00960, May 2017. <ref>K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. arXiv preprint arXiv:1705.00960, May 2017.</ref>

----

===2016===
11. ''F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss.'' '''EY’s Global 3D printing Report.''' 2016. <ref>F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY’s Global 3D printing Report, 2016.</ref>

12. ''J. Fastowicz, K. Okarma.'' '''Texture based quality assessment of 3D prints for different lighting conditions.''' In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972, pp. 17-28. Springer, Cham. doi:10.1007/978-3-319-46418-3_2 <ref>J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972, pp. 17-28. Springer, Cham. doi:10.1007/978-3-319-46418-3_2</ref>

----

===2015===
13. ''O. Ronneberger, P. Fischer, T. Brox.'' '''U-Net: Convolutional Networks for Biomedical Image Segmentation.''' In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science, vol 9351, pp. 234-241. Springer, Cham. doi:10.1007/978-3-319-24574-4_28 <ref>O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science, vol 9351, pp. 234-241. Springer, Cham. doi:10.1007/978-3-319-24574-4_28</ref>

----

===2014===
14. ''A. Crivellaro, V. Lepetit.'' '''Robust 3D Tracking with Descriptor Fields.''' 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14), 2014. doi:10.1109/CVPR.2014.436 <ref>A. Crivellaro, V. Lepetit. Robust 3D Tracking with Descriptor Fields. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14), 2014. doi:10.1109/CVPR.2014.436</ref>

----

===2012===
15. ''C. Choi, H. Christensen.'' '''3D Textureless Object Detection and Tracking: An Edge-based Approach.''' IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065 <ref>C. Choi, H. Christensen. 3D Textureless Object Detection and Tracking: An Edge-based Approach. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065</ref>

16. ''M. Oikawa, M. Fujisawa.'' '''Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects.''' 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3 <ref>M. Oikawa, M. Fujisawa. Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects. 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3</ref>

----

===2010-2000===
17. ''J. Barandiaran, D. Borro.'' '''Edge-based markerless 3D tracking of rigid objects.''' 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62 <ref>J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62</ref>

18. ''I. Gordon, D.G. Lowe.'' '''What and Where: 3D Object Recognition with Accurate Pose.''' In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4 <ref>I. Gordon, D.G. Lowe. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4</ref>

19. ''H. Wuest, F. Vial, D. Stricker.'' '''Adaptive line tracking with multiple hypotheses for augmented reality.''' Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8 <ref>H. Wuest, F. Vial, D. Stricker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8</ref>

20. ''K. Grauman, T. Darrell.'' '''The pyramid match kernel: Discriminative classification with sets of image features.''' Tenth IEEE International Conference on Computer Vision (ICCV'05), Volume 1, 2005. doi:10.1109/ICCV.2005.239 <ref>K. Grauman, T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05), Volume 1, 2005. doi:10.1109/ICCV.2005.239</ref>

21. ''I. Gordon, D. Lowe.'' '''Scene Modelling, Recognition and Tracking with Invariant Image Features.''' International Symposium on Mixed and Augmented Reality (ISMAR 2004). <ref>I. Gordon, D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. International Symposium on Mixed and Augmented Reality (ISMAR 2004).</ref>

22. ''D.G. Lowe.'' '''Distinctive image features from scale-invariant keypoints.''' International Journal of Computer Vision (2004), vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94 <ref>D.G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004), vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94</ref>

23. ''R. Hartley, A. Zisserman.'' '''Multiple View Geometry in Computer Vision.''' Cambridge University Press, 2003. ISBN: 0521623049 <ref>R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003. ISBN: 0521623049</ref>

24. ''T. Drummond, R. Cipolla.'' '''Real-Time Visual Tracking of Complex Structures.''' IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, July 2002. doi:10.1109/TPAMI.2002.1017620 <ref>T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, July 2002. doi:10.1109/TPAMI.2002.1017620</ref>

----

===2000 - ...===
25. ''M. Isard, A. Blake.'' '''CONDENSATION – Conditional Density Propagation for Visual Tracking.''' International Journal of Computer Vision, August 1998, Volume 29, Issue 1, pp. 5–28. doi:10.1023/A:100807832 <ref>M. Isard, A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. International Journal of Computer Vision, August 1998, Volume 29, Issue 1, pp. 5–28. doi:10.1023/A:100807832</ref>

26. ''R. Storn, K. Price.'' '''Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces.''' Journal of Global Optimization, December 1997, Volume 11, Issue 4, pp. 341–359. doi:10.1023/A:100820282 <ref>R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4, pp. 341–359. doi:10.1023/A:100820282</ref>

27. ''M. Armstrong, A. Zisserman.'' '''Robust Object Tracking.''' Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995. <ref>M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995.</ref>

28. ''C. Harris, C. Stennet.'' '''RAPID – A video-Rate Object Tracker.''' British Machine Vision Conference, 1990. <ref>C. Harris, C. Stennet. RAPID – A video-Rate Object Tracker. British Machine Vision Conference, 1990.</ref>

29. ''J. Canny.'' '''A computational approach to edge detection.''' IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume PAMI-8, Issue 6, Nov. 1986. doi:10.1109/TPAMI.1986.4767851 <ref>J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume PAMI-8, Issue 6, Nov. 1986. doi:10.1109/TPAMI.1986.4767851</ref>

30. ''Pierre Alfred Leon Ciraud.'' '''[https://patents.google.com/patent/DE2263777A1/en A method and apparatus for manufacturing objects made of any arbitrary material meltable.]''' German patent application DE2263777A1. December 28, 1971. <ref>Pierre Alfred Leon Ciraud. A method and apparatus for manufacturing objects made of any arbitrary material meltable. German patent application DE2263777A1. December 28, 1971.</ref>


The first patent in the field of additive manufacturing. Despite a long evolution of additive manufacturing, starting from the first patent in 1971 [..], the technology is still challenging researchers from the perspectives of material structure, mechanical properties, and computational efficiency.



? 29. ''M. Lowney, A.S. Raj.'' '''Model Based Tracking for Augmented Reality on Mobile Devices.''' Dept. of Electrical Engineering, Stanford University.

----

===References===
<references/>