Literature Review[edit | edit source]

Aliaksei Petsiuk (talk) 14:32, 20 May 2019 (PDT)

Target Journal | Publisher | Impact Factor | Link
Pattern Recognition | Elsevier | 3.962 | https://www.journals.elsevier.com/pattern-recognition
Computer-Aided Design | Elsevier | 2.947 | https://www.journals.elsevier.com/computer-aided-design
Additive Manufacturing | Elsevier | | https://www.journals.elsevier.com/additive-manufacturing

MOST Papers[edit | edit source]

360 Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views[edit | edit source]

[1] Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002.[1]

Abstract Prosumer (producing consumer)-based desktop additive manufacturing has been enabled by the recent radical reduction in 3-D printer capital costs created by the open-source release of the self-replicating rapid prototype (RepRap). To continue this success, there have been some efforts to improve reliability, which are either too expensive or lacked automation. A promising method to improve reliability is to use computer vision, although the success rates are still too low for widespread use. To overcome these challenges an open source low-cost reliable real-time optimal monitoring platform for 3-D printing from double cameras is presented here. This error detection system is implemented with low-cost web cameras and covers 360 degrees around the printed object from three different perspectives. The algorithm is developed in Python and run on a Raspberry Pi3 mini-computer to reduce costs. For 3-D printing monitoring in three different perspectives, the systems are tested with four different 3-D object geometries for normal operation and failure modes. This system is tested with two different techniques in the image pre-processing step: SIFT and RANSAC rescale and rectification, and non-rescale and rectification. The error calculations were determined from the horizontal and vertical magnitude methods of 3-D reconstruction images. The non-rescale and rectification technique successfully detects the normal printing and failure state for all models with 100% accuracy, which is better than the single camera set up only. The computation time of the non-rescale and rectification technique is two times faster than the SIFT and RANSAC rescale and rectification technique.

Factors effecting real-time optical monitoring of fused filament 3D printing[edit | edit source]

[2] Nuchitprasitchai, S., Roggemann, M. & Pearce, J.M. Factors effecting real-time optical monitoring of fused filament 3D printing. Progress in Additive Manufacturing Journal (2017), Volume 2, Issue 3, pp 133–149. DOI:10.1007/s40964-017-0027-x.[2]

Abstract This study analyzes a low-cost reliable real-time optical monitoring platform for fused filament fabrication-based open source 3D printing. An algorithm for reconstructing 3D images from overlapping 2D intensity measurements with relaxed camera positioning requirements is compared with a single-camera solution for single-side 3D printing monitoring. The algorithms are tested for different 3D object geometries and filament colors. The results showed that both of the algorithms with a single- and double-camera system were effective at detecting a clogged nozzle, incomplete project, or loss of filament for a wide range of 3D object geometries and filament colors. The combined approach was the most effective and achieves 100% detection rate for failures. The combined method analyzed here has a better detection rate and a lower cost compared to previous methods. In addition, this method is generalizable to a wide range of 3D printer geometries, which enables further deployment of desktop 3D printing as wasted print time and filament are reduced, thereby improving the economic advantages of distributed manufacturing.

Free and Open-source Control Software for 3-D Motion and Processing[edit | edit source]

[3] Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M. Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software, 4(1), p.e2, 2016. DOI: http://doi.org/10.5334/jors.78[3]

Abstract RepRap 3-D printers and their derivatives using conventional firmware are limited by: 1) requiring technical knowledge, 2) poor resilience with unreliable hardware, and 3) poor integration in complicated systems. In this paper, a new control system called Franklin, for CNC machines in general and 3-D printers specifically, is presented that enables web-based three-dimensional control of additive, subtractive and analytical tools from any Internet-connected device. Franklin can be set up and controlled entirely from a web interface; it uses a custom protocol which allows it to continue printing when the connection is temporarily lost, and allows communication with scripts.

Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer[edit | edit source]

[4] G. C. Anzalone, B. Wijnen, J. M. Pearce. Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer. Rapid Prototyping Journal, Vol. 21 Issue: 5, pp.506-519, 2015. DOI:10.1108/RPJ-09-2014-0113[4]

Abstract Purpose - The purpose of this paper is to present novel modifications to a RepRap design that increase RepRap capabilities well beyond just fused filament fabrication. Open-source RepRap 3-D printers have made distributed manufacturing and prototyping an affordable reality. Design/methodology/approach - The design is a significantly modified derivative of the Rostock delta-style RepRap 3-D printer. Modifications were made that permit easy and rapid repurposing of the platform for milling, paste extrusion and several other applications. All of the designs are open-source and freely available. Findings - In addition to producing fused filament parts, the platform successfully produced milled printed circuit boards, milled plastic objects, objects made with paste extrudates, such as silicone, food stuffs and ceramics, pen plotted works and cut vinyl products. The multi-purpose tool saved 90-97 per cent of the capital costs of functionally equivalent dedicated tools. Research limitations/implications - While the platform was used primarily for production of hobby and consumer goods, research implications are significant, as the tool is so versatile and the fact that the designs are open-source and eminently available for modification for more purpose-specific applications. Practical implications - The platform vastly broadens capabilities of a RepRap machine at an extraordinarily low price, expanding the potential for distributed manufacturing and prototyping of items that heretofore required large financial investments. Originality/value - The unique combination of relatively simple modifications to an existing platform has produced a machine having capabilities far exceeding that of any single commercial product. The platform provides users the ability to work with a wide variety of materials and fabrication methods at a price of less than $1,000, provided users are willing to build the machine themselves.

Delta 3D Printer[edit | edit source]

[5] Delta 3D Printer (https://www.appropedia.org/Delta_Build_Overview:MOST)[5]

The MOST Delta printer is a RepRap [..] derived from the Rostock printer [..]. Print resolution in the x-y plane is a function of distance from the apexes, so it varies with distance from the center of the build platform. Resolution in the z-direction is always equal to that of the carriages (100 steps/mm, i.e. a smallest step of 10 μm) and does not depend on location. Because of the geometry, the error in z is at most 5 μm: there are no planes spaced 10 μm apart with unreachable space in between; rather, the nozzle can reach only one specific point within that 10 μm range, depending on its position in x and y. The MOST Delta (12-tooth T5 belt) operates at 53.33 steps/mm, for a z-precision of about 19 μm.
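The carriage resolution figures above follow directly from the belt pitch, pulley tooth count, and motor step rate. A minimal sketch of that arithmetic, assuming a 200 step/rev stepper with 16× microstepping (common RepRap defaults, not values stated on this page); the 12-tooth pulley and T5 (5 mm pitch) belt come from the text:

```python
# Assumed drive parameters: 200 full steps/rev and 16x microstepping are
# typical RepRap defaults, not values documented on this page.

def steps_per_mm(full_steps_per_rev=200, microsteps=16,
                 pulley_teeth=12, belt_pitch_mm=5.0):
    """Carriage steps per millimetre of belt travel."""
    mm_per_rev = pulley_teeth * belt_pitch_mm  # belt moved per pulley turn
    return full_steps_per_rev * microsteps / mm_per_rev

def smallest_step_um(spmm):
    """Smallest carriage move in micrometres."""
    return 1000.0 / spmm

spmm = steps_per_mm()                    # 3200 steps / 60 mm
print(round(spmm, 2))                    # 53.33 steps/mm, as in the text
print(round(smallest_step_um(spmm)))     # ~19 um z-precision, as in the text
print(smallest_step_um(100.0))           # 10 um for the 100 steps/mm case
```

Under these assumptions the two figures quoted above (53.33 steps/mm and roughly 19 μm) are mutually consistent.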

A delta-type FFF printer with a cylindrical working volume 250 mm in diameter and 240 mm high was used in our experiments. It fuses 1.75 mm polylactic acid (PLA) filament at a temperature of 210 °C through a 0.4 mm diameter nozzle. The printer is driven by a RAMPS 1.4 print controller with an integrated SD card reader. The working area is under dual surveillance: the main camera provides a rectified top view, and a secondary camera captures a side view of the working zone.

A visual marker plate on top of the print bed allows us to determine the spatial position of the working area relative to the cameras. The plate has a 9 cm² printing area; seven high-contrast square markers (1.5 cm² and 1 cm²) form a reference frame for the main camera, and four 1.5 cm² markers allow us to determine the relative position of the side camera.
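Since the markers lie in the bed plane, the plate-to-image mapping for the main camera's rectified top view can be recovered as a planar homography from the detected marker centers. A sketch of the standard direct linear transform (DLT) estimation in NumPy; the point coordinates below are synthetic placeholders, not the actual plate layout:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: fit H so that dst ~ H @ src in homogeneous coordinates.

    src, dst: (N, 2) arrays of matching plate/image points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The null-space vector of A (last row of V^T) is the flattened H.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic example: four plate corners (cm) and plausible image pixels.
plate = np.array([[0, 0], [3, 0], [3, 3], [0, 3]], dtype=float)
image = np.array([[102, 95], [410, 88], [420, 402], [98, 396]], dtype=float)
H = estimate_homography(plate, image)

# Map an arbitrary plate point into image pixels with the recovered H.
p = H @ np.array([1.5, 1.5, 1.0])
print(p[:2] / p[2])
```

With more than four markers the same least-squares formulation averages out detection noise, which is presumably why the plate carries seven of them.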

Athena II 3D Printer[edit | edit source]

[6] Athena II 3D Printer (https://www.appropedia.org/AthenaII)[6]


2019[edit | edit source]

RepRap[edit | edit source]

[7] RepRap (https://reprap.org/wiki/RepRap)[7]

Rostock (delta robot 3D printer)[edit | edit source]

[8] Rostock (delta robot 3D printer) (https://www.thingiverse.com/thing:17175)[8]

SONY IMX322 Datasheet[edit | edit source]

[9] SONY IMX322 Datasheet (accessed on 16 May 2019).[9]

The main camera is based on the 1/2.9 inch (6.23 mm diagonal) Sony IMX322 CMOS image sensor [..]. The sensor has 2.24M square 2.8 × 2.8 μm pixels: 2000 pixels per horizontal line and 1121 pixels per vertical line. The IMX322 uses a Bayer RGBG color filter pattern (50% green, 25% red, 25% blue) with red-to-green and blue-to-green sensitivity ratios of 0.46–0.61 and 0.34–0.49, respectively. In operating mode, the camera captures 1280 × 720 pixel frames at a frequency of 30 Hz.

Marlin Open-Source RepRap Firmware[edit | edit source]

[10] Marlin Open-Source RepRap Firmware (accessed on 16 May 2019).[10]

The computer vision program we developed is synchronized with the printer, which is driven by Marlin, an open-source firmware [..]. We added a special "A-family" of G-code commands by modifying the process_parsed_command() function in the Marlin_main.cpp file.
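On the host side, such commands are plain text lines sent over a serial link; Marlin can verify each numbered line with the standard RepRap checksum (XOR of every byte before the `*`). A small sketch of that framing in Python; the `A100` command is a hypothetical member of the custom A-family described above, not a documented code:

```python
def reprap_checksum(line: str) -> int:
    """XOR of all bytes in the line: the standard RepRap G-code checksum."""
    cs = 0
    for ch in line:
        cs ^= ord(ch)
    return cs & 0xFF

def frame_command(line_number: int, command: str) -> str:
    """Prefix a line number and append '*<checksum>' as the firmware expects."""
    body = f"N{line_number} {command}"
    return f"{body}*{reprap_checksum(body)}"

print(frame_command(1, "G28"))   # N1 G28*18 (standard homing command)
print(frame_command(2, "A100"))  # hypothetical custom A-family command
```

The firmware recomputes the checksum on arrival and requests a resend on mismatch, which is what lets the vision program trust that its synchronization commands arrived intact.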

OpenCV (Open Source Computer Vision Library)[edit | edit source]

[11] OpenCV (Open Source Computer Vision Library) (accessed on 20 May 2019).[11]

A rich set of image processing techniques makes it possible to segment meaningful contour and texture regions together with their exact three-dimensional spatial reference. At the end of the printing process, a layered set of images is available, providing additional data for analyzing any cross section of the printed object.

Python PyQt[edit | edit source]

[12] Python PyQt. A Python binding of the cross-platform C++ framework for GUI applications development (accessed on 20 May 2019).[12]

Python Numpy[edit | edit source]

[13] Python Numpy. A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays (accessed on 20 May 2019).[13]

• Python Imaging Library (PIL)

• Python Scikit-image (a collection of algorithms for image processing)

• Python Matplotlib (a plotting library)

ViSP (Visual Servoing Platform)[edit | edit source]

[14] ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks (accessed on 20 May 2019).[14]

From BoW to CNN: Two Decades of Texture Representation for Texture Classification[edit | edit source]

[15] L. Liu, J. Chen, P. Fieguth, G. Zhao, R. Chellappa, M. Pietikäinen. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. International Journal of Computer Vision, Volume 127, Issue 1, pp 74–109, 2019. DOI: 10.1007/s11263-018-1125-z.[15]

Abstract Texture is a fundamental characteristic of many types of images, and texture representation is one of the essential and challenging problems in computer vision and pattern recognition which has attracted extensive research attention over several decades. Since 2000, texture representations based on Bag of Words and on Convolutional Neural Networks have been extensively studied with impressive performance. Given this period of remarkable evolution, this paper aims to present a comprehensive survey of advances in texture representation over the last two decades. More than 250 major publications are cited in this survey covering different aspects of the research, including benchmark datasets and state of the art results. In retrospect of what has been achieved so far, the survey discusses open challenges and directions for future research.

Kernel Cuts: Kernel and Spectral Clustering Meet Regularization[edit | edit source]

[16] M. Tang, D. Marin, I. B. Ayed, Y. Boykov. Kernel Cuts: Kernel and Spectral Clustering Meet Regularization. International Journal of Computer Vision, Volume 127, Issue 5, pp. 477–511, 2019. DOI: 10.1007/s11263-018-1115-1.[16]

Abstract This work bridges the gap between two popular methodologies for data partitioning: kernel clustering and regularization-based segmentation. While addressing closely related practical problems, these general methodologies may seem very different based on how they are covered in the literature. The differences may show up in motivation, formulation, and optimization, e.g. spectral relaxation versus max-flow. We explain how regularization and kernel clustering can work together and why this is useful. Our joint energy combines standard regularization, e.g. MRF potentials, and kernel clustering criteria like normalized cut. Complementarity of such terms is demonstrated in many applications using our bound optimization Kernel Cut algorithm for the joint energy (code is publicly available). While detailing combinatorial move-making, our main focus are new linear kernel and spectral bounds for kernel clustering criteria allowing their integration with any regularization objectives with existing discrete or continuous solvers.

Automatic fault detection for laser powder-bed fusion using semi-supervised machine learning[edit | edit source]

[17] I.A. Okaro, S. Jayasinghe, C. Sutcliffe, K. Black, P. Paoletti, P.L. Green. Automatic fault detection for laser powder-bed fusion using semi-supervised machine learning. Additive Manufacturing, Volume 27, pp. 42-53, 2019. DOI: 10.1016/j.addma.2019.01.006.[17]

Abstract Risk-averse areas such as the medical, aerospace and energy sectors have been somewhat slow towards accepting and applying Additive Manufacturing (AM) in many of their value chains. This is partly because there are still significant uncertainties concerning the quality of AM builds. This paper introduces a machine learning algorithm for the automatic detection of faults in AM products. The approach is semi-supervised in that, during training, it is able to use data from both builds where the resulting components were certified and builds where the quality of the resulting components is unknown. This makes the approach cost-efficient, particularly in scenarios where part certification is costly and time-consuming. The study specifically analyses Laser Powder-Bed Fusion (L-PBF) builds. Key features are extracted from large sets of photodiode data, obtained during the building of 49 tensile test bars. Ultimate tensile strength (UTS) tests were then used to categorize each bar as 'faulty' or 'acceptable'. Using a variety of approaches (Receiver Operating Characteristic (ROC) curves and 2-fold cross-validation), it is shown that, despite utilizing a fraction of the available certification data, the semi-supervised approach can achieve results comparable to a benchmark case where all data points are labeled. The results show that semi-supervised learning is a promising approach for the automatic certification of AM builds that can be implemented at a fraction of the cost currently required.

Page data
Authors Aliaksei Petsiuk
License CC-BY-SA-3.0
Language English (en)
Created May 18, 2019 by Aliaksei Petsiuk
Modified April 14, 2023 by Felipe Schenone
  1. Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002.
  2. Nuchitprasitchai, S., Roggemann, M. & Pearce, J.M. Factors effecting real-time optical monitoring of fused filament 3D printing. Progress in Additive Manufacturing Journal (2017), Volume 2, Issue 3, pp 133–149. DOI:10.1007/s40964-017-0027-x.
  3. Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M., 2016. Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software, 4(1), p.e2. DOI: http://doi.org/10.5334/jors.78
  4. G. C. Anzalone, B. Wijnen, J. M. Pearce. Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer. Rapid Prototyping Journal, Vol. 21 Issue: 5, pp.506-519, 2015. DOI:10.1108/RPJ-09-2014-0113
  5. Delta 3D Printer (https://www.appropedia.org/Delta_Build_Overview:MOST)
  6. Athena II 3D Printer (https://www.appropedia.org/AthenaII)
  7. RepRap (https://reprap.org/wiki/RepRap)
  8. Rostock (delta robot 3D printer) (https://www.thingiverse.com/thing:17175)
  9. SONY IMX322 Datasheet. SONY, 2019 (accessed on 16 May 2019).
  10. Marlin Open-Source RepRap Firmware (accessed on 16 May 2019).
  11. OpenCV (Open Source Computer Vision Library). Available online: https://opencv.org/ (accessed on 20 May 2019).
  12. Python PyQt (A Python binding of the cross-platform C++ framework for GUI applications development). Available online: https://wiki.python.org/moin/PyQt (accessed on 20 May 2019).
  13. Python Numpy (A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays). Available online: https://www.numpy.org/ (accessed on 20 May 2019).
  14. ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks. Available online: https://visp.inria.fr/ (accessed on 20 May 2019).
  15. L. Liu, J. Chen, P. Fieguth, G. Zhao, R. Chellappa, M. Pietikäinen. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. International Journal of Computer Vision, Volume 127, Issue 1, pp 74–109, 2019. DOI: 10.1007/s11263-018-1125-z.
  16. M. Tang, D. Marin, I. B. Ayed, Y. Boykov. Kernel Cuts: Kernel and Spectral Clustering Meet Regularization. International Journal of Computer Vision, Volume 127, Issue 5, pp. 477–511, 2019. DOI: 10.1007/s11263-018-1115-1.
  17. I.A. Okaro, S. Jayasinghe, C. Sutcliffe, K. Black, P. Paoletti, P.L. Green. Automatic fault detection for laser powder-bed fusion using semi-supervised machine learning. Additive Manufacturing, Volume 27, pp. 42-53, 2019. DOI: 10.1016/j.addma.2019.01.006.