



<font color="blue">'''[2]'''</font> ''Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M.,'' [https://openresearchsoftware.metajnl.com/articles/10.5334/jors.78/ '''Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software,'''] 4(1), p.e2.,2016. DOI: http://doi.org/10.5334/jors.78 <ref>Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M., 2016. Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software, 4(1), p.e2. DOI: http://doi.org/10.5334/jors.78</ref>
<font color="blue">'''[3]'''</font> ''Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M.,'' [https://openresearchsoftware.metajnl.com/articles/10.5334/jors.78/ '''Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software,'''] 4(1), p.e2.,2016. DOI: http://doi.org/10.5334/jors.78 <ref>Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M., 2016. Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software, 4(1), p.e2. DOI: http://doi.org/10.5334/jors.78</ref>


'''Abstract''' RepRap 3-D printers and their derivatives using conventional firmware are limited by: 1) requiring technical knowledge, 2) poor resilience with unreliable hardware, and 3) poor integration in complicated systems. In this paper, a new control system called Franklin, for CNC machines in general and 3-D printers specifically, is presented that enables web-based three-dimensional control of additive, subtractive and analytical tools from any Internet-connected device. Franklin can be set up and controlled entirely from a web interface; it uses a custom protocol which allows it to continue printing when the connection is temporarily lost, and allows communication with scripts.




<font color="blue">'''[3]'''</font> ''G. C. Anzalone, B. Wijnen, J. M. Pearce.'' [https://www.emeraldinsight.com/doi/full/10.1108/RPJ-09-2014-0113 '''Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer.'''] Rapid Prototyping Journal, Vol. 21 Issue: 5, pp.506-519, 2015. DOI:10.1108/RPJ-09-2014-0113 <ref>G. C. Anzalone, B. Wijnen, J. M. Pearce. Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer. Rapid Prototyping Journal, Vol. 21 Issue: 5, pp.506-519, 2015. DOI:10.1108/RPJ-09-2014-0113</ref>
<font color="blue">'''[4]'''</font> ''G. C. Anzalone, B. Wijnen, J. M. Pearce.'' [https://www.emeraldinsight.com/doi/full/10.1108/RPJ-09-2014-0113 '''Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer.'''] Rapid Prototyping Journal, Vol. 21 Issue: 5, pp.506-519, 2015. DOI:10.1108/RPJ-09-2014-0113 <ref>G. C. Anzalone, B. Wijnen, J. M. Pearce. Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer. Rapid Prototyping Journal, Vol. 21 Issue: 5, pp.506-519, 2015. DOI:10.1108/RPJ-09-2014-0113</ref>


'''Abstract''' Purpose - The purpose of this paper is to present novel modifications to a RepRap design that increase RepRap capabilities well beyond just fused filament fabrication. Open-source RepRap 3-D printers have made distributed manufacturing and prototyping an affordable reality. Design/methodology/approach - The design is a significantly modified derivative of the Rostock delta-style RepRap 3-D printer. Modifications were made that permit easy and rapid repurposing of the platform for milling, paste extrusion and several other applications. All of the designs are open-source and freely available. Findings - In addition to producing fused filament parts, the platform successfully produced milled printed circuit boards, milled plastic objects, objects made with paste extrudates, such as silicone, food stuffs and ceramics, pen plotted works and cut vinyl products. The multi-purpose tool saved 90-97 per cent of the capital costs of functionally equivalent dedicated tools. Research limitations/implications - While the platform was used primarily for production of hobby and consumer goods, research implications are significant, as the tool is so versatile and the fact that the designs are open-source and eminently available for modification for more purpose-specific applications. Practical implications - The platform vastly broadens capabilities of a RepRap machine at an extraordinarily low price, expanding the potential for distributed manufacturing and prototyping of items that heretofore required large financial investments. Originality/value - The unique combination of relatively simple modifications to an existing platform has produced a machine having capabilities far exceeding that of any single commercial product. The platform provides users the ability to work with a wide variety of materials and fabrication methods at a price of less than $1,000, provided users are willing to build the machine themselves.




<font color="blue">'''[4]'''</font> '''Delta 3D Printer''' (https://www.appropedia.org/Delta_Build_Overview:MOST)
<font color="blue">'''[5]'''</font> '''Delta 3D Printer''' (https://www.appropedia.org/Delta_Build_Overview:MOST)


The MOST Delta printer is a RepRap [..] derived from the Rostock printer [..]. Print resolution in the x-y plane is a function of distance from the apexes, so it changes with distance from the center of the build platform. Printer resolution in the z-direction is always equal to that of the carriages (100 steps/mm, where 10 μm is the smallest step) and does not depend on location. Because of the geometry, the error in z is at most 5 μm; i.e., there are no planes spaced 10 μm apart with unreachable space in between; rather, the nozzle can reach only one specific point within that 10 μm range, depending on its x and y location. The MOST Delta (12-tooth T5 belt) operates at 53.33 steps/mm for a z-precision of about 19 μm.
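The quoted step sizes follow directly from the drive geometry. As a quick check, a minimal sketch in Python, assuming a 200-step motor with 16× microstepping (an assumption; the motor and microstepping settings are not stated above):

<syntaxhighlight lang="python">
# Back-of-the-envelope check of the carriage resolutions quoted above.
STEPS_PER_REV = 200 * 16          # assumed: 1.8-degree stepper, 16x microstepping

# MOST Delta drive: T5 belt (5 mm tooth pitch) on 12-tooth pulleys.
mm_per_rev = 5 * 12               # 60 mm of belt travel per motor revolution
steps_per_mm = STEPS_PER_REV / mm_per_rev
print(steps_per_mm)               # 53.33 steps/mm
print(1000 / steps_per_mm)        # ~18.75 um smallest step, the "about 19 um"

# For a 100 steps/mm carriage the smallest step is 10 um; any target height
# is at most half a step away, giving the <= 5 um worst-case z error above.
print(1000 / 100 / 2)             # 5.0 um
</syntaxhighlight>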




<font color="blue">'''[5]'''</font> '''Athena II 3D Printer''' (https://www.appropedia.org/AthenaII)
<font color="blue">'''[6]'''</font> '''Athena II 3D Printer''' (https://www.appropedia.org/AthenaII)




===2019===


<font color="blue">'''[6]'''</font> '''RepRap''' (https://reprap.org/wiki/RepRap)
<font color="blue">'''[7]'''</font> '''RepRap''' (https://reprap.org/wiki/RepRap)




<font color="blue">'''[7]'''</font> '''Rostock (delta robot 3D printer)''' (https://www.thingiverse.com/thing:17175)
<font color="blue">'''[8]'''</font> '''Rostock (delta robot 3D printer)''' (https://www.thingiverse.com/thing:17175)




<font color="blue">'''[8]'''</font> [https://dashcamtalk.com/cams/lk-7950-wd/Sony_IMX322.pdf '''SONY IMX322 Datasheet'''] (accessed on 16 May 2019). <ref>SONY IMX322 Datasheet. SONY, 2019 (accessed on 16 May 2019).</ref>
<font color="blue">'''[9]'''</font> [https://dashcamtalk.com/cams/lk-7950-wd/Sony_IMX322.pdf '''SONY IMX322 Datasheet'''] (accessed on 16 May 2019). <ref>SONY IMX322 Datasheet. SONY, 2019 (accessed on 16 May 2019).</ref>


The main camera is based on the 1/2.9-inch (6.23 mm diagonal) Sony IMX322 CMOS image sensor [..]. This sensor consists of 2.24M square 2.8 × 2.8 μm pixels, with 2000 pixels per horizontal line and 1121 pixels per vertical line. The IMX322 has a Bayer RGBG color filter pattern (50% green, 25% red, and 25% blue) with red-to-green sensitivity ratios of 0.46–0.61 and blue-to-green ratios of 0.34–0.49. In operating mode, the camera captures 1280 × 720 pixel frames at a frequency of 30 Hz.
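As an illustration of the color filter layout described above (not the sensor's actual readout code), the RGBG mosaic and its channel proportions can be sketched in NumPy:

<syntaxhighlight lang="python">
import numpy as np

# Bayer mask for one 1280x720 frame: 0 = red, 1 = green, 2 = blue.
h, w = 720, 1280
bayer = np.empty((h, w), dtype=np.uint8)
bayer[0::2, 0::2] = 0    # red on even rows, even columns
bayer[0::2, 1::2] = 1    # green on even rows, odd columns
bayer[1::2, 0::2] = 1    # green on odd rows, even columns
bayer[1::2, 1::2] = 2    # blue on odd rows, odd columns

# Confirms the 25% red / 50% green / 25% blue split quoted above.
for name, code in (("R", 0), ("G", 1), ("B", 2)):
    print(name, (bayer == code).mean())
</syntaxhighlight>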




<font color="blue">'''[9]'''</font> [http://marlinfw.org/ '''Marlin Open-Source RepRap Firmware'''] (accessed on 16 May 2019). <ref>Marlin Open-Source RepRap Firmware (accessed on 16 May 2019).</ref>
<font color="blue">'''[10]'''</font> [http://marlinfw.org/ '''Marlin Open-Source RepRap Firmware'''] (accessed on 16 May 2019). <ref>Marlin Open-Source RepRap Firmware (accessed on 16 May 2019).</ref>


The computer vision program we developed was synchronized with the printer, which is driven by Marlin, an open-source firmware [..]. We developed a special “A-family” of G-Code commands by modifying the process_parsed_command() function in the MarlinMain.cpp file. This allows us to process a special A0 code, which sends a “$” symbol to the computer vision program after each printed layer. The main computer program, in turn, sends a G-Code injection after receiving each “$” sign. These G-Code commands pause the print, move the extruder aside for a short period of time, and trigger the shutters of the cameras, which capture top and side view images of the completed layer without any visual obstacle. Moreover, after each layer, the analytical projective plane in the image shifts according to the layer number, so the rectified image frame remains perpendicular to the optical axis of the camera.
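A minimal host-side sketch of this handshake, assuming pyserial; the port name, parking moves, and camera-trigger helper are hypothetical stand-ins, and the real program also resumes the paused print:

<syntaxhighlight lang="python">
import serial  # pyserial

# Hypothetical serial port and baud rate; the "$" handshake after each
# layer and the injected pause/park G-Code are described in the text above.
printer = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def send(gcode):
    printer.write((gcode + "\n").encode("ascii"))

def capture_layer_images(layer):
    """Hypothetical stand-in for triggering the top and side cameras."""
    print("captured layer", layer)

layer = 0
while True:
    if printer.read(1) == b"$":       # A0 fired: a layer has finished
        layer += 1
        send("G91")                   # relative positioning
        send("G1 Z5 F300")            # assumed: lift the nozzle slightly
        send("G90")                   # absolute positioning
        send("G1 X0 Y100 F3000")      # assumed: park the extruder out of view
        capture_layer_images(layer)
        # ...inject G-Code to return the extruder and resume the print...
</syntaxhighlight>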




<font color="blue">'''[10]'''</font> [https://opencv.org/ '''OpenCV (Open Source Computer Vision Library)'''] (accessed on 20 May 2019). <ref>OpenCV (Open Source Computer Vision Library). Available online: https://opencv.org/ (accessed on 20 May 2019).</ref>
<font color="blue">'''[11]'''</font> [https://opencv.org/ '''OpenCV (Open Source Computer Vision Library)'''] (accessed on 20 May 2019). <ref>OpenCV (Open Source Computer Vision Library). Available online: https://opencv.org/ (accessed on 20 May 2019).</ref>


By utilizing a rich set of image processing techniques, it becomes possible to segment meaningful contour and texture regions with their exact three-dimensional spatial reference. At the end of the printing process, a layered set of images is available, which provides additional data for further analysis of any cross section of the printed object.
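As a minimal illustration of this kind of segmentation (a sketch with OpenCV, not the paper's actual pipeline; the filename is hypothetical):

<syntaxhighlight lang="python">
import cv2

# Load one rectified layer image and separate the printed region from the
# background with Otsu thresholding and external contour extraction.
img = cv2.imread("layer_042.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# OpenCV 4.x return signature; 3.x also returns the input image first.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Each contour carries x-y coordinates in the image plane; the layer number
# supplies the z coordinate, giving the region its 3-D spatial reference.
for c in contours:
    print(cv2.contourArea(c), cv2.boundingRect(c))
</syntaxhighlight>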




<font color="blue">'''[11]'''</font> [https://wiki.python.org/moin/PyQt '''Python PyQt''']. A Python binding of the cross-platform C++ framework for GUI applications development (accessed on 20 May 2019). <ref>Python PyQt (A Python binding of the cross-platform C++ framework for GUI applications development). Available online: https://wiki.python.org/moin/PyQt (accessed on 20 May 2019).</ref>
<font color="blue">'''[12]'''</font> [https://wiki.python.org/moin/PyQt '''Python PyQt''']. A Python binding of the cross-platform C++ framework for GUI applications development (accessed on 20 May 2019). <ref>Python PyQt (A Python binding of the cross-platform C++ framework for GUI applications development). Available online: https://wiki.python.org/moin/PyQt (accessed on 20 May 2019).</ref>




<font color="blue">'''[12]'''</font> [https://www.numpy.org/ '''Python Numpy''']. A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays (accessed on 20 May 2019). <ref>Python Numpy (A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays). Available online: https://www.numpy.org/ (accessed on 20 May 2019).</ref>
<font color="blue">'''[13]'''</font> [https://www.numpy.org/ '''Python Numpy''']. A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays (accessed on 20 May 2019). <ref>Python Numpy (A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays). Available online: https://www.numpy.org/ (accessed on 20 May 2019).</ref>








<font color="blue">'''[13]'''</font> [https://visp.inria.fr/ '''ViSP (Visual Servoing Platform),'''] a modular cross-platform library for visual servoing tasks (accessed on 20 May 2019). <ref>ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks. Available online: https://visp.inria.fr/ (accessed on 20 May 2019).</ref>
<font color="blue">'''[14]'''</font> [https://visp.inria.fr/ '''ViSP (Visual Servoing Platform),'''] a modular cross-platform library for visual servoing tasks (accessed on 20 May 2019). <ref>ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks. Available online: https://visp.inria.fr/ (accessed on 20 May 2019).</ref>




<font color="blue">'''[15]'''</font> ''L. Liu, J. Chen, P. Fieguth, G. Zhao, R. Chellappa, M. Pietikäinen.'' [https://link.springer.com/article/10.1007/s11263-018-1125-z '''From BoW to CNN: Two Decades of Texture Representation for Texture Classification.'''] International Journal of Computer Vision, Volume 127, Issue 1, pp 74–109, 2019. DOI: 10.1007/s11263-018-1125-z. <ref>L. Liu, J. Chen, P. Fieguth, G. Zhao, R. Chellappa, M. Pietikäinen. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. International Journal of Computer Vision, Volume 127, Issue 1, pp 74–109, 2019. DOI: 10.1007/s11263-018-1125-z.</ref>


'''Abstract''' Texture is a fundamental characteristic of many types of images, and texture representation is one of the essential and challenging problems in computer vision and pattern recognition which has attracted extensive research attention over several decades. Since 2000, texture representations based on Bag of Words and on Convolutional Neural Networks have been extensively studied with impressive performance. Given this period of remarkable evolution, this paper aims to present a comprehensive survey of advances in texture representation over the last two decades. More than 250 major publications are cited in this survey covering different aspects of the research, including benchmark datasets and state of the art results. In retrospect of what has been achieved so far, the survey discusses open challenges and directions for future research.




<font color="blue">'''[16]'''</font> ''M. Tang, D. Marin, I. B. Ayed, Y. Boykov.'' [https://link.springer.com/article/10.1007/s11263-018-1115-1 '''Kernel Cuts: Kernel and Spectral Clustering Meet Regularization.'''] International Journal of Computer Vision, Volume 127, Issue 5, pp. 477–511, 2019. DOI: 10.1007/s11263-018-1115-1. <ref>M. Tang, D. Marin, I. B. Ayed, Y. Boykov. Kernel Cuts: Kernel and Spectral Clustering Meet Regularization. International Journal of Computer Vision, Volume 127, Issue 5, pp. 477–511, 2019. DOI: 10.1007/s11263-018-1115-1.</ref>


'''Abstract''' This work bridges the gap between two popular methodologies for data partitioning: kernel clustering and regularization-based segmentation. While addressing closely related practical problems, these general methodologies may seem very different based on how they are covered in the literature. The differences may show up in motivation, formulation, and optimization, e.g. spectral relaxation versus max-flow. We explain how regularization and kernel clustering can work together and why this is useful. Our joint energy combines standard regularization, e.g. MRF potentials, and kernel clustering criteria like normalized cut. Complementarity of such terms is demonstrated in many applications using our bound optimization Kernel Cut algorithm for the joint energy (code is publicly available). While detailing combinatorial move-making, our main focus are new linear kernel and spectral bounds for kernel clustering criteria allowing their integration with any regularization objectives with existing discrete or continuous solvers.
<font color="blue">'''[17]'''</font> ''I.A. Okaroa, S. Jayasinghe, C. Sutcliffe, K. Black, P. Paoletti, P.L. Green.'' [https://www.sciencedirect.com/science/article/pii/S2214860418306092 '''Automatic fault detection for laser powder-bed fusion using semi-supervised machine learning.'''] Additive Manufacturing Journal, Volume 27, pp. 42-53, 2019. DOI: 10.1016/j.addma.2019.01.006. <ref>I.A. Okaroa, S. Jayasinghe, C. Sutcliffe, K. Black, P. Paoletti, P.L. Green. Automatic fault detection for laser powder-bed fusion using semi-supervised machine learning. Additive Manufacturing Journal, Volume 27, pp. 42-53, 2019. DOI: 10.1016/j.addma.2019.01.006.</ref>
'''Abstract''' Risk-averse areas such as the medical, aerospace and energy sectors have been somewhat slow towards accepting and applying Additive Manufacturing (AM) in many of their value chains. This is partly because there are still significant uncertainties concerning the quality of AM builds. This paper introduces a machine learning algorithm for the automatic detection of faults in AM products. The approach is semi-supervised in that, during training, it is able to use data from both builds where the resulting components were certified and builds where the quality of the resulting components is unknown. This makes the approach cost-efficient, particularly in scenarios where part certification is costly and time-consuming. The study specifically analyses Laser Powder-Bed Fusion (L-PBF) builds. Key features are extracted from large sets of photodiode data, obtained during the building of 49 tensile test bars. Ultimate tensile strength (UTS) tests were then used to categorize each bar as ‘faulty’ or ‘acceptable’. Using a variety of approaches (Receiver Operating Characteristic (ROC) curves and 2-fold cross-validation), it is shown that, despite utilizing a fraction of the available certification data, the semi-supervised approach can achieve results comparable to a benchmark case where all data points are labeled. The results show that semi-supervised learning is a promising approach for the automatic certification of AM builds that can be implemented at a fraction of the cost currently required.


----
===2018===


<font color="blue">'''[18]'''</font> [http://wohlersassociates.com/2018report.htm '''Wohlers Report.'''] Annual worldwide progress report in 3D Printing, 2018. <ref>Wohlers Report. Annual worldwide progress report in 3D Printing, 2018.</ref>


According to the Wohlers Report [..] and EY’s Global 3D Printing Report [..], additive manufacturing is one of the most disruptive technologies of our time, with applications in the automotive and aerospace fields, medical equipment development, and education. This technology can increase productivity, simplify fabrication processes, and minimize the limitations of geometric shapes.




<font color="blue">'''[19]'''</font> ''U. Delli, S. Chang.'' [https://www.sciencedirect.com/science/article/pii/S2351978918307820 '''Automated processes monitoring in 3D printing using supervised machine learning.'''] Procedia Manufacturing 26 (2018) 865–870. DOI: 10.1016/j.promfg.2018.07.111 <ref>U. Delli, S. Chang. Automated processes monitoring in 3D printing using supervised machine learning. Procedia Manufacturing 26 (2018) 865–870. DOI: 10.1016/j.promfg.2018.07.111</ref>


'''Abstract''' Quality monitoring is still a big challenge in additive manufacturing, popularly known as 3D printing. Detection of defects during the printing process will help eliminate the waste of material and time. Defect detection during the initial stages of printing may generate an alert to either pause or stop the printing process so that corrective measures can be taken to prevent the need to reprint the parts. This paper proposes a method to automatically assess the quality of 3D printed parts with the integration of a camera, image processing, and supervised machine learning. Images of semi-finished parts are taken at several critical stages of the printing process according to the part geometry. A machine learning method, support vector machine (SVM), is proposed to classify the parts into either ‘good’ or ‘defective’ category. Parts using ABS and PLA materials were printed to demonstrate the proposed framework. A numerical example is provided to demonstrate how the proposed method works.
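For the flavor of this approach, a minimal scikit-learn sketch of such a good/defective SVM classifier, with random stand-ins for the image-derived features and labels (the paper's actual features come from its image processing stage):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in data: one row of image-derived features per semi-finished part,
# labeled 0 ('good') or 1 ('defective'). Real features would come from the
# camera and image processing steps described in the abstract.
rng = np.random.default_rng(0)
X = rng.random((200, 16))
y = rng.integers(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
</syntaxhighlight>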




<font color="blue">'''[20]'''</font> ''L. Scime, J. Beuth.'' [https://www.sciencedirect.com/science/article/pii/S221486041730180X '''Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm.'''] Additive Manufacturing 19 (2018) 114–126. DOI: 10.1016/j.addma.2017.11.009 <ref>L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing 19 (2018) 114–126. DOI: 10.1016/j.addma.2017.11.009</ref>


'''Abstract''' Despite the rapid adoption of laser powder bed fusion (LPBF) Additive Manufacturing by industry, current processes remain largely open-loop, with limited real-time monitoring capabilities. While some machines offer powder bed visualization during builds, they lack automated analysis capability. This work presents an approach for in-situ monitoring and analysis of powder bed images with the potential to become a component of a real-time control system in an LPBF machine. Specifically, a computer vision algorithm is used to automatically detect and classify anomalies that occur during the powder spreading stage of the process. Anomaly detection and classification are implemented using an unsupervised machine learning algorithm, operating on a moderately-sized training database of image patches. The performance of the final algorithm is evaluated, and its usefulness as a standalone software package is demonstrated with several case studies.




<font color="blue">'''[21]'''</font> ''L. Zhong, L. Zhang.'' [https://link.springer.com/article/10.1007/s11263-018-1119-x '''A Robust Monocular 3D Object Tracking Method Combining Statistical and Photometric Constraints.'''] International Journal of Computer Vision, 2018. DOI: 10.1007/s11263-018-1119-x. <ref>L. Zhong, L. Zhang. A Robust Monocular 3D Object Tracking Method Combining Statistical and Photometric Constraints. International Journal of Computer Vision, 2018. DOI: 10.1007/s11263-018-1119-x.</ref>


'''Abstract''' Both region-based methods and direct methods have become popular in recent years for tracking the 6-dof pose of an object from monocular video sequences. Region-based methods estimate the pose of the object by maximizing the discrimination between statistical foreground and background appearance models, while direct methods aim to minimize the photometric error through direct image alignment. In practice, region-based methods only care about the pixels within a narrow band of the object contour due to the level-set-based probabilistic formulation, leaving the foreground pixels beyond the evaluation band unused. On the other hand, direct methods only utilize the raw pixel information of the object, but ignore the statistical properties of foreground and background regions. In this paper, we find it beneficial to combine these two kinds of methods together. We construct a new probabilistic formulation for 3D object tracking by combining statistical constraints from region-based methods and photometric constraints from direct methods. In this way, we take advantage of both statistical property and raw pixel values of the image in a complementary manner. Moreover, in order to achieve better performance when tracking heterogeneous objects in complex scenes, we propose to increase the distinctiveness of foreground and background statistical models by partitioning the global foreground and background regions into a small number of sub-regions around the object contour. We demonstrate the effectiveness of the proposed novel strategies on a newly constructed real-world dataset containing different types of objects with ground-truth poses. Further experiments on several challenging public datasets also show that our method obtains competitive or even superior tracking results compared to previous works. In comparison with the recent state-of-the-art region-based method, the proposed hybrid method is proved to be more stable under silhouette pose ambiguities with a slightly lower tracking accuracy.
<font color="blue">'''[22]'''</font> ''K. Garanger, T. Khamvilai, E. Feron.'' [https://ieeexplore.ieee.org/document/8511509 '''3D Printing of a Leaf Spring: A Demonstration of Closed-Loop Control in Additive Manufacturing.'''] 2018 IEEE Conference on Control Technology and Applications (CCTA), pp. 465-470. DOI: 10.1109/CCTA.2018.8511509. <ref>K. Garanger, T. Khamvilai, E. Feron. 3D Printing of a Leaf Spring: A Demonstration of Closed-Loop Control in Additive Manufacturing. 2018 IEEE Conference on Control Technology and Applications (CCTA), pp. 465-470. DOI: 10.1109/CCTA.2018.8511509.</ref>
'''Abstract''' 3D printing is rapidly becoming a commodity. However, the quality of the printed parts is not always even nor predictable. Feedback control is demonstrated during the printing of a plastic object using additive manufacturing as a means to improve macroscopic mechanical properties of the object. The printed object is a leaf spring made of several parts of different infill density values, which are the control variables in this problem. In order to achieve a desired objective stiffness, measurements are taken after each part is completed and the infill density is adjusted accordingly in a closed-loop framework. With feedback control, the absolute error of the measured part stiffness is reduced from 11.63% to 1.34% relative to the specified stiffness. This experiment is therefore a proof of concept to show the relevance of using feedback control in additive manufacturing. By considering the printing process and the measurements as stochastic processes, we show how stochastic optimal control and Kalman filtering can be used to improve the quality of objects manufactured with rudimentary printers.
<font color="blue">'''[23]'''</font> ''B. Yuan,  G.M. Guss,  A.C. Wilson et al.'' [https://onlinelibrary.wiley.com/doi/abs/10.1002/admt.201800136 '''Machine‐Learning‐Based Monitoring of Laser Powder Bed Fusion.'''] United States, 2018. DOI:10.1002/admt.201800136. <ref>B. Yuan,  G.M. Guss,  A.C. Wilson et al. Machine‐Learning‐Based Monitoring of Laser Powder Bed Fusion. United States, 2018. DOI:10.1002/admt.201800136.</ref>
'''Abstract''' A two‐step machine learning approach to monitoring laser powder bed fusion (LPBF) additive manufacturing is demonstrated that enables on‐the‐fly assessments of laser track welds. First, in situ video melt pool data acquired during LPBF is labeled according to the (1) average and (2) standard deviation of individual track width and also (3) whether or not the track is continuous, measured postbuild through an ex situ height map analysis algorithm. This procedure generates three ground truth labeled datasets for supervised machine learning. Using a portion of the labeled 10 ms video clips, a single convolutional neural network architecture is trained to generate three distinct networks. With the remaining in situ LPBF data, the trained neural networks are tested and evaluated and found to predict track width, standard deviation, and continuity without the need for ex situ measurements. This two‐step approach should benefit any LPBF system – or any additive manufacturing technology – where height‐map‐derived properties can serve as useful labels for in situ sensor data.


----
===2017===


<font color="blue">'''[24]'''</font> ''B. Wang, F. Zhong, X. Qin.'' [https://dl.acm.org/citation.cfm?id=3095172 '''Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking.'''] CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. DOI: 10.1145/3095140.3095172 <ref>B. Wang, F. Zhong, X. Qin, Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking, CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. DOI: 10.1145/3095140.3095172</ref>


'''Abstract''' This paper presents a monocular model-based 3D tracking approach for textureless objects. Instead of explicitly searching for 3D-2D correspondences as previous methods, which unavoidably generates individual outlier matches, we aim to minimize the holistic distance between the predicted object contour and the query image edges. We propose a method that can directly solve 3D pose parameters in unsegmented edge distance field. We derive the differentials of edge matching distance with respect to the pose parameters, and search the optimal 3D pose parameters using standard gradient-based non-linear optimization techniques. To avoid being trapped in local minima and to deal with potential large inter-frame motions, a particle filtering process with a first order autoregressive state dynamics is exploited. Occlusions are handled by a robust estimator. The effectiveness of our approach is demonstrated using comparative experiments on real image sequences with occlusions, large motions and cluttered backgrounds.




<font color="blue">'''[25]'''</font> ''K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs.'' [https://arxiv.org/abs/1705.00960 '''Foundations of Intelligent Additive Manufacturing.'''] Published in ArXiv, May 2017. <ref>K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. Published in ArXiv, May 2017.</ref>


'''Abstract''' During the last decade, additive manufacturing has become increasingly popular for rapid prototyping, but has remained relatively marginal beyond the scope of prototyping when it comes to applications with tight tolerance specifications, such as in aerospace. Despite a strong desire to supplant many aerospace structures with printed builds, additive manufacturing has largely remained limited to prototyping, tooling, fixtures, and non-critical components. There are numerous fundamental challenges inherent to additive processing to be addressed before this promise is realized. One ubiquitous challenge across all AM motifs is to develop processing-property relationships through precise, in situ monitoring coupled with formal methods and feedback control. We suggest a significant component of this vision is a set of semantic layers within 3D printing files relevant to the desired material specifications. This semantic layer provides the feedback laws of the control system, which then evaluates the component during processing and intelligently evolves the build parameters within boundaries defined by semantic specifications. This evaluation and correction loop requires on-the-fly coupling of finite element analysis and topology optimization. The required parameters for this analysis are all extracted from the semantic layer and can be modified in situ to satisfy the global specifications. Therefore, the representation of what is printed changes during the printing process to compensate for eventual imprecision or drift arising during the manufacturing process.




<font color="blue">'''[26]'''</font> ''R.H.M. Condori, L.M. Romualdo, O.M. Bruno, P.H.C. Luz.'' [https://ieeexplore.ieee.org/document/8278071 '''Comparison Between Traditional Texture Methods and Deep Learning Descriptors for Detection of Nitrogen Deficiency in Maize Crops.'''] 2017 Workshop of Computer Vision (WVC). DOI: 10.1109/WVC.2017.00009. <ref>R.H.M. Condori, L.M. Romualdo, O.M. Bruno, P.H.C. Luz. Comparison Between Traditional Texture Methods and Deep Learning Descriptors for Detection of Nitrogen Deficiency in Maize Crops. 2017 Workshop of Computer Vision (WVC). DOI: 10.1109/WVC.2017.00009.</ref>


'''Abstract''' Every year, efficient maize production is very important to the economy of many countries. Since nutritional deficiencies in maize plants are directly reflected in their grains productivity, early detection is needed to maximize the chances of proper recovery of these plants. Traditional texture methods recently showed interesting results in the identification of nutritional deficiencies. On the other hand, deep learning techniques are increasingly outperforming hand-crafted features on many tasks. In this paper, we propose a simple transfer learning approach from pre-trained cnn models and compare their results with those from traditional texture methods in the task of nitrogen deficiency identification. We perform experiments in a real-world dataset that contains digitalized images of maize leaves at different growth stages and with different levels of nitrogen fertilization. The results show that deep learning based descriptors achieve better success rates than traditional texture methods.
<font color="blue">'''[27]'''</font> ''F.-C. Ghesu, B. Georgescu, Y. Zheng et al.'' [https://ieeexplore.ieee.org/document/8187667 '''Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans.'''] IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 41, Issue 1, pp. 176 - 189, 2017. DOI: 10.1109/TPAMI.2017.2782687. <ref>F.-C. Ghesu, B. Georgescu, Y. Zheng et al. Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 41, Issue 1, pp. 176 - 189, 2017. DOI: 10.1109/TPAMI.2017.2782687.</ref>
'''Abstract''' Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases in order to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature engineering techniques and most importantly the use of computationally suboptimal search-schemes for anatomy detection. To address these issues, we propose a method that follows a new paradigm by reformulating the detection problem as a behavior learning task for an artificial agent. We couple the modeling of the anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis. In other words, an artificial agent is trained not only to distinguish the target anatomical object from the rest of the body but also how to find the object by learning and following an optimal navigation path to the target object in the imaged volumetric space. We evaluated our approach on 1487 3D-CT volumes from 532 patients, totaling over 500,000 image slices and show that it significantly outperforms state-of-the-art solutions on detecting several anatomical structures with no failed cases from a clinical acceptance perspective, while also achieving a 20-30 percent higher detection accuracy. Most importantly, we improve the detection-speed of the reference methods by 2-3 orders of magnitude, achieving unmatched real-time performance on large 3D-CT scans.
<font color="blue">'''[28]'''</font> ''R.J. Jevnisek, S. Avidan.'' [https://ieeexplore.ieee.org/document/8099889 '''Co-Occurrence Filter.'''] 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3184-3192, 2017. DOI: 10.1109/CVPR.2017.406. <ref>R.J. Jevnisek, S. Avidan. Co-Occurrence Filter. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3184-3192, 2017. DOI: 10.1109/CVPR.2017.406.</ref>
'''Abstract''' Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.


----
===2016===


<font color="blue">'''[29]'''</font> ''F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss.'' [https://www.ey.com/Publication/vwLUAssets/ey-3d-printing-report/$FILE/ey-3d-printing-report.pdf '''EY’s Global 3D printing Report.'''] 2016. <ref>F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY’s Global 3D printing Report, 2016.</ref>


<font color="blue">'''[30]'''</font> ''J. Fastowicz, K. Okarma.'' [https://link.springer.com/chapter/10.1007%2F978-3-319-46418-3_2 '''Texture based quality assessment of 3D prints for different lighting conditions.'''] In Proceedings of the International Conference on Computer Vision and Graphics, ICCVG (2016), 17-28. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972. Springer, Cham. DOI: 10.1007/978-3-319-46418-3_2 <ref>J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In Proceedings of the International Conference on Computer Vision and Graphics, ICCVG (2016), 17-28. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972. Springer, Cham. DOI: 10.1007/978-3-319-46418-3_2</ref>


'''Abstract''' In the paper the method of “blind” quality assessment of 3D prints based on texture analysis using the GLCM and chosen Haralick features is discussed. As the proposed approach has been verified using the images obtained by scanning the 3D printed plates, some dependencies related to the transparency of filaments may be noticed. Furthermore, considering the influence of lighting conditions, some other experiments have been made using the images acquired by a camera mounted on a 3D printer. Due to the influence of lighting conditions on the obtained images in comparison to the results of scanning, some modifications of the method have also been proposed leading to promising results allowing further extensions of our approach to no-reference quality assessment of 3D prints. Achieved results confirm the usefulness of the proposed approach for live monitoring of the progress of 3D printing process and the quality of 3D prints.
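
For readers who want to reproduce the texture features, the GLCM and Haralick-style statistics the authors rely on are available off the shelf in scikit-image; the sketch below uses a random stand-in image and a guessed choice of offsets and angles rather than the paper's configuration.

<syntaxhighlight lang="python">
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical 8-bit grayscale photo of a printed test plate.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

# Gray-level co-occurrence matrices at a few offsets/angles; 'contrast',
# 'homogeneity', etc. are Haralick-style statistics. (In scikit-image
# < 0.19 these functions are spelled greycomatrix/greycoprops.)
glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
print(features)
</syntaxhighlight>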


<font color="blue">'''[31]'''</font> ''C. Caetano, J.A. dos Santos, W.R. Schwartz.'' [https://ieeexplore.ieee.org/document/7899921 '''Optical Flow Co-occurrence Matrices: A novel spatiotemporal feature descriptor.'''] 2016 23rd International Conference on Pattern Recognition (ICPR). DOI: 10.1109/ICPR.2016.7899921. <ref>C. Caetano, J.A. dos Santos, W.R. Schwartz. Optical Flow Co-occurrence Matrices: A novel spatiotemporal feature descriptor. 2016 23rd International Conference on Pattern Recognition (ICPR). DOI: 10.1109/ICPR.2016.7899921.</ref>
'''Abstract''' Suitable feature representation is essential for performing video analysis and understanding in applications within the smart surveillance domain. In this paper, we propose a novel spatiotemporal feature descriptor based on co-occurrence matrices computed from the optical flow magnitude and orientation. Our method, called Optical Flow Co-occurrence Matrices (OFCM), extracts a robust set of measures known as Haralick features to describe the flow patterns by measuring meaningful properties such as contrast, entropy and homogeneity of co-occurrence matrices to capture local space-time characteristics of the motion through the neighboring optical flow magnitude and orientation. We evaluate the proposed method on the action recognition problem by applying a visual recognition pipeline involving bag of local spatiotemporal features and SVM classification. The experimental results, carried on three well-known datasets (KTH, UCF Sports and HMDB51), demonstrate that OFCM outperforms the results achieved by several widely employed spatiotemporal feature descriptors such as HOF, HOG3D and MBH, indicating its suitability to be used as video representation.
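
A rough sketch of the OFCM recipe, under stated assumptions (Farneback flow, guessed bin counts and offsets rather than the paper's exact configuration), since the scikit-image GLCM machinery shown earlier can be reused on the quantized flow fields:

<syntaxhighlight lang="python">
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def ofcm_features(prev_gray, next_gray, mag_bins=16, ang_bins=16):
    """Sketch of Optical Flow Co-occurrence Matrices: quantize flow
    magnitude and orientation, build co-occurrence matrices over each
    quantized field, and summarize them with Haralick-style statistics."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    qmag = np.minimum((mag / (mag.max() + 1e-9) * mag_bins).astype(np.uint8),
                      mag_bins - 1)
    qang = (ang / (2 * np.pi) * ang_bins).astype(np.uint8) % ang_bins
    feats = []
    for field, levels in ((qmag, mag_bins), (qang, ang_bins)):
        glcm = graycomatrix(field, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        for prop in ('contrast', 'homogeneity', 'energy'):
            feats.extend(graycoprops(glcm, prop).ravel())
    return np.array(feats)
</syntaxhighlight>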


----
===2015===


<font color="blue">'''[32]'''</font> ''S. Xie, Z. Tu.'' [https://ieeexplore.ieee.org/abstract/document/7410521 '''Holistically-Nested Edge Detection.'''] 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.164. <ref>S. Xie, Z. Tu. Holistically-Nested Edge Detection. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.164.</ref>


'''Abstract''' We develop a new edge detection algorithm that addresses two critical issues in this long-standing vision problem: (1) holistic image training, and (2) multi-scale feature learning. Our proposed method, holistically-nested edge detection (HED), turns pixel-wise edge classification into image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets. HED automatically learns rich hierarchical representations (guided by deep supervision on side responses) that are crucially important in order to approach the human ability to resolve the challenging ambiguity in edge and object boundary detection. We significantly advance the state-of-the-art on the BSD500 dataset (ODS F-score of 0.782) and the NYU Depth dataset (ODS F-score of 0.746), and do so with an improved speed (0.4 second per image) that is orders of magnitude faster than recent CNN-based edge detection algorithms.
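
The two key ideas, side outputs at several scales and deep supervision on each of them, are easy to show in miniature. The PyTorch sketch below is a toy three-stage network, not the VGG-16-based architecture of the paper; it only demonstrates how side maps are upsampled, individually supervised, and fused.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHED(nn.Module):
    """Toy illustration of HED-style wiring: per-stage side outputs with
    deep supervision, fused into a final edge map by a learned 1x1 conv."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.side = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
        self.fuse = nn.Conv2d(3, 1, 1)   # learned fusion of the 3 side maps

    def forward(self, x):
        h, w = x.shape[2:]
        f1 = self.stage1(x); f2 = self.stage2(f1); f3 = self.stage3(f2)
        sides = [F.interpolate(s(f), size=(h, w), mode='bilinear',
                               align_corners=False)
                 for s, f in zip(self.side, (f1, f2, f3))]
        fused = self.fuse(torch.cat(sides, dim=1))
        return sides + [fused]           # each output gets its own loss

# Deep supervision: sum a BCE loss over every side output and the fusion.
model = TinyHED()
outs = model(torch.randn(1, 3, 64, 64))
target = torch.rand(1, 1, 64, 64).round()
loss = sum(F.binary_cross_entropy_with_logits(o, target) for o in outs)
</syntaxhighlight>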




<font color="blue">'''[33]'''</font> ''O. Ronneberger, P. Fischer, T. Brox.'' [https://arxiv.org/abs/1505.04597 '''U-Net: Convolutional Networks for Biomedical Image Segmentation.'''] MICCAI 2015: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 pp 234-241. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham. doi:10.1007/978-3-319-24574-4_28 <ref>O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI 2015: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 pp 234-241. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham. doi:10.1007/978-3-319-24574-4_28</ref>


'''Abstract''' There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.
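
The contracting/expanding pattern with skip connections is compact enough to sketch. Below is a two-level PyTorch miniature of the U-Net idea; the published network is deeper and uses unpadded convolutions, so treat this only as an illustration of the wiring.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """Two-level miniature of U-Net: a contracting path, an expanding path,
    and a skip connection concatenating encoder and decoder features."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)               # 64 = 32 skip + 32 upsampled
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # full resolution
        e2 = self.enc2(self.pool(e1))           # half resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 1, 128, 128))   # -> (1, 2, 128, 128)
</syntaxhighlight>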




<font color="blue">'''[34]'''</font> ''A. Kadambi, V. Taamazyan, B. Shi, R. Raskar.'' [https://ieeexplore.ieee.org/abstract/document/7410742 '''Polarized 3D: High-Quality Depth Sensing with Polarization Cues.'''] 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.385. <ref>A. Kadambi, V. Taamazyan, B. Shi, R. Raskar. Polarized 3D: High-Quality Depth Sensing with Polarization Cues. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.385.</ref>


'''Abstract''' Coarse depth maps can be enhanced by using the shape information from polarization cues. We propose a framework to combine surface normals from polarization (hereafter polarization normals) with an aligned depth map. Polarization normals have not been used for depth enhancement before. This is because polarization normals suffer from physics-based artifacts, such as azimuthal ambiguity, refractive distortion and fronto-parallel signal degradation. We propose a framework to overcome these key challenges, allowing the benefits of polarization to be used to enhance depth maps. Our results demonstrate improvement with respect to state-of-the-art 3D reconstruction techniques.
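
As background for how polarization normals are obtained at all, the degree and angle of linear polarization can be estimated per pixel from a few polarizer-rotated exposures by fitting a sinusoid; the azimuthal ambiguity mentioned in the abstract comes from the factor of two in the fitted angle. A NumPy sketch, assuming three polarizer angles:

<syntaxhighlight lang="python">
import numpy as np

def linear_stokes(images, angles_deg=(0, 45, 90)):
    """Least-squares fit of I(a) = 0.5*(S0 + S1*cos 2a + S2*sin 2a) to
    images taken through a rotating polarizer. The angle of linear
    polarization constrains the surface-normal azimuth only up to the
    ambiguities the abstract mentions; resolving those is the paper's
    contribution, not shown here."""
    a = np.deg2rad(np.asarray(angles_deg, dtype=np.float64))
    A = 0.5 * np.stack([np.ones_like(a), np.cos(2 * a), np.sin(2 * a)], axis=1)
    I = np.stack([im.astype(np.float64).ravel() for im in images])
    S, *_ = np.linalg.lstsq(A, I, rcond=None)        # Stokes S0, S1, S2
    S0, S1, S2 = (s.reshape(images[0].shape) for s in S)
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-9)  # degree of pol.
    aolp = 0.5 * np.arctan2(S2, S1)                       # angle of pol.
    return dolp, aolp
</syntaxhighlight>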




<font color="blue">'''[35]'''</font> ''P. Sitthi-Amorn, J.E. Ramos, Y. Wang, et al.'' [https://dl.acm.org/citation.cfm?id=2766962 '''MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing.'''] Journal ACM Transactions on Graphics (TOG), Volume 34 Issue 4, Article No. 129, 2015. DOI: 10.1145/2766962. <ref>P. Sitthi-Amorn, J.E. Ramos, Y. Wang, et al. MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing. Journal ACM Transactions on Graphics (TOG), Volume 34 Issue 4, Article No. 129, 2015. DOI: 10.1145/2766962.</ref>


'''Abstract''' We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.
===2014===


<font color="blue">'''[36]'''</font> ''A. Crivellaro; V. Lepetit.'' [https://ieeexplore.ieee.org/abstract/document/6909832 '''Robust 3D Tracking with Descriptor Fields.'''] 2014. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14), 2014. doi:10.1109/CVPR.2014.436 <ref>A. Crivellaro; V. Lepetit. Robust 3D Tracking with Descriptor Fields. 2014. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14), 2014. doi:10.1109/CVPR.2014.436</ref>


'''Abstract''' We introduce a method that can register challenging images from specular and poorly textured 3D environments, on which previous approaches fail. We assume that a small set of reference images of the environment and a partial 3D model are available. Like previous approaches, we register the input images by aligning them with one of the reference images using the 3D information. However, these approaches typically rely on the pixel intensities for the alignment, which is prone to fail in presence of specularities or in absence of texture. Our main contribution is an efficient novel local descriptor that we use to describe each image location. We show that we can rely on this descriptor in place of the intensities to significantly improve the alignment robustness at a minor increase of the computational cost, and we analyze the reasons behind the success of our descriptor.
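
My reading of the descriptor is that image gradients are split into positive and negative parts per direction and each channel is smoothed, so alignment can run on smooth, specularity-tolerant channels instead of raw intensities. The sketch below follows that reading and may differ from the paper in detail:

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def descriptor_field(img, sigma=2.0):
    """Sketch of a gradient-based descriptor field: split the x/y image
    gradients into positive and negative parts and blur each channel.
    Alignment then minimizes differences between these smooth channels
    rather than raw intensities."""
    img = img.astype(np.float64)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    channels = [np.maximum(g, 0) for g in (gx, gy)] + \
               [np.maximum(-g, 0) for g in (gx, gy)]
    return np.stack([gaussian_filter(c, sigma) for c in channels])
</syntaxhighlight>
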
===2013===


<font color="blue">'''[37]'''</font> ''M. A. El-Sayed, Y. A. Estaitia, M. A. Khafagy.'' [https://thesai.org/Publications/ViewPaper?Volume=4&Issue=10&Code=IJACSA&SerialNo=3 '''Automated Edge Detection Using Convolutional Neural Network.'''] (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 4, No. 10, pp. 11-17, 2013. DOI:10.14569/IJACSA.2013.041003 <ref>M. A. El-Sayed, Y. A. Estaitia, M. A. Khafagy. Automated Edge Detection Using Convolutional Neural Network. (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 4, No. 10, pp. 11-17, 2013. DOI:10.14569/IJACSA.2013.041003</ref>


'''Abstract''' The edge detection on the images is so important for image processing. It is used in various fields of applications ranging from real-time video surveillance and traffic management to medical imaging applications. Currently, there is not a single edge detector that has both efficiency and reliability. Traditional differential filter-based algorithms have the advantage of theoretical strictness, but require excessive post-processing. Proposed CNN technique is used to realize edge detection task it takes the advantage of momentum features extraction, it can process any input image of any size with no more training required, the results are very promising when compared to both classical methods and other ANN based methods.




<font color="blue">'''[38]'''</font> ''A. Karpathy, S. Miller, L. Fei-Fei.'' [https://ieeexplore.ieee.org/document/6630857 '''Object discovery in 3D scenes via shape analysis.'''] 2013 IEEE International Conference on Robotics and Automation. DOI: 10.1109/ICRA.2013.6630857 <ref>A. Karpathy, S. Miller, L. Fei-Fei. Object discovery in 3D scenes via shape analysis. 2013 IEEE International Conference on Robotics and Automation. DOI: 10.1109/ICRA.2013.6630857</ref>


'''Abstract''' We present a method for discovering object models from 3D meshes of indoor environments. Our algorithm first decomposes the scene into a set of candidate mesh segments and then ranks each segment according to its “objectness” - a quality that distinguishes objects from clutter. To do so, we propose five intrinsic shape measures: compactness, symmetry, smoothness, and local and global convexity. We additionally propose a recurrence measure, codifying the intuition that frequently occurring geometries are more likely to correspond to complete objects. We evaluate our method in both supervised and unsupervised regimes on a dataset of 58 indoor scenes collected using an Open Source implementation of Kinect Fusion [1]. We show that our approach can reliably and efficiently distinguish objects from clutter, with Average Precision score of .92. We make our dataset available to the public.
<font color="blue">'''[39]'''</font> ''Nanni L., Brahnam S., Ghidoni S., Menegatti E., Barrier T.'' [https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0083554 '''Different Approaches for Extracting Information from the Co-Occurrence Matrix.'''] PLoS ONE 8(12): e83554, 2013. DOI:10.1371/journal.pone.0083554. <ref>Nanni L., Brahnam S., Ghidoni S., Menegatti E., Barrier T. Different Approaches for Extracting Information from the Co-Occurrence Matrix. PLoS ONE 8(12): e83554, 2013. DOI:10.1371/journal.pone.0083554.</ref>
'''Abstract''' In 1979 Haralick famously introduced a method for analyzing the texture of an image: a set of statistics extracted from the co-occurrence matrix. In this paper we investigate novel sets of texture descriptors extracted from the co-occurrence matrix; in addition, we compare and combine different strategies for extending these descriptors. The following approaches are compared: the standard approach proposed by Haralick, two methods that consider the co-occurrence matrix as a three-dimensional shape, a gray-level run-length set of features and the direct use of the co-occurrence matrix projected onto a lower dimensional subspace by principal component analysis. Texture descriptors are extracted from the co-occurrence matrix evaluated at multiple scales. Moreover, the descriptors are extracted not only from the entire co-occurrence matrix but also from subwindows. The resulting texture descriptors are used to train a support vector machine and ensembles. Results show that our novel extraction methods improve the performance of standard methods. We validate our approach across six medical datasets representing different image classification problems using the Wilcoxon signed rank test.


----
===2012===


<font color="blue">'''[40]'''</font> ''C. Choi, H. Christensen.'' [https://ieeexplore.ieee.org/document/6386065 '''3D Textureless Object Detection and Tracking: An Edge-based Approach.'''] IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065 <ref>C. Choi, H. Christensen. 3D Textureless Object Detection and Tracking: An Edge-based Approach. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065</ref>


'''Abstract''' This paper presents an approach to textureless object detection and tracking of the 3D pose. Our detection and tracking schemes are coherently integrated in a particle filtering framework on the special Euclidean group, SE(3), in which the visual tracking problem is tackled by maintaining multiple hypotheses of the object pose. For textureless object detection, an efficient chamfer matching is employed so that a set of coarse pose hypotheses is estimated from the matching between 2D edge templates of an object and a query image. Particles are then initialized from the coarse pose hypotheses by randomly drawing based on costs of the matching. To ensure the initialized particles are at or close to the global optimum, an annealing process is performed after the initialization. While a standard edge-based tracking is employed after the annealed initialization, we employ a refinement process to establish improved correspondences between projected edge points from the object model and edge points from an input image. Comparative results for several image sequences with clutter are shown to validate the effectiveness of our approach.
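
The chamfer-matching step that seeds the pose hypotheses is straightforward to sketch with OpenCV: the distance transform of the query's edge map scores any placement of template edge points by their mean distance to the nearest image edge. <code>template_edge_points</code>, an (N, 2) integer array for one pose hypothesis, is an assumption of this sketch.

<syntaxhighlight lang="python">
import cv2
import numpy as np

def chamfer_score(query_gray, template_edge_points):
    """Chamfer matching sketch: lower score = template edges lie closer
    to image edges. `query_gray` is assumed to be an 8-bit image."""
    edges = cv2.Canny(query_gray, 50, 150)
    # Zero pixels are the features, so invert: distance to nearest edge.
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)
    pts = np.asarray(template_edge_points).astype(int)
    xs = np.clip(pts[:, 0], 0, dist.shape[1] - 1)
    ys = np.clip(pts[:, 1], 0, dist.shape[0] - 1)
    return float(dist[ys, xs].mean())
</syntaxhighlight>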




<font color="blue">'''[41]'''</font> ''M.Oikawa, M.Fujisawa.'' [https://ieeexplore.ieee.org/document/6297536 '''Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects.'''] 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3 <ref>M.Oikawa, M.Fujisawa. Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects. 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3</ref>


'''Abstract''' This paper addresses the problem of tracking textureless rigid curved objects. A common approach uses polygonal meshes to represent curved objects and use them inside an edge-based tracking system. However, in order to accurately recover their shape, high quality meshes are required, creating a trade-off between computational efficiency and tracking accuracy. To solve this issue, we suggest the use of quadrics for each patch in the mesh to give local approximations of the object shape. The novelty of our research lies in using curves that represent the quadrics projection in the current viewpoint for distance evaluation instead of using the standard method which compares edges from mesh and detected edges in the video image. This representation allows to considerably reduce the level of detail of the polygonal mesh and led us to the development of a novel method for evaluating the distance between projected and detected features. The experiments results show the comparison between our approach and the traditional method using sparse and dense meshes. They are presented using both synthetic and real image data.


<font color="blue">'''[42]'''</font> ''L. Sevilla-Lara, E. Learned-Miller.'' [https://ieeexplore.ieee.org/document/6247891 '''Distribution fields for tracking.'''] Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1910-1917. DOI:10.1109/CVPR.2012.6247891. <ref>L. Sevilla-Lara, E. Learned-Miller. Distribution fields for tracking. Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1910-1917. DOI:10.1109/CVPR.2012.6247891.</ref>
'''Abstract''' Visual tracking of general objects often relies on the assumption that gradient descent of the alignment function will reach the global optimum. A common technique to smooth the objective function is to blur the image. However, blurring the image destroys image information, which can cause the target to be lost. To address this problem we introduce a method for building an image descriptor using distribution fields (DFs), a representation that allows smoothing the objective function without destroying information about pixel values. We present experimental evidence on the superiority of the width of the basin of attraction around the global optimum of DFs over other descriptors. DFs also allow the representation of uncertainty about the tracked object. This helps in disregarding outliers during tracking (like occlusions or small misalignments) without modeling them explicitly. Finally, this provides a convenient way to aggregate the observations of the object through time and maintain an updated model. We present a simple tracking algorithm that uses DFs and obtains state-of-the-art results on standard benchmarks.
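
Building a distribution field is mechanical enough to sketch: explode the image into per-intensity-bin indicator channels, then Gaussian-smooth in space and across the bin dimension. A NumPy/SciPy sketch for 8-bit input:

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def distribution_field(img, n_bins=16, sigma_space=3.0, sigma_feature=1.0):
    """Distribution field sketch: unlike plain blurring, smoothing the
    exploded representation widens the basin of attraction for alignment
    without mixing distinct intensity values together."""
    q = (img.astype(np.float64) / 256 * n_bins).astype(int)   # 8-bit input
    df = np.zeros((n_bins,) + img.shape)
    for b in range(n_bins):
        df[b] = (q == b)                     # indicator channel per bin
    df = np.stack([gaussian_filter(ch, sigma_space) for ch in df])
    df = gaussian_filter1d(df, sigma_feature, axis=0)          # across bins
    return df
</syntaxhighlight>
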
----


===2011===


<font color="blue">'''[43]'''</font> ''G.Q. Jin, W.D. Li, C.F.Tsai, L.Wang.'' [https://www.sciencedirect.com/science/article/pii/S0278612511000562?via%3Dihub '''Adaptive tool-path generation of rapid prototyping for complex product models.'''] Journal of Manufacturing Systems, Volume 30, Issue 3, 2011, pp. 154-164. DOI:10.1016/j.jmsy.2011.05.007. <ref>G.Q. Jin, W.D. Li, C.F.Tsai, L.Wang. Adaptive tool-path generation of rapid prototyping for complex product models. Journal of Manufacturing Systems, Volume 30, Issue 3, 2011, pp. 154-164. DOI:10.1016/j.jmsy.2011.05.007.</ref>


'''Abstract''' Rapid prototyping (RP) provides an effective method for model verification and product development collaboration. A challenging research issue in RP is how to shorten the build time and improve the surface accuracy especially for complex product models. In this paper, systematic adaptive algorithms and strategies have been developed to address the challenge. A slicing algorithm has been first developed for directly slicing a Computer-Aided Design (CAD) model as a number of RP layers. Closed Non-Uniform Rational B-Spline (NURBS) curves have been introduced to represent the contours of the layers to maintain the surface accuracy of the CAD model. Based on it, a mixed and adaptive tool-path generation algorithm, which is aimed to optimize both the surface quality and fabrication efficiency in RP, has been then developed. The algorithm can generate contour tool-paths for the boundary of each RP sliced layer to reduce the surface errors of the model, and zigzag tool-paths for the internal area of the layer to speed up fabrication. In addition, based on developed build time analysis mathematical models, adaptive strategies have been devised to generate variable speeds for contour tool-paths to address the geometric characteristics in each layer to reduce build time, and to identify the best slope degree of zigzag tool-paths to further minimize the build time. In the end, case studies of complex product models have been used to validate and showcase the performance of the developed algorithms in terms of processing effectiveness and surface accuracy.
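
The zigzag half of the mixed strategy can be sketched with Shapely by clipping parallel scan lines against a sliced layer's contour; contour passes, NURBS boundaries, and the adaptive speed and slope-angle selection are the paper's contributions and are omitted here.

<syntaxhighlight lang="python">
from shapely.geometry import LineString, Polygon

def zigzag_infill(layer: Polygon, spacing: float):
    """Clip a family of horizontal scan lines against one sliced layer's
    contour and alternate their direction, yielding the zigzag part of a
    mixed contour + zigzag tool-path strategy."""
    minx, miny, maxx, maxy = layer.bounds
    paths, flip = [], False
    y = miny + spacing / 2
    while y < maxy:
        scan = LineString([(minx - 1, y), (maxx + 1, y)])
        hit = layer.intersection(scan)           # LineString or Multi*
        pieces = list(hit.geoms) if hasattr(hit, "geoms") else [hit]
        for seg in pieces:
            if seg.is_empty or seg.length == 0:
                continue
            coords = list(seg.coords)
            paths.append(coords[::-1] if flip else coords)
        flip = not flip
        y += spacing
    return paths

# A 20 mm x 10 mm rectangular layer filled at 2 mm line spacing:
print(len(zigzag_infill(Polygon([(0, 0), (20, 0), (20, 10), (0, 10)]), 2.0)))
</syntaxhighlight>
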
<font color="blue">'''[44]'''</font> ''C. Choi, H.I. Christensen.'' [https://ieeexplore.ieee.org/abstract/document/5980245 '''Robust 3D visual tracking using particle filtering on the SE(3) group.'''] 2011 IEEE International Conference on Robotics and Automation. DOI: 10.1109/ICRA.2011.5980245. <ref>C. Choi, H.I. Christensen. Robust 3D visual tracking using particle filtering on the SE(3) group. 2011 IEEE International Conference on Robotics and Automation. DOI: 10.1109/ICRA.2011.5980245.</ref>
'''Abstract''' In this paper, we present a 3D model-based object tracking approach using edge and keypoint features in a particle filtering framework. Edge points provide 1D information for pose estimation and it is natural to consider multiple hypotheses. Recently, particle filtering based approaches have been proposed to integrate multiple hypotheses and have shown good performance, but most of the work has made an assumption that an initial pose is given. To remove this assumption, we employ keypoint features for initialization of the filter. Given 2D-3D keypoint correspondences, we choose a set of minimum correspondences to calculate a set of possible pose hypotheses. Based on the inlier ratio of correspondences, the set of poses are drawn to initialize particles. For better performance, we employ an autoregressive state dynamics and apply it to a coordinate-invariant particle filter on the SE(3) group. Based on the number of effective particles calculated during tracking, the proposed system re-initializes particles when the tracked object goes out of sight or is occluded. The robustness and accuracy of our approach is demonstrated via comparative experiments.


----
===2010-2000===


<font color="blue">'''[45]'''</font> ''W. Zeng, D. Samaras, D. Gu.'' [https://ieeexplore.ieee.org/abstract/document/5374410 '''Ricci Flow for 3D Shape Analysis.'''] IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 32, Issue 4, pp. 662-677, 2010. DOI: 10.1109/TPAMI.2009.201. <ref>W. Zeng, D. Samaras, D. Gu. Ricci Flow for 3D Shape Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 32, Issue 4, pp. 662-677, 2010. DOI: 10.1109/TPAMI.2009.201.</ref>


'''Abstract''' Ricci flow is a powerful curvature flow method, which is invariant to rigid motion, scaling, isometric, and conformal deformations. We present the first application of surface Ricci flow in computer vision. Previous methods based on conformal geometry, which only handle 3D shapes with simple topology, are subsumed by the Ricci flow-based method, which handles surfaces with arbitrary topology. We present a general framework for the computation of Ricci flow, which can design any Riemannian metric by user-defined curvature. The solution to Ricci flow is unique and robust to noise. We provide implementation details for Ricci flow on discrete surfaces of either Euclidean or hyperbolic background geometry. Our Ricci flow-based method can convert all 3D problems into 2D domains and offers a general framework for 3D shape analysis. We demonstrate the applicability of this intrinsic shape representation through standard shape analysis problems, such as 3D shape matching and registration, and shape indexing. Surfaces with large nonrigid anisotropic deformations can be registered using Ricci flow with constraints of feature points and curves. We show how conformal equivalence can be used to index shapes in a 3D surface shape space with the use of Teichmuller space coordinates. Experimental results are shown on 3D face data sets with large expression deformations and on dynamic heart data.
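
For orientation, the discrete form of surface Ricci flow that such implementations typically use (circle-packing metrics in the style of Chow and Luo; the paper's own derivation should be consulted for its exact setup) evolves a per-vertex conformal factor so that the discrete Gaussian curvature flows toward a prescribed target:

:<math>\frac{du_i}{dt} = \bar{K}_i - K_i, \qquad K_i = 2\pi - \sum_{jk} \theta_i^{jk} \quad (\text{interior } v_i),</math>

where <math>u_i</math> is the logarithm of the circle radius at vertex <math>v_i</math>, <math>K_i</math> is the angle deficit of the corner angles <math>\theta_i^{jk}</math> around <math>v_i</math>, and <math>\bar{K}_i</math> is the user-prescribed target curvature. The flow is the negative gradient of a convex energy, which is consistent with the abstract's claim that the solution is unique and robust to noise.
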




<font color="blue">'''[46]'''</font> ''J. Barandiaran, D. Borro.'' [https://ieeexplore.ieee.org/document/4414647 '''Edge-based markerless 3D tracking of rigid objects.'''] 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62 <ref>J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62</ref>


'''Abstract''' In this paper we present a real-time 3D object tracking algorithm based on edges and using a single pre-calibrated camera. During the tracking process, the algorithm is continuously projecting the 3D model to the current frame by using the pose estimated in the previous frame. Once projected, some control points are generated along the visible edges of the object. The next pose is estimated by minimizing the distances between the control points and the edges detected in the image.
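
The per-frame update is easy to sketch: for each control point sampled on a projected model edge, search along the edge normal for the strongest image gradient, then minimize the point-to-match distances (the minimization itself is omitted here). A single-hypothesis OpenCV/NumPy sketch:

<syntaxhighlight lang="python">
import cv2
import numpy as np

def gradient_magnitude(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    return np.hypot(gx, gy)

def match_control_point(mag, pt, normal, search_px=10):
    """From a projected model-edge sample point `pt` (x, y), walk along
    the edge normal and return the strongest gradient response within
    `search_px` pixels. Multi-hypothesis trackers keep several maxima
    as putative matches instead of only the best one."""
    n = np.asarray(normal, dtype=np.float64)
    n /= np.linalg.norm(n)
    best, best_val = np.asarray(pt, dtype=np.float64), -1.0
    for t in range(-search_px, search_px + 1):
        x, y = np.round(np.asarray(pt, dtype=np.float64) + t * n).astype(int)
        if 0 <= y < mag.shape[0] and 0 <= x < mag.shape[1] \
                and mag[y, x] > best_val:
            best, best_val = np.array([x, y], dtype=np.float64), mag[y, x]
    return best
</syntaxhighlight>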




<font color="blue">'''[47]'''</font> ''Gordon I., Lowe D.G.'' [https://link.springer.com/chapter/10.1007/11957959_4 '''What and Where: 3D Object Recognition with Accurate Pose.'''] In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4 <ref>Gordon I., Lowe D.G. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4</ref>


'''Abstract''' Many applications of 3D object recognition, such as augmented reality or robotic manipulation, require an accurate solution for the 3D pose of the recognized objects. This is best accomplished by building a metrically accurate 3D model of the object and all its feature locations, and then fitting this model to features detected in new images. In this chapter, we describe a system for constructing 3D metric models from multiple images taken with an uncalibrated handheld camera, recognizing these models in new images, and precisely solving for object pose. This is demonstrated in an augmented reality application where objects must be recognized, tracked, and superimposed on new images taken from arbitrary viewpoints without perceptible jitter. This approach not only provides for accurate pose, but also allows for integration of features from multiple training images into a single model that provides for more reliable recognition.
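
A modern minimal version of the recognize-then-pose pipeline can be assembled from OpenCV's SIFT matching plus RANSAC-robustified PnP. The sketch assumes <code>model_descs</code> and <code>model_points3d</code> come from an offline reconstruction; it is an illustration of the idea, not the authors' original system.

<syntaxhighlight lang="python">
import cv2
import numpy as np

def recognize_and_pose(query_gray, model_descs, model_points3d, K):
    """Match query SIFT features against a 3D model whose points carry
    descriptors (model_descs: N x 128, model_points3d: N x 3), then solve
    for the 6-DoF pose with RANSAC PnP. K is the 3x3 camera matrix."""
    sift = cv2.SIFT_create()
    kps, descs = sift.detectAndCompute(query_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descs, model_descs, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    img_pts = np.float32([kps[m.queryIdx].pt for m in good])
    obj_pts = np.float32([model_points3d[m.trainIdx] for m in good])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None
</syntaxhighlight>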




<font color="blue">'''[48]'''</font> ''M. Pressigout, E. Marchand.'' [https://ieeexplore.ieee.org/document/1642113 '''Real-time 3D model-based tracking: combining edge and texture information.'''] Proceedings 2006 IEEE International Conference on Robotics and Automation, pp. 2726-2731, 2006. ICRA 2006. DOI: 10.1109/ROBOT.2006.1642113. <ref>M. Pressigout, E. Marchand. Real-time 3D model-based tracking: combining edge and texture information. Proceedings 2006 IEEE International Conference on Robotics and Automation, pp. 2726-2731, 2006. ICRA 2006. DOI: 10.1109/ROBOT.2006.1642113.</ref>
 
'''Abstract''' This paper proposes a real-time, robust and efficient 3D model-based tracking algorithm. A nonlinear minimization approach is used to register 2D and 3D cues for monocular 3D tracking. The integration of texture information in a more classical nonlinear edge-based pose computation highly increases the reliability of more conventional edge-based 3D tracker. Robustness is enforced by integrating a M-estimator into the minimization process via an iteratively re-weighted least squares implementation. The method presented in this paper has been validated on several video sequences as well as in visual servoing experiments considering various objects. Results show the method to be robust to large motions and textured environments.
 
 
<font color="blue">'''[49]'''</font> ''A.I. Comport, E. Marchand, M. Pressigout, F. Chaumette.'' [https://ieeexplore.ieee.org/document/1634325 '''Real-time markerless tracking for augmented reality: the virtual visual servoing framework.'''] IEEE Transactions on Visualization and Computer Graphics, Volume 12, Issue 4, pp. 615-628, 2006. DOI: 10.1109/TVCG.2006.78. <ref>A.I. Comport, E. Marchand, M. Pressigout, F. Chaumette. Real-time markerless tracking for augmented reality: the virtual visual servoing framework. IEEE Transactions on Visualization and Computer Graphics, Volume 12, Issue 4, pp. 615-628, 2006. DOI: 10.1109/TVCG.2006.78.</ref>
 
'''Abstract''' Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices are given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
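
The robustness mechanism shared by this family of trackers, an M-estimator applied through iteratively reweighted least squares, is worth a generic sketch: Gauss-Newton steps in which each residual is reweighted per iteration (Huber weights here; the paper's estimator may differ), so occlusions and mismatches lose influence.

<syntaxhighlight lang="python">
import numpy as np

def irls_gauss_newton(residuals, jacobian, x0, iters=10, c=1.345):
    """Generic IRLS sketch: `residuals(x)` -> (m,), `jacobian(x)` -> (m, n).
    For pose tracking, x would parametrize the camera pose and the
    residuals would be point-to-curve distances."""
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(iters):
        r, J = residuals(x), jacobian(x)
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)            # Huber weights
        W = np.diag(w)
        dx = np.linalg.solve(J.T @ W @ J, -(J.T @ W @ r))
        x = x + dx
    return x
</syntaxhighlight>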
 
 
<font color="blue">'''[50]'''</font> ''L. Setia, A. Teynor, A. Halawani, H. Burkhardt.'' [https://dl.acm.org/citation.cfm?doid=1178677.1178703 '''Image classification using cluster cooccurrence matrices of local relational features.'''] Proceedings of the 8th ACM international workshop on Multimedia information retrieval, pp. 173-182, 2006. DOI: 10.1145/1178677.1178703. <ref>L. Setia, A. Teynor, A. Halawani, H. Burkhardt. Image classification using cluster cooccurrence matrices of local relational features. Proceedings of the 8th ACM international workshop on Multimedia information retrieval, pp. 173-182, 2006. DOI: 10.1145/1178677.1178703.</ref>
 
'''Abstract''' Image classification systems have received a recent boost from methods using local features generated over interest points, delivering higher robustness against partial occlusion and cluttered backgrounds. We propose in this paper to use relational features calculated over multiple directions and scales around these interest points. Furthermore, a very important design issue is the choice of similarity measure to compare the bags of local feature vectors generated by each image, for which we propose a novel approach by computing image similarity using cluster co-occurrence matrices of local features. Excellent results are achieved for a widely used medical image classification task, and ideas to generalize to other tasks are discussed.
 
 
<font color="blue">'''[51]'''</font> ''H. Wuest, F. Vial, and D. Strieker.'' [https://ieeexplore.ieee.org/abstract/document/1544665 '''Adaptive line tracking with multiple hypotheses for augmented reality.'''] Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8 <ref>Wuest, Harald, Florent Vial, and D. Strieker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8</ref>


'''Abstract''' We present a real-time model-based line tracking approach with adaptive learning of image edge features that can handle partial occlusion and illumination changes. A CAD (VRML) model of the object to track is needed. First, the visible edges of the model with respect to the camera pose estimate are sorted out by a visibility test performed on standard graphics hardware. For every sample point of every projected visible 3D model line a search for gradient maxima in the image is then carried out in a direction perpendicular to that line. Multiple hypotheses of these maxima are considered as putative matches. The camera pose is updated by minimizing the distances between the projection of all sample points of the visible 3D model lines and the most likely matches found in the image. The state of every edge's visual properties is updated after each successful camera pose estimation. We evaluated the algorithm and showed the improvements compared to other tracking approaches.
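The per-point search is easy to prototype. A simplified sketch of the multiple-hypothesis step is given below, assuming grayscale input, nearest-neighbor sampling, and a caller-supplied unit normal; the visibility test and pose update are omitted.

<syntaxhighlight lang="python">
import numpy as np

def edge_hypotheses(gray, point, normal, search=10, n_hyp=3):
    """Scan along the contour normal through `point` and return the
    strongest gradient maxima as candidate edge correspondences."""
    ts = np.arange(-search, search + 1)
    xs = np.clip(np.rint(point[0] + ts * normal[0]).astype(int), 0, gray.shape[1] - 1)
    ys = np.clip(np.rint(point[1] + ts * normal[1]).astype(int), 0, gray.shape[0] - 1)
    profile = gray[ys, xs].astype(float)            # 1-D intensity profile
    grad = np.abs(np.gradient(profile))             # edge strength along the scan
    best = np.argsort(grad)[::-1][:n_hyp]           # keep several hypotheses,
    return [(xs[k], ys[k], grad[k]) for k in best]  # not just the strongest one
</syntaxhighlight>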




<font color="blue">'''[52]'''</font> ''K. Grauman and T. Darrell.'' [https://ieeexplore.ieee.org/document/1544890 '''The pyramid match kernel: Discriminative classification with sets of image features.'''] Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2005. doi:10.1109/ICCV.2005.239 <ref>K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2005. doi:10.1109/ICCV.2005.239</ref>


'''Abstract''' Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences, generally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This "pyramid match" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches.
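A minimal sketch of the pyramid match over 2-D point sets follows; it keeps only the core idea (histogram intersections at doubling bin sizes, with new matches weighted by resolution) and omits details of the published kernel such as random grid shifts and normalization.

<syntaxhighlight lang="python">
from collections import Counter
import numpy as np

def _intersection(X, Y, cell):
    """Histogram intersection after quantizing both sets to a grid of side `cell`."""
    hx = Counter(tuple(v) for v in (X // cell).astype(int))
    hy = Counter(tuple(v) for v in (Y // cell).astype(int))
    return sum(min(c, hy[b]) for b, c in hx.items())

def pyramid_match(X, Y, levels=5):
    """Weighted sum of *new* matches found at each (coarsening) resolution."""
    score, prev = 0.0, 0.0
    for l in range(levels):
        inter = _intersection(X, Y, 2.0 ** l)  # bin side doubles per level
        score += (inter - prev) / (2.0 ** l)   # coarse matches weigh less
        prev = inter
    return score
</syntaxhighlight>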




<font color="blue">'''[53]'''</font> ''I. Gordon and D. Lowe.'' [https://www.cs.ubc.ca/~lowe/papers/gordon04.pdf '''Scene Modelling, Recognition and Tracking with Invariant Image Features.'''] Conference: Mixed and Augmented Reality, 2004. ISMAR 2004. <ref>I. Gordon and D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. Conference: Mixed and Augmented Reality, 2004. ISMAR 2004.</ref>


'''Abstract'''




<font color="blue">'''[54]'''</font> ''D. G. Lowe.'' [https://link.springer.com/article/10.1023/B:VISI.0000029664.99615.94 '''Distinctive image features from scale-invariant keypoints.'''] International Journal of Computer Vision (2004) vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94 <ref>D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004) vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94</ref>


'''Abstract''' This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
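Lowe's nearest-neighbor matching with the distance-ratio test maps directly onto modern libraries. A usage sketch with OpenCV, assuming a build that ships <code>cv2.SIFT_create</code>:

<syntaxhighlight lang="python">
import cv2

def match_sift(img1, img2, ratio=0.75):
    """Detect SIFT keypoints and keep matches passing Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]  # reject ambiguous matches
    return kp1, kp2, good
</syntaxhighlight>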




<font color="blue">'''[55]'''</font> ''K. Grauman, T. Darrell.'' [https://ieeexplore.ieee.org/document/1315035 '''Fast contour matching using approximate earth mover's distance.'''] Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. DOI: 10.1109/CVPR.2004.1315035. <ref>K. Grauman, T. Darrell. Fast contour matching using approximate earth mover's distance. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. DOI: 10.1109/CVPR.2004.1315035.</ref>


'''Abstract''' Weighted graph matching is a good way to align a pair of shapes represented by a set of descriptive local features; the set of correspondences produced by the minimum cost matching between two shapes' features often reveals how similar the shapes are. However due to the complexity of computing the exact minimum cost matching, previous algorithms could only run efficiently when using a limited number of features per shape, and could not scale to perform retrievals from large databases. We present a contour matching algorithm that quickly computes the minimum weight matching between sets of descriptive local features using a recently introduced low-distortion embedding of the earth mover's distance (EMD) into a normed space. Given a novel embedded contour, the nearest neighbors in a database of embedded contours are retrieved in sublinear time via approximate nearest neighbors search with locality-sensitive hashing (LSH). We demonstrate our shape matching method on a database of 136,500 images of human figures. Our method achieves a speedup of four orders of magnitude over the exact method, at the cost of only a 4% reduction in accuracy.


 
<font color="blue">'''[56]'''</font> ''R. Hartley, A. Zisserman.'' '''Multiple View Geometry in Computer Vision. Cambridge University Press,''' 2003. ISBN: 0521623049 <ref>R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.ISBN: 0521623049</ref>
 
 
<font color="blue">'''[57]'''</font> ''T. Adamek, N. O'Connor.'' [https://dl.acm.org/citation.cfm?doid=973264.973287 '''Efficient contour-based shape representation and matching.'''] Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, pp. 138-143, 2003. DOI:10.1145/973264.973287. <ref>T. Adamek, N. O'Connor. Efficient contour-based shape representation and matching. Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, pp. 138-143, 2003. DOI:10.1145/973264.973287.</ref>
 
'''Abstract''' This paper presents an efficient method for calculating the similarity between 2D closed shape contours. The proposed algorithm is invariant to translation, scale change and rotation. It can be used for database retrieval or for detecting regions with a particular shape in video sequences. The proposed algorithm is suitable for real-time applications. In the first stage of the algorithm, an ordered sequence of contour points approximating the shapes is extracted from the input binary images. The contours are translation and scale-size normalized, and small sets of the most likely starting points for both shapes are extracted. In the second stage, the starting points from both shapes are assigned into pairs and rotation alignment is performed. The dissimilarity measure is based on the geometrical distances between corresponding contour points. A fast sub-optimal method for solving the correspondence problem between contour points from two shapes is proposed. The dissimilarity measure is calculated for each pair of starting points. The lowest dissimilarity is taken as the final dissimilarity measure between two shapes. Three different experiments are carried out using the proposed approach: letter recognition using a web camera, our own simulation of Part B of the MPEG-7 core experiment "CE-Shape1" and detection of characters in cartoon video sequences. Results indicate that the proposed dissimilarity measure is aligned with human intuition.
 
 
<font color="blue">'''[58]'''</font> ''T. Drummond, R. Cipolla.'' [https://ieeexplore.ieee.org/abstract/document/1017620 '''Real-Time Visual Tracking of Complex Structures.'''] IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, i. 7, July 2002. doi:10.1109/TPAMI.2002.1017620 <ref>T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, i. 7, July 2002. doi:10.1109/TPAMI.2002.1017620</ref>


'''Abstract''' Presents a framework for three-dimensional model-based tracking. Graphical rendering technology is combined with constrained active contour tracking to create a robust wire-frame tracking system. It operates in real time at video frame rate (25 Hz) on standard hardware. It is based on an internal CAD model of the object to be tracked which is rendered using a binary space partition tree to perform hidden line removal. A Lie group formalism is used to cast the motion computation problem into simple geometric terms so that tracking becomes a simple optimization problem solved by means of iterative reweighted least squares. A visual servoing system constructed using this framework is presented together with results showing the accuracy of the tracker. The paper then describes how this tracking system has been extended to provide a general framework for tracking in complex configurations. The adjoint representation of the group is used to transform measurements into common coordinate frames. The constraints are then imposed by means of Lagrange multipliers. Results from a number of experiments performed using this framework are presented and discussed.
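The Lie group formalism is straightforward to reproduce: a motion update is expressed as coefficients on the six SE(3) generators and mapped to a rigid transform with the matrix exponential. A sketch using SciPy's <code>expm</code> (the coefficients would come from the reweighted least squares the paper describes); this mirrors the parameterization only, not the full tracker.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

def se3_generators():
    """Six 4x4 generators of SE(3): three translations, three rotations."""
    G = np.zeros((6, 4, 4))
    G[0, 0, 3] = G[1, 1, 3] = G[2, 2, 3] = 1.0  # translations along x, y, z
    G[3, 2, 1], G[3, 1, 2] = 1.0, -1.0          # rotation about x
    G[4, 0, 2], G[4, 2, 0] = 1.0, -1.0          # rotation about y
    G[5, 1, 0], G[5, 0, 1] = 1.0, -1.0          # rotation about z
    return G

def apply_motion(T, alpha):
    """Update pose T (4x4) by the twist with coefficients alpha (6,)."""
    xi = np.tensordot(alpha, se3_generators(), axes=1)  # sum_i alpha_i * G_i
    return expm(xi) @ T
</syntaxhighlight>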




<font color="blue">'''[59]'''</font> ''S. Belongie, J. Malik, J. Puzicha.'' [https://ieeexplore.ieee.org/document/993558 '''Shape matching and object recognition using shape contexts.'''] IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 24, Issue 4, pp. 509-522, 2002. DOI: 10.1109/34.993558. <ref>S. Belongie, J. Malik, J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 24, Issue 4, pp. 509-522, 2002. DOI: 10.1109/34.993558.</ref>
 
'''Abstract''' We present a novel approach to measuring the similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set.
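The descriptor itself is a log-polar histogram and fits in a few lines. A sketch for one reference point follows; the bin edges are illustrative choices, not the paper's exact binning.

<syntaxhighlight lang="python">
import numpy as np

def shape_context(points, i, n_r=5, n_theta=12):
    """Log-polar histogram of all other contour points relative to points[i]."""
    d = np.delete(points, i, axis=0) - points[i]
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    r = r / r.mean()                      # normalize radius for scale invariance
    edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r - 1)
    r_bin = np.clip(np.digitize(r, edges), 0, n_r - 1)
    t_bin = (theta / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1.0)  # accumulate point counts per bin
    return hist / hist.sum()
</syntaxhighlight>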
 
 
<font color="blue">'''[60]'''</font> ''M.A. Ruzon, C. Tomasi.'' [https://dl.acm.org/citation.cfm?id=505477 '''Edge, Junction, and Corner Detection Using Color Distributions.'''] IEEE Transactions on Pattern Analysis and Machine Intelligence archive, Volume 23 Issue 11, pp. 1281-1295, 2001. DOI:10.1109/34.969118. <ref>M.A. Ruzon, C. Tomasi. Edge, Junction, and Corner Detection Using Color Distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence archive, Volume 23 Issue 11, pp. 1281-1295, 2001. DOI:10.1109/34.969118.</ref>


'''Abstract''' For over 30 years researchers in computer vision have been proposing new methods for performing low-level vision tasks such as detecting edges and corners. One key element shared by most methods is that they represent local image neighborhoods as constant in color or intensity with deviations modeled as noise. Due to computational considerations that encourage the use of small neighborhoods where this assumption holds, these methods remain popular. This research models a neighborhood as a distribution of colors. Our goal is to show that the increase in accuracy of this representation translates into higher-quality results for low-level vision tasks on difficult, natural images, especially as neighborhood size increases. We emphasize large neighborhoods because small ones often do not contain enough information. We emphasize color because it subsumes gray scale as an image range and because it is the dominant form of human perception. We discuss distributions in the context of detecting edges, corners, and junctions, and we show results for each.
===2000 - ...===


<font color="blue">'''[61]'''</font> ''M. Isard and A. Blake.'' [https://link.springer.com/article/10.1023/A:1008078328650 '''CONDENSATION – Conditional Density Propagation for Visual Tracking.'''] International Journal of Computer Vision, August 1998, Volume 29, Issue 1, pp 5–28. doi:10.1023/A:1008078328650 <ref>M. Isard and A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. International Journal of Computer Vision, August 1998, Volume 29, Issue 1, pp 5–28. doi:10.1023/A:1008078328650</ref>


'''Abstract''' The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses “factored sampling”, previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.
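One select-predict-measure cycle can be sketched as below; <code>dynamics</code> and <code>likelihood</code> are placeholders for the learned dynamical model and the image observation density, which the paper defines in specific ways.

<syntaxhighlight lang="python">
import numpy as np

def condensation_step(particles, weights, dynamics, likelihood, rng):
    """One select-predict-measure cycle of the Condensation algorithm."""
    n = len(particles)
    survivors = particles[rng.choice(n, size=n, p=weights)]        # factored sampling
    predicted = dynamics(survivors) + rng.normal(0.0, 1.0, survivors.shape)  # drift + noise
    w = likelihood(predicted)                                      # weight by image evidence
    return predicted, w / w.sum()

# rng = np.random.default_rng(0); run condensation_step once per video frame.
</syntaxhighlight>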




<font color="blue">'''[62]'''</font> ''R. Storn, K. Price.'' [https://link.springer.com/article/10.1023/A:1008202821328 '''Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces.'''] Journal of Global Optimization, December 1997, Volume 11, Issue 4, pp 341–359. doi:10.1023/A:1008202821328 <ref>R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4, pp 341–359. doi:10.1023/A:1008202821328</ref>


'''Abstract''' A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive test bed, it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
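The canonical DE/rand/1/bin strategy from the paper fits in a short function. A sketch with fixed control parameters F and CR and greedy selection:

<syntaxhighlight lang="python">
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimize f over the box `bounds` with the DE/rand/1/bin strategy."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, (pop_size, lo.size))
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # differential mutation
            cross = rng.random(lo.size) < CR
            cross[rng.integers(lo.size)] = True        # inherit >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])    # binomial crossover
            if (tc := f(trial)) <= cost[i]:            # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

# Example: differential_evolution(lambda x: (x ** 2).sum(), [(-5, 5)] * 3)
</syntaxhighlight>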




<font color="blue">'''[63]'''</font> ''M. Armstrong, A. Zisserman.'' [http://www.robots.ox.ac.uk:5000/~vgg/publications/1995/Armstrong95/armstrong95.pdf '''Robust Object Tracking.'''] Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995. <ref>M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995.</ref>


'''Abstract''' We describe an object tracker robust to a number of ambient conditions which often severely degrade performance, for example partial occlusion. The robustness is achieved by




<font color="blue">'''[64]'''</font> ''C. Harris and C. Stennet.'' [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.364.2382&rep=rep1&type=pdf '''RAPID – A Video-Rate Object Tracker.'''] British Machine Vision Conference, 1990. <ref>C. Harris and C. Stennet. RAPID – A Video-Rate Object Tracker. British Machine Vision Conference, 1990.</ref>


'''Abstract''' RAPID (Real-time Attitude and Position Determination) is a real-time model-based tracking algorithm for a known three-dimensional object executing arbitrary motion and viewed




<font color="blue">'''[65]'''</font> ''J.S.J. Lee, R.M. Haralick, L.G. Shapiro.'' [https://www.sciencedirect.com/science/article/pii/S1474667017575047 '''Morphologic Edge Detection.'''] IFAC (International Federation of Automatic Control) Proceedings Volumes, Volume 19, Issue 9, pp. 7-14, 1986. DOI:10.1016/S1474-6670(17)57504-7. <ref>J.S.J. Lee, R.M. Haralick, L.G. Shapiro. Morphologic Edge Detection. IFAC (International Federation of Automatic Control) Proceedings Volumes, Volume 19, Issue 9, pp. 7-14, 1986. DOI:10.1016/S1474-6670(17)57504-7.</ref>


'''Abstract''' Edge operators based on grayscale morphologic operations are introduced. These operators can be efficiently implemented in near real time machine vision systems which have special hardware support for grayscale morphologic operations. The simplest morphologic edge detectors are the dilation residue and erosion residue operators. The underlying motivation for these is discussed. Finally, the blur minimum morphologic edge operator is defined. Its inherent noise sensitivity is less than the dilation or the erosion residue operators.
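In OpenCV terms the named operators are nearly one-liners. A sketch, with the blur-minimum operator written in a common formulation (pixel-wise minimum of the two residues of a smoothed image):

<syntaxhighlight lang="python">
import cv2
import numpy as np

def morphologic_edges(gray, ksize=3):
    """Dilation-residue, erosion-residue and blur-minimum edge maps."""
    k = np.ones((ksize, ksize), np.uint8)
    dil_res = cv2.subtract(cv2.dilate(gray, k), gray)  # bright-side edges
    ero_res = cv2.subtract(gray, cv2.erode(gray, k))   # dark-side edges
    blur = cv2.blur(gray, (ksize, ksize))              # pre-smoothing lowers
    blur_min = np.minimum(cv2.subtract(cv2.dilate(blur, k), blur),
                          cv2.subtract(blur, cv2.erode(blur, k)))  # noise sensitivity
    return dil_res, ero_res, blur_min
</syntaxhighlight>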




<font color="blue">'''[66]'''</font> ''J. Canny.'' [https://ieeexplore.ieee.org/document/4767851 '''A computational approach to edge detection.'''] IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: PAMI-8, Issue: 6, Nov. 1986). doi:10.1109/TPAMI.1986.4767851 <ref>J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: PAMI-8, Issue: 6, Nov. 1986). doi:10.1109/TPAMI.1986.4767851</ref>


'''Abstract''' This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle, we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally, we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
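The practical form of the detector (Gaussian smoothing, then marking maxima of gradient magnitude with double-threshold hysteresis) ships with OpenCV; a minimal usage sketch:

<syntaxhighlight lang="python">
import cv2

def canny_edges(gray, low=50, high=150):
    """Smooth, then mark maxima of the gradient magnitude with hysteresis."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)  # a typical smoothing sigma
    return cv2.Canny(blurred, low, high)           # double-threshold hysteresis
</syntaxhighlight>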




<font color="blue">'''[67]'''</font> ''Pierre Alfred Leon Ciraud.'' '''[https://patents.google.com/patent/DE2263777A1/en A method and apparatus for manufacturing objects made of any meltable material.]''' German patent application DE2263777A1. December 28, 1971. <ref>Pierre Alfred Leon Ciraud. A method and apparatus for manufacturing objects made of any meltable material. German patent application DE2263777A1. December 28, 1971.</ref>


The first patent in the field of additive manufacturing. Despite a long evolution of additive manufacturing, starting from the first patent in 1971 [..], the technology is still challenging researchers from the perspectives of material structure, mechanical properties, and computational efficiency.


----
? ''M. Lowney, A.S. Raj.'' '''Model Based Tracking for Augmented Reality on Mobile Devices.''' Dept. of Electrical Engineering, Stanford University.




==References==

Revision as of 03:39, 26 May 2019

Literature Review

Aliaksei Petsiuk (talk) 14:32, 20 May 2019 (PDT)

Target Journal Publisher Impact Factor Link
Pattern Recognition Elsevier 3.962 https://www.journals.elsevier.com/pattern-recognition
Computer-Aided Design Elsevier 2.947 https://www.journals.elsevier.com/computer-aided-design

MOST Papers

[1] Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002. [1]

Abstract Prosumer (producing consumer)-based desktop additive manufacturing has been enabled by the recent radical reduction in 3-D printer capital costs created by the open-source release of the self-replicating rapid prototype (RepRap). To continue this success, there have been some efforts to improve reliability, which are either too expensive or lacked automation. A promising method to improve reliability is to use computer vision, although the success rates are still too low for widespread use. To overcome these challenges an open source low-cost reliable real-time optimal monitoring platform for 3-D printing from double cameras is presented here. This error detection system is implemented with low-cost web cameras and covers 360 degrees around the printed object from three different perspectives. The algorithm is developed in Python and run on a Raspberry Pi3 mini-computer to reduce costs. For 3-D printing monitoring in three different perspectives, the systems are tested with four different 3-D object geometries for normal operation and failure modes. This system is tested with two different techniques in the image pre-processing step: SIFT and RANSAC rescale and rectification, and non-rescale and rectification. The error calculations were determined from the horizontal and vertical magnitude methods of 3-D reconstruction images. The non-rescale and rectification technique successfully detects the normal printing and failure state for all models with 100% accuracy, which is better than the single camera set up only. The computation time of the non-rescale and rectification technique is two times faster than the SIFT and RANSAC rescale and rectification technique.


[2] Nuchitprasitchai, S., Roggemann, M. & Pearce, J.M. Factors effecting real-time optical monitoring of fused filament 3D printing. Progress in Additive Manufacturing Journal (2017), Volume 2, Issue 3, pp 133–149. DOI:10.1007/s40964-017-0027-x. [2]

Abstract This study analyzes a low-cost reliable real-time optical monitoring platform for fused filament fabrication-based open source 3D printing. An algorithm for reconstructing 3D images from overlapping 2D intensity measurements with relaxed camera positioning requirements is compared with a single-camera solution for single-side 3D printing monitoring. The algorithms are tested for different 3D object geometries and filament colors. The results showed that both of the algorithms with a single- and double-camera system were effective at detecting a clogged nozzle, incomplete project, or loss of filament for a wide range of 3D object geometries and filament colors. The combined approach was the most effective and achieves 100% detection rate for failures. The combined method analyzed here has a better detection rate and a lower cost compared to previous methods. In addition, this method is generalizable to a wide range of 3D printer geometries, which enables further deployment of desktop 3D printing as wasted print time and filament are reduced, thereby improving the economic advantages of distributed manufacturing.


[3] Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M., Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software, 4(1), p.e2.,2016. DOI: http://doi.org/10.5334/jors.78 [3]

Abstract RepRap 3-D printers and their derivatives using conventional firmware are limited by: 1) requiring technical knowledge, 2) poor resilience with unreliable hardware, and 3) poor integration in complicated systems. In this paper, a new control system called Franklin, for CNC machines in general and 3-D printers specifically, is presented that enables web-based three-dimensional control of additive, subtractive and analytical tools from any Internet-connected device. Franklin can be set up and controlled entirely from a web interface; it uses a custom protocol which allows it to continue printing when the connection is temporarily lost, and allows communication with scripts.


[4] G. C. Anzalone, B. Wijnen, J. M. Pearce. Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer. Rapid Prototyping Journal, Vol. 21 Issue: 5, pp.506-519, 2015. DOI:10.1108/RPJ-09-2014-0113 [4]

Abstract Purpose - The purpose of this paper is to present novel modifications to a RepRap design that increase RepRap capabilities well beyond just fused filament fabrication. Open-source RepRap 3-D printers have made distributed manufacturing and prototyping an affordable reality. Design/methodology/approach - The design is a significantly modified derivative of the Rostock delta-style RepRap 3-D printer. Modifications were made that permit easy and rapid repurposing of the platform for milling, paste extrusion and several other applications. All of the designs are open-source and freely available. Findings - In addition to producing fused filament parts, the platform successfully produced milled printed circuit boards, milled plastic objects, objects made with paste extrudates, such as silicone, food stuffs and ceramics, pen plotted works and cut vinyl products. The multi-purpose tool saved 90-97 per cent of the capital costs of functionally equivalent dedicated tools. Research limitations/implications - While the platform was used primarily for production of hobby and consumer goods, research implications are significant, as the tool is so versatile and the fact that the designs are open-source and eminently available for modification for more purpose-specific applications. Practical implications - The platform vastly broadens capabilities of a RepRap machine at an extraordinarily low price, expanding the potential for distributed manufacturing and prototyping of items that heretofore required large financial investments. Originality/value - The unique combination of relatively simple modifications to an existing platform has produced a machine having capabilities far exceeding that of any single commercial product. The platform provides users the ability to work with a wide variety of materials and fabrication methods at a price of less than $1,000, provided users are willing to build the machine themselves.


[5] Delta 3D Printer (https://www.appropedia.org/Delta_Build_Overview:MOST)

The MOST Delta printer is a RepRap [..] derived from the Rostock printer [..].Print resolution in the x-y plane is a function of distance from apexes, so it changes with distance from the center of the build platform. Printer resolution in the z-direction is always equal to that of the carriages (100 steps/mm, where 10μm is the smallest step). This does not depend on the location. Because of the geometry, the error in Z is at most 5μm; i.e. there are no planes spaced 10μm apart with unreachable space in between, but instead, the nozzle can go only to a specific point in that 10μm range, depending on the location in X and Y. MOST Delta (12 tooth T5 belt) operates at 53.33 steps/mm for a z-precision of about 19 microns.

A delta-type FFF printer with 250 mm diameter and 240 mm high cylindrical working volume has been used in our experiments. It fuses 1.75 mm Polylactic Acid (PLA) plastic filament under a temperature of 210 °C from a nozzle with 0.4 mm diameter. The printer operates by RAMPS 1.4 print controller with integrated SD card reader. Its working area is under dual surveillance, where the main camera provides a rectified top view and the secondary camera captures a side view of the working zone.

A visual marker plate located on top of the printing bed allows us to determine the spatial position of the working area relatively to cameras. The plate has 9 cm2 printing area, seven contrast square markers (1.5 cm2 and 1 cm2) build a reference frame for the main camera, and four 1.5 cm2 markers allow us to determine the relative position of the side camera.


[6] Athena II 3D Printer (https://www.appropedia.org/AthenaII)



2019

[7] RepRap (https://reprap.org/wiki/RepRap)


[8] Rostock (delta robot 3D printer) (https://www.thingiverse.com/thing:17175)


[9] SONY IMX322 Datasheet (accessed on 16 May 2019). [5]

The main camera is based on 1/2.9 inch (6.23 mm in diagonal) Sony IMX322 CMOS Image Sensor [..]. This sensor consists of 2.24M square 2.8x2.8um pixels, 2000 pixels per horizontal line and 1121 pixels per vertical line. IMX322 has a Bayer RGBG color filter pattern (50% green, 25% red, and 25% blue) with 0.46÷0.61 Red-to-Green and 0.34÷0.49 Blue-to-Green sensitivity ratios. In an operating mode, the camera captures 1280x720 pixel frames at a frequency of 30 Hz.


[10] Marlin Open-Source RepRap Firmware (accessed on 16 May 2019). [6]

A developed computer vision program was synchronized with the printer driven by Marlin, an open-source firmware [..]. We developed a special “A-family” of G-Code commands by modifying the process_parsed_command() function in MarlinMain.cpp file. It allows us to process a special A0 code which sends a “$” symbol to the computer vision program after each printed layer. In the main computer program, in turn, we send a special G-Code injection after receiving each “$” sign. These G-Code commands pause the print and move the extruder outside for a short period of time, and triggers the shutters of the cameras which capture top and side view images of the complete layer without any visual obstacle. Moreover, after each layer, the analytical projective plane in the image shifts accordingly with the layer number, so the rectified image frame remains perpendicular to the optical axis of the camera.


[11] OpenCV (Open Source Computer Vision Library) (accessed on 20 May 2019). [7]

By utilizing a rich group of image processing techniques, it becomes possible to segment meaningful contour and texture regions with their exact three-dimensional spatial reference. At the end of the printing process, a layered set of images is available, which provides additional data for further analysis of any cross section of the printed object.


[12] Python PyQt. A Python binding of the cross-platform C++ framework for GUI applications development (accessed on 20 May 2019). [8]


[13] Python Numpy. A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays (accessed on 20 May 2019). [9]


x. Python Imaging Library (PIL)

x. Python Scikit-image (a collection of algorithms for image processing)

x. Python Matplotlib (a plotting library)


[14] ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks (accessed on 20 May 2019). [10]


[15] L. Liu, J. Chen, P. Fieguth, G. Zhao, R. Chellappa, M. Pietikäinen. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. International Journal of Computer Vision, Volume 127, Issue 1, pp 74–109, 2019. DOI: 10.1007/s11263-018-1125-z. [11]

Abstract Texture is a fundamental characteristic of many types of images, and texture representation is one of the essential and challenging problems in computer vision and pattern recognition which has attracted extensive research attention over several decades. Since 2000, texture representations based on Bag of Words and on Convolutional Neural Networks have been extensively studied with impressive performance. Given this period of remarkable evolution, this paper aims to present a comprehensive survey of advances in texture representation over the last two decades. More than 250 major publications are cited in this survey covering different aspects of the research, including benchmark datasets and state of the art results. In retrospect of what has been achieved so far, the survey discusses open challenges and directions for future research.


[16] M. Tang, D. Marin, I. B. Ayed, Y. Boykov. Kernel Cuts: Kernel and Spectral Clustering Meet Regularization. International Journal of Computer Vision, Volume 127, Issue 5, pp. 477–511, 2019. DOI: 10.1007/s11263-018-1115-1. [12]

Abstract This work bridges the gap between two popular methodologies for data partitioning: kernel clustering and regularization-based segmentation. While addressing closely related practical problems, these general methodologies may seem very different based on how they are covered in the literature. The differences may show up in motivation, formulation, and optimization, e.g. spectral relaxation versus max-flow. We explain how regularization and kernel clustering can work together and why this is useful. Our joint energy combines standard regularization, e.g. MRF potentials, and kernel clustering criteria like normalized cut. Complementarity of such terms is demonstrated in many applications using our bound optimization Kernel Cut algorithm for the joint energy (code is publicly available). While detailing combinatorial move-making, our main focus are new linear kernel and spectral bounds for kernel clustering criteria allowing their integration with any regularization objectives with existing discrete or continuous solvers.


[17] I.A. Okaroa, S. Jayasinghe, C. Sutcliffe, K. Black, P. Paoletti, P.L. Green. Automatic fault detection for laser powder-bed fusion using semi-supervised machine learning. Additive Manufacturing Journal, Volume 27, pp. 42-53, 2019. DOI: 10.1016/j.addma.2019.01.006. [13]

Abstract Risk-averse areas such as the medical, aerospace and energy sectors have been somewhat slow towards accepting and applying Additive Manufacturing (AM) in many of their value chains. This is partly because there are still significant uncertainties concerning the quality of AM builds. This paper introduces a machine learning algorithm for the automatic detection of faults in AM products. The approach is semi-supervised in that, during training, it is able to use data from both builds where the resulting components were certified and builds where the quality of the resulting components is unknown. This makes the approach cost-efficient, particularly in scenarios where part certification is costly and time-consuming. The study specifically analyses Laser Powder-Bed Fusion (L-PBF) builds. Key features are extracted from large sets of photodiode data, obtained during the building of 49 tensile test bars. Ultimate tensile strength (UTS) tests were then used to categorize each bar as ‘faulty’ or ‘acceptable’. Using a variety of approaches (Receiver Operating Characteristic (ROC) curves and 2-fold cross-validation), it is shown that, despite utilizing a fraction of the available certification data, the semi-supervised approach can achieve results comparable to a benchmark case where all data points are labeled. The results show that semi-supervised learning is a promising approach for the automatic certification of AM builds that can be implemented at a fraction of the cost currently required.



2018

[18] Wohlers Report. Annual worldwide progress report in 3D Printing, 2018. [14]

According to Wohlers Report [..] and EY’s Global 3D printing Report [..], additive manufacturing is one of the most disruptive technologies of our time, which could be applied in automotive and aerospace fields, medical equipment development, and education. This technology can increase productivity, simplify fabrication processes, and minimize limitations of geometric shapes.


[19] U. Delli, S. Chang. Automated processes monitoring in 3D printing using supervised machine learning. Procedia Manufacturing 26 (2018) 865–870, doi.org/10.1016/j.promfg.2018.07.111 [15]

Abstract Quality monitoring is still a big challenge in additive manufacturing, popularly known as 3D printing. Detection of defects during the printing process will help eliminate the waste of material and time. Defect detection during the initial stages of printing may generate an alert to either pause or stop the printing process so that corrective measures can be taken to prevent the need to reprint the parts. This paper proposes a method to automatically assess the quality of 3D printed parts with the integration of a camera, image processing, and supervised machine learning. Images of semi-finished parts are taken at several critical stages of the printing process according to the part geometry. A machine learning method, support vector machine (SVM), is proposed to classify the parts into either ‘good’ or ‘defective’ category. Parts using ABS and PLA materials were printed to demonstrate the proposed framework. A numerical example is provided to demonstrate how the proposed method works.


[20] L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing 19 (2018) 114–126. doi.org/10.1016/j.addma.2017.11.009 [16]

Abstract Despite the rapid adoption of laser powder bed fusion (LPBF) Additive Manufacturing by industry, current processes remain largely open-loop, with limited real-time monitoring capabilities. While some machines offer powder bed visualization during builds, they lack automated analysis capability. This work presents an approach for in-situ monitoring and analysis of powder bed images with the potential to become a component of a real-time control system in an LPBF machine. Specifically, a computer vision algorithm is used to automatically detect and classify anomalies that occur during the powder spreading stage of the process. Anomaly detection and classification are implemented using an unsupervised machine learning algorithm, operating on a moderately-sized training database of image patches. The performance of the final algorithm is evaluated, and its usefulness as a standalone software package is demonstrated with several case studies.


[21] L. Zhong, L. Zhang. A Robust Monocular 3D Object Tracking Method Combining Statistical and Photometric Constraints. International Journal of Computer Vision, 2018. DOI: 10.1007/s11263-018-1119-x. [17]

Abstract Both region-based methods and direct methods have become popular in recent years for tracking the 6-dof pose of an object from monocular video sequences. Region-based methods estimate the pose of the object by maximizing the discrimination between statistical foreground and background appearance models, while direct methods aim to minimize the photometric error through direct image alignment. In practice, region-based methods only care about the pixels within a narrow band of the object contour due to the level-set-based probabilistic formulation, leaving the foreground pixels beyond the evaluation band unused. On the other hand, direct methods only utilize the raw pixel information of the object, but ignore the statistical properties of foreground and background regions. In this paper, we find it beneficial to combine these two kinds of methods together. We construct a new probabilistic formulation for 3D object tracking by combining statistical constraints from region-based methods and photometric constraints from direct methods. In this way, we take advantage of both statistical property and raw pixel values of the image in a complementary manner. Moreover, in order to achieve better performance when tracking heterogeneous objects in complex scenes, we propose to increase the distinctiveness of foreground and background statistical models by partitioning the global foreground and background regions into a small number of sub-regions around the object contour. We demonstrate the effectiveness of the proposed novel strategies on a newly constructed real-world dataset containing different types of objects with ground-truth poses. Further experiments on several challenging public datasets also show that our method obtains competitive or even superior tracking results compared to previous works. In comparison with the recent state-of-art region-based method, the proposed hybrid method is proved to be more stable under silhouette pose ambiguities with a slightly lower tracking accuracy.


[22] K. Garanger, T. Khamvilai, E. Feron. 3D Printing of a Leaf Spring: A Demonstration of Closed-Loop Control in Additive Manufacturing. 2018 IEEE Conference on Control Technology and Applications (CCTA), pp. 465-470. DOI: 10.1109/CCTA.2018.8511509. [18]

Abstract 3D printing is rapidly becoming a commodity. However, the quality of the printed parts is not always even nor predictable. Feedback control is demonstrated during the printing of a plastic object using additive manufacturing as a means to improve macroscopic mechanical properties of the object. The printed object is a leaf spring made of several parts of different infill density values, which are the control variables in this problem. In order to achieve a desired objective stiffness, measurements are taken after each part is completed and the infill density is adjusted accordingly in a closed-loop framework. With feedback control, the absolute error of the measured part stiffness is reduced from 11.63% to 1.34% relative to the specified stiffness. This experiment is therefore a proof of concept to show the relevance of using feedback control in additive manufacturing. By considering the printing process and the measurements as stochastic processes, we show how stochastic optimal control and Kalman filtering can be used to improve the quality of objects manufactured with rudimentary printers.


[23] B. Yuan, G.M. Guss, A.C. Wilson et al. Machine‐Learning‐Based Monitoring of Laser Powder Bed Fusion. Advanced Materials Technologies, 2018. DOI:10.1002/admt.201800136. [19]

Abstract A two‐step machine learning approach to monitoring laser powder bed fusion (LPBF) additive manufacturing is demonstrated that enables on‐the‐fly assessments of laser track welds. First, in situ video melt pool data acquired during LPBF is labeled according to the (1) average and (2) standard deviation of individual track width and also (3) whether or not the track is continuous, measured postbuild through an ex situ height map analysis algorithm. This procedure generates three ground truth labeled datasets for supervised machine learning. Using a portion of the labeled 10 ms video clips, a single convolutional neural network architecture is trained to generate three distinct networks. With the remaining in situ LPBF data, the trained neural networks are tested and evaluated and found to predict track width, standard deviation, and continuity without the need for ex situ measurements. This two‐step approach should benefit any LPBF system – or any additive manufacturing technology – where height‐map‐derived properties can serve as useful labels for in situ sensor data.


2017

[24] B. Wang, F. Zhong, X. Qin. Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking. CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. doi:10.1145/3095140.3095172 [20]

Abstract This paper presents a monocular model-based 3D tracking approach for textureless objects. Instead of explicitly searching for 3D-2D correspondences as previous methods, which unavoidably generates individual outlier matches, we aim to minimize the holistic distance between the predicted object contour and the query image edges. We propose a method that can directly solve 3D pose parameters in unsegmented edge distance field. We derive the differentials of edge matching distance with respect to the pose parameters, and search the optimal 3D pose parameters using standard gradient-based non-linear optimization techniques. To avoid being trapped in local minima and to deal with potential large inter-frame motions, a particle filtering process with a first order autoregressive state dynamics is exploited. Occlusions are handled by a robust estimator. The effectiveness of our approach is demonstrated using comparative experiments on real image sequences with occlusions, large motions and cluttered backgrounds.
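
The edge distance field this formulation optimizes in can be built readily with OpenCV: detect edges, then take a distance transform of the non-edge pixels. The sketch below is a plausible construction under assumed Canny thresholds and file name, not the authors' implementation.

<pre>
# Building an edge distance field: distance from each pixel to the nearest image edge.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input frame
edges = cv2.Canny(img, 50, 150)                           # arbitrary thresholds
# Distance to the nearest zero pixel; edge pixels become zero via the inversion.
dist_field = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)
# Summing dist_field at the projected model-contour pixels gives the holistic
# matching cost whose derivatives w.r.t. the pose parameters the paper derives.
</pre>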


[25] K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. arXiv preprint, May 2017. [21]

Abstract During the last decade, additive manufacturing has become increasingly popular for rapid prototyping, but has remained relatively marginal beyond the scope of prototyping when it comes to applications with tight tolerance specifications, such as in aerospace. Despite a strong desire to supplant many aerospace structures with printed builds, additive manufacturing has largely remained limited to prototyping, tooling, fixtures, and non-critical components. There are numerous fundamental challenges inherent to additive processing to be addressed before this promise is realized. One ubiquitous challenge across all AM motifs is to develop processing-property relationships through precise, in situ monitoring coupled with formal methods and feedback control. We suggest a significant component of this vision is a set of semantic layers within 3D printing files relevant to the desired material specifications. This semantic layer provides the feedback laws of the control system, which then evaluates the component during processing and intelligently evolves the build parameters within boundaries defined by semantic specifications. This evaluation and correction loop requires on-the-fly coupling of finite element analysis and topology optimization. The required parameters for this analysis are all extracted from the semantic layer and can be modified in situ to satisfy the global specifications. Therefore, the representation of what is printed changes during the printing process to compensate for eventual imprecision or drift arising during the manufacturing process.


[26] R.H.M. Condori, L.M. Romualdo, O.M. Bruno, P.H.C. Luz. Comparison Between Traditional Texture Methods and Deep Learning Descriptors for Detection of Nitrogen Deficiency in Maize Crops. 2017 Workshop of Computer Vision (WVC). DOI: 10.1109/WVC.2017.00009. [22]

Abstract Every year, efficient maize production is very important to the economy of many countries. Since nutritional deficiencies in maize plants are directly reflected in their grain productivity, early detection is needed to maximize the chances of proper recovery of these plants. Traditional texture methods recently showed interesting results in the identification of nutritional deficiencies. On the other hand, deep learning techniques are increasingly outperforming hand-crafted features on many tasks. In this paper, we propose a simple transfer learning approach from pre-trained CNN models and compare their results with those from traditional texture methods in the task of nitrogen deficiency identification. We perform experiments on a real-world dataset that contains digitalized images of maize leaves at different growth stages and with different levels of nitrogen fertilization. The results show that deep learning based descriptors achieve better success rates than traditional texture methods.


[27] F.-C. Ghesu, B. Georgescu, Y. Zheng et al. Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 41, Issue 1, pp. 176 - 189, 2017. DOI: 10.1109/TPAMI.2017.2782687. [23]

Abstract Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases in order to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature engineering techniques and most importantly the use of computationally suboptimal search-schemes for anatomy detection. To address these issues, we propose a method that follows a new paradigm by reformulating the detection problem as a behavior learning task for an artificial agent. We couple the modeling of the anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis. In other words, an artificial agent is trained not only to distinguish the target anatomical object from the rest of the body but also how to find the object by learning and following an optimal navigation path to the target object in the imaged volumetric space. We evaluated our approach on 1487 3D-CT volumes from 532 patients, totaling over 500,000 image slices and show that it significantly outperforms state-of-the-art solutions on detecting several anatomical structures with no failed cases from a clinical acceptance perspective, while also achieving a 20-30 percent higher detection accuracy. Most importantly, we improve the detection-speed of the reference methods by 2-3 orders of magnitude, achieving unmatched real-time performance on large 3D-CT scans.


[28] R.J. Jevnisek, S. Avidan. Co-Occurrence Filter. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3184-3192, 2017. DOI: 10.1109/CVPR.2017.406. [24]

Abstract Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.


2016

[29] F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY’s Global 3D printing Report. 2016. [25]

[30] J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In Proceedings of the International Conference on Computer Vision and Graphics, ICCVG (2016), 17-28. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972. Springer, Cham. doi:10.1007/978-3-319-46418-3_2 [26]

Abstract In the paper the method of “blind” quality assessment of 3D prints based on texture analysis using the GLCM and chosen Haralick features is discussed. As the proposed approach has been verified using the images obtained by scanning the 3D printed plates, some dependencies related to the transparency of filaments may be noticed. Furthermore, considering the influence of lighting conditions, some other experiments have been made using the images acquired by a camera mounted on a 3D printer. Due to the influence of lighting conditions on the obtained images in comparison to the results of scanning, some modifications of the method have also been proposed leading to promising results allowing further extensions of our approach to no-reference quality assessment of 3D prints. Achieved results confirm the usefulness of the proposed approach for live monitoring of the progress of 3D printing process and the quality of 3D prints.
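
To reproduce the flavor of the GLCM-plus-Haralick-features pipeline, the sketch below computes a co-occurrence matrix and a few of its properties with scikit-image (the functions are spelled greycomatrix/greycoprops in releases before 0.19); the distances, angles, and input file are arbitrary choices, and the paper's no-reference quality score is not reproduced here.

<pre>
# GLCM texture features in the spirit of Haralick, via scikit-image.
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray_u8):
    """A few co-occurrence statistics, averaged over distances and angles."""
    glcm = graycomatrix(gray_u8, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

# Hypothetical camera image of a printed surface (RGB, hence the conversion).
img = img_as_ubyte(color.rgb2gray(io.imread("print_surface.jpg")))
print(texture_features(img))
</pre>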


[31] C. Caetano, J.A. dos Santos, W.R. Schwartz. Optical Flow Co-occurrence Matrices: A novel spatiotemporal feature descriptor. 2016 23rd International Conference on Pattern Recognition (ICPR). DOI: 10.1109/ICPR.2016.7899921. [27]

Abstract Suitable feature representation is essential for performing video analysis and understanding in applications within the smart surveillance domain. In this paper, we propose a novel spatiotemporal feature descriptor based on co-occurrence matrices computed from the optical flow magnitude and orientation. Our method, called Optical Flow Co-occurrence Matrices (OFCM), extracts a robust set of measures known as Haralick features to describe the flow patterns by measuring meaningful properties such as contrast, entropy and homogeneity of co-occurrence matrices to capture local space-time characteristics of the motion through the neighboring optical flow magnitude and orientation. We evaluate the proposed method on the action recognition problem by applying a visual recognition pipeline involving bag of local spatiotemporal features and SVM classification. The experimental results, carried on three well-known datasets (KTH, UCF Sports and HMDB51), demonstrate that OFCM outperforms the results achieved by several widely employed spatiotemporal feature descriptors such as HOF, HOG3D and MBH, indicating its suitability to be used as video representation.
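
A minimal rendition of the underlying idea, co-occurrence statistics over quantized flow magnitude and orientation, might look as follows. The Farnebäck parameters, bin counts, and single right-neighbor offset are arbitrary assumptions; the full OFCM descriptor additionally extracts Haralick measures from these matrices over space-time blocks.

<pre>
# Co-occurrence matrices over quantized dense optical flow (OFCM-inspired sketch).
import cv2
import numpy as np

def flow_cooccurrence(prev_gray, next_gray, mag_bins=8, ang_bins=8):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Quantize magnitude and orientation into small integer label images.
    q_mag = np.minimum((mag / (mag.max() + 1e-9) * mag_bins).astype(int), mag_bins - 1)
    q_ang = (ang / (2 * np.pi) * ang_bins).astype(int) % ang_bins

    def cooc(labels, n):
        # Count label pairs for each pixel and its right-hand neighbor.
        a, b = labels[:, :-1].ravel(), labels[:, 1:].ravel()
        m = np.zeros((n, n))
        np.add.at(m, (a, b), 1)
        return m / m.sum()

    return cooc(q_mag, mag_bins), cooc(q_ang, ang_bins)
</pre>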


2015

[32] S. Xie, Z. Tu. Holistically-Nested Edge Detection. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.164. [28]

Abstract We develop a new edge detection algorithm that addresses two critical issues in this long-standing vision problem: (1) holistic image training, and (2) multi-scale feature learning. Our proposed method, holistically-nested edge detection (HED), turns pixel-wise edge classification into image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets. HED automatically learns rich hierarchical representations (guided by deep supervision on side responses) that are crucially important in order to approach the human ability to resolve the challenging ambiguity in edge and object boundary detection. We significantly advance the state-of-the-art on the BSD500 dataset (ODS F-score of 0.782) and the NYU Depth dataset (ODS F-score of 0.746), and do so with an improved speed (0.4 second per image) that is orders of magnitude faster than recent CNN-based edge detection algorithms.


[33] O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI 2015: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 pp 234-241. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham. doi:10.1007/978-3-319-24574-4_28 [29]

Abstract There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.


[34] A. Kadambi, V. Taamazyan, B. Shi, R. Raskar. Polarized 3D: High-Quality Depth Sensing with Polarization Cues. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.385. [30]

Abstract Coarse depth maps can be enhanced by using the shape information from polarization cues. We propose a framework to combine surface normals from polarization (hereafter polarization normals) with an aligned depth map. Polarization normals have not been used for depth enhancement before. This is because polarization normals suffer from physics-based artifacts, such as azimuthal ambiguity, refractive distortion and fronto-parallel signal degradation. We propose a framework to overcome these key challenges, allowing the benefits of polarization to be used to enhance depth maps. Our results demonstrate improvement with respect to state-of-the-art 3D reconstruction techniques.


[35] P. Sitthi-Amorn, J.E. Ramos, Y. Wang, et al. MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing. Journal ACM Transactions on Graphics (TOG), Volume 34 Issue 4, Article No. 129, 2015. DOI: 10.1145/2766962. [31]

Abstract We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.


2014

[36] A. Crivellaro; V. Lepetit. Robust 3D Tracking with Descriptor Fields. 2014. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14), 2014. doi:10.1109/CVPR.2014.436 [32]

Abstract We introduce a method that can register challenging images from specular and poorly textured 3D environments, on which previous approaches fail. We assume that a small set of reference images of the environment and a partial 3D model are available. Like previous approaches, we register the input images by aligning them with one of the reference images using the 3D information. However, these approaches typically rely on the pixel intensities for the alignment, which is prone to fail in presence of specularities or in absence of texture. Our main contribution is an efficient novel local descriptor that we use to describe each image location. We show that we can rely on this descriptor in place of the intensities to significantly improve the alignment robustness at a minor increase of the computational cost, and we analyze the reasons behind the success of our descriptor.


2013

[37] M. A. El-Sayed, Y. A. Estaitia, M. A. Khafagy. Automated Edge Detection Using Convolutional Neural Network. (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 4, No. 10, pp. 11-17, 2013. DOI:10.14569/IJACSA.2013.041003 [33]

Abstract Edge detection in images is important for image processing, which is used in various fields of application ranging from real-time video surveillance and traffic management to medical imaging. Currently, there is no single edge detector that offers both efficiency and reliability. Traditional differential filter-based algorithms have the advantage of theoretical strictness, but require excessive post-processing. The proposed CNN technique is used to realize the edge detection task; it takes advantage of momentum feature extraction and can process an input image of any size with no additional training required. The results are very promising when compared to both classical methods and other ANN-based methods.


[38] A. Karpathy, S. Miller, L. Fei-Fei. Object discovery in 3D scenes via shape analysis. 2013 IEEE International Conference on Robotics and Automation. DOI: 10.1109/ICRA.2013.6630857 [34]

Abstract We present a method for discovering object models from 3D meshes of indoor environments. Our algorithm first decomposes the scene into a set of candidate mesh segments and then ranks each segment according to its “objectness” - a quality that distinguishes objects from clutter. To do so, we propose five intrinsic shape measures: compactness, symmetry, smoothness, and local and global convexity. We additionally propose a recurrence measure, codifying the intuition that frequently occurring geometries are more likely to correspond to complete objects. We evaluate our method in both supervised and unsupervised regimes on a dataset of 58 indoor scenes collected using an Open Source implementation of Kinect Fusion [1]. We show that our approach can reliably and efficiently distinguish objects from clutter, with Average Precision score of .92. We make our dataset available to the public.


[39] Nanni L., Brahnam S., Ghidoni S., Menegatti E., Barrier T. Different Approaches for Extracting Information from the Co-Occurrence Matrix. PLoS ONE 8(12): e83554, 2013. DOI:10.1371/journal.pone.0083554. [35]

Abstract In 1979 Haralick famously introduced a method for analyzing the texture of an image: a set of statistics extracted from the co-occurrence matrix. In this paper we investigate novel sets of texture descriptors extracted from the co-occurrence matrix; in addition, we compare and combine different strategies for extending these descriptors. The following approaches are compared: the standard approach proposed by Haralick, two methods that consider the co-occurrence matrix as a three-dimensional shape, a gray-level run-length set of features and the direct use of the co-occurrence matrix projected onto a lower dimensional subspace by principal component analysis. Texture descriptors are extracted from the co-occurrence matrix evaluated at multiple scales. Moreover, the descriptors are extracted not only from the entire co-occurrence matrix but also from subwindows. The resulting texture descriptors are used to train a support vector machine and ensembles. Results show that our novel extraction methods improve the performance of standard methods. We validate our approach across six medical datasets representing different image classification problems using the Wilcoxon signed rank test.



2012

[40] C. Choi, H. Christensen. 3D Textureless Object Detection and Tracking: An Edge-based Approach. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065 [36]

Abstract This paper presents an approach to textureless object detection and tracking of the 3D pose. Our detection and tracking schemes are coherently integrated in a particle filtering framework on the special Euclidean group, SE(3), in which the visual tracking problem is tackled by maintaining multiple hypotheses of the object pose. For textureless object detection, an efficient chamfer matching is employed so that a set of coarse pose hypotheses is estimated from the matching between 2D edge templates of an object and a query image. Particles are then initialized from the coarse pose hypotheses by randomly drawing based on costs of the matching. To ensure the initialized particles are at or close to the global optimum, an annealing process is performed after the initialization. While a standard edge-based tracking is employed after the annealed initialization, we employ a refinement process to establish improved correspondences between projected edge points from the object model and edge points from an input image. Comparative results for several image sequences with clutter are shown to validate the effectiveness of our approach.


[41] M. Oikawa, M. Fujisawa. Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects. 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3 [37]

Abstract This paper addresses the problem of tracking textureless rigid curved objects. A common approach uses polygonal meshes to represent curved objects and uses them inside an edge-based tracking system. However, in order to accurately recover their shape, high quality meshes are required, creating a trade-off between computational efficiency and tracking accuracy. To solve this issue, we suggest the use of quadrics for each patch in the mesh to give local approximations of the object shape. The novelty of our research lies in using curves that represent the quadrics' projection in the current viewpoint for distance evaluation, instead of the standard method which compares edges from the mesh with detected edges in the video image. This representation makes it possible to considerably reduce the level of detail of the polygonal mesh and led us to the development of a novel method for evaluating the distance between projected and detected features. The experimental results compare our approach with the traditional method using sparse and dense meshes, and are presented using both synthetic and real image data.


[42] L. Sevilla-Lara, E. Learned-Miller. Distribution fields for tracking. Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1910-1917. DOI:10.1109/CVPR.2012.6247891. [38]

Abstract Visual tracking of general objects often relies on the assumption that gradient descent of the alignment function will reach the global optimum. A common technique to smooth the objective function is to blur the image. However, blurring the image destroys image information, which can cause the target to be lost. To address this problem we introduce a method for building an image descriptor using distribution fields (DFs), a representation that allows smoothing the objective function without destroying information about pixel values. We present experimental evidence on the superiority of the width of the basin of attraction around the global optimum of DFs over other descriptors. DFs also allow the representation of uncertainty about the tracked object. This helps in disregarding outliers during tracking (like occlusions or small misalignments) without modeling them explicitly. Finally, this provides a convenient way to aggregate the observations of the object through time and maintain an updated model. We present a simple tracking algorithm that uses DFs and obtains state-of-the-art results on standard benchmarks.
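
The DF construction itself is compact: explode the image into one indicator layer per intensity bin, then smooth each layer so evidence spreads spatially without mixing pixel values. The bin count and sigma below are arbitrary, and the full method also smooths along the feature (intensity) dimension, which this sketch omits.

<pre>
# Distribution field of a grayscale image: per-bin layers, spatially smoothed.
import numpy as np
from scipy.ndimage import gaussian_filter

def distribution_field(gray, n_bins=16, sigma=3.0):
    """gray: 2-D float array scaled to [0, 1]. Returns (n_bins, H, W)."""
    idx = np.minimum((gray * n_bins).astype(int), n_bins - 1)
    layers = np.stack([(idx == k).astype(float) for k in range(n_bins)])
    # Each smoothed layer holds the local probability of that intensity bin.
    return np.stack([gaussian_filter(layer, sigma) for layer in layers])
</pre>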


2011

[43] G.Q. Jin, W.D. Li, C.F.Tsai, L.Wang. Adaptive tool-path generation of rapid prototyping for complex product models. Journal of Manufacturing Systems, Volume 30, Issue 3, 2011, pp. 154-164. DOI:10.1016/j.jmsy.2011.05.007. [39]

Abstract Rapid prototyping (RP) provides an effective method for model verification and product development collaboration. A challenging research issue in RP is how to shorten the build time and improve the surface accuracy especially for complex product models. In this paper, systematic adaptive algorithms and strategies have been developed to address the challenge. A slicing algorithm has been first developed for directly slicing a Computer-Aided Design (CAD) model as a number of RP layers. Closed Non-Uniform Rational B-Spline (NURBS) curves have been introduced to represent the contours of the layers to maintain the surface accuracy of the CAD model. Based on it, a mixed and adaptive tool-path generation algorithm, which is aimed to optimize both the surface quality and fabrication efficiency in RP, has been then developed. The algorithm can generate contour tool-paths for the boundary of each RP sliced layer to reduce the surface errors of the model, and zigzag tool-paths for the internal area of the layer to speed up fabrication. In addition, based on developed build time analysis mathematical models, adaptive strategies have been devised to generate variable speeds for contour tool-paths to address the geometric characteristics in each layer to reduce build time, and to identify the best slope degree of zigzag tool-paths to further minimize the build time. In the end, case studies of complex product models have been used to validate and showcase the performance of the developed algorithms in terms of processing effectiveness and surface accuracy.
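
As a toy illustration of the zigzag interior tool-paths discussed here, the fragment below rasters a rectangular region; a production slicer derives such paths from the sliced contour geometry rather than a bounding box, and the line spacing is an arbitrary parameter.

<pre>
# Toy zigzag (raster) tool-path generator for a rectangular interior region.
def zigzag_path(x0, y0, x1, y1, spacing):
    """Return (x, y) way-points sweeping the rectangle line by line."""
    pts, y, left_to_right = [], y0, True
    while y <= y1:
        pts.extend([(x0, y), (x1, y)] if left_to_right else [(x1, y), (x0, y)])
        left_to_right = not left_to_right   # reverse direction each pass
        y += spacing
    return pts

print(zigzag_path(0, 0, 10, 5, 2.5))
</pre>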


[44] C. Choi, H.I. Christensen. Robust 3D visual tracking using particle filtering on the SE(3) group. 2011 IEEE International Conference on Robotics and Automation. DOI: 10.1109/ICRA.2011.5980245. [40]

Abstract In this paper, we present a 3D model-based object tracking approach using edge and keypoint features in a particle filtering framework. Edge points provide 1D information for pose estimation and it is natural to consider multiple hypotheses. Recently, particle filtering based approaches have been proposed to integrate multiple hypotheses and have shown good performance, but most of the work has made an assumption that an initial pose is given. To remove this assumption, we employ keypoint features for initialization of the filter. Given 2D-3D keypoint correspondences, we choose a set of minimum correspondences to calculate a set of possible pose hypotheses. Based on the inlier ratio of correspondences, the set of poses are drawn to initialize particles. For better performance, we employ an autoregressive state dynamics and apply it to a coordinate-invariant particle filter on the SE(3) group. Based on the number of effective particles calculated during tracking, the proposed system re-initializes particles when the tracked object goes out of sight or is occluded. The robustness and accuracy of our approach is demonstrated via comparative experiments.



2010–2001

[45] W. Zeng, D. Samaras, D. Gu. Ricci Flow for 3D Shape Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 32, Issue 4, pp. 662-677, 2010. DOI: 10.1109/TPAMI.2009.201. [41]

Abstract Ricci flow is a powerful curvature flow method, which is invariant to rigid motion, scaling, isometric, and conformal deformations. We present the first application of surface Ricci flow in computer vision. Previous methods based on conformal geometry, which only handle 3D shapes with simple topology, are subsumed by the Ricci flow-based method, which handles surfaces with arbitrary topology. We present a general framework for the computation of Ricci flow, which can design any Riemannian metric by user-defined curvature. The solution to Ricci flow is unique and robust to noise. We provide implementation details for Ricci flow on discrete surfaces of either Euclidean or hyperbolic background geometry. Our Ricci flow-based method can convert all 3D problems into 2D domains and offers a general framework for 3D shape analysis. We demonstrate the applicability of this intrinsic shape representation through standard shape analysis problems, such as 3D shape matching and registration, and shape indexing. Surfaces with large nonrigid anisotropic deformations can be registered using Ricci flow with constraints of feature points and curves. We show how conformal equivalence can be used to index shapes in a 3D surface shape space with the use of Teichmuller space coordinates. Experimental results are shown on 3D face data sets with large expression deformations and on dynamic heart data.


[46] J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62 [42]

Abstract In this paper we present a real-time 3D object tracking algorithm based on edges and using a single pre-calibrated camera. During the tracking process, the algorithm is continuously projecting the 3D model to the current frame by using the pose estimated in the previous frame. Once projected, some control points are generated along the visible edges of the object. The next pose is estimated by minimizing the distances between the control points and the edges detected in the image.


[47] Gordon I., Lowe D.G. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4 [43]

Abstract Many applications of 3D object recognition, such as augmented reality or robotic manipulation, require an accurate solution for the 3D pose of the recognized objects. This is best accomplished by building a metrically accurate 3D model of the object and all its feature locations, and then fitting this model to features detected in new images. In this chapter, we describe a system for constructing 3D metric models from multiple images taken with an uncalibrated handheld camera, recognizing these models in new images, and precisely solving for object pose. This is demonstrated in an augmented reality application where objects must be recognized, tracked, and superimposed on new images taken from arbitrary viewpoints without perceptible jitter. This approach not only provides for accurate pose, but also allows for integration of features from multiple training images into a single model that provides for more reliable recognition.


[48] M. Pressigout, E. Marchand. Real-time 3D model-based tracking: combining edge and texture information. Proceedings 2006 IEEE International Conference on Robotics and Automation, pp. 2726-2731, 2006. ICRA 2006. DOI: 10.1109/ROBOT.2006.1642113. [44]

Abstract This paper proposes a real-time, robust and efficient 3D model-based tracking algorithm. A nonlinear minimization approach is used to register 2D and 3D cues for monocular 3D tracking. The integration of texture information in a more classical nonlinear edge-based pose computation highly increases the reliability of more conventional edge-based 3D tracker. Robustness is enforced by integrating a M-estimator into the minimization process via an iteratively re-weighted least squares implementation. The method presented in this paper has been validated on several video sequences as well as in visual servoing experiments considering various objects. Results show the method to be robust to large motions and textured environments.


[49] A.I. Comport, E. Marchand, M. Pressigout, F. Chaumette. Real-time markerless tracking for augmented reality: the virtual visual servoing framework. IEEE Transactions on Visualization and Computer Graphics, Volume 12, Issue 4, pp. 615-628, 2006. DOI: 10.1109/TVCG.2006.78. [45]

Abstract Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices are given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.


[50] L. Setia, A. Teynor, A. Halawani, H. Burkhardt. Image classification using cluster cooccurrence matrices of local relational features. Proceedings of the 8th ACM international workshop on Multimedia information retrieval, pp. 173-182, 2006. DOI: 10.1145/1178677.1178703. [46]

Abstract Image classification systems have received a recent boost from methods using local features generated over interest points, delivering higher robustness against partial occlusion and cluttered backgrounds. We propose in this paper to use relational features calculated over multiple directions and scales around these interest points. Furthermore, a very important design issue is the choice of similarity measure to compare the bags of local feature vectors generated by each image, for which we propose a novel approach by computing image similarity using cluster co-occurrence matrices of local features. Excellent results are achieved for a widely used medical image classification task, and ideas to generalize to other tasks are discussed.


[51] H. Wuest, F. Vial, and D. Stricker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8 [47]

Abstract We present a real-time model-based line tracking approach with adaptive learning of image edge features that can handle partial occlusion and illumination changes. A CAD (VRML) model of the object to track is needed. First, the visible edges of the model with respect to the camera pose estimate are sorted out by a visibility test performed on standard graphics hardware. For every sample point of every projected visible 3D model line a search for gradient maxima in the image is then carried out in a direction perpendicular to that line. Multiple hypotheses of these maxima are considered as putative matches. The camera pose is updated by minimizing the distances between the projection of all sample points of the visible 3D model lines and the most likely matches found in the image. The state of every edge's visual properties is updated after each successful camera pose estimation. We evaluated the algorithm and showed the improvements compared to other tracking approaches.


[52] K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2005. doi:10.1109/ICCV.2005.239 [48]

Abstract Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences, generally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This "pyramid match" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches.
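
For intuition, a one-dimensional version of the pyramid match reduces to histogram intersections at doubling cell widths, where matches that first appear at level i are weighted by 1/2^i. The feature range and level count below are arbitrary; the paper generalizes this to multi-dimensional feature spaces.

<pre>
# Pyramid match kernel for two sets of scalar features in [0, d).
import numpy as np

def pyramid_match(x, y, d=256, levels=8):
    k, prev = 0.0, 0.0
    for i in range(levels + 1):
        cell = 2 ** i
        edges = np.arange(0, d + cell, cell)          # histogram cells of width 2^i
        inter = np.minimum(np.histogram(x, edges)[0],
                           np.histogram(y, edges)[0]).sum()
        k += (inter - prev) / (2 ** i)                # new matches, weighted 1/2^i
        prev = inter
    return k

print(pyramid_match([3, 40, 41, 200], [5, 42, 199, 230]))
</pre>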


[53] I. Gordon and D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. Conference: Mixed and Augmented Reality, 2004. ISMAR 2004. [49]

Abstract We present a complete system architecture for fully automated markerless augmented reality (AR). The system constructs a sparse metric model of the real-world environment, provides interactive means for specifying the pose of a virtual object and performs model-based camera tracking with visually pleasing augmentation results. Our approach does not require camera pre-calibration, prior knowledge of scene geometry, manual initialization of the tracker or placement of special markers. Robust tracking in the presence of occlusions and scene changes is achieved by using highly distinctive natural features to establish image correspondences.


[54] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004) vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94 [50]

Abstract This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
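
SIFT extraction and the nearest-neighbor matching with Lowe's ratio test are available directly in OpenCV (cv2.SIFT_create exists in builds from 4.4 on, after the patent expiry); the file names and the 0.75 ratio below are conventional but arbitrary choices.

<pre>
# SIFT keypoint matching with the ratio test from the paper, via OpenCV.
import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # hypothetical images
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep a match only if the best neighbor is clearly better than the second best.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(len(good), "ratio-test matches")
</pre>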


[55] K. Grauman, T. Darrell. Fast contour matching using approximate earth mover's distance. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. DOI: 10.1109/CVPR.2004.1315035. [51]

Abstract Weighted graph matching is a good way to align a pair of shapes represented by a set of descriptive local features; the set of correspondences produced by the minimum cost matching between two shapes' features often reveals how similar the shapes are. However due to the complexity of computing the exact minimum cost matching, previous algorithms could only run efficiently when using a limited number of features per shape, and could not scale to perform retrievals from large databases. We present a contour matching algorithm that quickly computes the minimum weight matching between sets of descriptive local features using a recently introduced low-distortion embedding of the earth mover's distance (EMD) into a normed space. Given a novel embedded contour, the nearest neighbors in a database of embedded contours are retrieved in sublinear time via approximate nearest neighbors search with locality-sensitive hashing (LSH). We demonstrate our shape matching method on a database of 136,500 images of human figures. Our method achieves a speedup of four orders of magnitude over the exact method, at the cost of only a 4% reduction in accuracy.


[56] R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003. ISBN: 0521623049 [52]


[57] T. Adamek, N. O'Connor. Efficient contour-based shape representation and matching. Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, pp. 138-143, 2003. DOI:10.1145/973264.973287. [53]

Abstract This paper presents an efficient method for calculating the similarity between 2D closed shape contours. The proposed algorithm is invariant to translation, scale change and rotation. It can be used for database retrieval or for detecting regions with a particular shape in video sequences. The proposed algorithm is suitable for real-time applications. In the first stage of the algorithm, an ordered sequence of contour points approximating the shapes is extracted from the input binary images. The contours are translation and scale-size normalized, and small sets of the most likely starting points for both shapes are extracted. In the second stage, the starting points from both shapes are assigned into pairs and rotation alignment is performed. The dissimilarity measure is based on the geometrical distances between corresponding contour points. A fast sub-optimal method for solving the correspondence problem between contour points from two shapes is proposed. The dissimilarity measure is calculated for each pair of starting points. The lowest dissimilarity is taken as the final dissimilarity measure between two shapes. Three different experiments are carried out using the proposed approach: letter recognition using a web camera, our own simulation of Part B of the MPEG-7 core experiment "CE-Shape1" and detection of characters in cartoon video sequences. Results indicate that the proposed dissimilarity measure is aligned with human intuition.


[58] T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, i. 7, July 2002. doi:10.1109/TPAMI.2002.1017620 [54]

Abstract Presents a framework for three-dimensional model-based tracking. Graphical rendering technology is combined with constrained active contour tracking to create a robust wire-frame tracking system. It operates in real time at video frame rate (25 Hz) on standard hardware. It is based on an internal CAD model of the object to be tracked which is rendered using a binary space partition tree to perform hidden line removal. A Lie group formalism is used to cast the motion computation problem into simple geometric terms so that tracking becomes a simple optimization problem solved by means of iterative reweighted least squares. A visual servoing system constructed using this framework is presented together with results showing the accuracy of the tracker. The paper then describes how this tracking system has been extended to provide a general framework for tracking in complex configurations. The adjoint representation of the group is used to transform measurements into common coordinate frames. The constraints are then imposed by means of Lagrange multipliers. Results from a number of experiments performed using this framework are presented and discussed.


[59] S. Belongie, J. Malik, J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 24, Issue 4, pp. 509-522, 2002. DOI: 10.1109/34.993558. [55]

Abstract We present a novel approach to measuring the similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set.
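
A bare-bones shape-context histogram for a single reference point fits in a few lines of NumPy: a log-polar count of where all other contour points lie relative to it. The bin counts and radial limits are arbitrary here, and the correspondence and thin-plate-spline stages described above are omitted.

<pre>
# Log-polar shape-context histogram for one reference point.
import numpy as np

def shape_context(points, ref, r_bins=5, theta_bins=12, r_min=0.125, r_max=2.0):
    d = points - ref
    r = np.hypot(d[:, 0], d[:, 1])
    keep = r > 0                                   # drop the reference point itself
    r, theta = r[keep], np.arctan2(d[keep, 1], d[keep, 0]) % (2 * np.pi)
    r = r / r.mean()                               # scale invariance via mean distance
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), r_bins + 1)
    r_idx = np.clip(np.digitize(r, r_edges) - 1, 0, r_bins - 1)
    t_idx = (theta / (2 * np.pi) * theta_bins).astype(int) % theta_bins
    h = np.zeros((r_bins, theta_bins))
    np.add.at(h, (r_idx, t_idx), 1)
    return h / h.sum()

square = np.array([(x, y) for x in range(10) for y in range(10)
                   if x in (0, 9) or y in (0, 9)], float)   # toy contour
print(shape_context(square, square[0]).round(2))
</pre>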


[60] M.A. Ruzon, C. Tomasi. Edge, Junction, and Corner Detection Using Color Distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence archive, Volume 23 Issue 11, pp. 1281-1295, 2001. DOI:10.1109/34.969118. [56]

Abstract For over 30 years researchers in computer vision have been proposing new methods for performing low-level vision tasks such as detecting edges and corners. One key element shared by most methods is that they represent local image neighborhoods as constant in color or intensity with deviations modeled as noise. Due to computational considerations that encourage the use of small neighborhoods where this assumption holds, these methods remain popular. This research models a neighborhood as a distribution of colors. Our goal is to show that the increase in accuracy of this representation translates into higher-quality results for low-level vision tasks on difficult, natural images, especially as neighborhood size increases. We emphasize large neighborhoods because small ones often do not contain enough information. We emphasize color because it subsumes gray scale as an image range and because it is the dominant form of human perception. We discuss distributions in the context of detecting edges, corners, and junctions, and we show results for each.


Before 2000

[61] M. Isard and A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. International Journal of Computer Vision, August 1998, Volume 29, Issue 1, pp 5–28. doi:10.1023/A:1008078328650 [57]

Abstract The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses “factored sampling”, previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.
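
One Condensation-style iteration, factored sampling followed by a dynamical prediction and observation reweighting, can be sketched in a few lines; the constant-position dynamics, Gaussian likelihood, and noise level in the toy usage are invented for illustration.

<pre>
# Minimal factored-sampling (Condensation-style) filter step.
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, dynamics, likelihood, noise_std):
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)         # resample by weight
    pred = dynamics(particles[idx]) + rng.normal(0.0, noise_std, n)
    w = likelihood(pred)                           # score against the observation
    return pred, w / w.sum()

# Toy usage: scalar state, static dynamics, observation centered at 2.0.
particles, weights = rng.normal(0, 1, 500), np.full(500, 1 / 500)
for _ in range(10):
    particles, weights = condensation_step(
        particles, weights,
        dynamics=lambda x: x,
        likelihood=lambda x: np.exp(-0.5 * (x - 2.0) ** 2),
        noise_std=0.1)
print(particles @ weights)   # posterior mean, approaches 2.0
</pre>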


[62] R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4, pp 341–359. doi:10.1023/A:1008202821328. [58]

Abstract A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive test bed, it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
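
Differential evolution ships with SciPy, so the heuristic can be exercised in a couple of lines; the Rosenbrock test function and box bounds below are an arbitrary demonstration, not taken from the paper.

<pre>
# Global minimization with differential evolution (SciPy implementation).
from scipy.optimize import differential_evolution, rosen

result = differential_evolution(rosen, bounds=[(-5, 5)] * 3, seed=1)
print(result.x, result.fun)   # expected: near [1, 1, 1] with value near 0
</pre>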


[63] M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995. [59]

Abstract We describe an object tracker robust to a number of ambient conditions which often severely degrade performance, for example partial occlusion. The robustness is achieved by describing the object as a set of related geometric primitives (lines, conics, etc.), and using redundant measurements to facilitate the detection of outliers. This improves the overall tracking performance. Results are given for frame rate tracking on image sequences.


[64] C. Harris and C. Stennett. RAPID – A Video-Rate Object Tracker. British Machine Vision Conference, 1990. [60]

Abstract RAPID (Real-time Attitude and Position Determination) is a real-time model-based tracking algorithm for a known three-dimensional object executing arbitrary motion and viewed by a single video camera. The 3D object model consists of selected control points on high contrast edges, which can be surface markings, folds or profile edges. The use of either an alpha-beta tracker or a Kalman filter permits large object motion to be tracked and produces more stable tracking results. The RAPID tracker runs at video-rate on a standard minicomputer equipped with an image capture board.
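
The alpha-beta tracker mentioned here is a fixed-gain predictor-corrector over position and velocity; the gains and the measurement sequence below are illustrative values only.

<pre>
# Minimal alpha-beta tracker smoothing a sequence of scalar measurements.
def alpha_beta(z_seq, dt=1.0, alpha=0.85, beta=0.005, x=0.0, v=0.0):
    out = []
    for z in z_seq:
        x += v * dt                 # predict from the constant-velocity model
        r = z - x                   # innovation (measurement residual)
        x += alpha * r              # correct position
        v += beta * r / dt          # correct velocity
        out.append(x)
    return out

print(alpha_beta([0.0, 1.1, 1.9, 3.2, 3.9, 5.1]))
</pre>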


[65] J.S.J. Lee, R.M. Haralick, L.G. Shapiro. Morphologic Edge Detection. IFAC (International Federation of Automatic Control) Proceedings Volumes, Volume 19, Issue 9, pp. 7-14, 1986. DOI:10.1016/S1474-6670(17)57504-7. [61]

Abstract Edge operators based on grayscale morphologic operations are introduced. These operators can be efficiently implemented in near real time machine vision systems which have special hardware support for grayscale morphologic operations. The simplest morphologic edge detectors are the dilation residue and erosion residue operators. The underlying motivation for these is discussed. Finally, the blur minimum morphologic edge operator is defined. Its inherent noise sensitivity is less than the dilation or the erosion residue operators.

Some experimental results are provided to show the validity of the blur minimum morphologic operator. When compared with the cubic facet second derivative zero-crossing edge operator, the results show that they have similar performance. The advantage of the blur-minimum edge operator is that it is less computationally complex than the facet edge operator.
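
A plausible OpenCV rendition of these operators follows; the 3x3 structuring element, blur kernel, and the exact form of the blur-minimum combination are assumptions for illustration rather than the paper's parameters.

<pre>
# Dilation-residue, erosion-residue, and a blur-minimum edge operator.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
k = np.ones((3, 3), np.uint8)

dil_residue = cv2.dilate(img, k) - img                # bright side of edges
ero_residue = img - cv2.erode(img, k)                 # dark side of edges

# Blur first, then take the pointwise minimum of the two residues,
# which suppresses the noise sensitivity of either residue alone.
blur = cv2.blur(img, (3, 3))
blur_min = np.minimum(cv2.dilate(blur, k) - blur, blur - cv2.erode(blur, k))
</pre>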


[66] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: PAMI-8, Issue: 6, Nov. 1986). doi:10.1109/TPAMI.1986.4767851 [62]

Abstract This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle, we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally, we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
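
OpenCV's detector follows this recipe, Gaussian smoothing followed by gradient-magnitude maxima and hysteresis thresholding; the kernel size, sigma, thresholds, and file names below are arbitrary placeholders.

<pre>
# Canny edge detection on a Gaussian-smoothed image, via OpenCV.
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)    # hypothetical image
edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 1.4), 50, 150)
cv2.imwrite("edges.png", edges)
</pre>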


[67] Pierre Alfred Leon Ciraud. A method and apparatus for manufacturing objects made of any meltable material. German patent application DE2263777A1. December 28, 1971. [63]

The first patent in the field of additive manufacturing. Despite a long evolution of additive manufacturing, starting from the first patent in 1971 [..], the technology is still challenging researchers from the perspectives of material structure, mechanical properties, and computational efficiency.



References

  1. Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002.
  2. Nuchitprasitchai, S., Roggemann, M. & Pearce, J.M. Factors effecting real-time optical monitoring of fused filament 3D printing. Progress in Additive Manufacturing Journal (2017), Volume 2, Issue 3, pp 133–149. DOI:10.1007/s40964-017-0027-x.
  3. Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M., 2016. Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software, 4(1), p.e2. DOI: http://doi.org/10.5334/jors.78
  4. G. C. Anzalone, B. Wijnen, J. M. Pearce. Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer. Rapid Prototyping Journal, Vol. 21 Issue: 5, pp.506-519, 2015. DOI:10.1108/RPJ-09-2014-0113
  5. SONY IMX322 Datasheet. SONY, 2019 (accessed on 16 May 2019).
  6. Marlin Open-Source RepRap Firmware (accessed on 16 May 2019).
  7. OpenCV (Open Source Computer Vision Library). Available online: https://opencv.org/ (accessed on 20 May 2019).
  8. Python PyQt (A Python binding of the cross-platform C++ framework for GUI applications development). Available online: https://wiki.python.org/moin/PyQt (accessed on 20 May 2019).
  9. Python Numpy (A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays). Available online: https://www.numpy.org/ (accessed on 20 May 2019).
  10. ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks. Available online: https://visp.inria.fr/ (accessed on 20 May 2019).
  11. L. Liu, J. Chen, P. Fieguth, G. Zhao, R. Chellappa, M. Pietikäinen. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. International Journal of Computer Vision, Volume 127, Issue 1, pp 74–109, 2019. DOI: 10.1007/s11263-018-1125-z.
  12. M. Tang, D. Marin, I. B. Ayed, Y. Boykov. Kernel Cuts: Kernel and Spectral Clustering Meet Regularization. International Journal of Computer Vision, Volume 127, Issue 5, pp. 477–511, 2019. DOI: 10.1007/s11263-018-1115-1.
  13. I.A. Okaroa, S. Jayasinghe, C. Sutcliffe, K. Black, P. Paoletti, P.L. Green. Automatic fault detection for laser powder-bed fusion using semi-supervised machine learning. Additive Manufacturing Journal, Volume 27, pp. 42-53, 2019. DOI: 10.1016/j.addma.2019.01.006.
  14. Wohlers Report. Annual worldwide progress report in 3D Printing, 2018.
  15. U. Delli, S. Chang. Automated processes monitoring in 3D printing using supervised machine learning. Procedia Manufacturing, Volume 26, pp. 865–870, 2018. DOI: 10.1016/j.promfg.2018.07.111.
  16. L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing, Volume 19, pp. 114–126, 2018. DOI: 10.1016/j.addma.2017.11.009.
  17. L. Zhong, L. Zhang. A Robust Monocular 3D Object Tracking Method Combining Statistical and Photometric Constraints. International Journal of Computer Vision, 2018. DOI: 10.1007/s11263-018-1119-x.
  18. K. Garanger, T. Khamvilai, E. Feron. 3D Printing of a Leaf Spring: A Demonstration of Closed-Loop Control in Additive Manufacturing. 2018 IEEE Conference on Control Technology and Applications (CCTA), pp. 465-470. DOI: 10.1109/CCTA.2018.8511509.
  19. B. Yuan, G.M. Guss, A.C. Wilson et al. Machine-Learning-Based Monitoring of Laser Powder Bed Fusion. Advanced Materials Technologies, 2018. DOI: 10.1002/admt.201800136.
  20. B. Wang, F. Zhong, X. Qin, Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking, CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. doi:10.1145/3095140.3095172
  21. K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. arXiv preprint, May 2017.
  22. R.H.M. Condori, L.M. Romualdo, O.M. Bruno, P.H.C. Luz. Comparison Between Traditional Texture Methods and Deep Learning Descriptors for Detection of Nitrogen Deficiency in Maize Crops. 2017 Workshop of Computer Vision (WVC). DOI: 10.1109/WVC.2017.00009.
  23. F.-C. Ghesu, B. Georgescu, Y. Zheng et al. Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 41, Issue 1, pp. 176 - 189, 2017. DOI: 10.1109/TPAMI.2017.2782687.
  24. R.J. Jevnisek, S. Avidan. Co-Occurrence Filter. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3184-3192, 2017. DOI: 10.1109/CVPR.2017.406.
  25. F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY’s Global 3D printing Report, 2016.
  26. J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In Proceedings of the International Conference on Computer Vision and Graphics, ICCVG (2016), 17-28. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972. Springer, Cham. doi:10.1007/978-3-319-46418-3_2
  27. C. Caetano, J.A. dos Santos, W.R. Schwartz. Optical Flow Co-occurrence Matrices: A novel spatiotemporal feature descriptor. 2016 23rd International Conference on Pattern Recognition (ICPR). DOI: 10.1109/ICPR.2016.7899921.
  28. S. Xie, Z. Tu. Holistically-Nested Edge Detection. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.164.
  29. O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI 2015: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 pp 234-241. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham. doi:10.1007/978-3-319-24574-4_28
  30. A. Kadambi, V. Taamazyan, B. Shi, R. Raskar. Polarized 3D: High-Quality Depth Sensing with Polarization Cues. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.385.
  31. P. Sitthi-Amorn, J.E. Ramos, Y. Wang, et al. MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing. Journal ACM Transactions on Graphics (TOG), Volume 34 Issue 4, Article No. 129, 2015. DOI: 10.1145/2766962.
  32. A. Crivellaro, V. Lepetit. Robust 3D Tracking with Descriptor Fields. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14), 2014. doi:10.1109/CVPR.2014.436
  33. M. A. El-Sayed, Y. A. Estaitia, M. A. Khafagy. Automated Edge Detection Using Convolutional Neural Network. (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 4, No. 10, pp. 11-17, 2013. DOI:10.14569/IJACSA.2013.041003
  34. A. Karpathy, S. Miller, L. Fei-Fei. Object discovery in 3D scenes via shape analysis. 2013 IEEE International Conference on Robotics and Automation. DOI: 10.1109/ICRA.2013.6630857
  35. Nanni L., Brahnam S., Ghidoni S., Menegatti E., Barrier T. Different Approaches for Extracting Information from the Co-Occurrence Matrix. PLoS ONE 8(12): e83554, 2013. DOI:10.1371/journal.pone.0083554.
  36. C. Choi, H. Christensen. 3D Textureless Object Detection and Tracking: An Edge-based Approach. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065
  37. M. Oikawa, M. Fujisawa. Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects. 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3
  38. L. Sevilla-Lara, E. Learned-Miller. Distribution fields for tracking. Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1910-1917. DOI:10.1109/CVPR.2012.6247891.
  39. G.Q. Jin, W.D. Li, C.F. Tsai, L. Wang. Adaptive tool-path generation of rapid prototyping for complex product models. Journal of Manufacturing Systems, Volume 30, Issue 3, 2011, pp. 154-164. DOI:10.1016/j.jmsy.2011.05.007.
  40. C. Choi, H.I. Christensen. Robust 3D visual tracking using particle filtering on the SE(3) group. 2011 IEEE International Conference on Robotics and Automation. DOI: 10.1109/ICRA.2011.5980245.
  41. W. Zeng, D. Samaras, D. Gu. Ricci Flow for 3D Shape Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 32, Issue 4, pp. 662-677, 2010. DOI: 10.1109/TPAMI.2009.201.
  42. J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62
  43. Gordon I., Lowe D.G. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4
  44. M. Pressigout, E. Marchand. Real-time 3D model-based tracking: combining edge and texture information. Proceedings 2006 IEEE International Conference on Robotics and Automation, pp. 2726-2731, 2006. ICRA 2006. DOI: 10.1109/ROBOT.2006.1642113.
  45. A.I. Comport, E. Marchand, M. Pressigout, F. Chaumette. Real-time markerless tracking for augmented reality: the virtual visual servoing framework. IEEE Transactions on Visualization and Computer Graphics, Volume 12, Issue 4, pp. 615-628, 2006. DOI: 10.1109/TVCG.2006.78.
  46. L. Setia, A. Teynor, A. Halawani, H. Burkhardt. Image classification using cluster cooccurrence matrices of local relational features. Proceedings of the 8th ACM international workshop on Multimedia information retrieval, pp. 173-182, 2006. DOI: 10.1145/1178677.1178703.
  47. H. Wuest, F. Vial, D. Stricker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8
  48. K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2005. doi:10.1109/ICCV.2005.239
  49. I. Gordon, D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. Third IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004), 2004.
  50. D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004) vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94
  51. K. Grauman, T. Darrell. Fast contour matching using approximate earth mover's distance. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. DOI: 10.1109/CVPR.2004.1315035.
  52. R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003. ISBN: 0521623049.
  53. T. Adamek, N. O'Connor. Efficient contour-based shape representation and matching. Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, pp. 138-143, 2003. DOI:10.1145/973264.973287.
  54. T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, i. 7, July 2002. doi:10.1109/TPAMI.2002.1017620
  55. S. Belongie, J. Malik, J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 24, Issue 4, pp. 509-522, 2002. DOI: 10.1109/34.993558.
  56. M.A. Ruzon, C. Tomasi. Edge, Junction, and Corner Detection Using Color Distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence archive, Volume 23 Issue 11, pp. 1281-1295, 2001. DOI:10.1109/34.969118.
  57. M. Isard and A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. International Journal of Computer Vision, August 1998, Volume 29, Issue 1, pp 5–28. doi:10.1023/A:1008078328650.
  58. R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4, pp 341–359. doi:10.1023/A:1008202821328.
  59. M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995.
  60. C. Harris, C. Stennett. RAPID – A Video-Rate Object Tracker. British Machine Vision Conference, 1990.
  61. J.S.J. Lee, R.M. Haralick, L.G. Shapiro. Morphologic Edge Detection. IFAC (International Federation of Automatic Control) Proceedings Volumes, Volume 19, Issue 9, pp. 7-14, 1986. DOI:10.1016/S1474-6670(17)57504-7.
  62. J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume PAMI-8, Issue 6, pp. 679–698, November 1986. doi:10.1109/TPAMI.1986.4767851
  63. Pierre Alfred Leon Ciraud. A method and apparatus for manufacturing objects made of any meltable material. German patent application DE2263777A1. December 28, 1971.