Literature Review

Aliaksei Petsiuk (talk) 14:32, 20 May 2019 (PDT)

Target Journal        | Publisher | Impact Factor | Link
Pattern Recognition   | Elsevier  | 3.962         | https://www.journals.elsevier.com/pattern-recognition
Computer-Aided Design | Elsevier  | 2.947         | https://www.journals.elsevier.com/computer-aided-design

MOST Papers

[1] Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002. [1]

Abstract Prosumer (producing consumer)-based desktop additive manufacturing has been enabled by the recent radical reduction in 3-D printer capital costs created by the open-source release of the self-replicating rapid prototype (RepRap). To continue this success, there have been some efforts to improve reliability, which are either too expensive or lacked automation. A promising method to improve reliability is to use computer vision, although the success rates are still too low for widespread use. To overcome these challenges an open source low-cost reliable real-time optimal monitoring platform for 3-D printing from double cameras is presented here. This error detection system is implemented with low-cost web cameras and covers 360 degrees around the printed object from three different perspectives. The algorithm is developed in Python and run on a Raspberry Pi3 mini-computer to reduce costs. For 3-D printing monitoring in three different perspectives, the systems are tested with four different 3-D object geometries for normal operation and failure modes. This system is tested with two different techniques in the image pre-processing step: SIFT and RANSAC rescale and rectification, and non-rescale and rectification. The error calculations were determined from the horizontal and vertical magnitude methods of 3-D reconstruction images. The non-rescale and rectification technique successfully detects the normal printing and failure state for all models with 100% accuracy, which is better than the single camera set up only. The computation time of the non-rescale and rectification technique is two times faster than the SIFT and RANSAC rescale and rectification technique.


[2] Nuchitprasitchai, S., Roggemann, M. & Pearce, J.M. Factors effecting real-time optical monitoring of fused filament 3D printing. Progress in Additive Manufacturing (2017), Volume 2, Issue 3, pp. 133–149. DOI:10.1007/s40964-017-0027-x. [2]

Abstract This study analyzes a low-cost reliable real-time optical monitoring platform for fused filament fabrication-based open source 3D printing. An algorithm for reconstructing 3D images from overlapping 2D intensity measurements with relaxed camera positioning requirements is compared with a single-camera solution for single-side 3D printing monitoring. The algorithms are tested for different 3D object geometries and filament colors. The results showed that both of the algorithms with a single- and double-camera system were effective at detecting a clogged nozzle, incomplete project, or loss of filament for a wide range of 3D object geometries and filament colors. The combined approach was the most effective and achieves 100% detection rate for failures. The combined method analyzed here has a better detection rate and a lower cost compared to previous methods. In addition, this method is generalizable to a wide range of 3D printer geometries, which enables further deployment of desktop 3D printing as wasted print time and filament are reduced, thereby improving the economic advantages of distributed manufacturing.


[3] Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M. Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software, 4(1), p.e2, 2016. DOI: http://doi.org/10.5334/jors.78 [3]

Abstract RepRap 3-D printers and their derivatives using conventional firmware are limited by: 1) requiring technical knowledge, 2) poor resilience with unreliable hardware, and 3) poor integration in complicated systems. In this paper, a new control system called Franklin, for CNC machines in general and 3-D printers specifically, is presented that enables web-based three-dimensional control of additive, subtractive and analytical tools from any Internet-connected device. Franklin can be set up and controlled entirely from a web interface; it uses a custom protocol which allows it to continue printing when the connection is temporarily lost, and allows communication with scripts.


[4] G. C. Anzalone, B. Wijnen, J. M. Pearce. Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer. Rapid Prototyping Journal, Vol. 21, Issue 5, pp. 506–519, 2015. DOI:10.1108/RPJ-09-2014-0113 [4]

Abstract Purpose - The purpose of this paper is to present novel modifications to a RepRap design that increase RepRap capabilities well beyond just fused filament fabrication. Open-source RepRap 3-D printers have made distributed manufacturing and prototyping an affordable reality. Design/methodology/approach - The design is a significantly modified derivative of the Rostock delta-style RepRap 3-D printer. Modifications were made that permit easy and rapid repurposing of the platform for milling, paste extrusion and several other applications. All of the designs are open-source and freely available. Findings - In addition to producing fused filament parts, the platform successfully produced milled printed circuit boards, milled plastic objects, objects made with paste extrudates, such as silicone, food stuffs and ceramics, pen plotted works and cut vinyl products. The multi-purpose tool saved 90-97 per cent of the capital costs of functionally equivalent dedicated tools. Research limitations/implications - While the platform was used primarily for production of hobby and consumer goods, research implications are significant, as the tool is so versatile and the fact that the designs are open-source and eminently available for modification for more purpose-specific applications. Practical implications - The platform vastly broadens capabilities of a RepRap machine at an extraordinarily low price, expanding the potential for distributed manufacturing and prototyping of items that heretofore required large financial investments. Originality/value - The unique combination of relatively simple modifications to an existing platform has produced a machine having capabilities far exceeding that of any single commercial product. The platform provides users the ability to work with a wide variety of materials and fabrication methods at a price of less than $1,000, provided users are willing to build the machine themselves.


[5] Delta 3D Printer (https://www.appropedia.org/Delta_Build_Overview:MOST)

The MOST Delta printer is a RepRap [..] derived from the Rostock printer [..]. Print resolution in the x-y plane is a function of the distance from the apexes, so it changes with distance from the center of the build platform. Printer resolution in the z-direction is always equal to that of the carriages (100 steps/mm, where 10 μm is the smallest step) and does not depend on location. Because of the geometry, the error in z is at most 5 μm; i.e., there are no planes spaced 10 μm apart with unreachable space in between; rather, the nozzle can reach only one specific point within that 10 μm range, depending on the x and y location. The MOST Delta (12-tooth T5 belt) operates at 53.33 steps/mm for a z-precision of about 19 μm.
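As a quick check of these figures, here is a minimal sketch in Python. It assumes a 200-step/rev stepper with 1/16 microstepping driving the 12-tooth pulley on a T5 (5 mm pitch) belt mentioned above; the motor and microstepping values are assumptions, not taken from the build documentation:

```python
# Carriage resolution of a belt-driven delta axis.
full_steps_per_rev = 200   # assumed 1.8-degree stepper
microstepping = 16         # assumed driver setting
belt_pitch_mm = 5.0        # T5 belt
pulley_teeth = 12          # 12-tooth pulley, as stated above

mm_per_rev = belt_pitch_mm * pulley_teeth                      # 60 mm of belt per revolution
steps_per_mm = full_steps_per_rev * microstepping / mm_per_rev
print(f"{steps_per_mm:.2f} steps/mm")                          # 53.33 steps/mm
print(f"{1000 / steps_per_mm:.1f} um smallest carriage step")  # ~18.8 um, i.e. about 19 microns
```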

A delta-type FFF printer with a cylindrical working volume 250 mm in diameter and 240 mm high was used in our experiments. It fuses 1.75 mm polylactic acid (PLA) filament at a temperature of 210 °C through a nozzle 0.4 mm in diameter. The printer is driven by a RAMPS 1.4 print controller with an integrated SD card reader. Its working area is under dual surveillance: the main camera provides a rectified top view, and the secondary camera captures a side view of the working zone.

A visual marker plate located on top of the print bed allows us to determine the spatial position of the working area relative to the cameras. The plate has a 9 cm² printing area; seven high-contrast square markers (1.5 cm² and 1 cm²) form a reference frame for the main camera, and four 1.5 cm² markers allow us to determine the relative position of the side camera.
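A minimal sketch of how such a marker plate can anchor camera pose with OpenCV's PnP solver; the marker corner coordinates, pixel detections, and camera intrinsics below are hypothetical placeholders, not values from our setup:

```python
import cv2
import numpy as np

# 3D corner positions of one reference marker on the plate, in cm (hypothetical layout).
object_pts = np.array([[0.0, 0.0, 0.0],
                       [1.5, 0.0, 0.0],
                       [1.5, 1.5, 0.0],
                       [0.0, 1.5, 0.0]], dtype=np.float32)
# Matching 2D corner detections in the camera frame, in pixels (placeholder values).
image_pts = np.array([[612, 388], [705, 391],
                      [702, 483], [609, 480]], dtype=np.float32)
# Intrinsics from a prior calibration (placeholder values for a 1280x720 camera).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion was already corrected

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    print("plate-to-camera rotation (Rodrigues):", rvec.ravel())
    print("plate-to-camera translation:", tvec.ravel())
```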


[6] Athena II 3D Printer (https://www.appropedia.org/AthenaII)



2019

[7] RepRap (https://reprap.org/wiki/RepRap)


[8] Rostock (delta robot 3D printer) (https://www.thingiverse.com/thing:17175)


[9] SONY IMX322 Datasheet (accessed on 16 May 2019). [5]

The main camera is based on the 1/2.9 inch (6.23 mm diagonal) Sony IMX322 CMOS image sensor [..]. This sensor consists of 2.24M square 2.8 × 2.8 μm pixels, with 2000 pixels per horizontal line and 1121 pixels per vertical line. The IMX322 has a Bayer RGBG color filter pattern (50% green, 25% red, and 25% blue) with Red-to-Green and Blue-to-Green sensitivity ratios of 0.46–0.61 and 0.34–0.49, respectively. In operating mode, the camera captures 1280 × 720 pixel frames at a frequency of 30 Hz.


[10] Marlin Open-Source RepRap Firmware (accessed on 16 May 2019). [6]

The developed computer vision program is synchronized with the printer, which is driven by Marlin, an open-source firmware [..]. We developed a special "A-family" of G-Code commands by modifying the process_parsed_command() function in the Marlin_main.cpp file. This allows us to process a special A0 code, which sends a "$" symbol to the computer vision program after each printed layer. The main computer program, in turn, sends a G-Code injection after receiving each "$" sign. These injected commands pause the print, move the extruder aside for a short period of time, and trigger the shutters of the cameras, which capture top- and side-view images of the completed layer without any visual obstacle. Moreover, after each layer, the analytical projective plane in the image shifts according to the layer number, so the rectified image frame remains perpendicular to the optical axis of the camera.
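A minimal sketch of the host-side part of this loop, assuming pyserial; the port name, baud rate, and park position are placeholders, while the "$" end-of-layer marker comes from the custom A0 command described above:

```python
import serial  # pyserial

printer = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
layer = 0

def capture_layer_images(layer):
    """Hypothetical helper: trigger the top- and side-view camera shutters."""
    pass

while True:
    line = printer.readline().decode(errors="ignore").strip()
    if line == "$":  # the A0 command reports a finished layer
        layer += 1
        printer.write(b"M400\n")              # injected G-Code: wait for buffered moves to finish
        printer.write(b"G1 X0 Y100 F6000\n")  # injected G-Code: park the extruder clear of the part (assumed position)
        capture_layer_images(layer)           # unobstructed top and side views of the layer
```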


[11] OpenCV (Open Source Computer Vision Library) (accessed on 20 May 2019). [7]

By utilizing a rich set of image processing techniques, it becomes possible to segment meaningful contour and texture regions with their exact three-dimensional spatial reference. At the end of the printing process, a layered set of images is available, which provides additional data for further analysis of any cross section of the printed object.
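For example, a minimal per-layer contour segmentation sketch with OpenCV (the file name and edge thresholds are placeholders):

```python
import cv2

img = cv2.imread("layer_042_top.png", cv2.IMREAD_GRAYSCALE)  # rectified top view of one layer
img = cv2.GaussianBlur(img, (5, 5), 0)                       # suppress sensor noise
edges = cv2.Canny(img, 50, 150)                              # extract layer edges
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} contour regions in this cross section")
```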


[12] Python PyQt. A Python binding of the cross-platform C++ framework for GUI application development (accessed on 20 May 2019). [8]


[13] Python NumPy. A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays (accessed on 20 May 2019). [9]


[14] Python Imaging Library (PIL)

[15] Python scikit-image (a collection of algorithms for image processing)

[16] Python Matplotlib (a plotting library)


[17] ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks (accessed on 20 May 2019). [10]


18. L. Liu, J. Chen, P. Fieguth, G. Zhao, R. Chellappa, M. Pietikäinen. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. International Journal of Computer Vision, Volume 127, Issue 1, pp. 74–109, 2019. DOI:10.1007/s11263-018-1125-z. [11]

Abstract Texture is a fundamental characteristic of many types of images, and texture representation is one of the essential and challenging problems in computer vision and pattern recognition which has attracted extensive research attention over several decades. Since 2000, texture representations based on Bag of Words and on Convolutional Neural Networks have been extensively studied with impressive performance. Given this period of remarkable evolution, this paper aims to present a comprehensive survey of advances in texture representation over the last two decades. More than 250 major publications are cited in this survey covering different aspects of the research, including benchmark datasets and state of the art results. In retrospect of what has been achieved so far, the survey discusses open challenges and directions for future research.


19. M. Tang, D. Marin, I. B. Ayed, Y. Boykov. Kernel Cuts: Kernel and Spectral Clustering Meet Regularization. International Journal of Computer Vision, Volume 127, Issue 5, pp. 477–511, 2019. DOI:10.1007/s11263-018-1115-1. [12]

Abstract This work bridges the gap between two popular methodologies for data partitioning: kernel clustering and regularization-based segmentation. While addressing closely related practical problems, these general methodologies may seem very different based on how they are covered in the literature. The differences may show up in motivation, formulation, and optimization, e.g. spectral relaxation versus max-flow. We explain how regularization and kernel clustering can work together and why this is useful. Our joint energy combines standard regularization, e.g. MRF potentials, and kernel clustering criteria like normalized cut. Complementarity of such terms is demonstrated in many applications using our bound optimization Kernel Cut algorithm for the joint energy (code is publicly available). While detailing combinatorial move-making, our main focus are new linear kernel and spectral bounds for kernel clustering criteria allowing their integration with any regularization objectives with existing discrete or continuous solvers.


2018

20. Wohlers Report. Annual worldwide progress report in 3D printing, 2018. [13]

According to the Wohlers Report [..] and EY's Global 3D Printing Report [..], additive manufacturing is one of the most disruptive technologies of our time, with applications in the automotive and aerospace industries, medical equipment development, and education. The technology can increase productivity, simplify fabrication processes, and minimize the limitations of geometric shapes.


21. U. Delli, S. Chang. Automated process monitoring in 3D printing using supervised machine learning. Procedia Manufacturing, Volume 26, 2018, pp. 865–870. DOI:10.1016/j.promfg.2018.07.111 [14]

Abstract Quality monitoring is still a big challenge in additive manufacturing, popularly known as 3D printing. Detection of defects during the printing process will help eliminate the waste of material and time. Defect detection during the initial stages of printing may generate an alert to either pause or stop the printing process so that corrective measures can be taken to prevent the need to reprint the parts. This paper proposes a method to automatically assess the quality of 3D printed parts with the integration of a camera, image processing, and supervised machine learning. Images of semi-finished parts are taken at several critical stages of the printing process according to the part geometry. A machine learning method, support vector machine (SVM), is proposed to classify the parts into either ‘good’ or ‘defective’ category. Parts using ABS and PLA materials were printed to demonstrate the proposed framework. A numerical example is provided to demonstrate how the proposed method works.
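A minimal sketch of this "good" versus "defective" SVM classification idea, using scikit-learn with random placeholder data standing in for the image measurements taken at critical printing stages:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))    # 16 hypothetical image-derived features per part
y = rng.integers(0, 2, size=200)  # 0 = good, 1 = defective (random placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
# On random data this stays near chance; real image features are what separate the classes.
print("held-out accuracy:", clf.score(X_test, y_test))
```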


22. L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing, Volume 19, 2018, pp. 114–126. DOI:10.1016/j.addma.2017.11.009 [15]

Abstract Despite the rapid adoption of laser powder bed fusion (LPBF) Additive Manufacturing by industry, current processes remain largely open-loop, with limited real-time monitoring capabilities. While some machines offer powder bed visualization during builds, they lack automated analysis capability. This work presents an approach for in-situ monitoring and analysis of powder bed images with the potential to become a component of a real-time control system in an LPBF machine. Specifically, a computer vision algorithm is used to automatically detect and classify anomalies that occur during the powder spreading stage of the process. Anomaly detection and classification are implemented using an unsupervised machine learning algorithm, operating on a moderately-sized training database of image patches. The performance of the final algorithm is evaluated, and its usefulness as a standalone software package is demonstrated with several case studies.


23. L. Zhong, L. Zhang. A Robust Monocular 3D Object Tracking Method Combining Statistical and Photometric Constraints. International Journal of Computer Vision, 2018. DOI:10.1007/s11263-018-1119-x. [16]

Abstract Both region-based methods and direct methods have become popular in recent years for tracking the 6-dof pose of an object from monocular video sequences. Region-based methods estimate the pose of the object by maximizing the discrimination between statistical foreground and background appearance models, while direct methods aim to minimize the photometric error through direct image alignment. In practice, region-based methods only care about the pixels within a narrow band of the object contour due to the level-set-based probabilistic formulation, leaving the foreground pixels beyond the evaluation band unused. On the other hand, direct methods only utilize the raw pixel information of the object, but ignore the statistical properties of foreground and background regions. In this paper, we find it beneficial to combine these two kinds of methods together. We construct a new probabilistic formulation for 3D object tracking by combining statistical constraints from region-based methods and photometric constraints from direct methods. In this way, we take advantage of both statistical property and raw pixel values of the image in a complementary manner. Moreover, in order to achieve better performance when tracking heterogeneous objects in complex scenes, we propose to increase the distinctiveness of foreground and background statistical models by partitioning the global foreground and background regions into a small number of sub-regions around the object contour. We demonstrate the effectiveness of the proposed novel strategies on a newly constructed real-world dataset containing different types of objects with ground-truth poses. Further experiments on several challenging public datasets also show that our method obtains competitive or even superior tracking results compared to previous works. In comparison with the recent state-of-art region-based method, the proposed hybrid method is proved to be more stable under silhouette pose ambiguities with a slightly lower tracking accuracy.


2017

24. B. Wang, F. Zhong, X. Qin. Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking. CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32, 2017. doi:10.1145/3095140.3095172 [17]

Abstract This paper presents a monocular model-based 3D tracking approach for textureless objects. Instead of explicitly searching for 3D-2D correspondences as previous methods, which unavoidably generates individual outlier matches, we aim to minimize the holistic distance between the predicted object contour and the query image edges. We propose a method that can directly solve 3D pose parameters in unsegmented edge distance field. We derive the differentials of edge matching distance with respect to the pose parameters, and search the optimal 3D pose parameters using standard gradient-based non-linear optimization techniques. To avoid being trapped in local minima and to deal with potential large inter-frame motions, a particle filtering process with a first order autoregressive state dynamics is exploited. Occlusions are handled by a robust estimator. The effectiveness of our approach is demonstrated using comparative experiments on real image sequences with occlusions, large motions and cluttered backgrounds.


25. K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. arXiv preprint, May 2017. [18]

Abstract During the last decade, additive manufacturing has become increasingly popular for rapid prototyping, but has remained relatively marginal beyond the scope of prototyping when it comes to applications with tight tolerance specifications, such as in aerospace. Despite a strong desire to supplant many aerospace structures with printed builds, additive manufacturing has largely remained limited to prototyping, tooling, fixtures, and non-critical components. There are numerous fundamental challenges inherent to additive processing to be addressed before this promise is realized. One ubiquitous challenge across all AM motifs is to develop processing-property relationships through precise, in situ monitoring coupled with formal methods and feedback control. We suggest a significant component of this vision is a set of semantic layers within 3D printing files relevant to the desired material specifications. This semantic layer provides the feedback laws of the control system, which then evaluates the component during processing and intelligently evolves the build parameters within boundaries defined by semantic specifications. This evaluation and correction loop requires on-the-fly coupling of finite element analysis and topology optimization. The required parameters for this analysis are all extracted from the semantic layer and can be modified in situ to satisfy the global specifications. Therefore, the representation of what is printed changes during the printing process to compensate for eventual imprecision or drift arising during the manufacturing process.


26. R.H.M. Condori, L.M. Romualdo, O.M. Bruno, P.H.C. Luz. Comparison Between Traditional Texture Methods and Deep Learning Descriptors for Detection of Nitrogen Deficiency in Maize Crops. 2017 Workshop of Computer Vision (WVC). DOI:10.1109/WVC.2017.00009. [19]

Abstract Every year, efficient maize production is very important to the economy of many countries. Since nutritional deficiencies in maize plants are directly reflected in their grains productivity, early detection is needed to maximize the chances of proper recovery of these plants. Traditional texture methods recently showed interesting results in the identification of nutritional deficiencies. On the other hand, deep learning techniques are increasingly outperforming hand-crafted features on many tasks. In this paper, we propose a simple transfer learning approach from pre-trained cnn models and compare their results with those from traditional texture methods in the task of nitrogen deficiency identification. We perform experiments in a real-world dataset that contains digitalized images of maize leaves at different growth stages and with different levels of nitrogen fertilization. The results show that deep learning based descriptors achieve better success rates than traditional texture methods.


2016

27. F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY's Global 3D Printing Report, 2016. [20]

28. J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972, pp. 17–28. Springer, Cham, 2016. doi:10.1007/978-3-319-46418-3_2 [21]

Abstract In the paper the method of “blind” quality assessment of 3D prints based on texture analysis using the GLCM and chosen Haralick features is discussed. As the proposed approach has been verified using the images obtained by scanning the 3D printed plates, some dependencies related to the transparency of filaments may be noticed. Furthermore, considering the influence of lighting conditions, some other experiments have been made using the images acquired by a camera mounted on a 3D printer. Due to the influence of lighting conditions on the obtained images in comparison to the results of scanning, some modifications of the method have also been proposed leading to promising results allowing further extensions of our approach to no-reference quality assessment of 3D prints. Achieved results confirm the usefulness of the proposed approach for live monitoring of the progress of 3D printing process and the quality of 3D prints.
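A minimal sketch of the GLCM and Haralick-style features this approach builds on, using scikit-image with a random placeholder patch in place of a scanned print surface:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
patch = (rng.random((64, 64)) * 255).astype(np.uint8)  # placeholder texture patch

# Co-occurrence of gray levels at distance 1, horizontally and vertically.
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).ravel())
```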



2015

29. S. Xie, Z. Tu. Holistically-Nested Edge Detection. 2015 IEEE International Conference on Computer Vision (ICCV). DOI:10.1109/ICCV.2015.164. [22]

Abstract We develop a new edge detection algorithm that addresses two critical issues in this long-standing vision problem: (1) holistic image training, and (2) multi-scale feature learning. Our proposed method, holistically-nested edge detection (HED), turns pixel-wise edge classification into image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets. HED automatically learns rich hierarchical representations (guided by deep supervision on side responses) that are crucially important in order to approach the human ability to resolve the challenging ambiguity in edge and object boundary detection. We significantly advance the state-of-the-art on the BSD500 dataset (ODS F-score of 0.782) and the NYU Depth dataset (ODS F-score of 0.746), and do so with an improved speed (0.4 second per image) that is orders of magnitude faster than recent CNN-based edge detection algorithms.


30. O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science, vol 9351, pp. 234–241. Springer, Cham, 2015. doi:10.1007/978-3-319-24574-4_28 [23]

Abstract There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.


31. A. Kadambi, V. Taamazyan, B. Shi, R. Raskar. Polarized 3D: High-Quality Depth Sensing with Polarization Cues. 2015 IEEE International Conference on Computer Vision (ICCV). DOI:10.1109/ICCV.2015.385. [24]

Abstract Coarse depth maps can be enhanced by using the shape information from polarization cues. We propose a framework to combine surface normals from polarization (hereafter polarization normals) with an aligned depth map. Polarization normals have not been used for depth enhancement before. This is because polarization normals suffer from physics-based artifacts, such as azimuthal ambiguity, refractive distortion and fronto-parallel signal degradation. We propose a framework to overcome these key challenges, allowing the benefits of polarization to be used to enhance depth maps. Our results demonstrate improvement with respect to state-of-the-art 3D reconstruction techniques.


32. P. Sitthi-Amorn, J.E. Ramos, Y. Wang, et al. MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing. ACM Transactions on Graphics (TOG), Volume 34, Issue 4, Article No. 129, 2015. DOI:10.1145/2766962. [25]

Abstract We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.


2014

33. A. Crivellaro, V. Lepetit. Robust 3D Tracking with Descriptor Fields. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14). doi:10.1109/CVPR.2014.436 [26]

Abstract We introduce a method that can register challenging images from specular and poorly textured 3D environments, on which previous approaches fail. We assume that a small set of reference images of the environment and a partial 3D model are available. Like previous approaches, we register the input images by aligning them with one of the reference images using the 3D information. However, these approaches typically rely on the pixel intensities for the alignment, which is prone to fail in presence of specularities or in absence of texture. Our main contribution is an efficient novel local descriptor that we use to describe each image location. We show that we can rely on this descriptor in place of the intensities to significantly improve the alignment robustness at a minor increase of the computational cost, and we analyze the reasons behind the success of our descriptor.


2013

34. M. A. El-Sayed, Y. A. Estaitia, M. A. Khafagy. Automated Edge Detection Using Convolutional Neural Network. (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 4, No. 10, pp. 11–17, 2013. DOI:10.14569/IJACSA.2013.041003 [27]

Abstract Edge detection in images is important for image processing, with applications ranging from real-time video surveillance and traffic management to medical imaging. Currently, no single edge detector offers both efficiency and reliability. Traditional differential filter-based algorithms have the advantage of theoretical rigor but require excessive post-processing. The proposed CNN technique is used to realize the edge detection task; it takes advantage of momentum feature extraction and can process an input image of any size with no additional training required. The results are very promising when compared with both classical methods and other ANN-based methods.


35. A. Karpathy, S. Miller, L. Fei-Fei. Object discovery in 3D scenes via shape analysis. 2013 IEEE International Conference on Robotics and Automation. DOI:10.1109/ICRA.2013.6630857 [28]

Abstract We present a method for discovering object models from 3D meshes of indoor environments. Our algorithm first decomposes the scene into a set of candidate mesh segments and then ranks each segment according to its “objectness” - a quality that distinguishes objects from clutter. To do so, we propose five intrinsic shape measures: compactness, symmetry, smoothness, and local and global convexity. We additionally propose a recurrence measure, codifying the intuition that frequently occurring geometries are more likely to correspond to complete objects. We evaluate our method in both supervised and unsupervised regimes on a dataset of 58 indoor scenes collected using an Open Source implementation of Kinect Fusion [1]. We show that our approach can reliably and efficiently distinguish objects from clutter, with Average Precision score of .92. We make our dataset available to the public.



2012

36. C. Choi, H. Christensen. 3D Textureless Object Detection and Tracking: An Edge-based Approach. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065 [29]

Abstract This paper presents an approach to textureless object detection and tracking of the 3D pose. Our detection and tracking schemes are coherently integrated in a particle filtering framework on the special Euclidean group, SE(3), in which the visual tracking problem is tackled by maintaining multiple hypotheses of the object pose. For textureless object detection, an efficient chamfer matching is employed so that a set of coarse pose hypotheses is estimated from the matching between 2D edge templates of an object and a query image. Particles are then initialized from the coarse pose hypotheses by randomly drawing based on costs of the matching. To ensure the initialized particles are at or close to the global optimum, an annealing process is performed after the initialization. While a standard edge-based tracking is employed after the annealed initialization, we employ a refinement process to establish improved correspondences between projected edge points from the object model and edge points from an input image. Comparative results for several image sequences with clutter are shown to validate the effectiveness of our approach.


37. M. Oikawa, M. Fujisawa. Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects. 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3 [30]

Abstract This paper addresses the problem of tracking textureless rigid curved objects. A common approach uses polygonal meshes to represent curved objects and use them inside an edge-based tracking system. However, in order to accurately recover their shape, high quality meshes are required, creating a trade-off between computational efficiency and tracking accuracy. To solve this issue, we suggest the use of quadrics for each patch in the mesh to give local approximations of the object shape. The novelty of our research lies in using curves that represent the quadrics projection in the current viewpoint for distance evaluation instead of using the standard method which compares edges from mesh and detected edges in the video image. This representation allows to considerably reduce the level of detail of the polygonal mesh and led us to the development of a novel method for evaluating the distance between projected and detected features. The experiments results show the comparison between our approach and the traditional method using sparse and dense meshes. They are presented using both synthetic and real image data.


2011

38. G.Q. Jin, W.D. Li, C.F. Tsai, L. Wang. Adaptive tool-path generation of rapid prototyping for complex product models. Journal of Manufacturing Systems, Volume 30, Issue 3, 2011, pp. 154–164. DOI:10.1016/j.jmsy.2011.05.007. [31]

Abstract Rapid prototyping (RP) provides an effective method for model verification and product development collaboration. A challenging research issue in RP is how to shorten the build time and improve the surface accuracy especially for complex product models. In this paper, systematic adaptive algorithms and strategies have been developed to address the challenge. A slicing algorithm has been first developed for directly slicing a Computer-Aided Design (CAD) model as a number of RP layers. Closed Non-Uniform Rational B-Spline (NURBS) curves have been introduced to represent the contours of the layers to maintain the surface accuracy of the CAD model. Based on it, a mixed and adaptive tool-path generation algorithm, which is aimed to optimize both the surface quality and fabrication efficiency in RP, has been then developed. The algorithm can generate contour tool-paths for the boundary of each RP sliced layer to reduce the surface errors of the model, and zigzag tool-paths for the internal area of the layer to speed up fabrication. In addition, based on developed build time analysis mathematical models, adaptive strategies have been devised to generate variable speeds for contour tool-paths to address the geometric characteristics in each layer to reduce build time, and to identify the best slope degree of zigzag tool-paths to further minimize the build time. In the end, case studies of complex product models have been used to validate and showcase the performance of the developed algorithms in terms of processing effectiveness and surface accuracy.



2010-2000

39. W. Zeng, D. Samaras, D. Gu. Ricci Flow for 3D Shape Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 32, Issue 4, pp. 662–677, 2010. DOI:10.1109/TPAMI.2009.201. [32]

Abstract Ricci flow is a powerful curvature flow method, which is invariant to rigid motion, scaling, isometric, and conformal deformations. We present the first application of surface Ricci flow in computer vision. Previous methods based on conformal geometry, which only handle 3D shapes with simple topology, are subsumed by the Ricci flow-based method, which handles surfaces with arbitrary topology. We present a general framework for the computation of Ricci flow, which can design any Riemannian metric by user-defined curvature. The solution to Ricci flow is unique and robust to noise. We provide implementation details for Ricci flow on discrete surfaces of either Euclidean or hyperbolic background geometry. Our Ricci flow-based method can convert all 3D problems into 2D domains and offers a general framework for 3D shape analysis. We demonstrate the applicability of this intrinsic shape representation through standard shape analysis problems, such as 3D shape matching and registration, and shape indexing. Surfaces with large nonrigid anisotropic deformations can be registered using Ricci flow with constraints of feature points and curves. We show how conformal equivalence can be used to index shapes in a 3D surface shape space with the use of Teichmuller space coordinates. Experimental results are shown on 3D face data sets with large expression deformations and on dynamic heart data.


40. J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62 [33]

Abstract In this paper we present a real-time 3D object tracking algorithm based on edges and using a single pre-calibrated camera. During the tracking process, the algorithm is continuously projecting the 3D model to the current frame by using the pose estimated in the previous frame. Once projected, some control points are generated along the visible edges of the object. The next pose is estimated by minimizing the distances between the control points and the edges detected in the image.


41. Gordon I., Lowe D.G. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4 [34]

Abstract Many applications of 3D object recognition, such as augmented reality or robotic manipulation, require an accurate solution for the 3D pose of the recognized objects. This is best accomplished by building a metrically accurate 3D model of the object and all its feature locations, and then fitting this model to features detected in new images. In this chapter, we describe a system for constructing 3D metric models from multiple images taken with an uncalibrated handheld camera, recognizing these models in new images, and precisely solving for object pose. This is demonstrated in an augmented reality application where objects must be recognized, tracked, and superimposed on new images taken from arbitrary viewpoints without perceptible jitter. This approach not only provides for accurate pose, but also allows for integration of features from multiple training images into a single model that provides for more reliable recognition.


42. H. Wuest, F. Vial, D. Stricker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8 [35]

Abstract We present a real-time model-based line tracking approach with adaptive learning of image edge features that can handle partial occlusion and illumination changes. A CAD (VRML) model of the object to track is needed. First, the visible edges of the model with respect to the camera pose estimate are sorted out by a visibility test performed on standard graphics hardware. For every sample point of every projected visible 3D model line a search for gradient maxima in the image is then carried out in a direction perpendicular to that line. Multiple hypotheses of these maxima are considered as putative matches. The camera pose is updated by minimizing the distances between the projection of all sample points of the visible 3D model lines and the most likely matches found in the image. The state of every edge's visual properties is updated after each successful camera pose estimation. We evaluated the algorithm and showed the improvements compared to other tracking approaches.


43. K. Grauman, T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05), Volume 1, 2005. doi:10.1109/ICCV.2005.239 [36]

Abstract Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences, generally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This "pyramid match" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches.


44. I. Gordon, D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. Third IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004), 2004. [37]

Abstract We present a complete system architecture for fully automated markerless augmented reality (AR). The system constructs a sparse metric model of the real-world environment, provides interactive means for specifying the pose of a virtual object and performs model-based camera tracking with visually pleasing augmentation results. Our approach does not require camera pre-calibration, prior knowledge of scene geometry, manual initialization of the tracker or placement of special markers. Robust tracking in the presence of occlusions and scene changes is achieved by using highly distinctive natural features to establish image correspondences.


45. D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004. doi:10.1023/B:VISI.0000029664.99615.94 [38]

Abstract This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
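A minimal SIFT matching sketch with OpenCV, using the ratio test from the paper; it assumes OpenCV 4.4 or later (where SIFT is in the main package), and the file names are placeholders:

```python
import cv2

img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep a match only if it is clearly better than the second-best candidate.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} distinctive matches")
```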


46. R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003. ISBN: 0521623049 [39]


47. T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, July 2002. doi:10.1109/TPAMI.2002.1017620 [40]

Abstract Presents a framework for three-dimensional model-based tracking. Graphical rendering technology is combined with constrained active contour tracking to create a robust wire-frame tracking system. It operates in real time at video frame rate (25 Hz) on standard hardware. It is based on an internal CAD model of the object to be tracked which is rendered using a binary space partition tree to perform hidden line removal. A Lie group formalism is used to cast the motion computation problem into simple geometric terms so that tracking becomes a simple optimization problem solved by means of iterative reweighted least squares. A visual servoing system constructed using this framework is presented together with results showing the accuracy of the tracker. The paper then describes how this tracking system has been extended to provide a general framework for tracking in complex configurations. The adjoint representation of the group is used to transform measurements into common coordinate frames. The constraints are then imposed by means of Lagrange multipliers. Results from a number of experiments performed using this framework are presented and discussed.


48. M.A. Ruzon, C. Tomasi. Edge, Junction, and Corner Detection Using Color Distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 23, Issue 11, pp. 1281–1295, 2001. DOI:10.1109/34.969118. [41]

Abstract For over 30 years researchers in computer vision have been proposing new methods for performing low-level vision tasks such as detecting edges and corners. One key element shared by most methods is that they represent local image neighborhoods as constant in color or intensity with deviations modeled as noise. Due to computational considerations that encourage the use of small neighborhoods where this assumption holds, these methods remain popular. This research models a neighborhood as a distribution of colors. Our goal is to show that the increase in accuracy of this representation translates into higher-quality results for low-level vision tasks on difficult, natural images, especially as neighborhood size increases. We emphasize large neighborhoods because small ones often do not contain enough information. We emphasize color because it subsumes gray scale as an image range and because it is the dominant form of human perception. We discuss distributions in the context of detecting edges, corners, and junctions, and we show results for each.


Before 2000

49. M. Isard, A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. International Journal of Computer Vision, August 1998, Volume 29, Issue 1, pp. 5–28. doi:10.1023/A:1008078328650 [42]

Abstract The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses "factored sampling", previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.


50. R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4, pp. 341–359. doi:10.1023/A:1008202821328. [43]

Abstract A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive test bed, it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
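SciPy ships an implementation of this algorithm; a minimal sketch on a standard non-convex test function (the Rastrigin function, chosen here purely for illustration):

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    """Multimodal, non-convex test function with a global minimum of 0 at the origin."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

result = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 3, seed=1)
print(result.x, result.fun)  # converges near the global minimum at the origin
```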


51. M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995. [44]

Abstract We describe an object tracker robust to a number of ambient conditions which often severely degrade performance, for example partial occlusion. The robustness is achieved by describing the object as a set of related geometric primitives (lines, conics, etc.), and using redundant measurements to facilitate the detection of outliers. This improves the overall tracking performance. Results are given for frame rate tracking on image sequences.


52. C. Harris, C. Stennett. RAPID – A Video-Rate Object Tracker. British Machine Vision Conference, 1990. [45]

Abstract RAPID (Real-time Attitude and Position Determination) is a real-time model-based tracking algorithm for a known three-dimensional object executing arbitrary motion and viewed by a single video camera. The 3D object model consists of selected control points on high contrast edges, which can be surface markings, folds or profile edges. The use of either an alpha-beta tracker or a Kalman filter permits large object motion to be tracked and produces more stable tracking results. The RAPID tracker runs at video-rate on a standard minicomputer equipped with an image capture board.


53. J.S.J. Lee, R.M. Haralick, L.G. Shapiro. Morphologic Edge Detection. IFAC (International Federation of Automatic Control) Proceedings Volumes, Volume 19, Issue 9, pp. 7–14, 1986. DOI:10.1016/S1474-6670(17)57504-7. [46]

Abstract Edge operators based on grayscale morphologic operations are introduced. These operators can be efficiently implemented in near real time machine vision systems which have special hardware support for grayscale morphologic operations. The simplest morphologic edge detectors are the dilation residue and erosion residue operators. The underlying motivation for these is discussed. Finally, the blur minimum morphologic edge operator is defined. Its inherent noise sensitivity is less than the dilation or the erosion residue operators.

Some experimental results are provided to show the validity of the blur minimum morphologic operator. When compared with the cubic facet second derivative zero-crossing edge operator, the results show that they have similar performance. The advantage of the blur-minimum edge operator is that it is less computationally complex than the facet edge operator.
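A minimal sketch of the dilation-residue, erosion-residue, and blur-minimum operators with OpenCV (the file name is a placeholder):

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
kernel = np.ones((3, 3), np.uint8)

dilation_residue = cv2.dilate(img, kernel) - img  # highlights the bright side of edges
erosion_residue = img - cv2.erode(img, kernel)    # highlights the dark side of edges

# Blur-minimum operator: blur first, then take the pointwise minimum of both residues.
blurred = cv2.GaussianBlur(img, (3, 3), 0)
blur_min = np.minimum(cv2.dilate(blurred, kernel) - blurred,
                      blurred - cv2.erode(blurred, kernel))
```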


54. J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume PAMI-8, Issue 6, Nov. 1986. doi:10.1109/TPAMI.1986.4767851 [47]

Abstract This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle, we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally, we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
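This detector, edges marked at gradient-magnitude maxima of a Gaussian-smoothed image with hysteresis thresholding, is what OpenCV's Canny implementation provides. A minimal sketch (file name and thresholds are placeholders):

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(img, (5, 5), 1.4)  # Gaussian smoothing stage
edges = cv2.Canny(smoothed, 50, 150)           # low/high hysteresis thresholds
cv2.imwrite("edges.png", edges)
```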


55. Pierre Alfred Leon Ciraud. A method and apparatus for manufacturing objects made of any arbitrary meltable material. German patent application DE2263777A1, December 28, 1971. [48]

This is the first patent in the field of additive manufacturing. Despite the long evolution of additive manufacturing since this first patent in 1971 [..], the technology still challenges researchers from the perspectives of material structure, mechanical properties, and computational efficiency.



56. M. Lowney, A.S. Raj. Model Based Tracking for Augmented Reality on Mobile Devices. Dept. of Electrical Engineering, Stanford University.


References

  1. Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002.
  2. Nuchitprasitchai, S., Roggemann, M. & Pearce, J.M. Factors effecting real-time optical monitoring of fused filament 3D printing. Progress in Additive Manufacturing Journal (2017), Volume 2, Issue 3, pp 133–149. DOI:10.1007/s40964-017-0027-x.
  3. Wijnen, B., Anzalone, G.C., Haselhuhn, A.S., Sanders, P.G. and Pearce, J.M., 2016. Free and Open-source Control Software for 3-D Motion and Processing. Journal of Open Research Software, 4(1), p.e2. DOI: http://doi.org/10.5334/jors.78
  4. G. C. Anzalone, B. Wijnen, J. M. Pearce. Multi-material additive and subtractive prosumer digital fabrication with a free and open-source convertible delta RepRap 3-D printer. Rapid Prototyping Journal, Vol. 21 Issue: 5, pp.506-519, 2015. DOI:10.1108/RPJ-09-2014-0113
  5. SONY IMX322 Datasheet. SONY, 2019 (accessed on 16 May 2019).
  6. Marlin Open-Source RepRap Firmware (accessed on 16 May 2019).
  7. OpenCV (Open Source Computer Vision Library). Available online: https://opencv.org/ (accessed on 20 May 2019).
  8. Python PyQt (A Python binding of the cross-platform C++ framework for GUI applications development). Available online: https://wiki.python.org/moin/PyQt (accessed on 20 May 2019).
  9. Python Numpy (A library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays). Available online: https://www.numpy.org/ (accessed on 20 May 2019).
  10. ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks. Available online: https://visp.inria.fr/ (accessed on 20 May 2019).
  11. L. Liu, J. Chen, P. Fieguth, G. Zhao, R. Chellappa, M. Pietikäinen. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. International Journal of Computer Vision, Volume 127, Issue 1, pp 74–109, 2019. DOI: 10.1007/s11263-018-1125-z.
  12. M. Tang, D. Marin, I. B. Ayed, Y. Boykov. Kernel Cuts: Kernel and Spectral Clustering Meet Regularization. International Journal of Computer Vision, Volume 127, Issue 5, pp. 477–511, 2019. DOI: 10.1007/s11263-018-1115-1.
  13. Wohlers Report. Annual worldwide progress report in 3D Printing, 2018.
  14. U. Delli, S. Chang. Automated processes monitoring in 3D printing using supervised machine learning. Procedia Manufacturing 26 (2018), pp. 865–870. doi:10.1016/j.promfg.2018.07.111
  15. L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing 19 (2018), pp. 114–126. doi:10.1016/j.addma.2017.11.009
  16. L. Zhong, L. Zhang. A Robust Monocular 3D Object Tracking Method Combining Statistical and Photometric Constraints. International Journal of Computer Vision, 2018. doi:10.1007/s11263-018-1119-x
  17. B. Wang, F. Zhong, X. Qin. Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking. CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32, 2017. doi:10.1145/3095140.3095172
  18. K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. arXiv preprint, May 2017.
  19. R.H.M. Condori, L.M. Romualdo, O.M. Bruno, P.H.C. Luz. Comparison Between Traditional Texture Methods and Deep Learning Descriptors for Detection of Nitrogen Deficiency in Maize Crops. 2017 Workshop of Computer Vision (WVC). doi:10.1109/WVC.2017.00009
  20. F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY’s Global 3D printing Report, 2016.
  21. J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972, pp. 17-28. Springer, Cham, 2016. doi:10.1007/978-3-319-46418-3_2
  22. S. Xie, Z. Tu. Holistically-Nested Edge Detection. 2015 IEEE International Conference on Computer Vision (ICCV). doi:10.1109/ICCV.2015.164
  23. O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science, vol 9351, pp. 234-241. Springer, Cham, 2015. doi:10.1007/978-3-319-24574-4_28
  24. A. Kadambi, V. Taamazyan, B. Shi, R. Raskar. Polarized 3D: High-Quality Depth Sensing with Polarization Cues. 2015 IEEE International Conference on Computer Vision (ICCV). doi:10.1109/ICCV.2015.385
  25. P. Sitthi-Amorn, J.E. Ramos, Y. Wang, et al. MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing. ACM Transactions on Graphics (TOG), Volume 34, Issue 4, Article No. 129, 2015. doi:10.1145/2766962
  26. A. Crivellaro, V. Lepetit. Robust 3D Tracking with Descriptor Fields. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14). doi:10.1109/CVPR.2014.436
  27. M. A. El-Sayed, Y. A. Estaitia, M. A. Khafagy. Automated Edge Detection Using Convolutional Neural Network. (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 4, No. 10, pp. 11-17, 2013. doi:10.14569/IJACSA.2013.041003
  28. A. Karpathy, S. Miller, L. Fei-Fei. Object discovery in 3D scenes via shape analysis. 2013 IEEE International Conference on Robotics and Automation. doi:10.1109/ICRA.2013.6630857
  29. C. Choi, H. Christensen. 3D Textureless Object Detection and Tracking: An Edge-based Approach. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065
  30. M. Oikawa, M. Fujisawa. Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects. 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3
  31. G.Q. Jin, W.D. Li, C.F. Tsai, L. Wang. Adaptive tool-path generation of rapid prototyping for complex product models. Journal of Manufacturing Systems, Volume 30, Issue 3, pp. 154-164, 2011. doi:10.1016/j.jmsy.2011.05.007
  32. W. Zeng, D. Samaras, D. Gu. Ricci Flow for 3D Shape Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 32, Issue 4, pp. 662-677, 2010. doi:10.1109/TPAMI.2009.201
  33. J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62
  34. Gordon I., Lowe D.G. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4
  35. H. Wuest, F. Vial, D. Stricker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8
  36. K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2005. doi:10.1109/ICCV.2005.239
  37. I. Gordon, D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. Third IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004), 2004.
  38. D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004) vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94
  39. R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003. ISBN: 0521623049
  40. T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, i. 7, July 2002. doi:10.1109/TPAMI.2002.1017620
  41. M.A. Ruzon, C. Tomasi. Edge, Junction, and Corner Detection Using Color Distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 23, Issue 11, pp. 1281-1295, 2001. doi:10.1109/34.969118
  42. M. Isard and A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. International Journal of Computer Vision, August 1998, Volume 29, Issue 1, pp. 5–28. doi:10.1023/A:100807832
  43. R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4, pp. 341–359. doi:10.1023/A:100820282
  44. M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995.
  45. C. Harris and C. Stennett. RAPID – A Video-Rate Object Tracker. British Machine Vision Conference, 1990.
  46. J.S.J. Lee, R.M. Haralick, L.G. Shapiro. Morphologic Edge Detection. IFAC (International Federation of Automatic Control) Proceedings Volumes, Volume 19, Issue 9, pp. 7-14, 1986. doi:10.1016/S1474-6670(17)57504-7
  47. J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume PAMI-8, Issue 6, Nov. 1986. doi:10.1109/TPAMI.1986.4767851
  48. Pierre Alfred Leon Ciraud. A method and apparatus for manufacturing objects made of any meltable material. German patent application DE2263777A1. December 28, 1971.