Literature Review

Aliaksei Petsiuk (talk) 14:32, 20 May 2019 (PDT)

Target Journal | Publisher | Impact Factor | Link
Pattern Recognition | Elsevier | 3.962 | https://www.journals.elsevier.com/pattern-recognition
Computer-Aided Design | Elsevier | 2.947 | https://www.journals.elsevier.com/computer-aided-design

MOST Papers

[1] Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002. [1]

Abstract Prosumer (producing consumer)-based desktop additive manufacturing has been enabled by the recent radical reduction in 3-D printer capital costs created by the open-source release of the self-replicating rapid prototype (RepRap). To continue this success, there have been some efforts to improve reliability, which are either too expensive or lacked automation. A promising method to improve reliability is to use computer vision, although the success rates are still too low for widespread use. To overcome these challenges an open source low-cost reliable real-time optimal monitoring platform for 3-D printing from double cameras is presented here. This error detection system is implemented with low-cost web cameras and covers 360 degrees around the printed object from three different perspectives. The algorithm is developed in Python and run on a Raspberry Pi3 mini-computer to reduce costs. For 3-D printing monitoring in three different perspectives, the systems are tested with four different 3-D object geometries for normal operation and failure modes. This system is tested with two different techniques in the image pre-processing step: SIFT and RANSAC rescale and rectification, and non-rescale and rectification. The error calculations were determined from the horizontal and vertical magnitude methods of 3-D reconstruction images. The non-rescale and rectification technique successfully detects the normal printing and failure state for all models with 100% accuracy, which is better than the single camera set up only. The computation time of the non-rescale and rectification technique is two times faster than the SIFT and RANSAC rescale and rectification technique.
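
The two pre-processing variants compared in this paper rest on standard OpenCV building blocks. Below is a minimal sketch of the SIFT and RANSAC rescale-and-rectification idea, assuming two views of the printed part saved as left.png and right.png (hypothetical file names) and an OpenCV build that includes SIFT; it illustrates the technique, not the authors' exact code.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors in both views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep unambiguous matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate a homography with RANSAC and warp one view onto the other.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
rectified = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
```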

Notes

  • ---
  • ---
  • ---


[2] Delta 3D Printer (https://www.appropedia.org/Delta_Build_Overview:MOST)

The MOST Delta printer is a RepRap [..] derived from the Rostock printer [..]. Print resolution in the x-y plane is a function of the distance from the apexes, so it changes with the distance from the center of the build platform. Printer resolution in the z-direction always equals that of the carriages (100 steps/mm, where 10 μm is the smallest step), regardless of location. Because of the geometry, the error in Z is at most 5 μm; i.e., there are no planes spaced 10 μm apart with unreachable space in between; rather, the nozzle can reach only one specific point within that 10 μm range, depending on the X and Y location. The MOST Delta (12-tooth T5 belt) operates at 53.33 steps/mm, for a z-precision of about 19 μm.
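
As a quick check, the quoted z-precision follows directly from the carriage step rates:

\[
\Delta z = \frac{1}{100\ \text{steps/mm}} = 10\ \mu\text{m},
\qquad
\Delta z = \frac{1}{53.33\ \text{steps/mm}} \approx 0.0188\ \text{mm} \approx 19\ \mu\text{m}.
\]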

A delta-type FFF printer with a cylindrical working volume 250 mm in diameter and 240 mm high was used in our experiments. It fuses 1.75 mm polylactic acid (PLA) filament at a temperature of 210 °C through a nozzle with a 0.4 mm diameter. The printer is driven by a RAMPS 1.4 print controller with an integrated SD card reader. Its working area is under dual surveillance: the main camera provides a rectified top view, and the secondary camera captures a side view of the working zone.

A visual marker plate located on top of the print bed allows us to determine the spatial position of the working area relative to the cameras. The plate has a 9 cm² printing area; seven high-contrast square markers (1.5 cm² and 1 cm²) form a reference frame for the main camera, and four 1.5 cm² markers are used to determine the relative position of the side camera.
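
A minimal sketch of how such a marker plate can yield a camera pose, using OpenCV's solvePnP; the corner coordinates and intrinsics below are hypothetical placeholders, not our calibration data.

```python
import cv2
import numpy as np

# 3-D positions of four marker corners on the plate, in mm (plate plane z = 0).
object_pts = np.array([[0, 0, 0], [15, 0, 0], [15, 15, 0], [0, 15, 0]],
                      dtype=np.float32)
# Corresponding pixel coordinates detected in the camera image (hypothetical).
image_pts = np.array([[412, 305], [598, 301], [604, 488], [409, 492]],
                     dtype=np.float32)

# Camera intrinsics from a prior calibration (placeholder values).
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
# rvec/tvec give the plate-to-camera transform used to rectify the top view.
```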


[3] Athena II 3D Printer (https://www.appropedia.org/AthenaII)



2019

[4] RepRap (https://reprap.org/wiki/RepRap)


[5] Rostock (delta robot 3D printer) (https://www.thingiverse.com/thing:17175)


[6] SONY IMX322 Datasheet (accessed on 16 May 2019). [2]

The main camera is based on the 1/2.9-inch (6.23 mm diagonal) Sony IMX322 CMOS image sensor [..]. The sensor consists of 2.24 M square 2.8 × 2.8 μm pixels, with 2000 pixels per horizontal line and 1121 pixels per vertical line. The IMX322 has a Bayer RGBG color filter pattern (50% green, 25% red, and 25% blue) with red-to-green and blue-to-green sensitivity ratios of 0.46–0.61 and 0.34–0.49, respectively. In operating mode, the camera captures 1280 × 720 pixel frames at a frequency of 30 Hz.
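
A minimal sketch of configuring this capture mode through OpenCV, assuming the camera enumerates as video device 0 over UVC/V4L2.

```python
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)   # the 1280x720 mode described above
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)             # 30 Hz frame rate

ok, frame = cap.read()  # one BGR frame (Bayer data already demosaiced)
cap.release()
```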


[7] Marlin Open-Source RepRap Firmware (accessed on 16 May 2019). [3]

The developed computer vision program was synchronized with the printer, which is driven by Marlin, an open-source firmware [..]. We developed a special “A-family” of G-Code commands by modifying the process_parsed_command() function in the MarlinMain.cpp file. This allows the firmware to process a special A0 code, which sends a “$” symbol to the computer vision program after each printed layer. The main computer program, in turn, sends a G-Code injection after receiving each “$” sign. These G-Code commands pause the print, move the extruder aside for a short period of time, and trigger the shutters of the cameras, which capture top- and side-view images of the completed layer without any visual obstruction. Moreover, after each layer, the analytical projective plane in the image shifts according to the layer number, so the rectified image frame remains perpendicular to the optical axis of the camera.
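
A minimal sketch of the host-side handshake, assuming pyserial is installed and the printer is on /dev/ttyUSB0; the pause/park commands and the capture_layer() helper are illustrative placeholders, not the verbatim injection described above.

```python
import serial

printer = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

def capture_layer():
    # Placeholder: trigger the top- and side-view cameras here
    # (e.g., grab one frame from each cv2.VideoCapture device).
    pass

def on_layer_done():
    # Park the head away from the part, capture the layer, then resume.
    printer.write(b"G91\n")               # relative positioning
    printer.write(b"G1 Z5 F300\n")        # lift the nozzle clear of the layer
    printer.write(b"G1 X60 Y60 F3000\n")  # move the extruder out of view
    printer.write(b"G90\n")               # back to absolute positioning
    capture_layer()

while True:
    line = printer.readline()             # firmware sends "$" after each layer
    if b"$" in line:
        on_layer_done()
```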


[8] OpenCV (Open Source Computer Vision Library) (accessed on 20 May 2019). [4]

By utilizing a rich set of image processing techniques, it becomes possible to segment meaningful contour and texture regions with their exact three-dimensional spatial references. At the end of the printing process, a layered set of images is available, providing additional data for further analysis of any cross section of the printed object.
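
As an illustration of such layer-wise segmentation, a minimal sketch assuming each completed layer is saved as a rectified top-view image; the file name layer_042.png and the area threshold are hypothetical.

```python
import cv2

img = cv2.imread("layer_042.png", cv2.IMREAD_GRAYSCALE)

# Smooth, binarize with Otsu's threshold, and extract external contours.
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Keep only regions large enough to be printed geometry rather than noise.
regions = [c for c in contours if cv2.contourArea(c) > 100.0]
```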


x. Python PyQt (a Python binding of the cross-platform Qt C++ framework, used for developing GUI applications)

x. Python Numpy (a library to support large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays)

x. Python Imaging Library (PIL)

x. Python Scikit-image (a collection of algorithms for image processing)

x. Python Matplotlib (a plotting library)


5. ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks (accessed on 20 May 2019). [5]


2018

6. Wohlers Report. Annual worldwide progress report in 3D Printing, 2018. [6]

According to the Wohlers Report [..] and EY’s Global 3D printing Report [..], additive manufacturing is one of the most disruptive technologies of our time, with applications in the automotive and aerospace industries, medical equipment development, and education. This technology can increase productivity, simplify fabrication processes, and minimize limitations on geometric shapes.


7. U. Delli, S. Chang. Automated processes monitoring in 3D printing using supervised machine learning. Procedia Manufacturing 26 (2018) 865–870, doi.org/10.1016/j.promfg.2018.07.111 [7]

Abstract Quality monitoring is still a big challenge in additive manufacturing, popularly known as 3D printing. Detection of defects during the printing process will help eliminate the waste of material and time. Defect detection during the initial stages of printing may generate an alert to either pause or stop the printing process so that corrective measures can be taken to prevent the need to reprint the parts. This paper proposes a method to automatically assess the quality of 3D printed parts with the integration of a camera, image processing, and supervised machine learning. Images of semi-finished parts are taken at several critical stages of the printing process according to the part geometry. A machine learning method, support vector machine (SVM), is proposed to classify the parts into either ‘good’ or ‘defective’ category. Parts using ABS and PLA materials were printed to demonstrate the proposed framework. A numerical example is provided to demonstrate how the proposed method works.


8. L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing 19 (2018) 114–126. doi.org/10.1016/j.addma.2017.11.009 [8]

Abstract Despite the rapid adoption of laser powder bed fusion (LPBF) Additive Manufacturing by industry, current processes remain largely open-loop, with limited real-time monitoring capabilities. While some machines offer powder bed visualization during builds, they lack automated analysis capability. This work presents an approach for in-situ monitoring and analysis of powder bed images with the potential to become a component of a real-time control system in an LPBF machine. Specifically, a computer vision algorithm is used to automatically detect and classify anomalies that occur during the powder spreading stage of the process. Anomaly detection and classification are implemented using an unsupervised machine learning algorithm, operating on a moderately-sized training database of image patches. The performance of the final algorithm is evaluated, and its usefulness as a standalone software package is demonstrated with several case studies.


2017

9. B. Wang, F. Zhong, X. Qin. Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking. CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. doi:10.1145/3095140.3095172 [9]

Abstract This paper presents a monocular model-based 3D tracking approach for textureless objects. Instead of explicitly searching for 3D-2D correspondences as previous methods, which unavoidably generates individual outlier matches, we aim to minimize the holistic distance between the predicted object contour and the query image edges. We propose a method that can directly solve 3D pose parameters in unsegmented edge distance field. We derive the differentials of edge matching distance with respect to the pose parameters, and search the optimal 3D pose parameters using standard gradient-based non-linear optimization techniques. To avoid being trapped in local minima and to deal with potential large inter-frame motions, a particle filtering process with a first order autoregressive state dynamics is exploited. Occlusions are handled by a robust estimator. The effectiveness of our approach is demonstrated using comparative experiments on real image sequences with occlusions, large motions and cluttered backgrounds.


10. K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. Published in ArXiv, May 2017. [10]

Abstract During the last decade, additive manufacturing has become increasingly popular for rapid prototyping, but has remained relatively marginal beyond the scope of prototyping when it comes to applications with tight tolerance specifications, such as in aerospace. Despite a strong desire to supplant many aerospace structures with printed builds, additive manufacturing has largely remained limited to prototyping, tooling, fixtures, and non-critical components. There are numerous fundamental challenges inherent to additive processing to be addressed before this promise is realized. One ubiquitous challenge across all AM motifs is to develop processing-property relationships through precise, in situ monitoring coupled with formal methods and feedback control. We suggest a significant component of this vision is a set of semantic layers within 3D printing files relevant to the desired material specifications. This semantic layer provides the feedback laws of the control system, which then evaluates the component during processing and intelligently evolves the build parameters within boundaries defined by semantic specifications. This evaluation and correction loop requires on-the-fly coupling of finite element analysis and topology optimization. The required parameters for this analysis are all extracted from the semantic layer and can be modified in situ to satisfy the global specifications. Therefore, the representation of what is printed changes during the printing process to compensate for eventual imprecision or drift arising during the manufacturing process.


2016

11. F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY’s Global 3D printing Report. 2016. [11]

12. J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972, pp. 17–28. Springer, Cham. doi:10.1007/978-3-319-46418-3_2 [12]

Abstract In the paper the method of “blind” quality assessment of 3D prints based on texture analysis using the GLCM and chosen Haralick features is discussed. As the proposed approach has been verified using the images obtained by scanning the 3D printed plates, some dependencies related to the transparency of filaments may be noticed. Furthermore, considering the influence of lighting conditions, some other experiments have been made using the images acquired by a camera mounted on a 3D printer. Due to the influence of lighting conditions on the obtained images in comparison to the results of scanning, some modifications of the method have also been proposed leading to promising results allowing further extensions of our approach to no-reference quality assessment of 3D prints. Achieved results confirm the usefulness of the proposed approach for live monitoring of the progress of 3D printing process and the quality of 3D prints.
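
The GLCM-based texture measures this paper relies on are available in scikit-image (listed above among the Python libraries). A minimal sketch, assuming a grayscale photo of the printed surface saved as print_photo.png (hypothetical name) and scikit-image >= 0.19, where the functions are spelled graycomatrix/graycoprops.

```python
import numpy as np
from skimage import io
from skimage.feature import graycomatrix, graycoprops

# Load the surface image and quantize it to 8-bit gray levels.
img = (io.imread("print_photo.png", as_gray=True) * 255).astype(np.uint8)

# Co-occurrence matrix for 4 directions at pixel distance 1.
glcm = graycomatrix(img, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# A few Haralick-style properties, averaged over the four directions.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```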



2015

13. O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science, vol 9351, pp. 234–241. Springer, Cham. doi:10.1007/978-3-319-24574-4_28 [13]

Abstract There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.
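
As a structural illustration only, here is a toy two-level network in PyTorch showing the contracting path, the expanding path, and one skip connection the abstract describes; this is an assumption-laden miniature, not the authors' architecture.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = block(1, 16)                             # contracting path
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)   # expanding path
        self.dec = block(32, 16)   # 32 channels = upsampled 16 + skip 16
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([u, e], dim=1)))  # skip connection

logits = TinyUNet()(torch.randn(1, 1, 64, 64))  # -> shape (1, 2, 64, 64)
```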


2014

14. A. Crivellaro, V. Lepetit. Robust 3D Tracking with Descriptor Fields. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14). doi:10.1109/CVPR.2014.436 [14]

Abstract We introduce a method that can register challenging images from specular and poorly textured 3D environments, on which previous approaches fail. We assume that a small set of reference images of the environment and a partial 3D model are available. Like previous approaches, we register the input images by aligning them with one of the reference images using the 3D information. However, these approaches typically rely on the pixel intensities for the alignment, which is prone to fail in presence of specularities or in absence of texture. Our main contribution is an efficient novel local descriptor that we use to describe each image location. We show that we can rely on this descriptor in place of the intensities to significantly improve the alignment robustness at a minor increase of the computational cost, and we analyze the reasons behind the success of our descriptor.



2012

15. C. Choi, H. Christensen. 3D Textureless Object Detection and Tracking: An Edge-based Approach. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065 [15]

Abstract This paper presents an approach to textureless object detection and tracking of the 3D pose. Our detection and tracking schemes are coherently integrated in a particle filtering framework on the special Euclidean group, SE(3), in which the visual tracking problem is tackled by maintaining multiple hypotheses of the object pose. For textureless object detection, an efficient chamfer matching is employed so that a set of coarse pose hypotheses is estimated from the matching between 2D edge templates of an object and a query image. Particles are then initialized from the coarse pose hypotheses by randomly drawing based on costs of the matching. To ensure the initialized particles are at or close to the global optimum, an annealing process is performed after the initialization. While a standard edge-based tracking is employed after the annealed initialization, we employ a refinement process to establish improved correspondences between projected edge points from the object model and edge points from an input image. Comparative results for several image sequences with clutter are shown to validate the effectiveness of our approach.


16. M. Oikawa, M. Fujisawa. Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects. 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3 [16]

Abstract This paper addresses the problem of tracking textureless rigid curved objects. A common approach uses polygonal meshes to represent curved objects and use them inside an edge-based tracking system. However, in order to accurately recover their shape, high quality meshes are required, creating a trade-off between computational efficiency and tracking accuracy. To solve this issue, we suggest the use of quadrics for each patch in the mesh to give local approximations of the object shape. The novelty of our research lies in using curves that represent the quadrics projection in the current viewpoint for distance evaluation instead of using the standard method which compares edges from mesh and detected edges in the video image. This representation allows to considerably reduce the level of detail of the polygonal mesh and led us to the development of a novel method for evaluating the distance between projected and detected features. The experiments results show the comparison between our approach and the traditional method using sparse and dense meshes. They are presented using both synthetic and real image data.


2010-2000

17. J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62 [17]

Abstract In this paper we present a real-time 3D object tracking algorithm based on edges and using a single pre-calibrated camera. During the tracking process, the algorithm is continuously projecting the 3D model to the current frame by using the pose estimated in the previous frame. Once projected, some control points are generated along the visible edges of the object. The next pose is estimated by minimizing the distances between the control points and the edges detected in the image.


18. Gordon I., Lowe D.G. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4 [18]

Abstract Many applications of 3D object recognition, such as augmented reality or robotic manipulation, require an accurate solution for the 3D pose of the recognized objects. This is best accomplished by building a metrically accurate 3D model of the object and all its feature locations, and then fitting this model to features detected in new images. In this chapter, we describe a system for constructing 3D metric models from multiple images taken with an uncalibrated handheld camera, recognizing these models in new images, and precisely solving for object pose. This is demonstrated in an augmented reality application where objects must be recognized, tracked, and superimposed on new images taken from arbitrary viewpoints without perceptible jitter. This approach not only provides for accurate pose, but also allows for integration of features from multiple training images into a single model that provides for more reliable recognition.


19. H. Wuest, F. Vial, D. Stricker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8 [19]

Abstract We present a real-time model-based line tracking approach with adaptive learning of image edge features that can handle partial occlusion and illumination changes. A CAD (VRML) model of the object to track is needed. First, the visible edges of the model with respect to the camera pose estimate are sorted out by a visibility test performed on standard graphics hardware. For every sample point of every projected visible 3D model line a search for gradient maxima in the image is then carried out in a direction perpendicular to that line. Multiple hypotheses of these maxima are considered as putative matches. The camera pose is updated by minimizing the distances between the projection of all sample points of the visible 3D model lines and the most likely matches found in the image. The state of every edge's visual properties is updated after each successful camera pose estimation. We evaluated the algorithm and showed the improvements compared to other tracking approaches.


20. K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2005. doi:10.1109/ICCV.2005.239 [20]

Abstract Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences, generally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This "pyramid match" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches.


21. I. Gordon and D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004), 2004. [21]

Abstract We present a complete system architecture for fully automated markerless augmented reality (AR). The system constructs a sparse metric model of the real-world environment, provides interactive means for specifying the pose of a virtual object and performs model-based camera tracking with visually pleasing augmentation results. Our approach does not require camera pre-calibration, prior knowledge of scene geometry, manual initialization of the tracker or placement of special markers. Robust tracking in the presence of occlusions and scene changes is achieved by using highly distinctive natural features to establish image correspondences.


22. D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004) vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94 [22]

Abstract This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.


23. R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003. ISBN: 0521623049 [23]


24. T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, i. 7, July 2002. doi:10.1109/TPAMI.2002.1017620 [24]

Abstract Presents a framework for three-dimensional model-based tracking. Graphical rendering technology is combined with constrained active contour tracking to create a robust wire-frame tracking system. It operates in real time at video frame rate (25 Hz) on standard hardware. It is based on an internal CAD model of the object to be tracked which is rendered using a binary space partition tree to perform hidden line removal. A Lie group formalism is used to cast the motion computation problem into simple geometric terms so that tracking becomes a simple optimization problem solved by means of iterative reweighted least squares. A visual servoing system constructed using this framework is presented together with results showing the accuracy of the tracker. The paper then describes how this tracking system has been extended to provide a general framework for tracking in complex configurations. The adjoint representation of the group is used to transform measurements into common coordinate frames. The constraints are then imposed by means of Lagrange multipliers. Results from a number of experiments performed using this framework are presented and discussed.


Before 2000

25. M. Isard and A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. In International Journal of Computer Vision. August 1998, Volume 29, Issue 1, pp 5–28. doi:10.1023/A:100807832 [25]

Abstract The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses “factored sampling”, previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.
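
A minimal sketch of the factored-sampling loop on a 1-D toy state; all dynamics and noise parameters here are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.normal(0.0, 1.0, N)   # initial sample set
weights = np.full(N, 1.0 / N)

def step(observation, particles, weights):
    # 1. Factored sampling: draw particles in proportion to their weights.
    idx = rng.choice(N, size=N, p=weights)
    particles = particles[idx]
    # 2. Propagate through a first-order autoregressive dynamical model.
    particles = 0.95 * particles + rng.normal(0.0, 0.3, N)
    # 3. Reweight with the observation likelihood (Gaussian, sigma = 0.5).
    w = np.exp(-0.5 * ((observation - particles) / 0.5) ** 2)
    return particles, w / w.sum()

for z in [0.2, 0.5, 0.9, 1.1]:          # a short synthetic measurement track
    particles, weights = step(z, particles, weights)
estimate = np.sum(weights * particles)  # posterior mean of the tracked state
```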


26. R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4. pp 341–359. doi:10.1023/A:100820282. [26]

Abstract A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive test bed, it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
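
The algorithm is available off the shelf in SciPy. A minimal sketch, minimizing the standard 2-D Rosenbrock test function; any objective and bounds could be substituted.

```python
from scipy.optimize import differential_evolution, rosen

# Storn & Price's differential evolution over a box-constrained search space.
result = differential_evolution(rosen, bounds=[(-5, 5), (-5, 5)], seed=1)
print(result.x, result.fun)  # converges to the global minimum at (1, 1)
```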


27. M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995. [27]

Abstract We describe an object tracker robust to a number of ambient conditions which often severely degrade performance, for example partial occlusion. The robustness is achieved by describing the object as a set of related geometric primitives (lines, conics, etc.), and using redundant measurements to facilitate the detection of outliers. This improves the overall tracking performance. Results are given for frame rate tracking on image sequences.


28. C. Harris and C. Stennett. RAPID – A Video-Rate Object Tracker. British Machine Vision Conference, 1990. [28]

Abstract RAPID (Real-time Attitude and Position Determination) is a real-time model-based tracking algorithm for a known three-dimensional object executing arbitrary motion and viewed by a single video camera. The 3D object model consists of selected control points on high contrast edges, which can be surface markings, folds or profile edges. The use of either an alpha-beta tracker or a Kalman filter permits large object motion to be tracked and produces more stable tracking results. The RAPID tracker runs at video-rate on a standard minicomputer equipped with an image capture board.


29. J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: PAMI-8, Issue: 6, Nov. 1986). doi:10.1109/TPAMI.1986.4767851 [29]

Abstract This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle, we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally, we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
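
The detector's common implementation is a single OpenCV call. A minimal sketch, assuming an input image frame.png (hypothetical name); the two thresholds are the hysteresis levels for weak and strong edges.

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# Gaussian smoothing, gradient-magnitude maxima, and hysteresis thresholding.
edges = cv2.Canny(img, threshold1=50, threshold2=150)
cv2.imwrite("frame_edges.png", edges)
```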


30. Pierre Alfred Leon Ciraud. A method and an apparatus for manufacturing objects from any meltable material. German patent application DE2263777A1. December 28, 1971. [30]

This is the first patent in the field of additive manufacturing. Despite the long evolution of additive manufacturing, starting from this first patent in 1971 [..], the technology still challenges researchers from the perspectives of material structure, mechanical properties, and computational efficiency.



? M. Lowney, A.S. Raj. Model Based Tracking for Augmented Reality on Mobile Devices. Dept. of Electrical Engineering, Stanford University.


References

  1. Nuchitprasitchai, S., Roggemann, M.C. & Pearce, J.M. Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views. J. Manuf. Mater. Process. 2017, 1(1), 2; doi:10.3390/jmmp1010002.
  2. SONY IMX322 Datasheet. SONY, 2019 (accessed on 16 May 2019).
  3. Marlin Open-Source RepRap Firmware (accessed on 16 May 2019).
  4. OpenCV (Open Source Computer Vision Library). Available online: https://opencv.org/ (accessed on 20 May 2019).
  5. ViSP (Visual Servoing Platform), a modular cross-platform library for visual servoing tasks. Available online: https://visp.inria.fr/ (accessed on 20 May 2019).
  6. Wohlers Report. Annual worldwide progress report in 3D Printing, 2018.
  7. U. Delli, S. Chang. Automated processes monitoring in 3D printing using supervised machine learning. Procedia Manufacturing 26 (2018) 865–870, doi.org/10.1016/j.promfg.2018.07.111
  8. L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing 19 (2018) 114–126. doi.org/10.1016/j.addma.2017.11.009
  9. B. Wang, F. Zhong, X. Qin, Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking, CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. doi:10.1145/3095140.3095172
  10. K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. Published in ArXiv, May 2017.
  11. F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY’s Global 3D printing Report, 2016.
  12. J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972, pp. 17–28. Springer, Cham. doi:10.1007/978-3-319-46418-3_2
  13. O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science, vol 9351, pp. 234–241. Springer, Cham. doi:10.1007/978-3-319-24574-4_28
  14. A. Crivellaro, V. Lepetit. Robust 3D Tracking with Descriptor Fields. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14). doi:10.1109/CVPR.2014.436
  15. C. Choi, H. Christensen. 3D Textureless Object Detection and Tracking: An Edge-based Approach. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012. doi:10.1109/IROS.2012.6386065
  16. M. Oikawa, M. Fujisawa. Local quadrics surface approximation for real-time tracking of textureless 3D rigid curved objects. 14th Symposium on Virtual and Augmented Reality, 2012. doi:10.1109/SVR.2012.3
  17. J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62
  18. Gordon I., Lowe D.G. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4
  19. H. Wuest, F. Vial, D. Stricker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8
  20. K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2005. doi:10.1109/ICCV.2005.239
  21. I. Gordon and D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004), 2004.
  22. D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004) vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94
  23. R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003. ISBN: 0521623049
  24. T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, i. 7, July 2002. doi:10.1109/TPAMI.2002.1017620
  25. M. Isard and A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. In International Journal of Computer Vision. August 1998, Volume 29, Issue 1, pp 5–28. doi:10.1023/A:100807832
  26. R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4. pp 341–359. doi:10.1023/A:100820282.
  27. M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995.
  28. C. Harris and C. Stennett. RAPID – A Video-Rate Object Tracker. British Machine Vision Conference, 1990.
  29. J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: PAMI-8, Issue: 6, Nov. 1986). doi:10.1109/TPAMI.1986.4767851
  30. Pierre Alfred Leon Ciraud. A method and an apparatus for manufacturing objects from any meltable material. German patent application DE2263777A1. December 28, 1971.