2010–2000

Ricci Flow for 3D Shape Analysis

[45] W. Zeng, D. Samaras, D. Gu. Ricci Flow for 3D Shape Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 32, Issue 4, pp. 662-677, 2010. DOI: 10.1109/TPAMI.2009.201.[1]

Abstract Ricci flow is a powerful curvature flow method, which is invariant to rigid motion, scaling, isometric, and conformal deformations. We present the first application of surface Ricci flow in computer vision. Previous methods based on conformal geometry, which only handle 3D shapes with simple topology, are subsumed by the Ricci flow-based method, which handles surfaces with arbitrary topology. We present a general framework for the computation of Ricci flow, which can design a Riemannian metric with any user-defined curvature. The solution to Ricci flow is unique and robust to noise. We provide implementation details for Ricci flow on discrete surfaces of either Euclidean or hyperbolic background geometry. Our Ricci flow-based method can convert all 3D problems into 2D domains and offers a general framework for 3D shape analysis. We demonstrate the applicability of this intrinsic shape representation through standard shape analysis problems, such as 3D shape matching and registration, and shape indexing. Surfaces with large nonrigid anisotropic deformations can be registered using Ricci flow with constraints of feature points and curves. We show how conformal equivalence can be used to index shapes in a 3D surface shape space with the use of Teichmüller space coordinates. Experimental results are shown on 3D face data sets with large expression deformations and on dynamic heart data.
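
For orientation, the discrete surface Ricci flow underlying this framework is conventionally written as a gradient flow on per-vertex conformal factors (standard notation from the discrete Ricci flow literature, not quoted from the paper):

```latex
\frac{du_i(t)}{dt} = \bar{K}_i - K_i(t)
```

Here $u_i$ is the conformal factor at vertex $i$, $K_i$ is its current Gaussian curvature (the angle deficit on a discrete surface), and $\bar{K}_i$ is the user-prescribed target curvature; the flow deforms the metric until the curvature matches the target, which is what allows a metric to be "designed" by specifying curvature.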

Edge-based markerless 3D tracking of rigid objects

[46] J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62[2]

Abstract In this paper we present a real-time 3D object tracking algorithm based on edges and using a single pre-calibrated camera. During the tracking process, the algorithm is continuously projecting the 3D model to the current frame by using the pose estimated in the previous frame. Once projected, some control points are generated along the visible edges of the object. The next pose is estimated by minimizing the distances between the control points and the edges detected in the image.
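
In its simplest 2-D illustration, the pose-update step described above reduces to a least-squares alignment between projected control points and their matched image edge points. The sketch below is illustrative only (the paper estimates a full 3-D camera pose; all names here are invented) and solves the rigid 2-D case in closed form with the Kabsch method:

```python
import numpy as np

def estimate_rigid_2d(control_pts, edge_pts):
    """Least-squares 2D rotation + translation mapping control_pts onto edge_pts (Kabsch)."""
    c0 = control_pts.mean(axis=0)
    c1 = edge_pts.mean(axis=0)
    H = (control_pts - c0).T @ (edge_pts - c1)   # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = c1 - R @ c0
    return R, t

# control points sampled along the projected model edges (toy data)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
matched = pts @ R_true.T + np.array([0.5, -0.2])  # matched edge points in the image
R, t = estimate_rigid_2d(pts, matched)
```

In the actual tracker the minimization is over the 6-DoF camera pose and is iterated each frame, but the inner structure (point-to-edge residuals driving a least-squares update) is the same.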

What and Where: 3D Object Recognition with Accurate Pose

[47] Gordon I., Lowe D.G. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4[3]

Abstract Many applications of 3D object recognition, such as augmented reality or robotic manipulation, require an accurate solution for the 3D pose of the recognized objects. This is best accomplished by building a metrically accurate 3D model of the object and all its feature locations, and then fitting this model to features detected in new images. In this chapter, we describe a system for constructing 3D metric models from multiple images taken with an uncalibrated handheld camera, recognizing these models in new images, and precisely solving for object pose. This is demonstrated in an augmented reality application where objects must be recognized, tracked, and superimposed on new images taken from arbitrary viewpoints without perceptible jitter. This approach not only provides for accurate pose, but also allows for integration of features from multiple training images into a single model that provides for more reliable recognition.

Real-time 3D model-based tracking: combining edge and texture information

[48] M. Pressigout, E. Marchand. Real-time 3D model-based tracking: combining edge and texture information. Proceedings 2006 IEEE International Conference on Robotics and Automation, pp. 2726-2731, 2006. ICRA 2006. DOI: 10.1109/ROBOT.2006.1642113.[4]

Abstract This paper proposes a real-time, robust and efficient 3D model-based tracking algorithm. A nonlinear minimization approach is used to register 2D and 3D cues for monocular 3D tracking. The integration of texture information in a more classical nonlinear edge-based pose computation highly increases the reliability of more conventional edge-based 3D tracker. Robustness is enforced by integrating a M-estimator into the minimization process via an iteratively re-weighted least squares implementation. The method presented in this paper has been validated on several video sequences as well as in visual servoing experiments considering various objects. Results show the method to be robust to large motions and textured environments.
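
The robustness mechanism named here, an M-estimator inside iteratively reweighted least squares (IRLS), can be illustrated on a toy regression problem. This is a generic IRLS sketch with a Tukey biweight and a MAD scale estimate, not the authors' pose-estimation code:

```python
import numpy as np

def irls_tukey(x, y, c=4.685, iters=20):
    """Robust line fit y = a*x + b via IRLS with a Tukey biweight M-estimator."""
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(x)
    beta = np.zeros(2)
    for _ in range(iters):
        WA = A * w[:, None]                       # weighted design matrix
        beta = np.linalg.lstsq(WA.T @ A, WA.T @ y, rcond=None)[0]
        r = y - A @ beta
        s = 1.4826 * np.median(np.abs(r)) + 1e-12 # robust scale (MAD)
        u = np.clip(np.abs(r) / (c * s), 0.0, 1.0)
        w = (1.0 - u ** 2) ** 2                   # Tukey biweight: outliers get weight 0
    return beta

x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0
y[::10] += 30.0                 # gross outliers, e.g. mismatched edges
beta = irls_tukey(x, y)
```

The outliers are driven to zero weight within a couple of iterations, so the fit recovers the inlier line exactly; in the tracker the same reweighting is applied to the edge/texture residuals of the pose minimization.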

Real-time markerless tracking for augmented reality: the virtual visual servoing framework

[49] A.I. Comport, E. Marchand, M. Pressigout, F. Chaumette. Real-time markerless tracking for augmented reality: the virtual visual servoing framework. IEEE Transactions on Visualization and Computer Graphics, Volume 12, Issue 4, pp. 615-628, 2006. DOI: 10.1109/TVCG.2006.78.[5]

Abstract Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices are given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.

Image classification using cluster cooccurrence matrices of local relational features

[50] L. Setia, A. Teynor, A. Halawani, H. Burkhardt. Image classification using cluster cooccurrence matrices of local relational features. Proceedings of the 8th ACM international workshop on Multimedia information retrieval, pp. 173-182, 2006. DOI: 10.1145/1178677.1178703.[6]

Abstract Image classification systems have received a recent boost from methods using local features generated over interest points, delivering higher robustness against partial occlusion and cluttered backgrounds. We propose in this paper to use relational features calculated over multiple directions and scales around these interest points. Furthermore, a very important design issue is the choice of similarity measure to compare the bags of local feature vectors generated by each image, for which we propose a novel approach by computing image similarity using cluster co-occurrence matrices of local features. Excellent results are achieved for a widely used medical image classification task, and ideas to generalize to other tasks are discussed.

Adaptive line tracking with multiple hypotheses for augmented reality

[51] H. Wuest, F. Vial, and D. Stricker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8[7]

Abstract We present a real-time model-based line tracking approach with adaptive learning of image edge features that can handle partial occlusion and illumination changes. A CAD (VRML) model of the object to track is needed. First, the visible edges of the model with respect to the camera pose estimate are sorted out by a visibility test performed on standard graphics hardware. For every sample point of every projected visible 3D model line a search for gradient maxima in the image is then carried out in a direction perpendicular to that line. Multiple hypotheses of these maxima are considered as putative matches. The camera pose is updated by minimizing the distances between the projection of all sample points of the visible 3D model lines and the most likely matches found in the image. The state of every edge's visual properties is updated after each successful camera pose estimation. We evaluated the algorithm and showed the improvements compared to other tracking approaches.

The pyramid match kernel: Discriminative classification with sets of image features

[52] K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2005. doi:10.1109/ICCV.2005.239[8]

Abstract Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences, generally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This "pyramid match" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches.
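
The kernel itself is simple to sketch for 1-D features: build histograms at doubling cell widths, count the new intersections appearing at each level, and weight them by the inverse cell width. A minimal illustration (features assumed to lie in a fixed range; bin layout invented, not the authors' code):

```python
import numpy as np

def pyramid_match(X, Y, d_max=32, levels=5):
    """Pyramid match kernel between two sets of 1-D features in [0, d_max)."""
    score, prev = 0.0, 0
    for level in range(levels):
        width = 2 ** level
        bins = np.arange(0, d_max + width, width)
        hx, _ = np.histogram(X, bins=bins)
        hy, _ = np.histogram(Y, bins=bins)
        inter = np.minimum(hx, hy).sum()           # histogram intersection
        score += (inter - prev) / (2 ** level)     # new matches, weighted by 1/cell width
        prev = inter
    return score

X = np.array([1.0, 9.0, 20.0])
Y = np.array([1.2, 8.0, 30.0])
k = pyramid_match(X, Y)
```

Matches found at fine resolution (close feature pairs) contribute more than matches that only appear in coarse cells, which is how the kernel approximates an optimal partial matching in linear time.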

Scene Modelling, Recognition and Tracking with Invariant Image Features

[53] I. Gordon and D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. Third IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004), 2004.[9]

Abstract We present a complete system architecture for fully automated markerless augmented reality (AR). The system constructs a sparse metric model of the real-world environment, provides interactive means for specifying the pose of a virtual object and performs model-based camera tracking with visually pleasing augmentation results. Our approach does not require camera pre-calibration, prior knowledge of scene geometry, manual initialization of the tracker or placement of special markers. Robust tracking in the presence of occlusions and scene changes is achieved by using highly distinctive natural features to establish image correspondences.

Distinctive image features from scale-invariant keypoints

[54] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004) vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94[10]

Abstract This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
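
A standard companion to these features is Lowe's distance-ratio test for rejecting ambiguous matches: keep a match only if its nearest neighbour is much closer than the second nearest. An illustrative brute-force sketch (descriptors and threshold are toy values; real SIFT descriptors are 128-D and matched with fast approximate nearest-neighbour search):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches whose nearest/second-nearest distance ratio is low."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:               # unambiguous match only
            matches.append((i, int(order[0])))
    return matches

A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.95, 0.05], [0.5, 0.5], [0.0, 1.0]])
m = ratio_test_matches(A, B)
```

In the full pipeline these filtered matches then vote in a Hough transform over pose, and surviving clusters are verified by a least-squares pose fit, as the abstract describes.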

Fast contour matching using approximate earth mover's distance

[55] K. Grauman, T. Darrell. Fast contour matching using approximate earth mover's distance. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. DOI: 10.1109/CVPR.2004.1315035.[11]

Abstract Weighted graph matching is a good way to align a pair of shapes represented by a set of descriptive local features; the set of correspondences produced by the minimum cost matching between two shapes' features often reveals how similar the shapes are. However due to the complexity of computing the exact minimum cost matching, previous algorithms could only run efficiently when using a limited number of features per shape, and could not scale to perform retrievals from large databases. We present a contour matching algorithm that quickly computes the minimum weight matching between sets of descriptive local features using a recently introduced low-distortion embedding of the earth mover's distance (EMD) into a normed space. Given a novel embedded contour, the nearest neighbors in a database of embedded contours are retrieved in sublinear time via approximate nearest neighbors search with locality-sensitive hashing (LSH). We demonstrate our shape matching method on a database of 136,500 images of human figures. Our method achieves a speedup of four orders of magnitude over the exact method, at the cost of only a 4% reduction in accuracy.
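
The low-distortion embedding idea can be sketched with grids at doubling resolutions: histogram the point set in each grid, weight counts by the cell side length, and concatenate, so that the L1 distance between embeddings approximates the earth mover's distance. This toy 2-D version follows the Indyk-Thaper style of embedding the paper builds on (extent, levels, and weighting are invented for illustration):

```python
import numpy as np

def emd_embedding(points, extent=16, levels=4):
    """Embed a 2-D point set so L1 distance between embeddings approximates EMD."""
    vecs = []
    for level in range(levels):
        side = extent / 2 ** level
        edges = np.arange(0, extent + side, side)
        h, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=[edges, edges])
        vecs.append(side * h.ravel())   # weight counts by cell side length
    return np.concatenate(vecs)

P = np.array([[1.0, 1.0], [10.0, 10.0]])
Q = np.array([[3.0, 1.0], [10.0, 13.0]])
d = np.abs(emd_embedding(P) - emd_embedding(Q)).sum()
```

Because the embedding is a fixed vector per shape, nearest neighbours can then be found in sublinear time with locality-sensitive hashing, which is where the paper's four-orders-of-magnitude speedup comes from.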

Multiple View Geometry in Computer Vision

[56] R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003. ISBN: 0521623049[12]

Efficient contour-based shape representation and matching

[57] T. Adamek, N. O'Connor. Efficient contour-based shape representation and matching. Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, pp. 138-143, 2003. DOI:10.1145/973264.973287.[13]

Abstract This paper presents an efficient method for calculating the similarity between 2D closed shape contours. The proposed algorithm is invariant to translation, scale change and rotation. It can be used for database retrieval or for detecting regions with a particular shape in video sequences. The proposed algorithm is suitable for real-time applications. In the first stage of the algorithm, an ordered sequence of contour points approximating the shapes is extracted from the input binary images. The contours are translation and scale-size normalized, and small sets of the most likely starting points for both shapes are extracted. In the second stage, the starting points from both shapes are assigned into pairs and rotation alignment is performed. The dissimilarity measure is based on the geometrical distances between corresponding contour points. A fast sub-optimal method for solving the correspondence problem between contour points from two shapes is proposed. The dissimilarity measure is calculated for each pair of starting points. The lowest dissimilarity is taken as the final dissimilarity measure between two shapes. Three different experiments are carried out using the proposed approach: letter recognition using a web camera, our own simulation of Part B of the MPEG-7 core experiment "CE-Shape1" and detection of characters in cartoon video sequences. Results indicate that the proposed dissimilarity measure is aligned with human intuition.
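
The normalization and starting-point search can be illustrated for equal-length contours: remove translation and scale, then take the best circular shift of the starting point. This sketch is a simplification (the paper also performs rotation alignment and restricts the search to a few likely starting points; here every shift is tried):

```python
import numpy as np

def contour_dissimilarity(c1, c2):
    """Dissimilarity between two closed contours given as equal-length point sequences:
    normalize translation/scale, then take the best circular-shift alignment."""
    def normalize(c):
        c = c - c.mean(axis=0)          # translation invariance
        return c / np.linalg.norm(c)    # scale invariance
    a, b = normalize(c1), normalize(c2)
    n = len(a)
    return min(np.linalg.norm(a - np.roll(b, s, axis=0)) for s in range(n))

t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
shifted = np.roll(circle * 3.0 + 5.0, 7, axis=0)  # scaled, translated, re-indexed copy
d = contour_dissimilarity(circle, shifted)
```

The dissimilarity is zero (up to floating point) for the transformed copy, mirroring the paper's use of geometric distances between corresponding contour points.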

Real-Time Visual Tracking of Complex Structures

[58] T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, i. 7, July 2002. doi:10.1109/TPAMI.2002.1017620[14]

Abstract Presents a framework for three-dimensional model-based tracking. Graphical rendering technology is combined with constrained active contour tracking to create a robust wire-frame tracking system. It operates in real time at video frame rate (25 Hz) on standard hardware. It is based on an internal CAD model of the object to be tracked which is rendered using a binary space partition tree to perform hidden line removal. A Lie group formalism is used to cast the motion computation problem into simple geometric terms so that tracking becomes a simple optimization problem solved by means of iterative reweighted least squares. A visual servoing system constructed using this framework is presented together with results showing the accuracy of the tracker. The paper then describes how this tracking system has been extended to provide a general framework for tracking in complex configurations. The adjoint representation of the group is used to transform measurements into common coordinate frames. The constraints are then imposed by means of Lagrange multipliers. Results from a number of experiments performed using this framework are presented and discussed.

Shape matching and object recognition using shape contexts

[59] S. Belongie, J. Malik, J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 24, Issue 4, pp. 509-522, 2002. DOI: 10.1109/34.993558.[15]

Abstract We present a novel approach to measuring the similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set.
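
The descriptor itself is a log-polar histogram of the other points' positions relative to a reference point. A compact sketch (bin layout and normalization chosen for illustration; the full method then solves an optimal assignment over these histograms):

```python
import numpy as np

def shape_context(points, index, r_bins=5, theta_bins=12):
    """Log-polar histogram of the positions of all other points relative to points[index]."""
    rel = np.delete(points, index, axis=0) - points[index]
    r = np.linalg.norm(rel, axis=1)
    theta = np.arctan2(rel[:, 1], rel[:, 0])          # angles in (-pi, pi]
    log_r = np.log(r / r.mean() + 1e-12)              # mean-normalized for scale invariance
    r_edges = np.linspace(log_r.min(), log_r.max() + 1e-9, r_bins + 1)
    t_edges = np.linspace(-np.pi, np.pi, theta_bins + 1)
    h, _, _ = np.histogram2d(log_r, theta, bins=[r_edges, t_edges])
    return h / h.sum()                                # normalized distribution

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
sc = shape_context(square, 0)
```

Corresponding points on similar shapes yield similar histograms, so matching costs (e.g. chi-squared distances between histograms) can feed the assignment problem the abstract describes.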

Edge, Junction, and Corner Detection Using Color Distributions

[60] M.A. Ruzon, C. Tomasi. Edge, Junction, and Corner Detection Using Color Distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence archive, Volume 23 Issue 11, pp. 1281-1295, 2001. DOI:10.1109/34.969118.[16]

Abstract For over 30 years researchers in computer vision have been proposing new methods for performing low-level vision tasks such as detecting edges and corners. One key element shared by most methods is that they represent local image neighborhoods as constant in color or intensity with deviations modeled as noise. Due to computational considerations that encourage the use of small neighborhoods where this assumption holds, these methods remain popular. This research models a neighborhood as a distribution of colors. Our goal is to show that the increase in accuracy of this representation translates into higher-quality results for low-level vision tasks on difficult, natural images, especially as neighborhood size increases. We emphasize large neighborhoods because small ones often do not contain enough information. We emphasize color because it subsumes gray scale as an image range and because it is the dominant form of human perception. We discuss distributions in the context of detecting edges, corners, and junctions, and we show results for each.


2000 and earlier

CONDENSATION – Conditional Density Propagation for Visual Tracking

[61] M. Isard and A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. In International Journal of Computer Vision. August 1998, Volume 29, Issue 1, pp 5–28. doi:10.1023/A:100807832[17]

Abstract The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses "factored sampling", previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.
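
One Condensation iteration is factored sampling: resample particles by weight, push them through stochastic dynamics, and reweight by the observation density. A 1-D toy version (the Gaussian likelihood, noise levels, and particle count are invented for illustration; the paper uses learned dynamical models and image-based observations):

```python
import numpy as np

rng = np.random.default_rng(42)

def condensation_step(particles, weights, observe, noise=0.3):
    """One factored-sampling step: resample, apply dynamics, reweight by observation."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)                 # resample by weight
    particles = particles[idx] + rng.normal(0.0, noise, n) # stochastic dynamics
    weights = observe(particles)                           # observation density
    return particles, weights / weights.sum()

true_pos = 2.0
observe = lambda x: np.exp(-0.5 * ((x - true_pos) / 0.5) ** 2)  # toy likelihood

particles = rng.uniform(-5, 5, 500)
weights = np.ones(500) / 500
for _ in range(10):
    particles, weights = condensation_step(particles, weights, observe)
estimate = (particles * weights).sum()
```

Because the posterior is carried as a weighted sample set rather than a single Gaussian, the filter can maintain several competing hypotheses at once, which is exactly the limitation of Kalman filtering the abstract points out.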

Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces

[62] R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4. pp 341–359. doi:10.1023/A:100820282.[18]

Abstract A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive test bed, it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
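
The classic DE/rand/1/bin scheme from this paper fits in a few lines: mutate with a scaled difference of two random population vectors, crossover with the current vector, and keep the trial only if it is no worse. A minimal sketch (population size, F, and CR are typical textbook settings, not values prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=200):
    """Minimize f over a box using the classic DE/rand/1/bin scheme."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (np_, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(np_):
            others = [j for j in range(np_) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # ensure at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            tc = f(trial)
            if tc <= cost[i]:                           # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

sphere = lambda x: float((x ** 2).sum())
x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 3)
```

The self-scaling difference vectors are what make the method require so few control variables, as the abstract claims.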

Robust Object Tracking

[63] M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995.[19]

Abstract We describe an object tracker robust to a number of ambient conditions which often severely degrade performance, for example partial occlusion. The robustness is achieved by describing the object as a set of related geometric primitives (lines, conics, etc.), and using redundant measurements to facilitate the detection of outliers. This improves the overall tracking performance. Results are given for frame rate tracking on image sequences.

RAPID – A Video-Rate Object Tracker

[64] C. Harris and C. Stennett. RAPID – A Video-Rate Object Tracker. British Machine Vision Conference, 1990.[20]

Abstract RAPID (Real-time Attitude and Position Determination) is a real-time model-based tracking algorithm for a known three-dimensional object executing arbitrary motion and viewed by a single video camera. The 3D object model consists of selected control points on high contrast edges, which can be surface markings, folds or profile edges. The use of either an alpha-beta tracker or a Kalman filter permits large object motion to be tracked and produces more stable tracking results. The RAPID tracker runs at video-rate on a standard minicomputer equipped with an image capture board.
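
The alpha-beta tracker mentioned in the abstract is a fixed-gain predictor-corrector. A 1-D sketch (gains and noise levels chosen for illustration, not taken from the paper) shows how it smooths noisy position measurements while tracking constant-velocity motion:

```python
import numpy as np

def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Alpha-beta filter: predict with constant velocity, correct with the residual."""
    x, v = measurements[0], 0.0
    est = []
    for z in measurements[1:]:
        x_pred = x + v * dt         # predict
        r = z - x_pred              # innovation
        x = x_pred + alpha * r      # correct position
        v = v + beta * r / dt       # correct velocity
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(1)
t = np.arange(50, dtype=float)
truth = 3.0 * t                        # constant-velocity motion
z = truth + rng.normal(0, 0.5, 50)     # noisy measurements
est = alpha_beta_track(z)
```

The prediction step is what lets RAPID search for edges near where the object will be, rather than where it was, so large inter-frame motion can still be tracked.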

Morphologic Edge Detection

[65] J.S.J. Lee, R.M. Haralick, L.G. Shapiro. Morphologic Edge Detection. IFAC (International Federation of Automatic Control) Proceedings Volumes, Volume 19, Issue 9, pp. 7-14, 1986. DOI:10.1016/S1474-6670(17)57504-7.[21]

Abstract Edge operators based on grayscale morphologic operations are introduced. These operators can be efficiently implemented in near real time machine vision systems which have special hardware support for grayscale morphologic operations. The simplest morphologic edge detectors are the dilation residue and erosion residue operators. The underlying motivation for these is discussed. Finally, the blur minimum morphologic edge operator is defined. Its inherent noise sensitivity is less than the dilation or the erosion residue operators.

Some experimental results are provided to show the validity of the blur minimum morphologic operator. When compared with the cubic facet second derivative zero-crossing edge operator, the results show that they have similar performance. The advantage of the blur-minimum edge operator is that it is less computationally complex than the facet edge operator.
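
The operators defined here are easy to sketch with flat structuring elements: the dilation residue is dilation minus image, the erosion residue is image minus erosion, and the blur-minimum operator takes the pointwise minimum of the two residues on a blurred image. An illustrative numpy version (the box blur and 3x3 element are assumptions, not the paper's exact configuration):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _windows(img, k=3):
    """All k-by-k neighbourhoods of img, with edge padding."""
    return sliding_window_view(np.pad(img, k // 2, mode='edge'), (k, k))

def blur_min_edge(img, k=3):
    """Blur-minimum morphologic edge strength:
    min(dilation residue, erosion residue) of a blurred image."""
    blur = _windows(img, k).mean(axis=(-2, -1))   # box blur
    dil = _windows(blur, k).max(axis=(-2, -1))    # grayscale dilation (flat element)
    ero = _windows(blur, k).min(axis=(-2, -1))    # grayscale erosion
    return np.minimum(dil - blur, blur - ero)

img = np.zeros((8, 8))
img[:, 4:] = 10.0                  # vertical step edge
edges = blur_min_edge(img)
```

On the ideal step the response is nonzero only in the two columns straddling the transition, illustrating the operator's localization and its reduced noise sensitivity relative to the plain residues.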

A computational approach to edge detection

[66] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: PAMI-8, Issue: 6, Nov. 1986). doi:10.1109/TPAMI.1986.4767851[22]

Abstract This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle, we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally, we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
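
In 1-D, the operator Canny derives is closely approximated by the derivative of a Gaussian; edges are then marked at local maxima of the response magnitude. A sketch of that core (the fixed threshold is a crude stand-in for the paper's more elaborate machinery; sigma and radius are illustrative):

```python
import numpy as np

def canny_1d(signal, sigma=2.0, radius=8):
    """1-D core of Canny's detector: convolve with a derivative-of-Gaussian
    kernel and mark local maxima of the response magnitude."""
    x = np.arange(-radius, radius + 1)
    dog = -x * np.exp(-x ** 2 / (2 * sigma ** 2)) / sigma ** 2  # derivative of Gaussian
    response = np.abs(np.convolve(signal, dog, mode='same'))
    return [i for i in range(1, len(signal) - 1)
            if response[i] >= response[i - 1]
            and response[i] > response[i + 1]
            and response[i] > 0.5 * response.max()]             # crude threshold

sig = np.concatenate([np.zeros(30), np.ones(20), np.zeros(30)])  # two step edges
edges = canny_1d(sig)
```

Smoothing more (larger sigma) improves detection in noise but blurs localization, which is the detection-localization uncertainty principle the abstract refers to.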

A method and apparatus for manufacturing objects made of any arbitrary meltable material

[67] Pierre Alfred Leon Ciraud. A method and apparatus for manufacturing objects made of any arbitrary meltable material. German patent application DE2263777A1. December 28, 1971.[23]

This is the first patent in the field of additive manufacturing. Despite a long evolution of additive manufacturing, starting from the first patent in 1971 [..], the technology is still challenging researchers from the perspectives of material structure, mechanical properties, and computational efficiency.


References

  1. W. Zeng, D. Samaras, D. Gu. Ricci Flow for 3D Shape Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 32, Issue 4, pp. 662-677, 2010. DOI: 10.1109/TPAMI.2009.201.
  2. J. Barandiaran, D. Borro. Edge-based markerless 3D tracking of rigid objects. 17th International Conference on Artificial Reality and Telexistence (ICAT 2007). doi:10.1109/ICAT.2007.62
  3. Gordon I., Lowe D.G. What and Where: 3D Object Recognition with Accurate Pose. In: Ponce J., Hebert M., Schmid C., Zisserman A. (eds) Toward Category-Level Object Recognition. Lecture Notes in Computer Science, vol 4170. Springer, 2006. doi:10.1007/11957959_4
  4. M. Pressigout, E. Marchand. Real-time 3D model-based tracking: combining edge and texture information. Proceedings 2006 IEEE International Conference on Robotics and Automation, pp. 2726-2731, 2006. ICRA 2006. DOI: 10.1109/ROBOT.2006.1642113.
  5. A.I. Comport, E. Marchand, M. Pressigout, F. Chaumette. Real-time markerless tracking for augmented reality: the virtual visual servoing framework. IEEE Transactions on Visualization and Computer Graphics, Volume 12, Issue 4, pp. 615-628, 2006. DOI: 10.1109/TVCG.2006.78.
  6. L. Setia, A. Teynor, A. Halawani, H. Burkhardt. Image classification using cluster cooccurrence matrices of local relational features. Proceedings of the 8th ACM international workshop on Multimedia information retrieval, pp. 173-182, 2006. DOI: 10.1145/1178677.1178703.
  7. H. Wuest, F. Vial, D. Stricker. Adaptive line tracking with multiple hypotheses for augmented reality. Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'05), 2005. doi:10.1109/ISMAR.2005.8
  8. K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, 2005. doi:10.1109/ICCV.2005.239
  9. I. Gordon and D. Lowe. Scene Modelling, Recognition and Tracking with Invariant Image Features. Third IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004), 2004.
  10. D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004) vol. 60, no. 2, pp. 91–110. doi:10.1023/B:VISI.0000029664.99615.94
  11. K. Grauman, T. Darrell. Fast contour matching using approximate earth mover's distance. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. DOI: 10.1109/CVPR.2004.1315035.
  12. R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003. ISBN: 0521623049
  13. T. Adamek, N. O'Connor. Efficient contour-based shape representation and matching. Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, pp. 138-143, 2003. DOI:10.1145/973264.973287.
  14. T. Drummond, R. Cipolla. Real-Time Visual Tracking of Complex Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, i. 7, July 2002. doi:10.1109/TPAMI.2002.1017620
  15. S. Belongie, J. Malik, J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 24, Issue 4, pp. 509-522, 2002. DOI: 10.1109/34.993558.
  16. M.A. Ruzon, C. Tomasi. Edge, Junction, and Corner Detection Using Color Distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence archive, Volume 23 Issue 11, pp. 1281-1295, 2001. DOI:10.1109/34.969118.
  17. M. Isard and A. Blake. CONDENSATION – Conditional Density Propagation for Visual Tracking. In International Journal of Computer Vision. August 1998, Volume 29, Issue 1, pp 5–28. doi:10.1023/A:100807832
  18. R. Storn, K. Price. Differential Evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, December 1997, Volume 11, Issue 4. pp 341–359. doi:10.1023/A:100820282.
  19. M. Armstrong, A. Zisserman. Robust Object Tracking. Proceedings of the 2nd Asian Conference on Computer Vision, vol. 1, pp. 58–62, 1995.
  20. C. Harris and C. Stennett. RAPID – A Video-Rate Object Tracker. British Machine Vision Conference, 1990.
  21. J.S.J. Lee, R.M. Haralick, L.G. Shapiro. Morphologic Edge Detection. IFAC (International Federation of Automatic Control) Proceedings Volumes, Volume 19, Issue 9, pp. 7-14, 1986. DOI:10.1016/S1474-6670(17)57504-7.
  22. J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: PAMI-8, Issue: 6, Nov. 1986). doi:10.1109/TPAMI.1986.4767851
  23. Pierre Alfred Leon Ciraud. A method and apparatus for manufacturing objects made of any arbitrary meltable material. German patent application DE2263777A1. December 28, 1971.
Page data
Authors: Aliaksei Petsiuk
License: CC-BY-SA-4.0
Language: English (en)
Related: 0 subpages, 4 pages link here
Impact: 241 page views
Created: May 14, 2022 by Irene Delgado
Modified: June 8, 2023 by StandardWikitext bot