Point Selection for Triangular 2-D Mesh Design Using Adaptive Forward Tracking Algorithm
Nastaran Borjian, Rassoul Amirfattahi, Saeed Sadri
DOI: 10.4236/pos.2011.21003


Two-dimensional mesh-based motion tracking preserves neighboring relations (through the connectivity of the mesh) and also allows warping transformations between pairs of frames; thus, it effectively eliminates the blocking artifacts that are common in motion compensation by block matching. However, the conventional uniform 2-D mesh model enforces connectivity everywhere within a frame, which is clearly not suitable across occlusion boundaries. To overcome this limitation, BTBC (background to be covered) detection and MF (model failure) detection algorithms are used. In these algorithms, the connectivity of the mesh elements (patches) across covered and uncovered region boundaries is broken. This is achieved by allowing no node points within the background to be covered and by refining the mesh structure within the model failure region at each frame. We modify the occlusion-adaptive, content-based mesh design and forward tracking algorithm of Yucel Altunbasak for the selection of points for triangular 2-D mesh design. We then propose a new triangulation procedure for the mesh structure, as well as a new algorithm to verify the connectivity of the mesh structure after motion vector estimation of the mesh points. The modified content-based mesh is adaptive, which eliminates the need to transmit all node locations at each frame.
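As a rough illustration of the content-based point-selection step described above (node points are placed where spatial image activity is high, so that the triangular mesh adapts to content), the following is a minimal Python/NumPy sketch. It covers only the gradient-based selection with a minimum-spacing constraint; the BTBC exclusion, triangulation, and forward-tracking steps are omitted, and all function and parameter names are illustrative rather than taken from the paper.

```python
import numpy as np

def select_mesh_nodes(frame, num_nodes, min_dist):
    # Rank pixels by spatial gradient magnitude (a simple proxy for
    # image content), then greedily keep the strongest points that are
    # at least `min_dist` pixels apart.  Names are illustrative.
    gy, gx = np.gradient(frame.astype(float))
    magnitude = np.hypot(gx, gy)
    flat_order = np.argsort(magnitude, axis=None)[::-1]
    candidates = np.column_stack(np.unravel_index(flat_order, magnitude.shape))
    chosen = []
    for y, x in candidates:
        if all((y - cy) ** 2 + (x - cx) ** 2 >= min_dist ** 2
               for cy, cx in chosen):
            chosen.append((int(y), int(x)))
            if len(chosen) == num_nodes:
                break
    return np.array(chosen)

# Toy frame: a bright square whose border produces strong gradients,
# so selected nodes cluster around the object boundary.
frame = np.zeros((32, 32))
frame[8:24, 8:24] = 255.0
nodes = select_mesh_nodes(frame, num_nodes=8, min_dist=4)
```

The selected points would then be triangulated (e.g. by a Delaunay-style procedure) to form the mesh, with node motion estimated per frame.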

Share and Cite:

N. Borjian, R. Amirfattahi and S. Sadri, "Point Selection for Triangular 2-D Mesh Design Using Adaptive Forward Tracking Algorithm," Positioning, Vol. 2 No. 1, 2011, pp. 22-35. doi: 10.4236/pos.2011.21003.

Conflicts of Interest

The authors declare no conflicts of interest.


[1] J. K. Aggarwal and N. Nandhakumar, “Computation of Motion from Sequences of Images,” Proceedings of the IEEE, Vol. 76, 1988, pp. 917-935. doi:10.1109/5.5965
[2] H. G. Musmann, P. Pirsch and H. J. Grallert, “Advances in Picture Coding,” Proceedings of the IEEE, Vol. 73, 1985, pp. 523-548. doi:10.1109/PROC.1985.13183
[3] C. Stiller and J. Konrad, “Estimating Motion in Image Sequences,” IEEE Signal Processing Magazine, Vol. 16, 1999, pp. 70-91. doi:10.1109/79.774934
[4] Y. Wang, J. Ostermann and Y. Zhang, “Video Processing and Communications,” Chapter 1, Prentice Hall, New Jersey, 2002.
[5] H. H. Nagel, “Displacement Vectors Derived from Second-Order Intensity Variations in Image Sequences,” Computer Graphics and Image Processing, Vol. 21, 1983, pp. 85-117. doi:10.1016/S0734-189X(83)80030-9
[6] A. Mitiche, Y. F. Wang and J. K. Aggarwal, “Experiments in Computing Optical Flow with Gradient-Based, Multiconstraint Method,” Pattern Recognition, Vol. 20, 1987, pp. 173-179. doi:10.1016/0031-3203(87)90051-3
[7] R. M. Haralick and J. S. Lee, “The Facet Approach to Optical Flow,” Image Understanding Workshop, 1993.
[8] Y. Wang, X. M. Hsieh, J. H. Hu and O. Lee, “Region Segmentation Based on Active Mesh Representation of Motion,” IEEE International Conference on Image Processing, 1995, pp. 185-188.
[9] B. Girod, “Motion Compensation: Visual Aspects, Accuracy, and Fundamental Limits,” Motion Analysis and Image Sequence Processing, 1993, pp. 126-152.
[10] M. Ghoniem and A. Haggag, “Adaptive Motion Estimation Block Matching Algorithms for Video Coding,” International Symposium on Intelligent Signal Processing and Communication, IEEE, Vol. 6, 2006, pp. 427-430.
[11] Y. Altunbasak and A. Murat Tekalp, “Occlusion-Adaptive, Content-Based Mesh Design and Forward Tracking,” IEEE Transactions on Image Processing, Vol. 6, No. 9, 1997, pp. 1270-1280. doi:10.1109/83.623190
[12] Y. Altunbasak and A. Murat Tekalp, “Occlusion-Adaptive 2-D Mesh Tracking,” IEEE, Vol. 4, 1996, pp. 2108-2111.
[13] M. Sayed, W. Badawy, “A Novel Motion Estimation Method for Mesh-Based Video Motion Tracking,” IEEE, Vol. 4, 2004, pp. 337-340.
[14] P. van Beek and A. Murat Tekalp, “Hierarchical 2-D Mesh Representation, Tracking, and Compression for Object-Based Video,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 9, No. 2, 1999, pp. 353-369. doi:10.1109/76.752101
[15] Y. Altunbasak, G. Al-Regib, “2-D Motion Estimation with Hierarchical Content-Based Meshes,” IEEE, 2001, pp. 1621-1624.
[16] N. Laurent, “Hierarchical Mesh-Based Global Motion Estimation, including Occlusion Areas Detection,” IEEE, 2000, pp. 620-623.
[17] Y. Wei and W. Badawy, “A New Moving Object Contour Detection Approach,” IEEE International Workshop on Computer Architectures for Machine Perception, Vol. 3, 2003, pp. 231-236.
[18] S. A. Coleman and B. W. Scotney, “Image Feature Detection on Content-Based Meshes,” IEEE International Conference on Image Processing, Vol. 1, 2002, pp. 844-847.
[19] J. Weizhao and P. Wang, “An Object Tracking Algorithm Based on Occlusion Mesh Model,” In: Proceedings of the First International Conference on Machine Learning and Cybernetics, Vol. 1, 2002, pp. 288-292.
[20] C. Toklu, “2-D Mesh-Based Tracking of Deformable Objects with Occlusion,” IEEE, Vol. 1, 1996, pp. 933-936.
[21] C. Toklu, “2-D Mesh-Based Synthetic Transfiguration of an Object with Occlusion,” IEEE, Vol. 4, 1997, pp. 2649-2652.
[22] T. Hai, “Optical Flow,” Image Analysis and Computer Vision, University of California.
[23] D. J. Fleet and A. D. Jepson, “Computation of Component Image Velocity from Local Phase Information,” International Journal of Computer Vision, Vol. 5, 1990, pp. 77-104. doi:10.1007/BF00056772
[24] Y. Wang, “Motion Estimation for Video Coding,” Polytechnic University, Brooklyn, 2003.
[25] J. R. Jain, “Displacement Measurement and Its Application in Inter Frame Image Coding,” IEEE Transactions on Communications, Vol. 29, 1981, pp. 1799-1808.
[26] T. Koga, “Motion Compensated Inter Frame Coding for Video,” National Telecommunications Conference, Vol. 3, 1981, pp. 1-5.
[27] O. Lee and Y. Wang, “Motion Compensated Prediction Using Nodal Based Deformable Block Matching,” Journal of Visual Communications and Image Representation, Vol. 6, 1995, pp. 26-34. doi:10.1006/jvci.1995.1002
[28] V. Seferidis and M. Ghanbari, “General Approach to Block Matching Motion Estimation,” Optical Engineering, Vol. 32, No. 7, 1993, pp. 1464-1474. doi:10.1117/12.138613
[29] P. Leong, “4 Multimedia,” Imperial College, London.
[30] M. Sayed and W. Badawy, “A Novel Motion Estimation Method for Mesh-Based Video Motion Tracking,” ICASSP, IEEE, Vol. 4, 2004, pp. 337-340.
[31] O. C. Zienkiewicz and R. L. Taylor, “The Finite Element Method,” Vol. 1, 4th Edition, Prentice Hall, Upper Saddle River, 1989.
[32] B. K. P. Horn and B. G. Schunck, “Determining Optical Flow,” Artificial Intelligence, Vol. 17, 1981, pp. 185-203.
[33] A. M. Tekalp, “Digital Video Processing,” 1st Edition, Prentice-Hall, Englewood Cliffs, 1995.
[34] B. D. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” In: Proceedings of the DARPA Image Understanding Workshop, 1981, pp. 121-130.
[35] T. Koga, “Motion Compensated Inter Frame Coding for Video Conferencing,” In: National Telecommunications Conference, G5.3.1-5, New Orleans, 1981.
[36] P. Gerken, “Object-Based Analysis-Synthesis Coding of Image Sequences at Very Low Bit-Rates,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 4, 1994, pp. 228-235. doi:10.1109/76.305868
[37] R. Srikanth and A. G. Ramakrishnan, “MR Image Coding Using Content-Based Mesh and Context,” IEEE, Vol. 1, No. 3, 2003, pp. 85-88.

Copyright © 2022 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.