International Journal of Advanced and Applied Sciences

EISSN: 2313-3724, Print ISSN: 2313-626X

Frequency: 12 issues per year

 Volume 6, Issue 8 (August 2019), Pages: 45-52

----------------------------------------------

 Original Research Paper

 Title: An innovative approach to automatically identify control point set for model deformation rectification

 Author(s): Huynh Cao Tuan 1, *, Do Nang Toan 2, Lam Thanh Hien 3, Thanh-Lam Nguyen 4

 Affiliation(s):

 1 Center of Information and Resources, Lac Hong University, Dong Nai, Vietnam
 2 Institute of Information Technology, Vietnam National University, Hanoi, Vietnam
 3 Board of Rectorate, Lac Hong University, Dong Nai, Vietnam
 4 Office of International Affairs, Lac Hong University, Dong Nai, Vietnam


 * Corresponding Author. 

  Corresponding author's ORCID profile: https://orcid.org/0000-0003-2051-4466

 Digital Object Identifier: https://doi.org/10.21833/ijaas.2019.08.007

 Abstract:

Rectifying 3D model deformation based on a control point set is one of the most important problems in virtual reality applications, where the control point set is the key to implementing deformation manipulation. This paper presents a technique to automatically identify the control point set by analyzing the change of each point in a 3D model across its variations, then clustering the points and selecting the important ones to form the control point set. The proposed approach was tested and proved effective in practical experiments with a deformation technique based on radial basis function interpolation.
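
The following is a minimal illustrative sketch, not the authors' implementation, of the pipeline the abstract describes: per-vertex displacements between a base model and a deformed variant are thresholded and clustered, one representative point per cluster is kept as a control point, and the deformation is then rectified by radial basis function interpolation. The libraries (NumPy, scikit-learn, SciPy), the function names, and the threshold, cluster-count, and kernel parameters are assumptions chosen for illustration.

import numpy as np
from sklearn.cluster import KMeans
from scipy.interpolate import RBFInterpolator

def select_control_points(base_verts, deformed_verts, n_clusters=8, quantile=0.75):
    """Return indices of automatically selected control points (illustrative)."""
    # Per-vertex displacement magnitude between the base model and its variant.
    displacement = np.linalg.norm(deformed_verts - base_verts, axis=1)
    # Keep only vertices whose movement exceeds an assumed quantile threshold.
    moving = np.where(displacement >= np.quantile(displacement, quantile))[0]
    # Group the moving vertices spatially; k-means is one possible clustering choice.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(base_verts[moving])
    control_idx = []
    for c in range(n_clusters):
        members = moving[km.labels_ == c]
        # Representative of each cluster: the member with the largest displacement.
        control_idx.append(members[np.argmax(displacement[members])])
    return np.asarray(control_idx)

def rbf_rectify(deformed_verts, control_idx, target_positions):
    """Warp the whole mesh so the control points land on their target positions."""
    offsets = target_positions - deformed_verts[control_idx]
    rbf = RBFInterpolator(deformed_verts[control_idx], offsets, kernel="thin_plate_spline")
    return deformed_verts + rbf(deformed_verts)

# Usage with random stand-in data; a real run would load mesh vertices instead.
rng = np.random.default_rng(0)
base = rng.random((500, 3))
deformed = base + 0.05 * rng.standard_normal((500, 3))
idx = select_control_points(base, deformed)
rectified = rbf_rectify(deformed, idx, base[idx])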

 © 2019 The Authors. Published by IASE.

 This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

 Keywords: Control point set, Model deformation, Radial basis function, Rectify deformation

 Article History: Received 7 April 2019, Received in revised form 7 June 2019, Accepted 7 June 2019

 Acknowledgement:

This paper presents some results of the science and technology research project No. B2018-TNA-61. It is also partially supported by Lac Hong University under Decision No. 879/QĐ-ĐHLH.

 Compliance with ethical standards

 Conflict of interest:  The authors declare that they have no conflict of interest.

 Citation:

 Tuan HC, Toan DN, Hien LT, and Nguyen TL (2019). An innovative approach to automatically identify control point set for model deformation rectification. International Journal of Advanced and Applied Sciences, 6(8): 45-52.


 Figures: 12 figures (Figs. 1-12, available in the full text)

 Tables: None

----------------------------------------------

 References (22) 

  1. Akimoto T, Suenaga Y, and Wallace RS (1993). Automatic creation of 3D facial models. IEEE Computer Graphics and Applications, 13(5): 16-22. https://doi.org/10.1109/38.232096   [Google Scholar]
  2. Ansari AN, and Abdel-Mottaleb M (2003). 3D face modeling using two views and a generic face model with application to 3D face recognition. In the IEEE Conference on Advanced Video and Signal Based Surveillance, 2003. IEEE, Miami, USA: 37-44. https://doi.org/10.1109/ICME.2003.1221305   [Google Scholar]
  3. Blanz V and Vetter T (1999). A morphable model for the synthesis of 3D faces. In the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, USA: 187–194. https://doi.org/10.1145/311535.311556   [Google Scholar]
  4. Buhmann MD (2003). Radial basis functions: Theory and implementations. Vol. 12, Cambridge University Press, Cambridge, UK. https://doi.org/10.1017/CBO9780511543241   [Google Scholar]
  5. Cao C, Hou Q, and Zhou K (2014). Displaced dynamic expression regression for real-time facial tracking and animation. ACM Transactions on Graphics (TOG), 33(4). https://doi.org/10.1145/2601097.2601204   [Google Scholar]
  6. Cerveró MÀ, Vinacua A, and Brunet P (2016). 3D Model deformations with arbitrary control points. Computers and Graphics, 57: 92-101. https://doi.org/10.1016/j.cag.2016.03.010   [Google Scholar]
  7. Fan H, Su H, and Guibas LJ (2017). A point set generation network for 3d object reconstruction from a single image. In the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Hawaii, USA: 605-613. https://doi.org/10.1109/CVPR.2017.264   [Google Scholar]
  8. Fua P (1997). From multiple stereo views to multiple 3-d surfaces. International Journal of Computer Vision, 24(1): 19-35. https://doi.org/10.1023/A:1007918123901   [Google Scholar]
  9. Hwang J, Kim W, Ban Y, and Lee S (2011). Robust 3D face shape estimation using multiple deformable models. In the 6th IEEE Conference on Industrial Electronics and Applications, IEEE, Beijing, China: 1953-1958. https://doi.org/10.1109/ICIEA.2011.5975912   [Google Scholar]
  10. Hwang J, Yu S, Kim J, and Lee S (2012). 3D face modeling using the multi-deformable method. Sensors, 12(10): 12870-12889. https://doi.org/10.3390/s121012870   [Google Scholar] PMid:23201976 PMCid:PMC3545547
  11. Ip HH and Yin L (1996). Constructing a 3D individualized head model from two orthogonal views. The Visual Computer, 12(5): 254-266. https://doi.org/10.1007/s003710050063   [Google Scholar]
  12. Jacobson A, Baran I, Popovic J, and Sorkine O (2011). Bounded biharmonic weights for real-time deformation. ACM Transactions on Graphics, 30(4). https://doi.org/10.1145/2010324.1964973   [Google Scholar]
  13. Lee TY, Lin PH, and Yang TH (2004). Photo-realistic 3d head modeling using multi-view images. In the International Conference on Computational Science and Its Applications, Springer, Melbourne, Australia: 713-720. https://doi.org/10.1007/978-3-540-24709-8_75   [Google Scholar]
  14. Lee Y, Terzopoulos D, and Waters K (1995). Realistic modeling for facial animation. In the 22nd Annual Conference on Computer Graphics and Interactive Techniques, ACM: 55-62. https://doi.org/10.1145/218380.218407   [Google Scholar]
  15. Lin IC, Yeh JS, and Ouhyoung M (2002). Extracting 3D facial animation parameters from multiview video clips. IEEE Computer Graphics and Applications, 22(6): 72-80. https://doi.org/10.1109/MCG.2002.1046631   [Google Scholar]
  16. Luxand (2019). Luxand FaceSDK-Detected facial features. Available online at: https://bit.ly/2xkGJ3s
  17. Lyons M, Akamatsu S, Kamachi M, and Gyoba J (1998). Coding facial expressions with gabor wavelets. In the Third IEEE international conference on automatic face and gesture recognition, IEEE, Nara, Japan: 200-205. https://doi.org/10.1109/AFGR.1998.670949   [Google Scholar]
  18. Okabe M and Yamada S (2018). Clustering using boosted constrained k-means algorithm. Frontiers in Robotics and AI, 5: 18. https://doi.org/10.3389/frobt.2018.00018   [Google Scholar]
  19. Pandzic IS and Forchheimer R (2003). MPEG-4 facial animation: The standard, implementation and applications. John Wiley and Sons Inc., Hoboken, USA. https://doi.org/10.1002/0470854626   [Google Scholar]
  20. Rezende DJ, Eslami SA, Mohamed S, Battaglia P, Jaderberg M, and Heess N (2016). Unsupervised learning of 3d structure from images. In the 30th Advances in Neural Information Processing Systems, Barcelona, Spain: 4996-5004.   [Google Scholar]
  21. Rock J, Gupta T, Thorsen J, Gwak J, Shin D, and Hoiem D (2015). Completing 3d object shape from one depth image. In the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, USA: 2484-2493. https://doi.org/10.1109/CVPR.2015.7298863   [Google Scholar]
  22. Zhang Y, Prakash EC, and Sung E (2002). Constructing a realistic face model of an individual for expression animation. International Journal of information Technology, 8(2): 10-25.   [Google Scholar]