Automation, Control and Intelligent Systems

| Peer-Reviewed |

3D Face Feature Location Method Based on Stripes and Shape Index

Received: 26 December 2018    Accepted:     Published: 27 December 2018

Abstract

A 3D face feature location method based on stripes and the shape index is proposed to locate facial feature points accurately and quickly. A grating projection technique is used to acquire the 3D face image. From the difference between the background fringe image and the deformed fringe image, the basic information about the face in the image is determined: the left and right edge lines, the upper and lower edge coordinates, and the width of the face. The approximate position of the ear is then quickly determined from the left or right edge line, and the search areas for the nose tip and the inner canthi are narrowed according to the ear position. The positions of the nose tip and the inner canthi are finally determined from the height and shape index information. Experiments were conducted in a dark environment; the average total processing time was 4.05 seconds, of which feature localization took 1.07 seconds on average. With an allowable error of 15 pixels, the localization accuracy is 85.34% across different poses and 96.88% when the face rotation angle is less than 20 degrees.
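As the keywords indicate, the shape index used to pick out the nose tip and inner canthi is normally derived from the mean curvature H and Gaussian curvature K of the reconstructed surface. The following Python/NumPy sketch shows one common way to compute it (the Koenderink-van Doorn definition on [-1, 1]); the function name, the example threshold, and the curvature sign convention are illustrative assumptions, not details taken from the paper.

import numpy as np

def shape_index(H, K, eps=1e-12):
    """Shape index from mean curvature H and Gaussian curvature K.

    Principal curvatures: k1, k2 = H +/- sqrt(H^2 - K), with k1 >= k2.
    Shape index (Koenderink-van Doorn): SI = (2/pi) * arctan((k1 + k2) / (k1 - k2)).
    With convex regions taken to have positive curvature, cap-like points
    (e.g. the nose tip) approach +1, cup-like points (e.g. the inner canthus)
    approach -1, and saddle-like points sit near 0.
    """
    H = np.asarray(H, dtype=float)
    K = np.asarray(K, dtype=float)
    # Clamp the discriminant: noise in the measured surface can make H^2 - K
    # slightly negative even though it is non-negative in exact arithmetic.
    disc = np.sqrt(np.maximum(H * H - K, 0.0))
    k1 = H + disc  # larger principal curvature
    k2 = H - disc  # smaller principal curvature
    # eps keeps the denominator positive at (near-)umbilic points where k1 == k2.
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2 + eps)

# Usage sketch: flag cap-like points inside a hypothetical candidate nose region.
H = np.array([[0.9, 0.1], [-0.4, 0.0]])   # illustrative mean-curvature map
K = np.array([[0.8, 0.0], [0.1, -0.2]])   # illustrative Gaussian-curvature map
si = shape_index(H, K)
nose_tip_candidates = si > 0.85           # threshold chosen for illustration only

In the method described above, such a map would only need to be evaluated inside the reduced search windows derived from the ear position, rather than over the whole face image.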

DOI 10.11648/j.acis.20180604.12
Published in Automation, Control and Intelligent Systems (Volume 6, Issue 4, August 2018)
Page(s) 47-53
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

3D Face Localization, Grating Projection, Feature Points, Gaussian Curvature, Mean Curvature, Shape Index

Author Information
  • Li Chang, School of Information Science and Engineering, Shenyang University of Technology, Shenyang, China

  • Shuai Liu, School of Information Science and Engineering, Shenyang University of Technology, Shenyang, China

Cite This Article
  • APA Style

    Li Chang, Shuai Liu. (2018). 3D Face Feature Location Method Based on Stripes and Shape Index. Automation, Control and Intelligent Systems, 6(4), 47-53. https://doi.org/10.11648/j.acis.20180604.12


  • ACS Style

    Li Chang; Shuai Liu. 3D Face Feature Location Method Based on Stripes and Shape Index. Autom. Control Intell. Syst. 2018, 6(4), 47-53. doi: 10.11648/j.acis.20180604.12


  • AMA Style

    Li Chang, Shuai Liu. 3D Face Feature Location Method Based on Stripes and Shape Index. Autom Control Intell Syst. 2018;6(4):47-53. doi: 10.11648/j.acis.20180604.12


  • @article{10.11648/j.acis.20180604.12,
      author = {Li Chang and Shuai Liu},
      title = {3D Face Feature Location Method Based on Stripes and Shape Index},
      journal = {Automation, Control and Intelligent Systems},
      volume = {6},
      number = {4},
      pages = {47-53},
      doi = {10.11648/j.acis.20180604.12},
      url = {https://doi.org/10.11648/j.acis.20180604.12},
      eprint = {https://download.sciencepg.com/pdf/10.11648.j.acis.20180604.12},
      abstract = {A 3D face feature location method based on stripes and shape index is proposed in order to locate the feature points in face exactly and quickly. The grating projection technique is used to obtain 3D face image. Based on the difference between the background fringe image and the deformed fringe image, the basic information of the human face in the image is determined. The basic information includes left and right edge lines, upper and lower edge coordinates and the width of the human face in the image. And then, the approximate position of the ear is quickly determined according to the left or right edge line. The found areas of the tip of the nose and the inner canthus are reduced according to the position of the ear. The position of the tip of the nose and the inner canthus are determined according to the height and shape index information. The experiment was conducted in a dark environment, with an average total time of 4.05 seconds and an average time of positioning of 1.07 seconds. When the allowable error is 15 pixels, the positioning accuracy is 85.34% for different poses, and the positioning accuracy is 96.88% when the face rotation angle is less than 20 degrees.},
      year = {2018}
    }
    


  • TY  - JOUR
    T1  - 3D Face Feature Location Method Based on Stripes and Shape Index
    AU  - Li Chang
    AU  - Shuai Liu
    Y1  - 2018/12/27
    PY  - 2018
    N1  - https://doi.org/10.11648/j.acis.20180604.12
    DO  - 10.11648/j.acis.20180604.12
    T2  - Automation, Control and Intelligent Systems
    JF  - Automation, Control and Intelligent Systems
    JO  - Automation, Control and Intelligent Systems
    SP  - 47
    EP  - 53
    PB  - Science Publishing Group
    SN  - 2328-5591
    UR  - https://doi.org/10.11648/j.acis.20180604.12
    AB  - A 3D face feature location method based on stripes and shape index is proposed in order to locate the feature points in face exactly and quickly. The grating projection technique is used to obtain 3D face image. Based on the difference between the background fringe image and the deformed fringe image, the basic information of the human face in the image is determined. The basic information includes left and right edge lines, upper and lower edge coordinates and the width of the human face in the image. And then, the approximate position of the ear is quickly determined according to the left or right edge line. The found areas of the tip of the nose and the inner canthus are reduced according to the position of the ear. The position of the tip of the nose and the inner canthus are determined according to the height and shape index information. The experiment was conducted in a dark environment, with an average total time of 4.05 seconds and an average time of positioning of 1.07 seconds. When the allowable error is 15 pixels, the positioning accuracy is 85.34% for different poses, and the positioning accuracy is 96.88% when the face rotation angle is less than 20 degrees.
    VL  - 6
    IS  - 4
    ER  - 

