Image ratio features for facial expression recognition application

Published: 01 June 2010

Abstract

Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address it, texture features have been widely used because they capture the image intensity changes caused by skin deformation. However, existing texture features are sensitive to albedo and lighting variations. To solve both problems, we propose a new texture feature called the image ratio feature. Compared with previously proposed texture features, e.g., high-gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. Performance is evaluated on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio features are more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms either feature alone. We also study asymmetric facial expressions on our own database and demonstrate the superior performance of the combined expression recognition system.
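The core idea behind a ratio-image feature, dividing an expression frame by the corresponding neutral frame so that a multiplicative per-pixel albedo (or slowly varying lighting) factor cancels, can be sketched as below. This is a minimal illustration of the cancellation property, not the paper's implementation; the function name, image size, and synthetic values are assumptions for the demo.

```python
import numpy as np

def image_ratio(expr_img, neutral_img, eps=1e-8):
    """Pixelwise ratio of an expression frame to the neutral frame.

    Under a Lambertian model I = albedo * shading, a per-pixel factor
    common to both frames (skin albedo, slowly varying illumination)
    cancels in the ratio, leaving only the shading change due to
    skin deformation. eps guards against division by zero.
    """
    return expr_img / (neutral_img + eps)

# Synthetic shading for a neutral and an expressive face (hypothetical values).
rng = np.random.default_rng(0)
shade_neutral = rng.uniform(0.2, 1.0, size=(8, 8))
shade_expr = shade_neutral * rng.uniform(0.8, 1.2, size=(8, 8))  # deformation

# Per-pixel skin reflectance, shared by both frames.
albedo = rng.uniform(0.1, 1.0, size=(8, 8))

ratio_plain = image_ratio(shade_expr, shade_neutral)
ratio_albedo = image_ratio(albedo * shade_expr, albedo * shade_neutral)

# The shared albedo factor cancels: both ratios match numerically,
# which is the robustness property a raw intensity feature lacks.
print(np.allclose(ratio_plain, ratio_albedo))
```

A raw difference feature (expr_img - neutral_img) would not have this property, since the albedo factor multiplies, rather than offsets, both frames.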


Published In

IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics  Volume 40, Issue 3
Special issue on game theory
June 2010
425 pages

Publisher

IEEE Press

Publication History

Published: 01 June 2010
Revised: 29 April 2009
Received: 18 December 2008

Author Tags

  1. expression recognition
  2. facial expression
  3. image ratio features

Qualifiers

  • Research-article
