Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/24196
Full metadata record
dc.contributor.author: SWINKELS, Wout
dc.contributor.author: CLAESEN, Luc
dc.contributor.author: Xiao, Feng
dc.contributor.author: Shen, Haibin
dc.date.accessioned: 2017-08-11T07:41:57Z
dc.date.available: 2017-08-11T07:41:57Z
dc.date.issued: 2017
dc.identifier.citation: Proceedings of the 2017 Conference on Dependable and Secure Computing, p. 86-92 (Art N° B1-3)
dc.identifier.isbn: 9781509055685
dc.identifier.uri: http://hdl.handle.net/1942/24196
dc.description.abstract: Face recognition is nowadays implemented in security systems to grant access to areas that are restricted to authorized persons. However, an additional layer of security can be added to these systems by determining whether the person in front of the camera is present in real life and the detected object is not a 2D representation of that person. Forcing people to interact with the system, for example by posing a certain emotion, can add a further layer of complexity that denies access to unauthorized persons. This paper focuses on that aspect, i.e. real-time emotion detection. To this end, a novel algorithm is developed to extract emotions based on the movement of 19 feature points. These feature points are located in different regions of the countenance, such as the mouth, eyes, eyebrows and nose. To obtain the feature points, an Ensemble of Regression Trees [1] is constructed. After the extraction of the feature points, 12 distances in and around these facial regions are calculated and used in displacement ratios. In the final step, the algorithm feeds the displacement ratios to a classification algorithm, which is a cascade of a multi-class support vector machine (SVM) and a binary SVM. Experimental results on the Extended Cohn-Kanade dataset (CK+) [2], [3] indicate that the proposed algorithm reaches an average accuracy of 89.78% at a detection speed of less than 30 ms. The accuracy is comparable with state-of-the-art emotion detection algorithms and outperforms these algorithms when detecting the emotions Contempt, Disgust, Fear and Surprise. The detection speed evaluation of the proposed algorithm was performed on a Windows 8.1 laptop with an Intel Core i7-5500U CPU (2.40 GHz) and 8.00 GB of RAM.
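The pipeline described in the abstract (landmark distances → displacement ratios → a multi-class SVM cascaded with a binary verification SVM) can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the 19 feature points, the 12 paper-specific distances and the seven emotion classes are replaced by toy data, the `displacement_ratios` and `classify` helpers are hypothetical names, and scikit-learn's `SVC` stands in for the LIBSVM setup referenced in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def displacement_ratios(neutral_dists, current_dists):
    """Ratio of each facial distance to its neutral-frame baseline.

    Hypothetical helper: the paper computes 12 distances in and around
    the mouth, eyes, eyebrows and nose; here they are abstract vectors.
    """
    return np.asarray(current_dists, float) / np.asarray(neutral_dists, float)

# Toy training data: 12 displacement ratios per sample, two stand-in "emotions".
rng = np.random.default_rng(0)
class_a = rng.normal(loc=1.3, scale=0.05, size=(20, 12))  # e.g. "surprise"
class_b = rng.normal(loc=0.8, scale=0.05, size=(20, 12))  # e.g. "disgust"
X = np.vstack([class_a, class_b])
y = np.array([0] * 20 + [1] * 20)

# Stage 1 of the cascade: a multi-class SVM proposes an emotion label.
multi_svm = SVC(kernel="rbf").fit(X, y)

# Stage 2: a binary SVM per class verifies the proposal (one-vs-rest here).
binary_svms = {c: SVC(kernel="rbf").fit(X, (y == c).astype(int)) for c in (0, 1)}

def classify(ratios):
    """Cascade: accept the multi-class label only if its binary SVM agrees."""
    proposal = int(multi_svm.predict([ratios])[0])
    confirmed = bool(binary_svms[proposal].predict([ratios])[0])
    return proposal if confirmed else None

sample = displacement_ratios(np.ones(12), np.full(12, 1.3))
print(classify(sample))
```

The cascade rejects a sample (returns `None` here) when the verification SVM disagrees with the multi-class proposal, which is one plausible way such a two-stage classifier can suppress false positives.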
dc.description.sponsorship: The research in this paper was sponsored in part by the Belgian FWO (Flemish Research Council) and the Chinese MOST (Ministry of Science and Technology) bilateral cooperation project number G.0524.13.
dc.language.iso: en
dc.publisher: IEEE
dc.rights: (c) 2017 IEEE
dc.subject.other: HoG; ensemble of Regression Trees; SVM; emotion detection; real-time
dc.title: SVM Point-Based Real-time Emotion Detection
dc.type: Proceedings Paper
local.bibliographicCitation.conferencedate: 07-10/08/2017
local.bibliographicCitation.conferencename: 2017 IEEE Conference on Dependable and Secure Computing
local.bibliographicCitation.conferenceplace: Taipei, Taiwan
dc.identifier.epage: 92
dc.identifier.spage: 86
local.bibliographicCitation.jcat: C1
dc.description.notes: Swinkels, W (reprint author), Univ Hasselt, Fac Engn Technol, Diepenbeek, Belgium.
local.publisher.place: New York, NY, USA
dc.relation.references:
[1] V. Kazemi and J. Sullivan, "One Millisecond Face Alignment with an Ensemble of Regression Trees," in 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1867–1874.
[2] T. Kanade, J. F. Cohn, and Y. Tian, "Comprehensive database for facial expression analysis," in Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France, 2000, pp. 46–53.
[3] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The Extended Cohn-Kanade Dataset (CK+): A complete expression dataset for action unit and emotion-specified expression," in Proceedings of the Third International Workshop on CVPR for Human Communicative Behavior Analysis, San Francisco, USA, 2010, pp. 94–101.
[4] P. Ekman and W. V. Friesen, "Measuring facial movement," Environmental Psychology and Nonverbal Behaviour, vol. 1, no. 1, pp. 56–75, 1976.
[5] S. Ghosh, E. Laksana, S. Scherer, and L. Morency, "A Multi-label Convolutional Neural Network Approach to Cross-Domain Action Unit Detection," in 2015 International Conference on Affective Computing and Intelligent Interaction, Sept. 2015, pp. 609–615.
[6] T. Simon, M. H. Nguyen, F. De La Torre, and J. F. Cohn, "Action Unit Detection with Segment-based SVMs," in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2010, pp. 2737–2744.
[7] X. Ding, W. Chu, F. De La Torre, J. F. Cohn, and Q. Wang, "Facial Action Unit Detection by Cascade of Tasks," in 2013 IEEE International Conference on Computer Vision, Dec. 2013, pp. 2400–2407.
[8] X. Zhang and M. Mahoor, "Simultaneous Detection of Multiple Facial Action Units via Hierarchical Task Structure Learning," in 2014 22nd International Conference on Pattern Recognition, Aug. 2014, pp. 1863–1868.
[9] K. Mistry, L. Zhang, S. C. Neoh, M. Jiang, A. Hossain, and B. Lafon, "Intelligent Appearance and Shape based Facial Emotion Recognition for a Humanoid Robot," in The 8th International Conference on Software, Knowledge, Information Management and Applications, 2014.
[10] S. Mitra, C. Saha, and A. Das, "Hierarchical Clustering based Facial Expression Analysis from Video Sequence," in 2011 International Conference on Communication and Industrial Application, Dec. 2011, pp. 1–5.
[11] A. Sohail and P. Bhattacharya, "Classifying Facial Expressions using Point-Based Analytic Face Model and Support Vector Machines," in 2007 IEEE International Conference on Systems, Man and Cybernetics, Oct. 2007, pp. 1008–1013.
[12] P. Viola and M. Jones, "Robust Real-time Object Detection," International Journal of Computer Vision, vol. 57(2), 2001, pp. 137–154.
[13] N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2005, pp. 886–893.
[14] Y. Freund and R. E. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting," Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119–139, 1997.
[15] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, "Object Detection with Discriminatively Trained Part-Based Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2010.
[16] D. E. King, "Dlib-ml: A Machine Learning Toolkit," Journal of Machine Learning Research, vol. 10, pp. 1755–1758, 2009.
[17] C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "300 Faces In-The-Wild Challenge: database and results," Image and Vision Computing, vol. 47, pp. 3–18, March 2016.
[18] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge," in 2013 IEEE International Conference on Computer Vision Workshops, Dec. 2013, pp. 397–403.
[19] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "A Semi-automatic Methodology for Facial Landmark Annotation," in 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, June 2013, pp. 896–903.
[20] C. J. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, June 1998.
[21] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 27, pp. 1–27, April 2011.
[22] L. Zhong, Q. Liu, P. Yang, B. Liu, J. Huang, and D. N. Metaxas, "Learning active facial patches for expression analysis," in 2012 IEEE Conference on Computer Vision and Pattern Recognition, June 2012, pp. 2562–2569.
[23] M. Song, D. Tao, Z. Liu, X. Li, and M. Zhou, "Image Ratio Features for Facial Expression Recognition Application," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 40, no. 3, pp. 779–788, June 2010.
local.type.refereed: Refereed
local.type.specified: Proceedings Paper
local.bibliographicCitation.artnr: B1-3
dc.identifier.doi: 10.1109/DESEC.2017.8073838
dc.identifier.isi: 000450296400009
local.bibliographicCitation.btitle: Proceedings of the 2017 Conference on Dependable and Secure Computing
item.contributor: SWINKELS, Wout
item.contributor: CLAESEN, Luc
item.contributor: Xiao, Feng
item.contributor: Shen, Haibin
item.accessRights: Restricted Access
item.fullcitation: SWINKELS, Wout; CLAESEN, Luc; Xiao, Feng & Shen, Haibin (2017) SVM Point-Based Real-time Emotion Detection. In: Proceedings of the 2017 Conference on Dependable and Secure Computing, p. 86-92 (Art N° B1-3).
item.fulltext: With Fulltext
item.validation: ecoom 2019
Appears in Collections:Research publications
Files in This Item:
File: SVM Point-based Real-time Emotion Detection.pdf (Restricted Access)
Description: Peer-reviewed author version
Size: 1.58 MB
Format: Adobe PDF

SCOPUS Citations: 1 (checked on Sep 2, 2020)
Web of Science Citations: 2 (checked on Apr 22, 2024)
Page view(s): 50 (checked on May 19, 2022)
Download(s): 40 (checked on May 19, 2022)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.