Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/9133
Title: Conveying Emotions through Facially Animated Avatars in Networked Virtual Environments
Authors: DI FIORE, Fabian 
QUAX, Peter 
VANAKEN, Cedric 
LAMOTTE, Wim 
VAN REETH, Frank 
Issue Date: 2008
Publisher: Springer
Source: Egges, A., Kamphuis, A. & Overmars, M. (Eds.) Motion in Games (MIG08). pp. 222-233.
Series/Report: LECTURE NOTES IN COMPUTER SCIENCE
Series/Report no.: 5277
Abstract: In this paper, our objective is to facilitate the way in which emotion is conveyed through avatars in virtual environments. The established approach requires the end-user to manually select his/her emotional state through a text-based interface (using emoticons and/or keywords), after which these pre-defined emotional states are applied to the avatars. In contrast to this rather trivial solution, we envisage a system that automatically extracts emotion-related metadata from a video stream, most often originating from a webcam. Unlike the straightforward alternative of sending entire video streams --- optimal in quality but often prohibitive in terms of bandwidth usage --- this metadata extraction process enables the system to be deployed in large-scale environments, as the bandwidth required for the communication channel is kept very low.
Document URI: http://hdl.handle.net/1942/9133
ISBN: 978-3-540-89219-9
DOI: 10.1007/978-3-540-89220-5_22
ISI #: 000263687700022
Category: C1
Type: Proceedings Paper
Validations: ecoom 2010
Appears in Collections:Research publications

Files in This Item:
File: conveying.pdf (9.35 MB, Adobe PDF)

Web of Science citations: 5 (checked on Oct 10, 2024)
Page view(s): 184 (checked on Nov 7, 2023)
Download(s): 298 (checked on Nov 7, 2023)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.