Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/11622
Full metadata record
DC Field: Value
dc.contributor.advisor: BEKAERT, Philippe
dc.contributor.author: DE DECKER, Bert
dc.date.accessioned: 2011-02-24T12:48:06Z
dc.date.available: NO_RESTRICTION
dc.date.available: 2011-02-24T12:48:06Z
dc.date.issued: 2010
dc.identifier.uri: http://hdl.handle.net/1942/11622
dc.description.abstract: When it comes to traditional 2D video editing, there are many video manipulation techniques to choose from, but all of them suffer from the limited amount of information that is present in the video itself. When more information about the scene is available, more powerful video manipulation methods become possible. In this dissertation, we examine what extra information about a scene might be useful and how this information can be used to develop powerful yet easy-to-use video manipulation techniques. We present a number of novel video manipulation methods that improve on the way the scene information is captured and the way this information is used.

First, we show how a 2D video can be manipulated if the scene is captured using multiple video cameras. We present an interactive setup that calculates collisions between virtual objects and a real scene. It is a purely image-based approach: no time is wasted computing explicit 3D geometry for the real objects in the scene, because all calculations are performed directly on the input images. We demonstrate our approach by building a setup in which a human can interact with a rigid body simulation in real time.

Second, we investigate which manipulation techniques become possible if we track a number of points in the scene. For this purpose, we created two novel motion capture systems. Both are low-cost optical systems that use imperceptible electronic markers. The first is a camera-based location tracking system: a marker is attached to each point that needs to be tracked, and a bright IR LED on the marker emits a specific light pattern that is captured by the cameras and decoded by a computer to locate and identify each marker. The second system projects a number of light patterns into the scene; electronic markers attached to points in the scene decode these patterns to obtain their position and orientation. Each marker also senses the color and intensity of the ambient light. We show that this information can be used in many applications, such as augmented reality and motion capture.
dc.language.iso: en
dc.title: Video Manipulation using External Cues
dc.type: Theses and Dissertations
local.format.pages: 127
local.bibliographicCitation.jcat: T1
local.type.refereed: Non-Refereed
local.type.specified: Phd thesis
dc.bibliographicCitation.oldjcat: D1
item.contributor: DE DECKER, Bert
item.fulltext: With Fulltext
item.fullcitation: DE DECKER, Bert (2010) Video Manipulation using External Cues.
item.accessRights: Open Access
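Note on the abstract above: for the camera-based tracking system, the abstract describes markers whose IR LEDs blink a specific light pattern that the cameras capture and a computer decodes to locate and identify each marker. The following Python snippet is a purely illustrative sketch of that idea, not the coding scheme from the thesis; it assumes a hypothetical format in which each marker repeats a fixed start pattern followed by an 8-bit ID, and that a tracker already supplies per-frame on/off observations for a detected blob.

# Illustrative sketch only (assumed format, not from the thesis): decode a
# fixed-length binary blink code into a marker ID. We assume the tracker
# already reports, per video frame, whether a given tracked blob's LED was
# lit (True) or dark (False).

from typing import Optional, Sequence

START = (True, True, False)  # assumed synchronization prefix
ID_BITS = 8                  # assumed payload length

def decode_marker_id(on_off: Sequence[bool]) -> Optional[int]:
    """Search the window for the start pattern, then read ID_BITS bits (MSB first)."""
    n = len(START)
    for i in range(len(on_off) - n - ID_BITS + 1):
        if tuple(on_off[i:i + n]) == START:
            marker_id = 0
            for bit in on_off[i + n:i + n + ID_BITS]:
                marker_id = (marker_id << 1) | int(bit)
            return marker_id
    return None  # start pattern not found in this observation window

# Example: a window whose payload encodes marker 5 (binary 00000101).
window = [False, True, True, False, False, False, False, False, False, True, False, True]
print(decode_marker_id(window))  # prints 5

A real system would also have to handle missed detections and synchronize the code phase across frames; the sketch is only meant to make the idea of temporal light-pattern identification concrete.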
Appears in Collections: PhD theses; Research publications
Files in This Item:
File: PhD De Decker.pdf (6.73 MB, Adobe PDF)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.