Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/10357
Full metadata record
dc.contributor.author: HERMANS, Chris
dc.contributor.author: FRANCKEN, Yannick
dc.contributor.author: CUYPERS, Tom
dc.contributor.author: BEKAERT, Philippe
dc.date.accessioned: 2010-01-13T08:31:42Z
dc.date.available: 2010-01-13T08:31:42Z
dc.date.issued: 2009
dc.identifier.citation: CVPR: 2009 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-4. p. 1865-1872.
dc.identifier.isbn: 978-1-4244-3991-1
dc.identifier.uri: http://hdl.handle.net/1942/10357
dc.description.abstract: In this paper we present a novel method for 3D structure acquisition, based on structured light. Unlike classical structured light methods, in which a static projector illuminates a scene with time-varying illumination patterns, our technique makes use of a moving projector emitting a static striped illumination pattern. This projector is translated at a constant velocity, in the direction of the projector’s horizontal axis. Illuminating the object in this manner allows us to perform a per pixel analysis, in which we decompose the recorded illumination sequence into a corresponding set of frequency components. The dominant frequency in this set can be directly converted into a corresponding depth value. This per pixel analysis allows us to preserve sharp edges in the depth image. Unlike classical structured light methods, the quality of our results is not limited by projector or camera resolution, but is solely dependent on the temporal sampling density of the captured image sequence. Additional benefits include a significant robustness against common problems encountered with structured light methods, such as occlusions, specular reflections, subsurface scattering, interreflections, and to a certain extent projector defocus.
dc.language.iso: en
dc.publisher: IEEE
dc.title: Depth from Sliding Projections
dc.type: Proceedings Paper
local.bibliographicCitation.conferencename: IEEE-Computer-Society Conference on Computer Vision and Pattern Recognition Workshops
local.bibliographicCitation.conferenceplace: Miami, USA - JUN 20-25, 2009
dc.identifier.epage: 1872
dc.identifier.spage: 1865
local.bibliographicCitation.jcat: C1
dc.description.notes: [Hermans, Chris; Francken, Yannick; Cuypers, Tom; Bekaert, Philippe] Hasselt Univ, TUL, IBBT, Expertise Ctr Digital Media, Diepenbeek, Belgium. christ.hermans@uhasselt.be - yannick.francken@uhasselt.be - tom.cuypers@uhasselt.be - philippe.bekaert@uhasselt.be
local.type.refereed: Refereed
local.type.specified: Proceedings Paper
dc.bibliographicCitation.oldjcat: C1
dc.identifier.isi: 000279038001048
local.bibliographicCitation.btitle: CVPR: 2009 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-4
item.validation: ecoom 2011
item.fulltext: With Fulltext
item.accessRights: Open Access
item.fullcitation: HERMANS, Chris; FRANCKEN, Yannick; CUYPERS, Tom & BEKAERT, Philippe (2009) Depth from Sliding Projections. In: CVPR: 2009 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-4. p. 1865-1872.
item.contributor: HERMANS, Chris
item.contributor: FRANCKEN, Yannick
item.contributor: CUYPERS, Tom
item.contributor: BEKAERT, Philippe
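The abstract above outlines the core computation: for each camera pixel, the intensity recorded while the striped projection slides past at constant velocity is decomposed into frequency components, and the dominant frequency is mapped to a depth value. The sketch below illustrates that per-pixel analysis in Python/NumPy under simplifying assumptions; it is not the authors' implementation. In particular, frequency_to_depth and its calib parameter are hypothetical placeholders for the geometric relation (stripe period, translation velocity, projector/camera calibration) that the paper derives.

```python
# Minimal sketch of a per-pixel dominant-frequency depth estimate,
# assuming each pixel's intensity over time is roughly periodic while
# the striped projector translates at constant velocity.
import numpy as np

def dominant_frequency(intensity, fps):
    """Dominant non-DC temporal frequency (Hz) of one pixel's intensity
    sequence sampled at `fps` frames per second."""
    signal = intensity - intensity.mean()            # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

def frequency_to_depth(freq, calib):
    """Hypothetical monotone frequency-to-depth mapping; stands in for
    the calibrated geometric relation used in the paper."""
    return calib / max(freq, 1e-9)

def depth_map(frames, fps, calib=1.0):
    """frames: (T, H, W) grayscale sequence captured during the sweep.
    Returns an (H, W) depth map. Looped per pixel for clarity; a real
    implementation would vectorize the FFT over all pixels at once."""
    T, H, W = frames.shape
    depth = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            f = dominant_frequency(frames[:, y, x].astype(float), fps)
            depth[y, x] = frequency_to_depth(f, calib)
    return depth
```

Because the estimate is made independently at each pixel, sharp depth discontinuities are preserved, which matches the per-pixel analysis the abstract describes.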
Appears in Collections: Research publications
Files in This Item:
File: Hermans et al. - Depth from Sliding Projections.pdf
Description: Peer-reviewed author version
Size: 2.47 MB
Format: Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.