Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/34825
Full metadata record
dc.contributor.advisor: Bekaert, Philippe
dc.contributor.advisor: Lafruit, Gauthier
dc.contributor.author: JORISSEN, Lode
dc.date.accessioned: 2021-09-09T08:25:40Z
dc.date.available: 2021-09-09T08:25:40Z
dc.date.issued: 2021
dc.date.submitted: 2021-08-18T16:24:12Z
dc.identifier.uri: http://hdl.handle.net/1942/34825
dc.description.abstract:
Display technologies play an important role in our daily lives: we use displays for all kinds of tasks, ranging from information gathering to entertainment. However, most commonly used displays present only a flat 2D image. As a result, they stimulate only a limited part of the human visual system: several visual cues related to the depth of the scene are left out. Technologies such as stereo displays and viewer tracking can stimulate some of these cues, but the result remains incomplete. In recent years there has been increasing interest in light field displays: displays that try to recreate the light field of an environment in order to create a complete viewing experience, including all cues related to depth perception. To accomplish this, however, light field displays require large amounts of input data. For a virtual scene this is not much of an issue: the required data can easily be generated using traditional computer graphics techniques. For a real-world scene, other means are needed to obtain the required data. One approach is to capture all information with cameras, but capturing everything in a single shot would require a large number of cameras, making such a solution expensive and often impractical.

In this dissertation we describe a view interpolation approach that generates the input data for light field displays from a sparse set of input cameras, whose spacing should be large enough to capture the full scene. To accommodate the different input requirements of different light field displays, we present an intermediate data structure that stores the real-world scene and from which data for different types of displays can easily be generated. This data structure is based on a light field representation called Epipolar Plane Images (EPIs). An EPI is created by stacking the images from a linear camera array on top of each other and taking a slice of the resulting cube corresponding to a single scanline; the resulting image contains a set of lines that implicitly encode the geometry of the scene.

First, we look at techniques to extract this geometric information from the input camera images. We show that exploiting the properties of Epipolar Plane Images allows for an accurate estimation of the scene geometry, even with a small number of cameras and a relatively large distance between them. Furthermore, we propose methods to better handle occlusions in the scene, as well as color variations in the input data caused by the material properties of the scene or by inconsistencies in camera synchronization. A frequency-domain method is introduced to obtain a per-scanline impression of the scene's depth distribution by analyzing the lines in the EPI; this analysis reduces the search range during extraction and hence increases the quality of the obtained data.
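As a concrete illustration of the two ideas above (a minimal NumPy sketch, not code from the dissertation), the first function below stacks the frames of a linear camera array into a cube and slices out the EPI for one scanline; the second bins the EPI's 2D spectral energy by the line slope it implies, approximating a per-scanline disparity distribution. The function names, the rectified grayscale inputs, and the default disparity range are assumptions made for the example.

    import numpy as np

    def build_epi(images, scanline):
        """Stack N rectified H x W frames from a linear camera array into
        an (N, H, W) cube and slice it at one image row. In the resulting
        N x W Epipolar Plane Image, each scene point traces a straight
        line whose slope is its disparity per camera step (hypothetical
        helper, assuming equidistant camera positions)."""
        cube = np.stack(images, axis=0)  # (N, H, W)
        return cube[:, scanline, :]      # (N, W) EPI for this scanline

    def disparity_histogram(epi, d_min=0.0, d_max=8.0, n_bins=64):
        """Rough per-scanline disparity distribution from the EPI spectrum.
        A line of slope d in the EPI concentrates its spectral energy
        where f_cam + d * f_col = 0, so weighting each spectrum sample by
        its magnitude and binning it at d = -f_cam / f_col sketches the
        depth distribution, which can then restrict the search range of
        the line extraction (a stand-in for the method in the thesis)."""
        n, w = epi.shape
        spec = np.abs(np.fft.fft2(epi - epi.mean()))
        f_cam, f_col = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(w),
                                   indexing="ij")
        valid = np.abs(f_col) > 1e-9     # avoid dividing by zero
        d = np.where(valid, -f_cam / np.where(valid, f_col, 1.0),
                     d_min - 1.0)        # sentinel outside the histogram
        mask = valid & (d >= d_min) & (d <= d_max)
        hist, edges = np.histogram(d[mask], bins=n_bins,
                                   range=(d_min, d_max), weights=spec[mask])
        return hist, edges

Given the focal length f of the cameras and the spacing B between them, a disparity d in this histogram corresponds to a depth of Z = f * B / d, which is how the slopes of the EPI lines implicitly encode the scene geometry.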
After filling the data structure, we generate the data for the light field displays. In this dissertation we focus on two kinds of displays: multi-view light field displays and integral imaging displays. Each requires a different layout of the input data: the former requires the input images to be captured at the positions from which the user will observe the screen, while the latter requires the images to be captured at the display plane itself, which is often located inside the scene. We show that our approach generates consistent data for both types of displays and that, thanks to the proposed extraction algorithms, the generated data is of higher quality than that of existing view synthesis approaches. With the help of a frequency-domain filter, we also reduce the disturbing effects of view-dependent noise.

The advantages that light field displays have over traditional displays are also useful for augmented reality applications. However, light field displays are often opaque, hiding whatever is behind them. We present a custom transparent light field display that can be used for augmented reality applications. The display consists solely of an off-the-shelf projector and a custom holographic optical element that acts as a screen of micromirrors. The system requires an accurate calibration between the projector and the screen to recreate the light field correctly. We first propose a calibration approach for a flat display and show that it correctly aligns the virtual world with the real world. Since the screen material is flexible, it can also be applied to curved surfaces such as the windscreen of a car; a curved screen requires an adjusted calibration approach because of the extra distortion the curved surface introduces. We show that calibrating a curved transparent light field display is possible if the shape of the surface is known.
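For the flat-screen case only, the sketch below illustrates the kind of mapping such a calibration has to recover; it is not the calibration procedure from the thesis, which targets a micromirror holographic screen. Given correspondences between projector pixels and the screen positions where they land, a single homography pre-warps each frame so the projected content lines up with the real world. The function and parameter names are hypothetical, and a curved screen adds distortion that one homography cannot model, which is why the curved case needs the known surface shape.

    import numpy as np
    import cv2

    def flat_screen_prewarp(screen_pts, proj_pts, frame, proj_size):
        """Pre-warp one frame so that, once projected, its content lands
        on the intended screen positions (planar-screen assumption).

        screen_pts: Nx2 positions where projected markers were observed
                    on the screen (e.g., measured with a camera)
        proj_pts:   Nx2 projector-pixel coordinates that produced them
        frame:      image rendered in screen coordinates
        proj_size:  (width, height) of the projector image"""
        # For a planar screen, screen-to-projector is a single homography.
        H, _ = cv2.findHomography(np.asarray(screen_pts, np.float32),
                                  np.asarray(proj_pts, np.float32),
                                  cv2.RANSAC)
        # Warping the screen-space frame into projector pixels means each
        # projector pixel shows exactly what should appear where it lands.
        return cv2.warpPerspective(frame, H, proj_size)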
dc.language.iso: en
dc.title: Calibration and View Interpolation for Light Field Displays
dc.type: Theses and Dissertations
local.format.pages: 284
local.bibliographicCitation.jcat: T1
local.type.refereed: Non-Refereed
local.type.specified: Phd thesis
local.relation.ispartofseriesnr: 284
local.provider.type: Pdf
local.uhasselt.uhpub: yes
item.fulltext: With Fulltext
item.accessRights: Embargoed Access
item.contributor: JORISSEN, Lode
item.embargoEndDate: 2026-09-01
item.fullcitation: JORISSEN, Lode (2021) Calibration and View Interpolation for Light Field Displays.
Appears in Collections: Research publications
Files in This Item:
File: Calibration and View Interpolation for Light Field Displays.pdf (embargoed until 2026-09-01)
Size: 86.2 MB
Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.