Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/30318
Full metadata record
dc.contributor.author: WIJNANTS, Maarten
dc.contributor.author: COPPERS, Sven
dc.contributor.author: ROVELO RUIZ, Gustavo
dc.contributor.author: QUAX, Peter
dc.contributor.author: LAMOTTE, Wim
dc.date.accessioned: 2020-01-15T13:52:40Z
dc.date.available: 2020-01-15T13:52:40Z
dc.date.issued: 2019
dc.date.submitted: 2020-01-09T09:28:14Z
dc.identifier.citation: Proceedings of the 27th ACM International Conference on Multimedia (MM ’19), ACM, p. 1035-1037
dc.identifier.isbn: 9781450368896
dc.identifier.uri: http://hdl.handle.net/1942/30318
dc.description.abstract: Over-the-top (OTT) streaming services like YouTube and Netflix generate massive amounts of video data, thereby putting substantial pressure on network infrastructure. This paper describes a demonstration of the object-based video (OBV) methodology, which allows the background and foreground object(s) of a video scene to be streamed at different qualities via MPEG-DASH. The OBV methodology is inspired by research into human visual attention and foveated compression, in that it adaptively and dynamically assigns bitrate to those portions of the visual scene that have the highest utility in terms of perceptual quality. Using a content corpus of interview-like video footage, the described demonstration shows the OBV methodology's potential to reduce video bitrate requirements while incurring at most marginal perceptual impact (i.e., in terms of subjective video quality). Thanks to its standards-compliant Web implementation, the OBV methodology is directly and broadly deployable without requiring capital expenditure.
dc.description.sponsorship: Maarten Wijnants is funded by a VLAIO Innovation Mandate (project number HBC.2016.0625), co-sponsored by Androme. Sven Coppers is funded by the Special Research Fund (BOF) of Hasselt University (R-8150). We thank Davy Vanacken for his methodological advice.
dc.language.iso: en
dc.publisher: Association for Computing Machinery
dc.rights: 2019 Owner/Author
dc.subject.other: video coding
dc.subject.other: H.264
dc.subject.other: HTTP Adaptive Streaming
dc.subject.other: MPEG-DASH
dc.subject.other: subjective evaluation
dc.subject.other: Web
dc.title: Split & Dual Screen Comparison of Classic vs Object-based Video
dc.type: Proceedings Paper
local.bibliographicCitation.conferencedate: October 21-25, 2019
local.bibliographicCitation.conferencename: the 27th ACM International Conference on Multimedia (MM ’19)
local.bibliographicCitation.conferenceplace: Nice, France
dc.identifier.epage: 1037
dc.identifier.spage: 1035
local.bibliographicCitation.jcat: C1
local.publisher.place: 1515 Broadway, New York, NY 10036-9998, USA
local.type.refereed: Refereed
local.type.specified: Article
dc.source.type: Meeting
dc.identifier.doi: 10.1145/3343031.3350582
dc.identifier.isi: WOS:000509743400119
local.provider.type: Pdf
local.bibliographicCitation.btitle: Proceedings of the 27th ACM International Conference on Multimedia (MM ’19)
item.fullcitation: WIJNANTS, Maarten; COPPERS, Sven; ROVELO RUIZ, Gustavo; QUAX, Peter & LAMOTTE, Wim (2019) Split & Dual Screen Comparison of Classic vs Object-based Video. In: Proceedings of the 27th ACM International Conference on Multimedia (MM ’19), ACM, p. 1035-1037.
item.fulltext: With Fulltext
item.accessRights: Restricted Access
item.contributor: COPPERS, Sven
item.contributor: ROVELO RUIZ, Gustavo
item.contributor: QUAX, Peter
item.contributor: LAMOTTE, Wim
item.contributor: WIJNANTS, Maarten
Appears in Collections: Research publications
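The abstract describes adaptively assigning bitrate to the scene portions with the highest perceptual utility, streaming background and foreground objects at different qualities. As an illustration only (not the paper's implementation), the core allocation idea can be sketched as a greedy selection of one MPEG-DASH representation per object under a bandwidth budget; the object names, utility weights, and bitrate ladders below are hypothetical:

```python
# Illustrative sketch (not from the paper): given a total bandwidth budget,
# pick one representation per video object, spending bitrate first on the
# objects with the highest perceptual utility (e.g., the foreground speaker
# in interview-like footage).

def pick_representations(objects, budget_bps):
    """objects: list of (name, utility, bitrates_bps) tuples, with each
    bitrate ladder sorted ascending; returns {name: chosen_bitrate_bps}."""
    # Start every object at its cheapest representation.
    choice = {name: rates[0] for name, _, rates in objects}
    spent = sum(choice.values())
    # Upgrade objects in order of descending perceptual utility.
    for name, _, rates in sorted(objects, key=lambda o: -o[1]):
        for rate in rates[1:]:
            extra = rate - choice[name]
            if spent + extra <= budget_bps:
                choice[name] = rate
                spent += extra
    return choice

# Hypothetical bitrate ladders for an interview-like scene.
scene = [
    ("foreground", 0.9, [500_000, 1_500_000, 3_000_000]),
    ("background", 0.2, [200_000, 800_000, 2_000_000]),
]
print(pick_representations(scene, budget_bps=4_000_000))
# -> {'foreground': 3000000, 'background': 800000}
```

With a 4 Mbps budget, the foreground is upgraded to its top quality before the background receives any extra bitrate, mirroring the foveated-compression intuition the abstract cites.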
Files in This Item:
3343031.3350582.pdf: Peer-reviewed author version, 1.02 MB, Adobe PDF (Restricted Access)
Wijnants_Maarten_2019.pdf: Published version, 1.02 MB, Adobe PDF (Restricted Access)

Scopus citations: 1 (checked on Sep 2, 2020)
Page view(s): 82 (checked on Jul 5, 2022)
Download(s): 16 (checked on Jul 5, 2022)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.