Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/10492
Full metadata record
DC Field: Value
dc.contributor.author: VANHULSEL, Marlies
dc.contributor.author: JANSSENS, Davy
dc.contributor.author: WETS, Geert
dc.date.accessioned: 2010-02-18T14:47:30Z
dc.date.available: 2010-02-18T14:47:30Z
dc.date.issued: 2007
dc.identifier.citation: TRB 86th Annual Meeting Compendium of Papers CD-ROM
dc.identifier.uri: http://hdl.handle.net/1942/10492
dc.description.abstract: Recent travel demand modeling has focused mainly on activity-based modeling. However, the majority of such models are still quite static. Therefore, the current research aims at incorporating dynamic components, such as short-term adaptation and long-term learning, into these activity-based models. In particular, this paper attempts to simulate the learning process underlying the development of activity-travel patterns. Furthermore, this study explores the impact of key events on the generation of daily schedules. The learning algorithm implemented in this paper uses a reinforcement learning technique, the foundations of which were provided in previous research. The goal of the present study is to relax the predefined activity-travel sequence assumption of this previous research and to allow the algorithm to determine the activity-travel sequence autonomously. To this end, the decision concerning transport mode needs to be revised as well, as this aspect was previously also fixed within the predefined schedule. In order to generate feasible activity-travel patterns, another alteration consists of incorporating time constraints, for example the opening hours of shops. In addition, a key event, in this case "obtaining a driving license", is introduced into the learning methodology by changing the available set of transport modes. The resulting patterns reveal more variation in the selected activities and respect the imposed time constraints. Moreover, the observed dissimilarities between activity-travel schedules before and after the key event prove to be significant based on a sequence alignment distance measure.
dc.language.iso: en
dc.title: Calibrating a New Reinforcement Learning Mechanism for Modeling Dynamic Activity-Travel Behavior and Key Events
dc.type: Proceedings Paper
local.bibliographicCitation.conferencedate: 21-25/01/2007
local.bibliographicCitation.conferencename: TRB 86th Annual Meeting
local.bibliographicCitation.conferenceplace: Washington, U.S.A.
local.format.pages: 17
local.bibliographicCitation.jcat: C2
dc.description.notes: Hasselt University - Campus Diepenbeek, Transportation Research Institute, Wetenschapspark 5, bus 6, BE-3590 Diepenbeek, Belgium. E-mail: {marlies.vanhulsel;davy.janssens;geert.wets}@uhasselt.be
local.type.refereed: Refereed
local.type.specified: Proceedings Paper
dc.bibliographicCitation.oldjcat: C2
local.bibliographicCitation.btitle: TRB 86th Annual Meeting Compendium of Papers CD-ROM
item.fullcitation: VANHULSEL, Marlies; JANSSENS, Davy & WETS, Geert (2007) Calibrating a New Reinforcement Learning Mechanism for Modeling Dynamic Activity-Travel Behavior and Key Events. In: TRB 86th Annual Meeting Compendium of Papers CD-ROM.
item.accessRights: Open Access
item.contributor: VANHULSEL, Marlies
item.contributor: JANSSENS, Davy
item.contributor: WETS, Geert
item.fulltext: With Fulltext
Appears in Collections: Research publications
Files in This Item:
File: Calibrating_a_New_Reinforcement_Learning_Mechanism_for_Modeling_Dynamic_Activity-Travel_Behavior_and_Key_Events.pdf
Description: Published version
Size: 178.83 kB
Format: Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.