Please use this identifier to cite or link to this item:
http://hdl.handle.net/1942/732
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | TUYLS, Karl | - |
dc.contributor.author | Heytens, Dries | - |
dc.contributor.author | Nowé, Ann | - |
dc.contributor.author | Manderick, Bernard | - |
dc.date.accessioned | 2005-04-20T06:55:10Z | - |
dc.date.available | 2005-04-20T06:55:10Z | - |
dc.date.issued | 2003 | - |
dc.identifier.citation | MACHINE LEARNING: ECML 2003. p. 421-431 | - |
dc.identifier.isbn | 3-540-20121-1 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/1942/732 | - |
dc.description.abstract | Modeling learning agents in the context of Multi-agent Systems requires an adequate understanding of their dynamic behaviour. Evolutionary Game Theory provides dynamics that describe how strategies evolve over time. Börgers et al. and Tuyls et al. have shown how classical Reinforcement Learning (RL) techniques such as Cross-learning and Q-learning relate to the Replicator Dynamics (RD), which gives a better understanding of the learning process. In this paper, we introduce an extension of the Replicator Dynamics from Evolutionary Game Theory. Based on these new dynamics, a Reinforcement Learning algorithm is developed that attains a stable Nash equilibrium for all types of games; no such algorithm currently exists. These dynamics open an interesting perspective for introducing new Reinforcement Learning algorithms in multi-state games and Multi-Agent Systems. | - |
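As an illustrative aside (this is the classical Replicator Dynamics the abstract refers to, not the paper's extension), the dynamics dx_i/dt = x_i((Ax)_i − xᵀAx) can be simulated with a simple Euler integration. The payoff matrix below is an assumed Prisoner's-Dilemma-style example, not taken from the paper:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics dx_i/dt = x_i((Ax)_i - x.Ax)."""
    fitness = A @ x           # payoff of each pure strategy against mix x
    avg = x @ fitness         # population-average payoff
    return x + dt * x * (fitness - avg)

# Assumed example payoffs for the row player (rows: cooperate, defect).
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

x = np.array([0.5, 0.5])      # initial mixed strategy
for _ in range(2000):
    x = replicator_step(x, A)

print(x)  # the dominated strategy (cooperate) dies out; x approaches [0, 1]
```

Since "defect" strictly dominates here, the dynamics drive the population to the pure defecting strategy; the paper's contribution is an extended dynamics whose derived RL algorithm reaches a stable Nash equilibrium in all game types, which this classical form does not guarantee.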
dc.language.iso | en | - |
dc.relation.ispartofseries | Lecture Notes in Computer Science | - |
dc.title | Extended Replicator Dynamics as a Key to Reinforcement Learning in Multi-agent Systems | - |
dc.type | Journal Contribution | - |
local.bibliographicCitation.conferencename | 14th European Conference on Machine Learning | - |
dc.identifier.epage | 431 | - |
dc.identifier.spage | 421 | - |
local.bibliographicCitation.jcat | A1 | - |
local.type.refereed | Refereed | - |
local.type.specified | Article | - |
local.relation.ispartofseriesnr | 2837 | - |
dc.bibliographicCitation.oldjcat | A1 | - |
dc.identifier.doi | 10.1007/b13633 | - |
dc.identifier.isi | 000187061900038 | - |
item.fulltext | No Fulltext | - |
item.accessRights | Closed Access | - |
item.fullcitation | TUYLS, Karl; Heytens, Dries; Nowé, Ann & Manderick, Bernard (2003) Extended Replicator Dynamics as a Key to Reinforcement Learning in Multi-agent Systems. In: MACHINE LEARNING: ECML 2003. p. 421-431. | - |
item.contributor | TUYLS, Karl | - |
item.contributor | Heytens, Dries | - |
item.contributor | Nowé, Ann | - |
item.contributor | Manderick, Bernard | - |
crisitem.journal.issn | 0302-9743 | - |
Appears in Collections: | Research publications |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.