Please use this identifier to cite or link to this item:
http://hdl.handle.net/1942/728
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 't Hoen, Pieter Jan | - |
dc.contributor.author | TUYLS, Karl | - |
dc.date.accessioned | 2005-04-15T12:29:46Z | - |
dc.date.available | 2005-04-15T12:29:46Z | - |
dc.date.issued | 2004 | - |
dc.identifier.citation | MACHINE LEARNING: ECML 2004, PROCEEDINGS. p. 168-179 | - |
dc.identifier.isbn | 3-540-23105-6 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/1942/728 | - |
dc.description.abstract | In this paper, we show how the dynamics of Q-learning can be visualized and analyzed from the perspective of Evolutionary Dynamics (ED). More specifically, we show how ED can be used as a model for Q-learning in stochastic games. Analysis of the evolutionarily stable strategies and attractors of the ED derived from the Reinforcement Learning (RL) application then predicts the desired parameters for RL in Multi-Agent Systems (MASs) to achieve Nash equilibria with high utility. Secondly, we show how the derived fine-tuning of parameter settings from the ED can support application of the COllective INtelligence (COIN) framework. COIN is a proven engineering approach for learning cooperative tasks in MASs. We show that the derived link between ED and RL predicts the performance of the COIN framework and visualizes the incentives provided in COIN toward cooperative behavior. | - |
dc.language.iso | en | - |
dc.publisher | Springer | - |
dc.relation.ispartofseries | LECTURE NOTES IN COMPUTER SCIENCE | - |
dc.title | Analyzing Multi-agent Reinforcement Learning Using Evolutionary Dynamics | - |
dc.type | Journal Contribution | - |
local.bibliographicCitation.conferencename | MACHINE LEARNING: ECML 2004, PROCEEDINGS | - |
dc.identifier.epage | 179 | - |
dc.identifier.spage | 168 | - |
local.bibliographicCitation.jcat | A1 | - |
local.type.refereed | Refereed | - |
local.type.specified | Article | - |
local.relation.ispartofseriesnr | 3201 | - |
dc.bibliographicCitation.oldjcat | A1 | - |
dc.identifier.doi | 10.1007/978-3-540-30115-8_18 | - |
dc.identifier.isi | 000223999500018 | - |
item.fulltext | No Fulltext | - |
item.accessRights | Closed Access | - |
item.fullcitation | 't Hoen, Pieter Jan & TUYLS, Karl (2004) Analyzing Multi-agent Reinforcement Learning Using Evolutionary Dynamics. In: MACHINE LEARNING: ECML 2004, PROCEEDINGS. p. 168-179. | - |
item.contributor | 't Hoen, Pieter Jan | - |
item.contributor | TUYLS, Karl | - |
crisitem.journal.issn | 0302-9743 | - |
Appears in Collections: | Research publications |
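The abstract describes modeling Q-learning with Evolutionary Dynamics. As an illustrative sketch (not code from the paper itself), the known replicator-style model of Boltzmann Q-learning from the related work of Tuyls et al. can be simulated numerically: each player's mixed strategy follows a selection term (replicator dynamics scaled by 1/τ) plus a mutation term induced by the Boltzmann exploration. The payoff matrix below is a hypothetical Prisoner's Dilemma chosen only for demonstration; `alpha` and `tau` are assumed learning-rate and temperature parameters.

```python
import numpy as np

# Hypothetical Prisoner's Dilemma payoffs (rows/cols: Cooperate, Defect).
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])   # row player's payoff matrix
B = A.T                      # symmetric game: column player's payoffs

def ed_step(p, f, alpha=0.01, tau=0.1):
    """One Euler step of the evolutionary model of Boltzmann Q-learning:
    dp_i = p_i * alpha/tau * (f_i - p.f)  +  p_i * alpha * sum_j p_j ln(p_j / p_i)
    (selection term + exploration-induced mutation term)."""
    selection = p * (f - p @ f) / tau
    mutation = p * (p @ np.log(p) - np.log(p))
    return p + alpha * (selection + mutation)

x = np.array([0.5, 0.5])     # row player's mixed strategy
y = np.array([0.5, 0.5])     # column player's mixed strategy
for _ in range(20000):
    fx = A @ y               # expected payoffs of row player's actions
    fy = B.T @ x             # expected payoffs of column player's actions
    x, y = ed_step(x, fx), ed_step(y, fy)

print(x, y)  # both strategies concentrate on Defect, the game's attractor
```

Tracing trajectories like `(x, y)` over time is exactly the kind of visualization the abstract refers to: the attractors of the derived ED indicate which joint policies Q-learning will settle on for given parameter settings.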
Web of Science™ citations: 4 (checked on Sep 30, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.