Creating incentives for cooperation is a challenge in natural and artificial systems. One potential answer is reputation, whereby agents trade the immediate cost of cooperation for the future benefits of having a good reputation. We use a simple model of reinforcement learning to show that reputation mechanisms generate two coordination problems. We propose two mechanisms to alleviate this: (i) seeding a proportion of the system with fixed agents that steer others towards good equilibria, and (ii) intrinsic rewards based on the concept of introspection, i.e., augmenting agents' rewards by an amount proportionate to the performance of their own strategy against themselves. A combination of these simple mechanisms is successful in stabilizing cooperation, even in a fully decentralized version of the problem where agents learn to use and assign reputations simultaneously. We show how our results relate to the literature in Evolutionary Game Theory, and discuss implications for human, artificial and hybrid systems, where reputations can be used as a way to establish trust and cooperation.
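The introspection mechanism described above can be illustrated with a minimal sketch. The idea is to add an intrinsic bonus proportional to how well an agent's own strategy performs when played against a copy of itself. The payoff matrix, the weighting coefficient `beta`, and the function name below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Assumed payoff matrix for a one-shot Prisoner's Dilemma
# (the paper's exact game and parameters are not reproduced here).
# Rows: own action, columns: opponent action; 0 = cooperate, 1 = defect.
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

def introspection_reward(env_reward, policy, beta=0.5):
    """Augment the environment reward with an introspective bonus
    proportional to the expected payoff of the agent's own mixed
    strategy when played against itself.

    `beta` is an assumed weighting coefficient.
    """
    # Expected self-play payoff: sum over action pairs (a, b) of
    # policy[a] * policy[b] * PAYOFF[a, b]
    self_play_payoff = policy @ PAYOFF @ policy
    return env_reward + beta * self_play_payoff

# Example: a mostly cooperative mixed strategy.
policy = np.array([0.9, 0.1])
augmented = introspection_reward(env_reward=1.0, policy=policy)
```

Under this sketch, a fully cooperative strategy earns a larger bonus (mutual cooperation pays 3) than a fully defecting one (mutual defection pays 1), which is the intended pressure towards cooperative equilibria.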

Author(s) : Nicolas Anastassacos, Julian García, Stephen Hailes, Mirco Musolesi

Links : PDF - Abstract

Code :

https://github.com/mtrazzi/two-step-task


Keywords : cooperation - systems - reputation - agents - mechanisms
