Communication requires having a common language, a lingua franca, between agents. This language could emerge via a consensus process, but it may require generations of trial and error. Alternatively, agents ground their language in representations of the observed world. We demonstrate a simple way to ground language in learned representations, which facilitates decentralized multi-agent communication and coordination. We find that a standard representation learning algorithm, autoencoding, is sufficient for arriving at a grounded common language. When agents broadcast these representations, they learn to understand and respond to each other's utterances.
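To make the mechanism concrete, below is a minimal sketch (not the authors' implementation) of the idea described in the abstract: a speaker agent autoencodes its own observation and broadcasts the latent code as its message, and a listener conditions its policy on the received code. The layer sizes, toy dimensions (OBS_DIM, MSG_DIM, ACT_DIM), and random stand-in observations are illustrative assumptions.

```python
# Minimal sketch, assuming a speaker/listener setup with toy dimensions.
import torch
import torch.nn as nn

OBS_DIM, MSG_DIM, ACT_DIM = 32, 8, 4  # assumed toy sizes, not from the paper

class SpeakerAutoencoder(nn.Module):
    """Grounds messages in observations: the encoder's latent code is the message."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                     nn.Linear(64, MSG_DIM))
        self.decoder = nn.Sequential(nn.Linear(MSG_DIM, 64), nn.ReLU(),
                                     nn.Linear(64, OBS_DIM))

    def forward(self, obs):
        msg = self.encoder(obs)    # this latent code is broadcast to other agents
        recon = self.decoder(msg)  # reconstructing the observation grounds the code
        return msg, recon

class ListenerPolicy(nn.Module):
    """Maps the listener's own observation plus a received message to action logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + MSG_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM))

    def forward(self, obs, msg):
        return self.net(torch.cat([obs, msg], dim=-1))

speaker, listener = SpeakerAutoencoder(), ListenerPolicy()
opt = torch.optim.Adam(list(speaker.parameters()) + list(listener.parameters()), lr=1e-3)

obs_speaker = torch.randn(16, OBS_DIM)   # stand-in for the speaker's observations
obs_listener = torch.randn(16, OBS_DIM)  # stand-in for the listener's observations

msg, recon = speaker(obs_speaker)
logits = listener(obs_listener, msg.detach())   # listener hears the broadcast code
loss = nn.functional.mse_loss(recon, obs_speaker)  # autoencoding objective grounds the message
# (a task/policy loss from the multi-agent RL objective would be added here)
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the autoencoding loss alone shapes the message space, and the policy is left to be trained by whatever reinforcement-learning objective the task provides.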

Author(s) : Toru Lin, Minyoung Huh, Chris Stauffer, Ser-Nam Lim, Phillip Isola

Links : PDF - Abstract

Code :

Keywords : language - agents - ground - representations - learning
