Submitted by hughbzhang t3_z1yt45 in MachineLearning
Github: https://github.com/facebookresearch/diplomacy_cicero
Abstract:
Despite much progress in training AI systems to imitate human language, building agents that use language to communicate intentionally with humans in interactive environments remains a major challenge. We introduce Cicero, the first AI agent to achieve human-level performance in Diplomacy, a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between seven players. Cicero integrates a language model with planning and reinforcement learning algorithms by inferring players' beliefs and intentions from its conversations and generating dialogue in pursuit of its plans. Across 40 games of an anonymous online Diplomacy league, Cicero achieved more than double the average score of the human players and ranked in the top 10% of participants who played more than one game.
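The abstract describes the core loop: a strategic reasoning module infers beliefs and plans joint actions ("intents"), and a dialogue model generates messages conditioned on those intents. A minimal sketch of that control flow, assuming hypothetical function and class names (not the actual facebookresearch/diplomacy_cicero API), might look like:

```python
# Hypothetical sketch of the intent-conditioned dialogue loop described in
# the abstract. All names here are illustrative, not the real Cicero code.

from dataclasses import dataclass


@dataclass
class Intent:
    """A joint action the agent intends: its own orders plus the orders
    it hopes the message recipient will play."""
    own_orders: list[str]
    partner_orders: list[str]


def plan_intent(game_state: dict, dialogue_history: list[str]) -> Intent:
    """Stand-in for the planning/RL module; a real system would search
    over joint actions given inferred beliefs. Fixed output here."""
    return Intent(own_orders=["A PAR - BUR"], partner_orders=["A MUN H"])


def generate_message(intent: Intent, dialogue_history: list[str]) -> str:
    """Stand-in for the dialogue model: a real system would prompt a
    trained LM with the intent; we just format a template."""
    return (f"I'm planning {', '.join(intent.own_orders)}; "
            f"would you consider {', '.join(intent.partner_orders)}?")


def negotiation_step(game_state: dict, dialogue_history: list[str]) -> str:
    # Plan first, then speak in pursuit of the plan.
    intent = plan_intent(game_state, dialogue_history)
    return generate_message(intent, dialogue_history)


message = negotiation_step({}, [])
print(message)
```

The key design point the abstract emphasizes is the direction of control: dialogue is generated *in pursuit of* plans, rather than plans being an afterthought of free-form chat.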
Disclosure: I am one of the authors of the above paper.
Edit: I just heard from the team that they’re planning an AMA to discuss this work soon; keep an eye out for that on /r/machinelearning.
Amortize_Me_Daddy t1_ixdyg12 wrote
Very cool work. I saw this on my LinkedIn feed and immediately had to share it with my fiancé, who is a huge fan of Risk and Diplomacy. To me, this seems like a much bigger deal than AlphaGo - can someone give me a sanity check?
I’m also interested in how much thought was put into the persuasiveness of generated messages when making a proposal. It seems like something well outside the scope of RL, but still quite important to optimize. I am just… astounded reading over that convo between France and Turkey. If you have time, would you mind offering some insight into the impressive “salesmanship” of CICERO’s language model?