Abstract:
This talk introduces Concordia, a framework for developing and evaluating cooperative intelligence in agents built with Large Language Models (LLMs). Building upon the rich history of Agent-Based Modeling (ABM), Concordia empowers researchers to construct Generative Agent-Based Models (GABMs), where LLM-powered agents interact with each other and their environment through natural language. Concordia facilitates the creation of complex, language-mediated simulations of diverse scenarios, from physical worlds to digital environments involving apps and services. Within these simulations, agents employ a flexible component system to guide their actions, seamlessly integrating LLM calls with associative memory retrieval. A key feature of Concordia is the "Game Master," inspired by tabletop role-playing games, which simulates the environment, translates agent actions into appropriate implementations, and ensures the realism of interactions. By enabling the study of cooperation among LLM agents in challenging scenarios, such as those involving competing interests and potential miscommunication, Concordia aims to advance research on cooperative and social intelligence. This research is critical as we witness the rapid growth of LLMs and anticipate the increasing prevalence of personalized agents in our lives. The ability of these agents to effectively cooperate with one another and with humans will be crucial for their successful and beneficial integration into society.
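
To make the agent/Game Master pattern described above concrete, here is a minimal Python sketch: each agent assembles its context from components (here, a toy associative-memory retrieval) before an LLM call, and a Game Master resolves the agents' attempted actions into outcomes. All names in this sketch (`Agent`, `GameMaster`, `AssociativeMemory`, the `llm` stub) are illustrative assumptions, not Concordia's actual API.

```python
import dataclasses
from typing import Callable, List

def llm(prompt: str) -> str:
    """Stand-in for a language model call; a real GABM would query an LLM here."""
    return f"[model continuation of: {prompt[-60:]!r}]"

@dataclasses.dataclass
class AssociativeMemory:
    """Toy associative memory: stores observations, retrieves by keyword overlap."""
    entries: List[str] = dataclasses.field(default_factory=list)

    def add(self, text: str) -> None:
        self.entries.append(text)

    def retrieve(self, query: str, k: int = 3) -> List[str]:
        scored = sorted(
            self.entries,
            key=lambda e: len(set(e.lower().split()) & set(query.lower().split())),
            reverse=True,
        )
        return scored[:k]

class Agent:
    """An agent whose prompt is assembled from components before each LLM call."""
    def __init__(self, name: str, memory: AssociativeMemory,
                 components: List[Callable[["Agent", str], str]]):
        self.name = name
        self.memory = memory
        self.components = components

    def act(self, observation: str) -> str:
        self.memory.add(observation)
        # Each component contributes context (e.g. retrieved memories, goals).
        context = "\n".join(c(self, observation) for c in self.components)
        prompt = f"{context}\nObservation: {observation}\nWhat does {self.name} do next?"
        return llm(prompt)

def recent_memories(agent: Agent, observation: str) -> str:
    """A component that surfaces memories relevant to the current observation."""
    return "Relevant memories:\n" + "\n".join(agent.memory.retrieve(observation))

class GameMaster:
    """Resolves agents' natural-language action attempts into shared events."""
    def __init__(self, agents: List[Agent]):
        self.agents = agents

    def step(self, scene: str) -> str:
        events = []
        for agent in self.agents:
            attempt = agent.act(scene)
            # The Game Master decides what actually happens given the attempt.
            outcome = llm(f"Scene: {scene}\n{agent.name} attempts: {attempt}\nOutcome:")
            events.append(f"{agent.name}: {outcome}")
        return "\n".join(events)

if __name__ == "__main__":
    alice = Agent("Alice", AssociativeMemory(), [recent_memories])
    bob = Agent("Bob", AssociativeMemory(), [recent_memories])
    gm = GameMaster([alice, bob])
    print(gm.step("Alice and Bob must split the last seat on the evening train."))
```

In this sketch the Game Master is the sole arbiter of what each attempted action actually achieves, which is one way to keep interactions between agents with competing interests grounded in a consistent shared environment.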
Bio:
Minsuk Chang is interested in the (in)ability of humans and other agents to acquire new skills and knowledge through interaction. He builds and studies learning processes and their dynamics, seeking to understand how agents can effectively gather information, adapt to new situations, and expand their repertoire of behaviors.