Realtime Mentalizing in Human-Agent Collaboration

This project explores how AI-based agents can be equipped with an ability to cooperate that is grounded in a Theory of Mind (ToM), i.e., the attribution of hidden mental states to other agents, inferred from their observable behavior. In contrast to the usual approach of studying this capability in offline, observer-based settings, we aim to fuse mentalizing with strategic planning and interaction in realtime, situated cooperation. Previous work has developed Bayesian ToM models capable of performing mentalizing in adaptive, "satisficing" ways, i.e., trading off accuracy against efficiency. Ongoing work investigates how such mentalizing can be integrated bi-directionally with realtime planning and monitoring of cooperative behavior. We also investigate the cooperative abilities of LLM-based agents, as well as how humans cooperate and communicate, in the game environment "Overcooked".
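To make the core idea concrete, below is a minimal, illustrative sketch of Bayesian mentalizing: inferring a partner's hidden goal from observed actions via a Bayesian belief update. The function name, the softmax-rationality likelihood, and all numbers are assumptions chosen for illustration; they are not the project's actual model.

```python
# Illustrative sketch (not the project's model): Bayesian goal inference
# from observed behavior, P(goal | action) ∝ P(action | goal) P(goal).
import numpy as np

def update_goal_belief(prior, action, goal_action_values, beta=2.0):
    """One Bayesian belief update over a discrete set of candidate goals.

    prior:              array (n_goals,), current belief over goals
    action:             index of the observed action
    goal_action_values: array (n_goals, n_actions), how useful each action
                        is under each goal (e.g., negative cost-to-go)
    beta:               rationality parameter of the softmax likelihood
    """
    # Likelihood: a boundedly rational agent prefers higher-value actions.
    logits = beta * goal_action_values
    likelihood = np.exp(logits - logits.max(axis=1, keepdims=True))
    likelihood /= likelihood.sum(axis=1, keepdims=True)  # P(action | goal)

    posterior = prior * likelihood[:, action]
    return posterior / posterior.sum()

# Example: two candidate goals, three possible actions.
values = np.array([[1.0, 0.2, 0.0],   # action values under goal 0
                   [0.0, 0.2, 1.0]])  # action values under goal 1
belief = np.array([0.5, 0.5])
for observed_action in [2, 2, 1]:     # observed behavior favors goal 1
    belief = update_goal_belief(belief, observed_action, values)
print(belief)  # posterior mass shifts toward goal 1
```

In a "satisficing" variant of such a model, one might, for instance, stop or coarsen the inference once the posterior exceeds a confidence threshold, which is one way to trade accuracy against the computational cost of mentalizing during realtime interaction.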

Contact: Florian Schröder (fschroeder@techfak.uni-bielefeld.de), Stefan Kopp (skopp@techfak.uni-bielefeld.de)

Publications: