Satisficing mentalizing — Learning models of Theories of Mind for behavior understanding

This project explores different strategies for equipping artificial systems with a Theory of Mind, i.e. the ability to infer the hidden mental states of other agents from their observable behavior. We are developing models that learn from past encounters with agents in order to perform such mentalizing in a satisficing way: predictions must be accurate enough to be useful across a range of situations, yet computed quickly enough to enable real-time applications.
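To make the idea of inferring hidden mental states from observable behavior concrete, the sketch below shows one common way such inference can be framed: a Bayesian belief update over an agent's unobservable goal given a sequence of observed actions. This is an illustrative toy only, not the project's actual model; the goals, actions, and likelihood numbers are invented for the example.

```python
# Illustrative sketch (not the project's model): Bayesian inference of a
# hidden goal from observed actions. "Mentalizing" here means maintaining
# a posterior belief over an agent's unobservable goal.

# Hypothetical setup: the observed agent wants either "coffee" or "tea".
PRIOR = {"coffee": 0.5, "tea": 0.5}

# P(action | goal) -- assumed numbers, chosen purely for illustration.
LIKELIHOOD = {
    "coffee": {"walk_to_kettle": 0.4, "grab_mug": 0.4, "open_fridge": 0.2},
    "tea":    {"walk_to_kettle": 0.6, "grab_mug": 0.3, "open_fridge": 0.1},
}

def infer_goal(actions, prior=PRIOR):
    """Return the posterior over hidden goals after observing the actions."""
    posterior = dict(prior)
    for action in actions:
        # Multiply in the likelihood of the observed action under each goal.
        for goal in posterior:
            posterior[goal] *= LIKELIHOOD[goal][action]
        # Renormalize so the belief remains a probability distribution.
        total = sum(posterior.values())
        posterior = {g: p / total for g, p in posterior.items()}
    return posterior

if __name__ == "__main__":
    belief = infer_goal(["walk_to_kettle", "walk_to_kettle"])
    print(belief)  # belief in "tea" grows with each kettle-directed action
```

A satisficing variant of such a model would trade exactness for speed, e.g. by coarsening the goal space or stopping the update early once one hypothesis is sufficiently dominant.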

Contact: Jan Pöppel (