Current Projects

Satisficing mentalizing — Learning models of Theories of Mind for behavior understanding


This project explores different strategies by which artificial systems could be equipped with a Theory of Mind, i.e. the ability to infer hidden mental states of other agents from observable behavior. We are developing models that learn from past encounters with agents and perform such mentalizing in a satisficing way, […]


Lively and trustworthy social robots (VIVA)

The VIVA project aims to build a mobile social robot that produces lively and socially-appropriate behavior in interaction. We are in charge of developing an embodied communication architecture that controls and mediates the robot’s responsive behavior. In addition, we endow the robot with abilities for cohesive spoken dialogue over long-term interactions.


Mental models in collaborative interactive reinforcement learning


This project is part of the Research Cluster CINEMENTAS (“Collaborative Intelligence Based on Mental Models of Assistive Systems”), funded by Honda Research Institute Europe. We investigate the role of mental models in interactive reinforcement learning, where learning is viewed as a dynamic collaborative process in which trainer and learner together try to figure […]


Development of iconic gesture-speech integration in preschool children (EcoGest)

This project aims to provide a detailed account of the development of iconic gesturing and its integration with speech across different communicative genres. We will study preschool children aged 4 to 5 years to investigate their speech-accompanying iconic gesture use and to develop a computational cognitive model of this development. We apply qualitative and […]


Child-Robot Communication and Collaboration (BabyRobot)

BabyRobot aims to develop robots that can support the development of socio-affective, communication and collaboration skills in both typically developing children and children on the autism spectrum. The goal of the project is to enable robots to share attention, establish common ground and form joint goals with children. We will contribute research on nonverbal gesture-based interaction in relation to shared attention, interpersonal alignment and grounding. Specifically, we develop modules for the recognition and synthesis of expressive social signals and gestures on social robot platforms.


Second Language Tutoring Using Social Robots (L2TOR)

L2TOR (pronounced ‘el tutor’) aims to design a child-friendly tutor robot that can be used to support teaching preschool children a second language (L2) by interacting with children in their social and referential world. The project will focus on teaching English to native speakers of Dutch, German and Turkish, and teaching Dutch and German as L2 to immigrant children. We contribute research on interaction management tasks based on decision-theoretic dialogue modeling, probabilistic mental state attribution and incremental grounding.


Social Motorics


This project explores the role of perceptual and motor processes in live social interaction. We are developing a probabilistic model of how prediction-based action observation and mentalizing (i.e. theory of mind) contribute to and work together during dynamically unfolding gestural interactions.


Underlying mechanisms of the sense of agency (SoA)


This project investigates the underlying mechanisms of the sense of agency, i.e. the sense that one is in control of one's actions and their consequences. We employ behavioral methodologies established in both physical and virtual reality environments, and also examine how the sense of ownership and the sense of agency change in joint actions with robots and virtual agents.


Intelligent Coaching Space (ICSPACE)

ICSPACE explores how Virtual Reality (VR) can be used to provide intelligent coaching for sports training, motor skill learning, and physical rehabilitation. We develop a closed-loop intelligent coaching environment that enables online support through multi-modal, multi-sensory feedback, delivered via augmented virtual mirrors or the action-oriented natural language of a virtual coach.