Scalable hybrid Avatar-Agent-Technologies for everyday social interaction in XR (HiAvA)

HiAvA investigates and develops technologies for enabling multi-user applications in Social VR, mitigating the challenges of social distancing. The goal is to improve upon current solutions by maintaining immersion and social presence even on hardware devices that only allow for limited tracking or rendering. The resulting system should exceed the capabilities of current video communication in terms of scalability, immersion and comfort. Our group contributes work on AI-based models for the speech-driven generation of non-verbal behavior of human avatars, in particular meaningful human-like gesticulation that is suitable for use with avatar-based face-to-face interaction systems.


Implications of conversing with intelligent assistants in everyday life (IMPACT)

Autonomous systems using artificial intelligence (AI) to communicate with humans will soon be a part of everyday life. The increasing availability and deployment of such systems can have implications for humans and society at different levels. This project studies those implications with regard to users’ (1) understanding of AI algorithms, (2) communication with machines, and (3) relationship building with machines.

Adaptive autonomy of worker assistance systems

This project is part of the Forschungskolleg “Design of flexible working environments – human-centered use of Cyber-Physical Systems in Industry 4.0” run by the Universities of Paderborn and Bielefeld. We investigate how to develop learning, intelligent assistance systems for industrial workers that adapt their level of assistance and autonomy to the internal state of the worker, and how to make this adaptivity understood, accepted, and utilized by the user.

Computational cognitive modeling of the predictive active self in situated action (COMPAS)

The COMPAS project aims to develop a computational cognitive model of the execution and control of situated action in an embodied cognitive architecture that allows for (1) a detailed explanation, in computational terms, of the mechanisms and processes underlying the sense of agency; (2) simulation of situated actions along with the subjectively perceived sense of control and its impact on how actions are regulated; and (3) empirical validation through comparison with data obtained in experimental studies with human participants.

Lively and trustworthy social robots (VIVA)

The VIVA project aims to build a mobile social robot that produces lively and socially appropriate behavior in interaction. We are in charge of developing an embodied communication architecture that controls and mediates the robot’s responsive behavior. In addition, we endow the robot with abilities for cohesive spoken dialogue over long-term interactions.

Mental models in collaborative interactive reinforcement learning

This project is part of the Research Cluster CINEMENTAS (“Collaborative Intelligence Based on Mental Models of Assistive Systems”) funded by a major international technology company. We investigate the role of mental models in interactive reinforcement learning, according to which learning is seen as a dynamic collaborative process in which trainer and learner together try to figure out the best action policy. We study how the learning of a cognitive system can be sped up when the human trainer gives evaluative, informative feedback based on mental models of the system and the learning process, and the learner uses this feedback to build a (potentially simplified) model of the task domain and to shape its policy.
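The core idea of mixing a trainer's evaluative feedback into a learner's value updates can be illustrated with a minimal sketch. This is not the project's actual system: the function, its parameters (e.g. the feedback weight `beta`), and the tabular Q-learning setting are all illustrative assumptions, shown only to make the feedback-shaping mechanism concrete.

```python
def interactive_q_update(q, state, action, env_reward, human_feedback,
                         next_state, actions, alpha=0.1, gamma=0.9, beta=0.5):
    """One tabular Q-learning step where human evaluative feedback
    (e.g. -1 / 0 / +1 from the trainer) is blended into the reward.

    Illustrative sketch only; `beta` weights the trainer's signal
    against the environment reward.
    """
    # Shape the reward with the trainer's evaluative feedback.
    shaped = env_reward + beta * human_feedback
    # Standard Q-learning target using the best next-state value.
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (shaped + gamma * best_next - old)
    return q[(state, action)]
```

With positive feedback the shaped reward exceeds the environment reward alone, so the corresponding action value rises faster; this is one simple way a trainer's mental model of "what the learner should do next" can speed up policy learning.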

Development of iconic co-speech gesturing in preschool children (EcoGest)

This project aims to provide a detailed account of the development of iconic gesturing and its integration with speech in different communicative genres. We will study preschool children aged 4 to 5 years to investigate their speech-accompanying iconic gesture use and to develop a computational cognitive model of its development. We apply qualitative and quantitative methods to study children’s speech-gesture behavior and to evaluate our findings with computational cognitive modeling in terms of the following aspects: (1) forms of iconic gesturing, (2) positions of gestures, (3) semantic coordination of speech and gesture, and (4) packaging of information.