Completed Projects



Speech-Gesture Alignment

This project investigated the cognitive mechanisms that underlie the production of multimodal utterances in dialogue. In such utterances, words and gestures are tightly coordinated with respect to their semantics, their form, the manner in which they are performed, their temporal arrangement, and their joint organization in the phrasal structure of the utterance. We studied, both empirically and in computational simulation, how multimodal meaning is composed and mapped onto verbal and iconic gestural forms, and how these processes interact within and across the two modalities to form a coherent multimodal delivery.


LogiPro (transfer project in the BMBF cluster “it’s OWL”)

This transfer project developed a human-guided optimization system for logistics planning, based on the analysis of an actual logistics process at a mid-sized company. The system lets planners interactively explore, manipulate, and constrain candidate solutions at runtime in practical work environments. It has since been deployed by our industrial partner.
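
As a rough illustration of the human-guided optimization idea, the following Python sketch (with invented names and toy data, not the actual LogiPro system) lets a user pin individual assignments while the solver re-optimizes the remaining decisions under that constraint:

from itertools import permutations

def plan(orders, trucks, cost, pinned):
    """Assign one order per truck, respecting user-pinned choices."""
    free = [o for o in orders if o not in pinned]
    available = [t for t in trucks if t not in pinned.values()]
    best, best_cost = None, float("inf")
    # Brute force over the free orders only (fine for small instances).
    for combo in permutations(available, len(free)):
        assignment = dict(pinned)
        assignment.update(zip(free, combo))
        c = sum(cost[(o, t)] for o, t in assignment.items())
        if c < best_cost:
            best, best_cost = assignment, c
    return best, best_cost

orders = ["A", "B", "C"]
trucks = ["T1", "T2", "T3"]
cost = {
    ("A", "T1"): 4, ("A", "T2"): 2, ("A", "T3"): 5,
    ("B", "T1"): 3, ("B", "T2"): 6, ("B", "T3"): 1,
    ("C", "T1"): 2, ("C", "T2"): 4, ("C", "T3"): 3,
}
# The planner interactively pins order "A" to truck "T2"; the system
# re-optimizes the rest and returns the best feasible plan and its cost.
print(plan(orders, trucks, cost, pinned={"A": "T2"}))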



IMoSA – Imitation and Motor Cognition in Social Embodied Agents (CITEC)

This project explored how motor cognition mechanisms for mirroring and mental simulation are employed in social interaction. We developed a hierarchical probabilistic model of motor knowledge that is used both to generate communicative gestures and to recognize them in others. This allowed our virtual agent VINCE to learn, perceive, and produce meaningful gestures incrementally and robustly online in interaction.
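
The following Python sketch illustrates, in strongly simplified form, the idea of inverting a generative motor model for recognition: each gesture class is represented by a prototype trajectory, and an observed movement is classified via the Bayesian posterior over classes. The names and the Gaussian trajectory model are assumptions for illustration, not the project's actual hierarchical model:

import numpy as np

def log_likelihood(observed, prototype, sigma=0.1):
    # Isotropic Gaussian noise around the gesture's prototype trajectory.
    diff = observed - prototype
    return -0.5 * np.sum(diff ** 2) / sigma ** 2

def recognize(observed, models, prior=None):
    names = list(models)
    logp = np.array([log_likelihood(observed, models[n]) for n in names])
    if prior is not None:
        logp += np.log(np.array([prior[n] for n in names]))
    post = np.exp(logp - logp.max())  # normalize in a numerically safe way
    post /= post.sum()
    return dict(zip(names, post))

np.random.seed(0)
t = np.linspace(0, 1, 20)
models = {
    "wave":  np.column_stack([t, np.sin(8 * t)]),  # oscillating hand path
    "point": np.column_stack([t, 0.5 * t]),        # straight extension
}
observed = models["wave"] + 0.05 * np.random.randn(20, 2)
print(recognize(observed, models))

The same per-class models could also drive generation by sampling around the prototypes, and the posterior could be updated incrementally as a trajectory unfolds, in the spirit of the online processing described above.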


CLARIN-D – Working Group “Language and other Modalities” (BMBF)

Within working group 6, “Language and other Modalities,” of the BMBF-funded project CLARIN-D (“Research Infrastructure for Digital Humanities”), we developed methodologies for building multimodal corpora. This included editing and integrating multimodal resources into CLARIN-D and preparing a web-service tool chain for multimodal data. We applied this to the Speech and Gesture Alignment Corpus (Bielefeld University), the Dicta-Sign Corpus of German Sign Language (Hamburg University), and Natural Media motion-capture data (RWTH Aachen University).


Conceptual Motorics (CoR-Lab; Honda Research Europe)

2008-2012

This project enabled the humanoid robot ASIMO to produce synthetic speech along with expressive hand gestures, e.g., to point to objects or to illustrate actions currently under discussion, without being limited to a predefined repertoire of motor actions. A series of experiments yielded new insights into how humans perceive gestural machine behaviors and how to exploit these perceptions in designing artificial communicators. We found that humans attend to a robot’s gestures, are affected by incongruent speech-gesture combinations, and socially prefer robots that occasionally produce imperfect gesturing.


AMALIS – Adaptive Machine Learning of Interaction Sequences (CITEC)

This project explored how systems can learn multivariate sequences of interaction data in highly dynamic environments. We developed models for unsupervised and reinforcement learning of hierarchical structure from sequential data (Ordered Means Models), which afford not only the analysis of sequences but also the generation of learned patterns. This was demonstrated, e.g., by enabling the agent VINCE to play, and always win, the rock-paper-scissors game.
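
As a toy stand-in for this kind of sequence learning (deliberately much simpler than the actual Ordered Means Models), the following Python sketch predicts an opponent's next rock-paper-scissors move from learned first-order transition counts and plays the counter move:

from collections import defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class MovePredictor:
    def __init__(self):
        # counts[a][b]: how often the opponent played b right after a.
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None

    def observe(self, move):
        if self.last is not None:
            self.counts[self.last][move] += 1
        self.last = move

    def respond(self):
        if self.last is None or not self.counts[self.last]:
            return "rock"  # no data yet, so any move is as good as another
        predicted = max(self.counts[self.last],
                        key=self.counts[self.last].get)
        return BEATS[predicted]

agent = MovePredictor()
for move in ["rock", "paper", "rock", "paper", "rock"]:
    reply = agent.respond()  # agent commits before seeing the move
    agent.observe(move)
# The opponent alternates, so after "rock" the agent expects "paper"
# and counters with "scissors":
print(agent.respond())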


ADECO – Adaptive Embodied Communication (CITEC)

2008-2010

Instructions about sequences of actions are better memorized when accompanied by appropriate gestures. In this project, users’ memory representations formed while learning a complex action from speech-gesture instructions were assessed using the SDM-A method. A virtual character was then used to give instructions with correspondingly self-generated gestures.