Our work “Minimal Latency Speech-Driven Gesture Generation for Continuous Interaction in Social XR”, with Niklas as first author, received the Best Poster Award at the 6th IEEE International Conference on Artificial Intelligence & eXtended and Virtual Reality (AIxVR). We explore how AI-based nonverbal behavior synthesis can be used for real-time, seamless “behavior augmentation” of avatars in Social VR.
Student paper accepted as HRI 2024 Late Breaking Report
A paper by student author Lisa Bohnenkamp (together with Olga) was accepted at HRI 2024 as a Late Breaking Report. It presents a study on the factors that influence how humans perceive information presented by a robot via gestures.
New project within the CRC 1646 “Linguistic Creativity” to start in April 2024
A new Collaborative Research Center (CRC) on “Linguistic Creativity in Communication”, funded by the DFG, is about to start in 2024. We are part of it with a project on multimodal creativity in speech-gesture production. In collaboration with Joana Cholin (Psycholinguistics), we will investigate how humans and AI models can use gesture to accompany newly created linguistic constructions when the available communicative resources are missing or inadequate.
Best paper award at ICMI 2023!
Our paper “AQ-GT: A Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis” by Hendric Voss and Stefan Kopp won the Best Paper Award at the 25th ACM International Conference on Multimodal Interaction (ICMI 2023), held in Paris.