Student paper accepted as HRI 2024 Late Breaking Report

A paper by student author Lisa Bohnenkamp (together with Olga) was accepted at HRI 2024 as a Late Breaking Report. It presents a study on the factors influencing how humans perceive information presented by a robot via gestures.

New project within the CRC 1646 “Linguistic Creativity” to start in April 2024

A new Collaborative Research Center (CRC) on “Linguistic Creativity in Communication”, funded by the DFG, will start in April 2024. We are part of it with a project on multimodal creativity in speech-gesture production. In collaboration with Joana Cholin (Psycholinguistics), we will investigate how humans and AI models can use gesture to accompany newly created linguistic constructions when the given communicative resources are unavailable or inadequate.

Best paper award at ICMI 2023!

Our paper “AQ-GT: a temporally aligned and quantized GRU-Transformer for Co-Speech Gesture Synthesis” by Hendric Voss and Stefan Kopp won the Best Paper Award of the 25th ACM International Conference on Multimodal Interaction (ICMI 2023) held in Paris.

New Joint Research Center on Cooperative and Cognition-enabled AI

The newly launched CoAI Joint Research Center brings together researchers from the universities of Bielefeld, Bremen, and Paderborn, uniting their expertise in cognitive interaction technology, cognition-enabled robotics, and socially embedded intelligent systems. CoAI strives to break new ground in the interaction between humans and artificial intelligence systems. Our primary objective is to equip AI systems, especially robots, with the ability to reason and communicate in ways that let them coordinate their behavior with the interests and goals of their human partners, ultimately enabling them to accomplish novel joint tasks with, and for, humans.

Three long papers accepted for oral presentation at ACM IVA 2023

1/3: Amelie's paper on the benefits and drawbacks of adaptivity in AI-generated explanations.
2/3: Niklas' piece on real-time gesture generation during online social XR.
3/3: Hendric's paper on augmenting DL-based co-speech gesture synthesis with form and meaning features.

Check them out at IVA 2023.