Presenting an XAI reasoning support system for medical diagnosis

Can AI help improve physicians' diagnostic reasoning so that they arrive at better-justified diagnoses? We present "ASCODI", an XAI-based interactive reasoning support system for this purpose, in a paper at the MAI-XAI Workshop at the ECAI 2024 conference. This is work done in Project C05 of the TRR 318 "Constructing Explainability".

Multimodal Creativity in Speech-Gesture Production

This project is part C02 of the CRC 1646 "Linguistic Creativity in Communication". It investigates how speakers make creative use of their verbal and gestural resources in communicatively challenging situations. We combine psycholinguistic experiments with AI-based computational cognitive modeling to develop a model of speech-gesture creativity and to create simulations of robust, effective multimodal speakers.

New papers out at CogSci, ACM IVA, and SemDial

Paper season continues! We have short and long papers accepted at the CogSci, ACM IVA, and SemDial conferences. Topics:

  • Inferring Partner Models for Adaptive Explanation Generation (SemDial "TrentoLogue")
  • Revealing the Cognitive Trajectories of Medical Diagnostic Reasoning (CogSci 2024)
  • Sense of Control in Dynamic Multitasking and its Impact on Voluntary Task-Switching (CogSci 2024)
  • Integrating Representational Gestures into Automatically Generated Embodied Explanations (IVA 2024)

Papers accepted at CogSci 2024

We will present our work at CogSci 2024 in Rotterdam (NL): one paper with Dominik as first author, "Revealing the Dynamics of Medical Diagnostic Reasoning as Step-by-Step Cognitive Process Trajectories", and a second with Annika as lead author, "Sense of Control in Dynamic Multitasking and its Impact on Voluntary Task-Switching Behavior".

Best Poster Award at IEEE AIxVR 2024!

Our work "Minimal Latency Speech-Driven Gesture Generation for Continuous Interaction in Social XR", with Niklas as first author, received the Best Poster Award at the 6th IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR). We explore how to use AI-based nonverbal behavior synthesis for real-time, seamless "behavior augmentation" of avatars in Social VR.

Student paper accepted as HRI 2024 Late-Breaking Report

A paper by student author Lisa Bohnenkamp (together with Olga) was accepted at HRI 2024 as a Late-Breaking Report. It presents a study on the factors influencing how humans perceive information presented by a robot via gestures.

New project within the CRC 1646 “Linguistic Creativity” to start in April 2024

A new Collaborative Research Center (CRC) on "Linguistic Creativity in Communication", funded by the DFG, starts in April 2024. We are part of it with a project on multimodal creativity in speech-gesture production. In collaboration with Joana Cholin (Psycholinguistics), we will investigate how humans and AI models can use gesture to accompany newly created linguistic constructions when given communicative resources are unavailable or inadequate.