Our first kick-off meeting was held on 12–13 February 2018 at CITEC, Bielefeld University. Invited speakers were Afra Alishahi (Tilburg University), Manuel Bohn (Stanford University), and Miriam Morek (RUB Bochum).
|10:00||Katharina Rohlfing, Stefan Kopp, Olga Abramov||Introduction|
|10:30 - 11:30||Miriam Morek (RUB Bochum)||Analysing (children's) explanations: An interactionist perspective. Drawing on recordings of naturally occurring dinner-table interactions of middle-aged children, the paper presents an interactionist approach to the analysis of explanations within everyday talk. A CA-based perspective on the sequential and co-constructed unfolding of explanations is combined with a genre-oriented account of the functional embedding of explanations into a specific sequential and social context. By means of exemplary sequences, it is demonstrated how explanatory sequences can be analysed on the levels of ‘jobs’, ‘devices’ and ‘forms’. Consequences of an interactionist approach for research into children’s acquisition of explanatory discourse competencies will be drawn.|
|11:45 - 12:45||Afra Alishahi (Tilburg University)||Emerging representations of form and meaning in models of grounded language. Humans learn to understand speech from weak and noisy supervision: they manage to extract structure and meaning from speech simply by being exposed to utterances situated and grounded in their daily sensory experience. Emulating this remarkable skill has been the goal of numerous studies; however, researchers have often used severely simplified settings in which the language input, the extralinguistic sensory input, or both are small-scale and symbolically represented. We simulate this process in visually grounded models of language understanding, which project utterances and images into a joint semantic space. We use variations of recurrent neural networks to model the temporal nature of spoken language, and examine how form- and meaning-based linguistic knowledge emerges from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that the encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas the encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.|
|16:00 - 17:00||Manuel Bohn (Stanford University)||Grounding reference in shared experience. How can we communicate in the absence of language? How is language related to the world? Among others, these questions are important when thinking about language emergence, both in ontogeny and phylogeny. Gestures have been suggested to be a powerful means of communication that can be used in the absence of language. In this talk, I will argue that gestures have this potential because they ground reference in shared experience. I will present empirical work with children between 1 and 6 years of age, as well as with great apes, to support this claim. First, I will show how infants (as well as great apes) use pointing gestures to refer to things that are absent but have been part of a shared experience. Next, I will discuss studies suggesting that 2- to 4-year-old children (but not apes) spontaneously relate iconic signals to aspects of shared experience. Finally, I will talk about ongoing work showing how 4- to 6-year-old children can use iconic gestures to create a novel communication system when they cannot talk. Taken together, I hope these studies lend support to the idea that human communication is rooted in social cognition, making our communicative abilities resilient and relatively independent of language.|