Symposium Programme


Monday, 16th October 2017

09:30 – 10:00: Registration

10:00 – 10:05: Welcome

10:05 – 11:05: Keynote: Irene Mittelberg
Augmented iconicity — Visualizing and analyzing gestures with motion-capture technology

11:05 – 11:30: Coffee

11:30 – 13:00: Session 1: Eye and gaze behavior

Inez Beukeleers, Geert Brône and Myriam Vermeerbergen:
Seeing turn transitions from the perspective of the non-addressed participant: an eye-tracking study in Flemish Sign Language interactions

Patrizia Paggio and Costanza Navarretta:
Coordination of facial expressions and head movements in first encounter dialogues

Paul Hömke, Judith Holler and Stephen C. Levinson:
Eye blinking as listener feedback in face-to-face communication

13:00 – 14:30: Lunch

14:30 – 15:30: Session 2: Digital communication devices and tutoring systems

Hildegard Vermeiren:
Multimodal communication, yes. But what about interaction?

CANCELLED: Marietta Sionti and Thomas Schack:
Kinematics and literacy; towards an intelligent tutoring system

Junko Kanero, Mirjam de Haas, Ezgi Mamus, Cansu Oranç, Rianne van den Berghe, Josje Verhagen, Ora Oudgenoeg-Paz, Kirsten Bergmann, Thorsten Schodde, Aylin C. Küntay, Tilbe Göksun and Paul Vogt:
Observing human tutoring to develop robot-based language lessons

15:30 – 16:30: Coffee

16:30 – 17:30: Session 3: Gestures

Annelies Jehoul, Geert Brône and Kurt Feyaerts:
Gestural holds and turn holding. On the coordination of gesture and eye gaze in interaction management

Friederike Kern, Kirsten Bergmann, Stefan Kopp and Katharina Rohlfing:
Iconicity in children’s discourse – Forms and Functions

18:30: Dinner at Jivino (self-paid)


Tuesday, 17th October 2017

09:00 – 10:00: Keynote: Christian Wolf
Learning human motion: gestures, activities, pose, identity

10:00 – 10:30: Coffee

10:30 – 12:00: Session 4: Conversational agents

Brian Ravenet, Chloé Clavel and Catherine Pelachaud:
The potential of Image Schemas for computing automatically metaphoric gestures for embodied conversational agents

Eugenia Hee, Ron Artstein, Su Lei, Cristian Cepeda and David Traum:
Assessing Differences in Multimodal Grounding with Embodied and Disembodied Agents

Hendrik Buschmeier and Stefan Kopp:
Conversational Agents Need to Be ‘Attentive Speakers’ to Receive Conversational Feedback from Human Interlocutors


12:00 – 13:30: Lunch

13:30 – 15:00: Session 5: Modeling communication

Matthew Roddy and Naomi Harte:
Towards predicting dialog acts from previous speakers’ non-verbal cues

Kristiina Jokinen and Trung Ngo Trong:
Conversational topic modelling in first encounter dialogues

Jacqueline Hemminghaus, Laura Hoffmann and Stefan Kopp:
Teaching a Robot how to Guide Attention in Child-Robot Learning Interactions


15:00 – 15:30: Coffee

15:30 – 16:30: Session 6: Student papers

Agata Wlaszczyk and Pola Schwöbel:
The influence of noise on multimodal spatial localization tasks

Fiammetta Caccavale and Rasmus Kær Jørgensen:
Learning through different modalities: Comparison between visual stimuli and auditory stimuli through the participants’ ability to recall items

Lorenzo Cazzoli, Jacqueline Hemminghaus, Stefan Kopp and Mauro Gaspari:
Generalizing Multimodal Policies Learned in a Human-Robot Interaction

16:30: Closing

16:30 – 18:30: Open labs at CITEC