Gesture plays an outsized role in the communication of young children who are just starting to acquire spoken language. It may have played a similarly central role in the earliest stages of language evolution, before the emergence of shared linguistic conventions. Gestures produced by non-human great apes (our closest living relatives) are sophisticated in many ways. They are produced intentionally, deployed flexibly, and modified strategically. And yet, they often feel qualitatively different from either co-speech gesturing or silent gesturing in humans. This talk will examine similarities and differences between ape and human gesturing and consider how researchers study these systems in different ways. Theories about gesture’s role in the origin of language will be reexamined in relation to comparative findings about the gestures of children and apes. By highlighting continuities and discontinuities in gesturing, we can generate better theories about how language evolved and what role gesture played.
About the keynote: Erica Cartmill is Professor of Anthropology, Animal Behavior, Cognitive Science, and Psychology at Indiana University. She also co-directs the Diverse Intelligences Summer Institute (DISI) and is the co-chair of the Evolution of Language (EVOLANG) conferences. Erica aims to understand the embodied origins of language by studying gestural communication and social cognition in children and great apes.
Individuals vary in how much information they receive from gestures and in how they use their own gestures. Why do people gesture in such a wide variety of situations, while thinking and talking about different cognitive and affective processes? Do gestures serve similar functions in all these instances? How does individuals’ use of other cognitive resources interact with their gesture use and processing of gestures across different contexts? This talk will address the multifunctionality of gesture processing and production across populations and contexts, with an emphasis on gesture’s role in using (and possibly compensating for) verbal and visuo-spatial cognitive resources. The question of “why we gesture” will be examined through detailed analyses of the internal and external factors in these processes across developmental time (i.e., who gestures how, about what topics, and under which circumstances).
About the keynote: Tilbe Göksun is a professor in the Department of Psychology at Koç University and the director of the Koç University Language and Cognition Lab. She examines the interaction between language and thought in different age groups (children, younger and older adults) and populations (typically and atypically developing children, neuropsychological patients), using multilevel and multimodal analyses. Her primary research areas are early language and cognitive development and the links between language and other cognitive processes. She uses interdisciplinary research methods and cross-linguistic comparisons in her research.
Gestures play a crucial role in human communication, complementing both spoken and signed languages. While gestures are commonly used in visual interactions, deafblind signers adapt them for tactile communication. This presentation explores how deafblind addressees perceive and interpret manual palm-up gestures, emotional expressions, and haptic sensations in tactile Swedish Sign Language. Based on corpus data, the study examines dyadic conversational strategies, shedding light on the transition from visual to tactile modalities. The findings enhance our understanding of how deafblind signers modify gesture use to maintain effective communication in a tactile context.
About the keynote: Johanna Mesch is a Professor of Sign Language in the Sign Language Section of the Department of Linguistics at Stockholm University, Sweden. She has pioneered various approaches to the study of Swedish Sign Language and tactile Swedish Sign Language from both linguistic and pragmatic perspectives. Additionally, she has played a key role in developing corpora for Swedish Sign Language, including the learner corpus and the tactile sign language corpus.
Signed languages have been studied for decades in linguistics and related fields, yet relatively little attention has been given to the embodied aspects of signing. For instance, how does the physical experience of producing signs influence one’s perception of others’ signing? How can theories of embodied learning be leveraged to enhance learning outcomes for sign language users? In this talk, I will present recent work from my lab at Gallaudet University, the world’s leading institution of higher education for deaf and hard-of-hearing students. I will discuss behavioral and neuroscientific research, as well as emerging technological approaches, exploring how sign language facilitates efficient learning and perception. I will share data supporting experience-dependent neuroplasticity in signers, along with ongoing efforts to develop augmented and virtual reality technologies that harness the spatial nature of signed languages to create innovative learning tools.
About the keynote: Dr. Lorna Quandt is the director of the Action & Brain Lab at Gallaudet University in Washington, D.C., and serves as Co-Director of the VL2 Research Center alongside Melissa Malzkuhn. She is an Associate Professor in the Ph.D. in Educational Neuroscience (PEN) program and the Science Director of the Motion Light Lab. Dr. Quandt founded the Action & Brain Lab in early 2016. Before that, she obtained her BA in Psychology from Haverford College and a PhD in Psychology, specializing in Brain & Cognitive Sciences, from Temple University, and completed a postdoctoral fellowship at the University of Pennsylvania with Dr. Anjan Chatterjee. Her research examines how knowledge of sign language changes perception, particularly visuospatial processing. She is also pursuing the development of research-based educational technology to create new ways to learn signed languages in virtual reality.
How can the science of multimodal communication move much faster? How can we safely share data as a coordinated, global scientific community, the way scientists do in astronomy and physics? Human teams working on human behavior face extreme barriers in gathering, wrangling, annotating, analyzing, and sharing data. In 2024, Torrent, Hoffmann, Lorenzi, & Turner published Copilots for Linguists: AI, Constructions, and Frames (Cambridge University Press; https://doi.org/10.1017/9781009439190). It demonstrated various ways in which computational approaches can assist researchers, as copilots, to accelerate progress. Copilot analytic techniques are now available that were unimagined only a few months ago. We are witnessing lightning advances in computational methods and technology for studying multimodal communication. (For example, fears of feeding corporate cloud systems evaporate if one keeps all the data and runs all the systems on-premises, using distilled models and powerful, relatively affordable new hardware.) This talk surveys how copilots can serve any researcher working on communicative performances involving vision, sound, manipulation of affordances, “motion to meaning.” Topics include data capture and annotation, analyzing the dynamics of multimodal communication, processing data through multimodal pipelines, modeling gesture–speech synchrony, building virtual agents equipped with gesture recognition and generation capabilities, analyzing body kinematics, and AI generation of multimodal communication. Recently developed software and hardware tools will be highlighted.
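To make one of these topics concrete: gesture–speech synchrony is commonly quantified by cross-correlating a kinematic signal (e.g., hand velocity from motion tracking) against the speech amplitude envelope. The sketch below illustrates that general idea in Python; it is not drawn from the talk or the book, and the signal names, sampling rate, and synchrony_lag helper are hypothetical assumptions for illustration.

```python
import numpy as np
from scipy.signal import correlate

FS = 100  # assumed common sampling rate (Hz) for both signals

def synchrony_lag(hand_velocity, speech_envelope, fs=FS):
    """Cross-correlate standardized hand velocity and speech envelope.
    Returns (peak correlation, lag in seconds); a negative lag means
    the gesture peak precedes the speech peak."""
    x = (hand_velocity - hand_velocity.mean()) / hand_velocity.std()
    y = (speech_envelope - speech_envelope.mean()) / speech_envelope.std()
    xcorr = correlate(x, y, mode="full") / len(x)   # normalized cross-correlation
    lags = np.arange(-len(x) + 1, len(x))           # sample lags for 'full' mode
    peak = int(np.argmax(xcorr))
    return xcorr[peak], lags[peak] / fs

# Toy example: a hand-velocity burst 200 ms before the speech-envelope peak.
t = np.arange(0, 5, 1 / FS)
gesture = np.exp(-((t - 2.0) ** 2) / 0.05)  # velocity peak at 2.0 s
speech = np.exp(-((t - 2.2) ** 2) / 0.05)   # envelope peak at 2.2 s
r, lag = synchrony_lag(gesture, speech)
print(f"peak correlation {r:.2f} at lag {lag:.2f} s")  # lag ≈ -0.20 s
```

In practice the kinematic track would come from pose estimation and the envelope from the audio recording, but the same lag analysis applies once both are resampled to a shared timeline.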
About the keynote: Mark Turner is Institute Professor and Professor of Cognitive Science at Case Western Reserve University. Dr. Turner's research focuses on the mental operations that make it possible for cognitively modern human beings to be so astoundingly creative as a species and to have such remarkable higher-order cognition (e.g., language, gesture, art, music, mathematical insight, scientific discovery, religion, advanced social cognition, refined tool use, dance, fashions of dress, sign systems, and social systems of politics, economics, and law). His research particularly emphasizes cognitively modern abilities for mapping and conceptual integration. Turner co-directs The International Distributed Little Red Hen Lab, dedicated to the theory of multimodal communication. Red Hen Lab develops AI, computational, statistical, and technical tools to assist those who research multimodal communication.
For the latest news, follow @ISGS2025 on Bluesky and X!
#ISGS2025