
Project

Multimodal stance-taking in interaction

A fundamental property of language is its ability to simultaneously represent subjects, objects or events, and to express the speaker’s stance towards these representations. Although stance-taking as a socially contextualized and recognized interpersonal phenomenon has received substantial attention in different subfields of linguistics, its multimodal realization in real-life interaction remains largely unexplored. The proposed research project zooms in on the interplay of different semiotic resources, including manual gestures and signs, posture, facial expressions, acoustic experience, touch and eye gaze, in complex stance-taking acts (stance-stacking), which may be realized simultaneously and/or sequentially, within or across speakers engaged in interaction (co-stacking). Through a balanced set of three interrelated phenomena (multimodal grounding and distancing in irony, depiction of embodied performances, and full-body enactments of others), involving different interaction types (spontaneous interactions, narratives, music classes) and languages (Dutch, Flemish Sign Language (VGT), English, German), we aim to develop a full-fledged empirical account of multimodal stance-taking.

Date: 15 Nov 2020 → Today
Keywords: multimodal stance-taking, discourse analysis
Disciplines: Sign language research, Linguistics not elsewhere classified, Pragmatics, Corpus linguistics, Discourse studies
Project type: PhD project