Attention and learning in L2 multimodality: A webcam-based eye-tracking study

April 18, 2025, 1:06 p.m.
July 14, 2025, 7:41 p.m.
PDF: https://scholarspace.manoa.hawaii.edu/bitstreams/f38b56f5-1dd3-4685-9505-9606f7b38402/download (29_01_10125-73626.pdf)
Full text: https://scholarspace.manoa.hawaii.edu/bitstreams/c6892d3a-b0ec-4d25-a5de-c3d0ecd0170c/download
Volume 29 Number 1, 2025
Zhang, Pengchong; Zhang, Shi
2025-04-17T23:23:20Z
2025
2025-04-21
Multimodal input can significantly support second language (L2) vocabulary learning and comprehension. However, very little research has examined how L2 learners, especially young learners, allocate attention when exposed to such input, or whether learning from multimodal input can be explained by attention allocation. This study therefore investigated individual differences in attention allocation during L2 vocabulary learning with multimodal input, and how these differences influenced vocabulary learning and comprehension. Forty young learners of French watched two types of multimodal input (Written+Audio+Picture vs. Written+Speaker+Video) while their eye movements were recorded through online webcam-based eye-tracking technology. They also completed tests of comprehension, vocabulary, and phonological short-term memory (PSTM). We show that greater attention was allocated to the non-verbal input in video than in picture format, and that these attention allocation differences were negatively predicted by learners’ PSTM capacity. Additionally, increased attention to the non-verbal element, whether video or picture, resulted in better overall comprehension and larger vocabulary gains in meaning recognition and recall. Our findings offer new insights into the role of attention and how it can be maximized, with both theoretical and pedagogical implications for multimodal L2 learning.
27
Zhang, P., & Zhang, S. (2025). Attention and learning in L2 multimodality: A webcam-based eye-tracking study. Language Learning & Technology, 29(1), 1–27. https://hdl.handle.net/10125/73626
1094-3501
https://hdl.handle.net/10125/73626
eng
1
Language Learning & Technology
University of Hawaii National Foreign Language Resource Center; Center for Language & Technology
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
https://creativecommons.org/licenses/by-nc-nd/4.0/
1
Attention, Multimodality, Vocabulary, Comprehension
Article Text