Enhancing multimodal literacy using augmented reality

May 7, 2021, 2:06 a.m.
Feb. 15, 2022, 7:15 a.m.
24_1_10125-44706.pdf: https://scholarspace.manoa.hawaii.edu/bitstreams/21a795a0-35fa-40a9-9d70-b4ec7249e961/download
full_text: https://scholarspace.manoa.hawaii.edu/bitstreams/b8bfc169-68ba-4c3a-9b26-8616b627fa1b/download
Volume 24 Number 1, February 2020
Yeh, Hui-Chin; Tseng, Sheng-Shiang
Greg Kessler
2020-01-31T15:56:04Z
2020-02-01
Augmented reality (AR) technology has been used successfully to improve traditional literacy. However, literacy education has undergone a paradigm shift from traditional literacy to multimodal literacy, and little research has explored how students establish effective multimodal meaning-making using AR technology. This study investigates how English as a Foreign Language (EFL) college students use different multimodal modes to communicate with others using AR technology. Participants were 52 EFL students. The collected data included (a) pre- and post-administrations of a multimodal literacy survey, (b) students’ use of different modes to introduce tourist spots within a location-based AR app, and (c) students’ reflection essays. The results showed that the modes students used fell into visual and auditory forms. The visual mode comprised visual effects, images, and animations, whose functions were to focus viewers’ attention on what is important, provide concrete ideas, process complex information, and promote engagement. The auditory mode consisted of background music and sound effects, which were used to arouse emotional feelings and enhance immersive experiences. The results also revealed that creating content in a location-based AR app by combining different multimodal media significantly improved students’ multimodal literacy.
37
Yeh, H.-C., & Tseng, S.-S. (2020). Enhancing multimodal literacy using augmented reality. Language Learning & Technology, 24(1), 27–37. https://doi.org/10125/44706
10125/44706
1094-3501
http://hdl.handle.net/10125/44706
Language Teaching and Technology Forum
1
Language Learning & Technology
University of Hawaii National Foreign Language Resource Center; Center for Language & Technology (co-sponsored by the Center for Open Educational Resources and Language Learning, University of Texas at Austin)
/item/10125-44706/
27
Augmented Reality, EFL Students, Multimodal Literacy, Multimodal Media
Column
Text
24