
Showing results 81–90 of 110 for Speech Synthesis

Cognate vs. noncognate processing and subtitle speed among advanced L2-English learners: An eye-tracking study
...speech, and concreteness in English (Brysbaert et al., 2014). See Appendix S2 online for details. Pretesting The 63 keywords were pretested to ensure they were known to participants. Forty-one Po...

by Breno Silva, Valentina Ragni, Agnieszka Otwinowska, Agnieszka Szarkowska
in Volume 28 Number 1, 2024

Using machine translation to support ESL pre-service teachers’ collaborative feedback for writing
...speech. The group finally agreed on “His hands trembled with the microphone; he opened his mouth, but no sound came out,” adapted from Transmart’s output. The data also suggested that MT could help ...

by Linling Fu, Michelle Mingyue Gu, Tan Jin
in Volume 28 Number 1, 2024

The impact of technology-enhanced language learning environments on second language learners’ willingness to communicate: A systematic review of empirical studies from 2012 to 2023
...synthesis aims to provide an in-depth understanding of the mechanisms linking TELLEs and WTC. The implications derived from the findings may help researchers to further investigate unknown aspects o...

by Huan Huang, Michael Li
in Volume 28 Number 1, 2024

Multimodal effects of processing and learning contextualized L2 vocabulary
...speech was considered desirable. (2) Other words beyond the most frequent 4,000 word families in English according to the BNC-COCA 25k word family list (Webb & Nation, 2017) were replaced with more...

by Jonathon Malone
in Volume 29 Number 3, October 2025 Special Issue: Multimodality in CALL

Computer-based multimodal composing activities, self-revision, and L2 acquisition through writing
...speech (TTS) program that allows “playback of printed text as spoken words” (Atkinson & Greches, 2003, p. 178). As the text is being spoken, users may control the speed of the voice, pause the playb...

by Richmond Dzekoe
in Volume 21 Number 2, June 2017

Ecological semiotics: Multimodality, multilingualism, and situated language learning in the AI era
...speech. Any transliterations of speech are likely to come from texts such as lectures, readings, or scripts. Cope and Kalantzis (2024) comment: “As a consequence, prosody, dialect, gesticulation, em...

by Robert Godwin-Jones
in Volume 29 Number 3, October 2025 Special Issue: Multimodality in CALL

Automated written corrective feedback: Error-correction performance and timing of delivery
...speech (POS), and semantic information (Leacock et al., 2014). Context also factors into solutions for addressing what is the most frequent error type in both L1 and L2 writing: misspellings. Spelli...

by Jim Ranalli, Taichi Yamashita
in Volume 26 Number 1, 2022

The effects of face-to-face and computer-mediated recasts on L2 development
...speech (i.e., backshifting of verbs from past to past perfect). Conversely, SCMC recasts seem to be less successful when they target non-salient linguistic features (Loewen & Erlam, 2006; Sauro, 200...

by Nektaria-Efstathia Kourtali
in Volume 26 Number 1, 2022

Mobile-assisted language learning: A selected annotated bibliography of implementation studies 1994–2012
...speech synthesis technology, the system generates audio clips and packages them into an application which can be downloaded to mobile phones or accessed via the Internet. Learners listen to the word...

by Jack Burston
in Volume 17 Number 3, October 2013 Special Issue on MALL

Language proficiency over nonverbal sound effects in children's eBook incidental word learning
...speech perception. Even when the sounds are in the background, they can still capture children’s attention and divert cognitive resources away from pr...

by He Sun, Adam Roberts, Jessica Tan, Jieying Leh, Yvonne Cui Yun Moh
in Volume 29 Number 3, October 2025 Special Issue: Multimodality in CALL