Despite the current potential to use computers to automatically generate a large range of text-based indices, many issues remain unresolved about how to apply these data in established language teaching and assessment contexts. One way to resolve these issues is to explore the degree to which automatically generated indices, which reflect key measures of text quality, align with parallel measures derived from locally relevant, human evaluations of texts. This study describes the automated evaluation of 104 English as a second language texts using the computational tool Coh-Metrix, which generated indices reflecting text cohesion, lexical characteristics, and syntactic complexity. The same texts were then independently evaluated by two experienced human assessors using an analytic scoring rubric. The interrelationships between the computer-generated and human-generated evaluations of the texts are presented in this paper, with a particular focus on the automatically generated indices most strongly linked to the human-generated measures. A synthesis of these findings is then used to discuss the role that such automated evaluation may have in the teaching and assessment of second language writing.
Previous issue date: 2018-10-01
Matthews, J., & Wijeyewardene, I. (2018). Exploring relationships between automated and human evaluations of L2 texts. Language Learning & Technology, 22(3), 143–158. https://doi.org/10125/44661
University of Hawaii National Foreign Language Resource Center; Michigan State University Center for Language Education and Research
Writing Assessment/Testing; Language Teaching Methodology; Research Methods
Exploring relationships between automated and human evaluations of L2 texts