Exploring relationships between automated and human evaluations of L2 texts

PDF: https://scholarspace.manoa.hawaii.edu/bitstreams/a83a0963-f3f3-40d7-be53-d9b69f89940c/download (22_03_matthews_10125-44661.pdf)
Full text: https://scholarspace.manoa.hawaii.edu/bitstreams/6f34c692-35e2-47f5-a78d-d3cb5dfa93d2/download
Volume 22 Number 3, October 2018
Matthews, Joshua; Wijeyewardene, Ingrid
Despite the current potential to use computers to automatically generate a large range of text-based indices, many issues remain unresolved about how these data can be applied in established language teaching and assessment contexts. One way to resolve these issues is to explore the degree to which automatically generated indices that reflect key measures of text quality align with parallel measures derived from locally relevant, human evaluations of texts. This study describes the automated evaluation of 104 English as a second language texts with the computational tool Coh-Metrix, which generated indices of text cohesion, lexical characteristics, and syntactic complexity. The same texts were then independently evaluated by two experienced human assessors using an analytic scoring rubric. The interrelationships between the computer- and human-generated evaluations of the texts are presented in this paper, with a particular focus on the automatically generated indices most strongly linked to the human-generated measures. A synthesis of these findings is then used to discuss the role that such automated evaluation may play in the teaching and assessment of second language writing.
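The core analysis the abstract describes, relating automatically generated indices to human rubric scores across a set of texts, amounts to computing per-index correlations between two sets of per-text measures. Below is a minimal sketch of that kind of analysis in Python; the file name, column names, and the choice of Pearson correlation are illustrative assumptions, not the authors' actual data or procedure (their indices come from Coh-Metrix and their rubric is locally developed).

```python
# Illustrative sketch only: correlate automated text indices with human
# rubric scores. All file and column names here are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

# One row per text: automated indices plus an overall human rubric score.
data = pd.read_csv("texts_with_indices_and_scores.csv")  # hypothetical file

index_columns = ["cohesion_index", "lexical_index", "syntactic_index"]  # hypothetical names
human_column = "human_rubric_score"  # hypothetical name

# Correlate each automated index with the human score.
results = []
for col in index_columns:
    r, p = pearsonr(data[col], data[human_column])
    results.append((col, r, p))

# Report strongest correlations first, mirroring the paper's focus on the
# indices most strongly linked to the human-generated measures.
for col, r, p in sorted(results, key=lambda t: abs(t[1]), reverse=True):
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")
```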
Matthews, J., & Wijeyewardene, I. (2018). Exploring relationships between automated and human evaluations of L2 texts. Language Learning & Technology, 22(3), 143–158. https://doi.org/10125/44661
ISSN: 1094-3501
http://hdl.handle.net/10125/44661
Language Learning & Technology
University of Hawaii National Foreign Language Resource Center; Michigan State University Center for Language Education and Research
Writing Assessment/Testing; Language Teaching Methodology; Research Methods
Article