International Journal of Language and Linguistics

Peer-Reviewed

An Empirical Study on the Impact of an Automated Writing Assessment on Chinese College Students’ English Writing Proficiency

Received: 29 June 2019    Accepted: 27 August 2019    Published: 11 September 2019

Abstract

Automated writing evaluation (AWE) refers to online essay scoring systems that provide scores, feedback, and revision advice to teachers and students. This paper reports an empirical study of the impact of Writing Roadmap 2.0 (WRM2.0), an automated writing assessment system, on the English writing proficiency of non-English-major freshmen in China, examined in three dimensions: language form, contextual structure, and writing quality. One hundred participants were randomly divided into an experimental class (EC) and a control class (CC), with 50 students in each. Both qualitative and quantitative methods were adopted for data collection and analysis, including pre- and post-tests on WRM2.0, a teacher-assessed writing task, and interviews. The results revealed that, while there was no significant difference between EC and CC in the WRM2.0 pre-test, EC outperformed CC in both the WRM2.0 post-test and the teacher-assessed writing task in the final exam in two dimensions: language form and writing quality. In contextual structure, EC showed only a slight gain on WRM2.0. Overall, this study observed a positive impact of WRM2.0 on the writing proficiency of L2 learners in China. The findings are expected to inform the further integration of AWE into writing teaching and learning in the EFL classroom.
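
The paper does not publish its analysis code, so the following is a minimal sketch, not the authors' procedure: it illustrates the kind of between-group comparison the abstract describes (EC vs. CC scores on the WRM2.0 pre- and post-tests) using an independent-samples t-test in Python. The synthetic scores, the score scale, and the choice of t-test are all assumptions made for illustration.

    # Minimal sketch, NOT the authors' code: a between-group comparison of
    # the kind the abstract describes, run on synthetic data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)

    # Hypothetical WRM2.0 scores for 50 students per class (scale assumed).
    ec_pre = rng.normal(3.4, 0.6, 50)    # experimental class, pre-test
    cc_pre = rng.normal(3.4, 0.6, 50)    # control class, pre-test
    ec_post = rng.normal(4.1, 0.6, 50)   # experimental class, post-test
    cc_post = rng.normal(3.6, 0.6, 50)   # control class, post-test

    # Independent-samples t-tests: the paper's pattern would show no
    # significant pre-test difference but a significant post-test advantage.
    for label, a, b in [("pre-test", ec_pre, cc_pre),
                        ("post-test", ec_post, cc_post)]:
        t, p = stats.ttest_ind(a, b)
        print(f"{label}: t = {t:.2f}, p = {p:.4f}")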

DOI 10.11648/j.ijll.20190705.16
Published in International Journal of Language and Linguistics (Volume 7, Issue 5, September 2019)

This article belongs to the Special Issue Linguistic and Pedagogical Language Issues in the 21st Century Classrooms

Page(s) 218-229
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2019. Published by Science Publishing Group

Keywords

Automated Writing Evaluation, Writing Roadmap 2.0, Writing Proficiency, Language Form, Contextual Structure, Writing Quality

Author Information
  • Shuwen Wang, School of Foreign Languages, Southwest Petroleum University, Chengdu, China

  • Ran Li, School of Foreign Languages, Southwest Petroleum University, Chengdu, China

Cite This Article
  • APA Style

    Wang, S., & Li, R. (2019). An empirical study on the impact of an automated writing assessment on Chinese college students’ English writing proficiency. International Journal of Language and Linguistics, 7(5), 218-229. https://doi.org/10.11648/j.ijll.20190705.16

    ACS Style

    Wang, S.; Li, R. An Empirical Study on the Impact of an Automated Writing Assessment on Chinese College Students’ English Writing Proficiency. Int. J. Lang. Linguist. 2019, 7(5), 218-229. doi: 10.11648/j.ijll.20190705.16

    AMA Style

    Wang S, Li R. An Empirical Study on the Impact of an Automated Writing Assessment on Chinese College Students’ English Writing Proficiency. Int J Lang Linguist. 2019;7(5):218-229. doi: 10.11648/j.ijll.20190705.16

  • @article{10.11648/j.ijll.20190705.16,
      author = {Shuwen Wang and Ran Li},
      title = {An Empirical Study on the Impact of an Automated Writing Assessment on Chinese College Students’ English Writing Proficiency},
      journal = {International Journal of Language and Linguistics},
      volume = {7},
      number = {5},
      pages = {218-229},
      doi = {10.11648/j.ijll.20190705.16},
      url = {https://doi.org/10.11648/j.ijll.20190705.16},
      eprint = {https://download.sciencepg.com/pdf/10.11648.j.ijll.20190705.16},
      year = {2019}
    }
    

  • TY  - JOUR
    T1  - An Empirical Study on the Impact of an Automated Writing Assessment on Chinese College Students’ English Writing Proficiency
    AU  - Shuwen Wang
    AU  - Ran Li
    Y1  - 2019/09/11
    PY  - 2019
    N1  - https://doi.org/10.11648/j.ijll.20190705.16
    DO  - 10.11648/j.ijll.20190705.16
    T2  - International Journal of Language and Linguistics
    JF  - International Journal of Language and Linguistics
    JO  - International Journal of Language and Linguistics
    SP  - 218
    EP  - 229
    PB  - Science Publishing Group
    SN  - 2330-0221
    UR  - https://doi.org/10.11648/j.ijll.20190705.16
    VL  - 7
    IS  - 5
    ER  - 
