Peer-Reviewed

Replacing Paper-Based Testing with an Alternative for the Assessment of Iranian Undergraduate Students: Administration Mode Effect on Testing Performance

Received: 15 April 2017    Accepted: 26 April 2017    Published: 24 May 2017
Abstract

Studies in different countries, with different languages and technological backgrounds, have examined the comparability of test results in Computer-Based Testing (henceforth CBT) and Paper-Based Testing (henceforth PBT) and the key factors associated with test performance. The main purpose of the current study was to determine whether test scores on the PBT and CBT versions of an English achievement test were equivalent among undergraduate students at Payame Noor University (PNU). It also investigated whether there was any relationship between computer attitude and testing performance on CBT. The quantitative and qualitative data yielded several major findings. First, there was a statistically significant difference between the two sets of mean scores: the descriptive results showed that students performed better on the PBT than on the CBT. These results support the necessity of conducting comparability studies in higher-education contexts before substituting CBT for PBT or introducing CBT into the system. Second, computer attitude showed no interaction with testing performance on CBT among Iranian undergraduate students at PNU. Finally, the interview results supported the quantitative findings: although most participants expressed a strong preference for the computerized test and liked CBT more than PBT, their habit of taking tests in the traditional format led them to perform better on the PBT.

Published in International Journal of Language and Linguistics (Volume 5, Issue 3)
DOI 10.11648/j.ijll.20170503.13
Page(s) 78-87
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

Computer-Based Testing, Paper-Based Testing, Testing Administration Mode, Computer Attitude

Cite This Article
  • APA Style

    Monirosadat Hosseini, Seyyed Morteza Hashemi Toroujeni. (2017). Replacing Paper-Based Testing with an Alternative for the Assessment of Iranian Undergraduate Students: Administration Mode Effect on Testing Performance. International Journal of Language and Linguistics, 5(3), 78-87. https://doi.org/10.11648/j.ijll.20170503.13


Author Information
  • Department of English Language, Faculty of Humanities, Farahan Payame Noor University, Farahan, Iran

  • English Language Department, Faculty of Management and Humanities, Chabahar Marine and Maritime University, Chabahar, Iran
