Measuring Academic Misconduct: Evaluating the Construct Validity of the Exams and Assignments Scale
American Journal of Applied Psychology
Volume 4, Issue 3-1, June 2015, Pages: 58-64
Received: May 28, 2015; Accepted: Jun. 16, 2015; Published: Jun. 30, 2015
Authors
Kenneth D. Royal, Department of Clinical Sciences, North Carolina State University, Raleigh, USA
Keven Flammer, Department of Clinical Sciences, North Carolina State University, Raleigh, USA
Abstract
The purpose of this study was to evaluate the psychometric properties of the Exams and Assignments Scale (EAS), a newly designed instrument intended to capture perspectives on the severity of a variety of potential misconduct actions and behaviors, and to examine evidence of its construct validity. A total of 140 veterinary medical students completed the survey in the spring of 2015. Results indicate the EAS is a psychometrically sound instrument capable of producing valid and reliable measures of misconduct severity. Substantive results and implications are also discussed.
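For readers unfamiliar with how the reliability of a rating-scale instrument is typically quantified, the following minimal Python sketch illustrates a generic internal-consistency check on simulated Likert-type severity ratings. It is an illustration only, not the authors' analysis: the item count, the 5-point scale, and the responses are hypothetical (only the sample size of 140 matches the study), and the computed value on random data is not meaningful.

import numpy as np

# Simulated data: 140 respondents (matching the reported sample size) rating a
# hypothetical 20-item instrument on a 5-point severity scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(140, 20))

def cronbach_alpha(scores):
    """Classical internal-consistency (reliability) estimate for a set of item scores."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print("Mean item severity ratings:", responses.mean(axis=0).round(2))
print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))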
Keywords
Psychometrics, Measurement, Validity, Academic Misconduct, Cheating, Veterinary Medical Education
To cite this article
Kenneth D. Royal, Keven Flammer, Measuring Academic Misconduct: Evaluating the Construct Validity of the Exams and Assignments Scale, American Journal of Applied Psychology. Special Issue: Psychology of University Students. Vol. 4, No. 3-1, 2015, pp. 58-64. doi: 10.11648/j.ajap.s.2015040301.20