American Journal of Applied Psychology
Volume 3, Issue 3, May 2014, Pages: 72-79
Received: May 10, 2014; Accepted: May 26, 2014; Published: Jun. 20, 2014
Ze-wei Ma, Thye Hua Social Work Service Center, Luogang District, Guangzhou 510555, China; Department of Psychology, School of Humanities and Management, Guangdong Medical College, Dongguan 523808, China
Wei-nan Zeng, Department of Psychology, School of Humanities and Management, Guangdong Medical College, Dongguan 523808, China
Applied psychology researchers often conduct studies that require advanced mediation models, such as multiple mediator models. In designing such research, however, most applied researchers largely ignore the statistical power of their studies, and as a result power analyses are absent when results are reported. It is well recognized that low power is one possible reason a study fails to identify a statistically significant result; indeed, studies with low statistical power have been labeled "scientifically useless". The current study describes how to apply Monte Carlo simulation to examine the type I error rates and statistical power of mediating effects in a multiple mediator model. Findings from the present simulation study indicated that the effect sizes of the mediating effects and the sample size were two important factors influencing the type I error rates of indirect effects in a multiple mediator model. Furthermore, the sample size required to reach a desired power level depended strongly on the effect size of the indirect effect.
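The Monte Carlo approach described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact design: it assumes a hypothetical two-mediator model (X → M1, X → M2, with M1 and M2 → Y), illustrative path values of .39, OLS estimation, and the Sobel z test for the indirect effect a1*b1. The function and variable names are the writer's own.

```python
import numpy as np
from statistics import NormalDist

def ols(X, y):
    """OLS with an intercept; returns slope estimates and their
    standard errors (intercept excluded)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    dof = len(y) - Xd.shape[1]
    cov = (resid @ resid / dof) * np.linalg.inv(Xd.T @ Xd)
    return beta[1:], np.sqrt(np.diag(cov))[1:]

def mc_rejection_rate(a1, b1, a2, b2, n, reps=1000, alpha=0.05, seed=0):
    """Proportion of replications in which the Sobel z test declares the
    indirect effect a1*b1 significant. With a1*b1 != 0 this estimates
    statistical power; with a1 = 0 (or b1 = 0) it estimates the
    empirical type I error rate."""
    rng = np.random.default_rng(seed)
    zcrit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m1 = a1 * x + rng.normal(size=n)             # X -> M1 path
        m2 = a2 * x + rng.normal(size=n)             # X -> M2 path
        y = b1 * m1 + b2 * m2 + rng.normal(size=n)   # M1, M2 -> Y paths
        (a_hat,), (se_a,) = ols(x[:, None], m1)
        b_all, se_all = ols(np.column_stack([x, m1, m2]), y)
        b_hat, se_b = b_all[1], se_all[1]            # coefficient on M1
        sobel_se = np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
        if abs(a_hat * b_hat / sobel_se) > zcrit:
            hits += 1
    return hits / reps

# Power for a medium-sized indirect effect (both paths .39)
power = mc_rejection_rate(0.39, 0.39, 0.39, 0.39, n=100, reps=500)
# Empirical type I error when the a1 path is truly zero
t1 = mc_rejection_rate(0.0, 0.39, 0.39, 0.39, n=100, reps=500)
```

Varying `n` and the path values in this sketch reproduces the kind of power-by-effect-size table the study reports; resampling tests such as the bias-corrected bootstrap (used in the cited Preacher & Hayes work) could be substituted for the Sobel test inside the loop.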
A Multiple Mediator Model: Power Analysis Based on Monte Carlo Simulation. American Journal of Applied Psychology. Vol. 3, No. 3, 2014, pp. 72-79.
Hayes AF, Preacher KJ. Quantifying and testing indirect effects in simple mediation models when the constituent paths are nonlinear. Multivariate Behavioral Research. 2010;45:627-60.
Albert JM, Nelson S. Generalized causal mediation analysis. Biometrics. 2011;67:1028-38.
Imai K, Keele L, Yamamoto T. Identification, inference and sensitivity analysis for causal mediation effects. Statistical Science. 2010;25:51-71.
MacKinnon DP, Lockwood CM, Hoffman JM, West SG, Sheets V. A comparison of methods to test mediation and other intervening variable effects. Psychological Methods. 2002;7:83.
Baron RM, Kenny DA. The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology. 1986;51:1173.
Alwin DF, Hauser RM. The decomposition of effects in path analysis. American Sociological Review. 1975:37-47.
Freedman LS, Schatzkin A. Sample size for studying intermediate endpoints within intervention trials or observational studies. American Journal of Epidemiology. 1992;136:1148-59.
Fritz MS, MacKinnon DP. Required sample size to detect the mediated effect. Psychological Science. 2007;18:233-9.
Thoemmes F, MacKinnon DP, Reiser MR. Power analysis for complex mediational designs using Monte Carlo methods. Structural Equation Modeling. 2010;17:510-34.
Preacher KJ, Hayes AF. Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods. 2008;40:879-91.
MacKinnon DP, Fairchild AJ, Fritz MS. Mediation analysis. Annual Review of Psychology. 2007;58:593.
Schimmack U. The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods. 2012;17:551.
Christley R. Power and error: increased risk of false positive results in underpowered studies. Open Epidemiology Journal. 2010;3:16-9.
Altman DG. Statistics and ethics in medical research: III How large a sample? British Medical Journal. 1980;281:1336.
Halpern SD, Karlawish JH, Berlin JA. The continuing unethical conduct of underpowered clinical trials. JAMA: The Journal of the American Medical Association. 2002;288:358-62.
MacKinnon DP. Introduction to statistical mediation analysis. New York: Lawrence Erlbaum Associates; 2008.
MacKinnon DP, Lockwood CM, Williams J. Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research. 2004;39:99-128.
Williams J, MacKinnon DP. Resampling and distribution of the product methods for testing indirect effects in complex models. Structural Equation Modeling. 2008;15:23-51.
Cohen J. Statistical power analysis for the behavioral sciences: Routledge; 1988.
Sedlmeier P, Gigerenzer G. Do studies of statistical power have an effect on the power of studies? Psychological Bulletin. 1989;105:309.
Oakes JM. Statistical Power and Sample Size: Some Fundamentals for Clinician Researchers. Essentials of Clinical Research. 2008:261-78.
Fox N, Mathers N. Empowering research: statistical power in general practice research. Family Practice. 1997;14:324-9.
Maxwell SE. Sample size and multiple regression analysis. Psychological Methods. 2000;5:434.
Aberson CL. Applied power analysis for the behavioral sciences: Routledge; 2011.
Muthén LK, Muthén BO. How to use a Monte Carlo study to decide on sample size and determine power. Structural Equation Modeling. 2002;9:599-620.
Abraham WT, Russell DW. Statistical power analysis in psychological research. Social and Personality Psychology Compass. 2008;2:283-301.
Van Vleet BL. An Investigation of Power Analysis Approaches for Latent Growth Modeling: Arizona State University; 2011.
Oertzen T. Power equivalence in structural equation modelling. British Journal of Mathematical and Statistical Psychology. 2010;63:257-72.
Shrout PE, Bolger N. Mediation in experimental and nonexperimental studies: new procedures and recommendations. Psychological Methods. 2002;7:422.
Bradley JV. Robustness? British Journal of Mathematical and Statistical Psychology. 1978;31:144-52.