English Abstract
In the 1920s, the statistician Ronald Fisher introduced a system of statistical inference based on probability calculations and evaluations. Fisher used the term “p-value” to denote the probability of obtaining results at least as extreme as those observed, assuming that no true effect is present; this calculation helps evaluate how likely such results are within the research context. Later, Neyman and Pearson proposed the concepts of the null hypothesis and the alternative hypothesis, advocating that both be specified in hypothesis testing. Although the two approaches to testing statistical hypotheses differed, they were later integrated into what is now known as null hypothesis significance testing, with the p-value serving as a standardized measure for statistical analysis. In recent years, however, the academic community has identified a concerning trend known as “p-hacking,” in which researchers misuse or abuse data analysis to achieve misleading statistical significance, then claim specious experimental success and publish implausible results in journals. This paper focuses on p-hacking in quantitative educational research, exploring its origins, definition, techniques, and impact, as well as how journals can prevent it. Finally, the paper restores a correct understanding of the p-value and offers recommendations for preventing p-hacking: providing evidence of practical significance alongside statistical significance, ensuring the replicability of research results, reporting the structure and details of statistical analyses, and judiciously selecting appropriate analytical techniques.
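To make the mechanism of p-hacking concrete, the following is an illustrative sketch (not taken from the paper): when two groups with no true difference are compared on many unrelated outcome variables, roughly one test in twenty will cross the p < 0.05 threshold by chance alone. Reporting only that one test is a form of p-hacking. The helper function `two_sample_t_p` is a hypothetical name; it computes an approximate two-sided p-value using a normal approximation to the t statistic.

```python
import math
import random
import statistics

def two_sample_t_p(a, b):
    """Approximate two-sided p-value for a two-sample (Welch) t statistic,
    using a normal approximation that is adequate for moderately large samples."""
    na, nb = len(a), len(b)
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Two-sided tail probability under the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(42)

# Two groups drawn from the SAME distribution: no real effect exists.
# Testing 20 unrelated "outcome variables" still tends to yield some
# spuriously significant results at the conventional 0.05 threshold.
significant = 0
for outcome in range(20):
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_t_p(group_a, group_b) < 0.05:
        significant += 1

print(significant)  # count of spuriously "significant" tests out of 20
```

A researcher who reports only the significant comparisons, without disclosing the other tests performed, manufactures exactly the misleading statistical significance the abstract describes; this is why the paper's recommendations stress reporting the full structure and details of the statistical analysis.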