Do Effect-Size Measures Measure up?: A Brief Assessment
Because of criticisms leveled at statistical hypothesis testing, some researchers have argued that measures of effect size should replace the practice of significance testing. We contend that although effect-size measures have logical appeal, they are also subject to a number of limitations that can lead to problematic interpretations in research on children and adults with learning disabilities (LD). The purpose of the present paper is to provide a framework for reporting and interpreting empirical research findings in LD research. Specifically, we recommend that (1) researchers apply criteria of both statistical significance and substantive significance to help consumers of research assess the believability and importance, respectively, of reported results; and (2) the establishment of statistical significance, obtained via inferential statistical techniques, serve as a precursor to the interpretation of measures of substantive importance. We further contend that the family of standard effect-size indices represents just one approach to assessing substantive significance in LD research. Other methods include the use of confidence intervals and consideration of the results' clinical significance and economic significance. In addition, the critical role played by independent replications must not be overlooked by LD researchers. As such, effect-size measures have an important, though not exclusive, function in evaluating educational and psychological research findings in general and LD research results in particular.
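To make the abstract's two-step recommendation concrete, the following sketch (not part of the original paper; the group data, sample sizes, and choice of Cohen's d with a large-sample confidence interval are illustrative assumptions) computes a standardized mean difference for two independent groups alongside an approximate 95% confidence interval:

```python
import math

def cohens_d_with_ci(group1, group2, z=1.96):
    """Cohen's d for two independent groups, with an approximate
    95% confidence interval based on the large-sample standard error."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled standard deviation
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Approximate standard error of d (large-sample formula)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical scores for two small groups (illustration only)
treatment = [5, 6, 7, 8, 9]
control = [3, 4, 5, 6, 7]
d, (lo, hi) = cohens_d_with_ci(treatment, control)
print(f"d = {d:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

With these small hypothetical samples the point estimate of d is large, yet the confidence interval spans zero, which illustrates the abstract's point: an effect-size magnitude alone can mislead, and establishing statistical significance first guards against interpreting such an estimate as substantively important.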
Onwuegbuzie, Anthony J.; Levin, Joel R.; and Leech, Nancy L., "Do Effect-Size Measures Measure up?: A Brief Assessment" (2003). RSEM Faculty Publications. 34.