Statistical Significance
Definition
Statistical significance is a measure of whether an observed result is unlikely to have occurred by chance alone. It marks a threshold for judging results, not a measure of truth or practical importance.
Correct Scientific Usage
In scientific research, statistical significance indicates whether the data are sufficiently incompatible with a null hypothesis at a pre-specified significance level (commonly α = 0.05, reported as p < 0.05). Scientists recognize that statistical significance depends heavily on sample size, so they often report confidence intervals alongside p-values to convey the size and precision of an effect.
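The dependence on sample size can be made concrete with a small sketch. The following is a minimal illustration, not a method from the article: it uses a one-sample z-test with made-up numbers to show how the same observed effect can be non-significant with a small sample and significant with a large one, and how the 95% confidence interval narrows as the sample grows.

```python
from math import erf, sqrt

def z_test(mean_diff, sd, n):
    """Two-sided one-sample z-test plus a 95% confidence interval.

    mean_diff: observed mean difference from the null value
    sd: (assumed known) standard deviation; n: sample size
    """
    se = sd / sqrt(n)                       # standard error shrinks as n grows
    z = mean_diff / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    p = 2 * (1 - phi)                        # two-sided p-value
    ci = (mean_diff - 1.96 * se, mean_diff + 1.96 * se)
    return p, ci

# Hypothetical effect: mean difference 0.5, standard deviation 2.0.
p_small, ci_small = z_test(0.5, 2.0, 25)     # p ≈ 0.21, CI includes 0
p_large, ci_large = z_test(0.5, 2.0, 2500)   # p far below 0.05, CI excludes 0
```

With n = 25 the identical effect is "not significant" (p ≈ 0.21), while with n = 2500 it is overwhelmingly significant; the effect itself never changed, only the precision with which it was measured.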
Common Misunderstandings
Statistical significance is commonly equated with importance or proof. In practice, a p-value of 0.049 is treated as meaningful while 0.051 is dismissed, even though the two results are nearly identical in evidential terms.
There’s also widespread confusion between statistical significance and practical significance. With a large enough sample, a drug can produce a statistically significant 0.1-point reduction on a 100-point scale, an effect that is meaningless in practice.
Why It Matters
Understanding statistical significance prevents misinterpreting research findings and media headlines. It also helps distinguish between results that are merely statistically detectable versus those that matter clinically.
References
- Statistical Significance, StatPearls
- The ASA Statement on p-Values: Context, Process, and Purpose, The American Statistician
Related Articles
- What Statistical Significance Actually Tells Us
- What 'Backed by Science' Really Means
- Why One Study Is Almost Never Enough