# Power Analysis Calculator
Power analysis helps you plan your study by determining the sample size needed to detect an effect, or the statistical power of your existing study design. This is essential for ensuring your research has a good chance of finding a real effect if one exists.
## What is Power Analysis?
Statistical power is the probability that a test will correctly reject a false null hypothesis (i.e., detect a real effect). A study with low power may fail to detect effects that truly exist (Type II error), while a study with adequate power gives you confidence that negative results are meaningful.
## The Four Components
Power analysis involves four interrelated components. If you know any three, you can calculate the fourth:
- Effect Size: The magnitude of the difference or relationship you expect to find
- Sample Size (n): The number of participants or observations
- Significance Level (α): The probability of Type I error (typically 0.05)
- Power (1-β): The probability of detecting a true effect (typically 0.80)
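As a concrete illustration of "know three, calculate the fourth," here is a minimal sketch that solves for per-group sample size given an effect size, α, and target power for an independent two-sample comparison. It uses only the Python standard library and a normal approximation, which slightly underestimates the exact t-distribution answer; the function name is illustrative, not part of any particular library:

```python
from statistics import NormalDist
from math import ceil

def sample_size_two_groups(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided independent two-sample test.

    Given three components (effect size d, alpha, power), solve for the
    fourth (sample size) using standard normal quantiles.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for target power
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(sample_size_two_groups(0.5))  # medium effect -> 63
```

The exact t-based calculation gives 64 per group for d = 0.5; the normal approximation is close enough for planning purposes and shows how the components interact.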
## Common Use Cases
### A Priori (Before Study)
Calculate required sample size before collecting data to ensure adequate power for detecting the expected effect.
### Post Hoc (After Study)
Calculate achieved power given your sample size and observed effect, or determine the minimum effect you could have detected.
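Both post-hoc quantities can be sketched with the same normal approximation, again for the independent two-sample case (function names are illustrative, and the approximation differs slightly from exact t-based results):

```python
from statistics import NormalDist
from math import sqrt

def achieved_power(d, n_per_group, alpha=0.05):
    # Power of a two-sided, two-sample test given the observed effect size d
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(d * sqrt(n_per_group / 2) - z_alpha)

def minimum_detectable_effect(n_per_group, alpha=0.05, power=0.80):
    # Smallest Cohen's d detectable at the given n, alpha, and power
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return (z_alpha + z_beta) * sqrt(2 / n_per_group)
```

For example, with 64 participants per group and d = 0.5, `achieved_power` returns roughly 0.81, and `minimum_detectable_effect(63)` returns roughly 0.50, consistent with the a-priori calculation.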
## Effect Size Guidelines (Cohen)
| Test Type | Measure | Small | Medium | Large |
|---|---|---|---|---|
| T-Tests | Cohen's d | 0.2 | 0.5 | 0.8 |
| ANOVA | Cohen's f | 0.1 | 0.25 | 0.4 |
| Correlation | r | 0.1 | 0.3 | 0.5 |
| Chi-Square | Cohen's w | 0.1 | 0.3 | 0.5 |
| Proportions | Cohen's h | 0.2 | 0.5 | 0.8 |
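When prior data are available, it is better to compute an effect size than to pick a row from the table. Two of the measures above have simple closed forms; the following is a stdlib-only sketch (function names are illustrative):

```python
from math import asin, sqrt

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    # Standardized mean difference using the pooled standard deviation
    pooled = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

def cohens_h(p1, p2):
    # Arcsine-transformed difference between two proportions
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))
```

For instance, group means of 105 and 100 with a common SD of 10 give d = 0.5 (a "medium" effect), and proportions of 0.60 versus 0.40 give h ≈ 0.40.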
## Supported Tests
- One-sample t-test
- Paired (dependent) t-test
- Independent samples t-test
- One-way ANOVA
- Correlation (Pearson)
- Chi-square test
- Two proportions test
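Each test type has its own sample-size formula. As one example beyond the t-test family, the Pearson correlation test is commonly handled with Fisher's z-transform; this is a sketch of that approximation, not necessarily the method the calculator itself uses:

```python
from statistics import NormalDist
from math import atanh, ceil

def sample_size_correlation(r, alpha=0.05, power=0.80):
    # Fisher z-transform approximation for a two-sided test of H0: rho = 0
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

print(sample_size_correlation(0.3))  # medium correlation -> 85
```

Note how sharply n depends on the effect size: detecting a small correlation of r = 0.1 at the same α and power requires several hundred participants.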
## Recommendations
- Power of 0.80: This is the conventional minimum, meaning an 80% chance of detecting a true effect. For critical research, consider 0.90 or higher.
- Alpha of 0.05: The standard significance level. Use 0.01 for more stringent testing or 0.10 for exploratory research.
- Effect size: Base this on previous research, pilot studies, or the minimum effect that would be practically significant. Avoid using conventions blindly.
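The cost of raising the power target is easy to quantify. Using the same two-sample normal approximation as above (an illustrative sketch, slightly below the exact t-based figures), moving from 0.80 to 0.90 power for a medium effect increases the per-group requirement by about a third:

```python
from statistics import NormalDist
from math import ceil

def n_per_group(d, alpha, power):
    # Normal-approximation per-group n for a two-sided two-sample comparison
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

for power in (0.80, 0.90, 0.95):
    print(power, n_per_group(0.5, 0.05, power))
# 0.8 63
# 0.9 85
# 0.95 104
```

This is why power targets above 0.90 are usually reserved for critical research: each increment buys less additional power per participant.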