Sample Size & Statistical Significance — Practice
Practice bank for UCAT Decision Making questions about evaluating study quality and statistical claims.
Quick-Reference: Key Concepts
Sample Size
- Larger samples give more reliable estimates
- Small samples are more susceptible to random variation
- The question "Is this sample large enough?" depends on the effect size being measured
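The effect of sample size on reliability can be seen in a quick simulation. This is an illustrative sketch (not part of the lesson's question bank): it repeatedly estimates the heads-rate of a fair coin from small and from large samples, and shows that the small-sample estimates scatter far more widely.

```python
import random
import statistics

random.seed(1)

# Hypothetical example: a coin that truly lands heads 50% of the time.
# Collect the observed heads-rate across many repeated samples of each size.
def observed_rates(sample_size, trials=1000):
    return [
        sum(random.random() < 0.5 for _ in range(sample_size)) / sample_size
        for _ in range(trials)
    ]

small = observed_rates(20)    # many small samples
large = observed_rates(2000)  # many large samples

# The spread (standard deviation) of the estimates shrinks as n grows,
# which is exactly why larger samples give more reliable estimates.
print(round(statistics.stdev(small), 3))
print(round(statistics.stdev(large), 3))
```

The small-sample estimates vary by roughly ±11 percentage points, the large-sample ones by about ±1, even though both are drawn from the same fair coin.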
Statistical Significance
- A result is statistically significant if it is unlikely to have occurred by chance alone
- Conventionally, p < 0.05 counts as "significant": if there were truly no effect, a result at least this extreme would occur by chance less than 5% of the time
- p-value does NOT tell you the size or importance of the effect
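A p-value can be computed directly in a simple case. The sketch below uses a hypothetical scenario (16 heads in 20 flips of a coin claimed to be fair) and sums exact binomial probabilities to get the two-sided p-value, i.e. the chance of a result at least that far from the expected 10 heads:

```python
from math import comb

# Hypothetical example: 16 heads in 20 flips of a supposedly fair coin.
n, k = 20, 16

def binom_pmf(n, i, p=0.5):
    # Probability of exactly i heads in n fair flips
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Two-sided p-value: probability of a result at least as far from
# n/2 = 10 heads as the observed 16 (i.e. <= 4 or >= 16 heads).
p_value = sum(binom_pmf(n, i) for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))
print(round(p_value, 4))  # ~0.0118, significant at the 0.05 level
```

Note that this p-value says the result would be surprising for a fair coin; it says nothing about how biased the coin actually is, which is the point of the last bullet above.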
Statistical Significance vs Clinical Significance
| Concept | Meaning |
|---|---|
| Statistical significance | The result is unlikely to be due to chance |
| Clinical significance | The result is large enough to matter in practice |
A very large study can detect tiny differences that are statistically significant but clinically meaningless.
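This effect is easy to demonstrate numerically. The sketch below uses hypothetical figures (recovery rates of 70.0% vs 70.5%, a clinically trivial half-point difference) and a standard two-proportion z-test: with 1,000 patients per group the difference is nowhere near significant, but with 1,000,000 per group it becomes highly significant.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical example: recovery rates of 70.0% vs 70.5% per group.
def two_prop_p(p1, p2, n):
    """Two-sided p-value from a pooled two-proportion z-test,
    with n participants in each group."""
    pooled = (p1 + p2) / 2
    se = sqrt(2 * pooled * (1 - pooled) / n)
    z = abs(p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(z))

print(two_prop_p(0.700, 0.705, 1_000))      # p ~0.8: not significant
print(two_prop_p(0.700, 0.705, 1_000_000))  # p far below 0.05: significant
```

The underlying difference is identical in both cases; only the sample size changed. Statistical significance answers "is it real?", not "does it matter?".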
Confidence Intervals
- A 95% confidence interval (CI) gives a range within which we are 95% confident the true value lies
- If the CI for a difference includes zero, the difference is not statistically significant
- Narrower CI = more precise estimate (usually from larger sample)
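The link between sample size and CI width can be shown with the standard normal-approximation formula for a proportion, p ± 1.96 × √(p(1−p)/n). The figures below are hypothetical (60% of respondents favour a treatment, surveyed at two different sample sizes):

```python
from math import sqrt

# Hypothetical example: 60% of respondents favour a treatment.
def ci95(p, n):
    """Approximate 95% CI for a proportion p observed in a sample of n."""
    margin = 1.96 * sqrt(p * (1 - p) / n)
    return (round(p - margin, 3), round(p + margin, 3))

print(ci95(0.60, 50))    # wide interval from a small sample
print(ci95(0.60, 5000))  # narrow interval from a large sample
```

The same formula applied to a *difference* between groups explains the second bullet: if the resulting interval spans zero, "no difference" is still a plausible value, so the result is not statistically significant.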
Common Pitfalls
- Assuming a small study's finding is reliable
- Equating statistical significance with practical importance
- Ignoring sample size when comparing percentages from different groups
- Assuming p > 0.05 means "no effect" (it only means there is not enough evidence to rule out chance)
Strategy
- Check the sample size — is it large enough for the claim being made?
- Distinguish statistical from clinical significance
- Look at confidence intervals if provided
- Target: 45–60 seconds
Practice
Complete the 10 assessment questions.