Introduction
Confidence intervals (CIs) are one of the most tested concepts on the AP Statistics exam. They appear in multiple-choice questions, free-response questions (FRQs), and data-analysis prompts. Students often calculate them correctly but lose points because they don’t know how to interpret them in context.
This guide will teach you:
- What a confidence interval really means
- How to interpret it properly on the AP Stats exam
- The difference between a confidence level and a confidence interval
- Common mistakes students make
- Example interpretations that would earn full credit
At RevisionDojo, we’ve seen hundreds of students lose easy points on CIs—not because of math errors, but because of wording mistakes. After reading this, you won’t fall into that trap.
1. What Is a Confidence Interval?
A confidence interval is a range of values, calculated from a sample, that likely contains the true population parameter (like a mean μ or proportion p).
General Form:
Statistic ± (critical value × standard error)
Examples:
- Proportion: p̂ ± z*√[p̂(1 – p̂)/n]
- Mean: x̄ ± t*(s/√n)
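If you want to check your by-hand arithmetic, here is a minimal Python sketch of both formulas. The sample numbers (p̂ = 0.62 with n = 200, and x̄ = 14.2, s = 3.1 with n = 25) are made up purely for illustration:

```python
import math
from scipy import stats

# --- 95% z-interval for a proportion: p̂ ± z*·sqrt(p̂(1−p̂)/n) ---
p_hat, n = 0.62, 200              # hypothetical sample proportion and size
z_star = stats.norm.ppf(0.975)    # z* critical value for 95% confidence
se_p = math.sqrt(p_hat * (1 - p_hat) / n)
print(f"proportion CI: ({p_hat - z_star * se_p:.3f}, {p_hat + z_star * se_p:.3f})")

# --- 95% t-interval for a mean: x̄ ± t*·(s/√n) ---
x_bar, s, n = 14.2, 3.1, 25       # hypothetical sample mean, SD, and size
t_star = stats.t.ppf(0.975, df=n - 1)  # t* critical value, n − 1 degrees of freedom
se_m = s / math.sqrt(n)
print(f"mean CI: ({x_bar - t_star * se_m:.3f}, {x_bar + t_star * se_m:.3f})")
```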
Key Idea: Confidence intervals estimate population parameters, not sample statistics.
2. What Does a Confidence Level Mean?
If we say we are constructing a 95% confidence interval, it does not mean there’s a 95% chance the population parameter is in our one calculated interval.
Instead:
The 95% describes the method, not any single interval. If we took many random samples and constructed a 95% confidence interval from each one, about 95% of those intervals would capture the true population parameter. Any one interval either contains the parameter or it does not.

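To make the long-run idea concrete, here is a short simulation sketch. The true proportion (true_p = 0.40), sample size, and number of repetitions are assumed values chosen for illustration; in real life you never know the true parameter, which is exactly why the simulation is instructive:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_p = 0.40            # assumed true population proportion (known only in a simulation)
n, reps = 100, 10_000    # sample size and number of repeated samples
z_star = stats.norm.ppf(0.975)

hits = 0
for _ in range(reps):
    p_hat = rng.binomial(n, true_p) / n            # simulate one sample proportion
    se = np.sqrt(p_hat * (1 - p_hat) / n)
    # Does this interval capture the true parameter?
    if p_hat - z_star * se <= true_p <= p_hat + z_star * se:
        hits += 1

print(f"Capture rate over {reps} intervals: {hits / reps:.3f}")  # close to 0.95
```

Each individual interval either captured true_p or missed it; the "95%" is the long-run capture rate of the procedure across all the repetitions.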