If you’ve ever looked at an IQ score report, you’ve encountered the bell curve — that symmetrical, hill-shaped distribution that places most people near the center and progressively fewer toward the extremes. But why do IQ scores follow this pattern? And what does your position on the curve actually tell you? Understanding the normal distribution is fundamental to interpreting any standardized cognitive assessment.
Why do IQ scores follow a normal distribution?
The bell curve pattern isn’t an accident — it’s engineered into IQ tests by design. When test developers create a new intelligence assessment, they administer it to a large normative sample (typically 2,000–4,000 people stratified by age, sex, education, and ethnicity). Raw scores from this sample are then mathematically transformed so that the final distribution has a mean of exactly 100 and a standard deviation of 15.
This norming process was formalized by David Wechsler in 1939 when he introduced the deviation IQ, replacing the older mental-age ratio method. The choice of 100 as the mean was somewhat arbitrary, a round number that simply represented “average,” while the standard deviation of 15 was chosen largely for continuity: it kept deviation IQs roughly comparable to the spread of ratio IQs on earlier tests such as the Stanford-Binet, whose distribution had a standard deviation of about 16.
There is, however, a deeper reason the normal distribution works so well for intelligence. The Central Limit Theorem implies that when a measured trait is the sum of many small, independent influences — and intelligence is shaped by thousands of genetic variants, nutritional factors, educational experiences, and environmental exposures — the resulting distribution will approximate a bell curve. Intelligence genuinely appears to be polygenic and multifactorial, which means the normal distribution isn’t just a convenient fiction; it’s a reasonable approximation of how cognitive ability is distributed in the population.
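A quick simulation makes this concrete. The sketch below is purely illustrative and not part of any real test’s norming: the number of simulated influences, their uniform distribution, and the sample size are all arbitrary assumptions, chosen only to show that summing many small non-normal factors still produces something close to a bell curve.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each simulated person is the sum of 500 small, independent,
# non-normal influences (uniform on [0, 1], purely for illustration).
n_people, n_factors = 20_000, 500
trait = rng.uniform(0, 1, size=(n_people, n_factors)).sum(axis=1)

# Rescale the summed trait to the IQ metric: mean 100, SD 15.
iq = 100 + 15 * (trait - trait.mean()) / trait.std()

# Compare the simulated scores against normal-curve expectations.
within_1sd = np.mean(np.abs(iq - 100) <= 15)   # theory: ~0.682
within_2sd = np.mean(np.abs(iq - 100) <= 30)   # theory: ~0.954
print(f"within 1 SD: {within_1sd:.3f}, within 2 SD: {within_2sd:.3f}")
```

The proportions it prints land very close to the 68.2% and 95.4% figures discussed in the next section, even though no individual factor is normally distributed.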
What do standard deviations mean in IQ terms?
The standard deviation (SD) is the key to interpreting where any IQ score falls on the bell curve. With a mean of 100 and an SD of 15:
| Range | IQ Scores | Percentage of Population | Approximate Ratio |
|---|---|---|---|
| Within ±1 SD | 85–115 | 68.2% | ~2 in 3 people |
| Within ±2 SD | 70–130 | 95.4% | ~19 in 20 |
| Within ±3 SD | 55–145 | 99.7% | ~997 in 1,000 |
| Above +2 SD | ≥ 130 | 2.3% | ~1 in 44 |
| Below −2 SD | ≤ 70 | 2.3% | ~1 in 44 |
This means that roughly two-thirds of the population scores between 85 and 115, and 95% falls between 70 and 130. The extremes become vanishingly rare: an IQ of 145 (+3 SD) occurs in about 1 in 741 people, while an IQ of 160 (+4 SD) is roughly 1 in 31,560.
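These rarities fall directly out of the normal curve’s tail probabilities, as the following minimal sketch shows (it assumes SciPy is available; the mean of 100 and SD of 15 are the same parameters used throughout this article):

```python
from scipy.stats import norm

# The IQ distribution as normed: mean 100, SD 15.
iq_dist = norm(loc=100, scale=15)

for score in (130, 145, 160):
    p = iq_dist.sf(score)   # survival function: P(IQ > score)
    print(f"IQ {score}: top {p:.4%} of the population, about 1 in {1 / p:,.0f}")
```

Running it reproduces the ratios quoted above: roughly 1 in 44 beyond 130, 1 in 741 beyond 145, and on the order of 1 in 31,000 beyond 160.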
For a deeper exploration of what scores at the high end signify, see our guide on what an IQ of 130, 140, or 150 actually means.
How are IQ scores classified?
Test publishers use classification systems to translate numerical scores into descriptive categories. The most widely used system comes from the Wechsler scales:
| IQ Range | Wechsler Classification | Percentile Range |
|---|---|---|
| 130+ | Very Superior | 98th+ |
| 120–129 | Superior | 91st–97th |
| 110–119 | High Average | 75th–90th |
| 90–109 | Average | 25th–74th |
| 80–89 | Low Average | 9th–24th |
| 70–79 | Borderline | 2nd–8th |
| Below 70 | Extremely Low | 1st and below |
It’s important to note that these classifications are conventions, not hard boundaries. A person scoring 89 and another scoring 91 are not meaningfully different despite falling in different categories. The standard error of measurement (SEM) — typically 3–5 points on modern IQ tests — means any single score represents a range of probable true scores, not a fixed point.
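To make the SEM concrete, here is a minimal sketch of the confidence-interval arithmetic. The obtained score of 90 and the SEM of 4 are illustrative assumptions; note also that clinical reports often center the interval on an estimated true score (which regresses slightly toward the mean) rather than on the obtained score, a refinement this simple symmetric version omits.

```python
from scipy.stats import norm

def iq_confidence_interval(obtained: float, sem: float, level: float = 0.95):
    """Interval likely to contain the true score, assuming normally
    distributed measurement error with standard deviation `sem`."""
    z = norm.ppf(0.5 + level / 2)   # 1.96 for a 95% interval
    return obtained - z * sem, obtained + z * sem

# Illustrative values: an obtained IQ of 90 with an SEM of 4 points.
low, high = iq_confidence_interval(90, sem=4)
print(f"95% CI: {low:.0f} to {high:.0f}")   # roughly 82 to 98
```

Notice that the interval spans both the Low Average and Average bands in the table above, which is exactly why the classification boundaries should be read loosely.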
For a comprehensive overview of IQ score ranges and what they mean in practice, see our detailed guide to high IQ ranges and percentiles.
How is the bell curve actually constructed during test norming?
Creating a normally distributed IQ scale involves several technical steps that most test-takers never see:
Step 1 — Item development and piloting: Test developers create hundreds of candidate items spanning various difficulty levels. These are administered to pilot samples and analyzed using Item Response Theory (IRT) or classical test theory to select items with good psychometric properties.
Step 2 — Standardization sampling: The finalized test is administered to a carefully constructed normative sample. For the WAIS-IV, this included 2,200 adults stratified to match U.S. Census demographics. The WISC-V used 2,200 children and adolescents.
Step 3 — Raw-to-scaled score conversion: Raw scores (total items correct) are converted to scaled scores using a process that forces the distribution into a normal shape with the desired mean and SD. This typically involves ranking raw scores, converting ranks to z-scores, and then linearly transforming to the IQ metric (mean = 100, SD = 15); a toy version of this transformation is sketched after these steps.
Step 4 — Age norming: Because cognitive abilities change with age, separate norm tables are created for different age groups. A 25-year-old and a 70-year-old achieving the same raw score will receive different IQ scores, reflecting their performance relative to age peers.
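The sketch below condenses Step 3 into a few lines. It is a toy version under stated assumptions: real norming pipelines smooth across raw-score points and ages (continuous norming), which this omits, and the skewed gamma-distributed sample is just a stand-in for real normative data.

```python
import numpy as np
from scipy.stats import norm, rankdata

def raw_to_iq(raw_scores: np.ndarray) -> np.ndarray:
    """Toy normalized-rank transformation: rank the raw scores, convert
    mid-ranks to percentiles, map percentiles to z-scores, then rescale
    to the IQ metric (mean = 100, SD = 15)."""
    n = len(raw_scores)
    # Mid-rank percentiles stay strictly inside (0, 1), so ppf is finite.
    percentiles = (rankdata(raw_scores) - 0.5) / n
    z = norm.ppf(percentiles)
    return 100 + 15 * z

# Illustrative: a skewed "normative sample" of 2,000 raw scores.
rng = np.random.default_rng(0)
raw = rng.gamma(shape=5.0, scale=8.0, size=2_000)
iq = raw_to_iq(raw)
print(f"mean = {iq.mean():.1f}, SD = {iq.std():.1f}")   # ~100.0, ~15.0
```

The input distribution here is deliberately skewed; the rank-based transform is what forces the output into the bell-curve shape, which is the sense in which the normal distribution is “engineered in.”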
Does the normal distribution perfectly describe intelligence?
Not entirely. While the bell curve is an excellent approximation for the middle 95% of the distribution, there are known deviations at the extremes:
The low end shows excess frequency. More people score below IQ 70 than the normal curve predicts. This “bump” at the low end reflects pathological conditions — genetic syndromes (Down syndrome, Fragile X), birth injuries, severe environmental deprivation — that create intellectual disability through mechanisms distinct from normal variation. Zigler and Hodapp (1986) described this as the “two-group” model of intellectual disability, distinguishing organic causes from the lower tail of normal variation.
The high end is debated. Some researchers argue that extremely high scores (IQ 160+) are more common than the bell curve predicts, while others suggest they are artifacts of test ceiling effects or measurement error. The truth is difficult to establish because sample sizes at the extremes are inherently tiny.
The Flynn Effect complicates things. IQ scores have risen substantially over the 20th century — roughly 3 points per decade in many countries. This means that a person scoring 100 on 1990 norms might score only 91 on 2020 norms. The normal distribution is recalibrated with each new norming, but the shifting baseline complicates longitudinal comparisons.
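The arithmetic behind that example is worth making explicit. A hedged sketch, assuming a constant rate of about 3 points per decade (in reality the rate varies by country, era, and ability domain, so treat this as a back-of-the-envelope tool rather than a clinical correction; the function name is invented for illustration):

```python
def flynn_adjusted(score: float, norm_year: int, target_year: int,
                   points_per_decade: float = 3.0) -> float:
    """Approximate a score earned against older norms as it would look
    against newer norms, assuming a constant Flynn-effect rate."""
    return score - points_per_decade * (target_year - norm_year) / 10

print(flynn_adjusted(100, norm_year=1990, target_year=2020))   # 91.0
```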
What are common misconceptions about the IQ bell curve?
Several persistent myths surround the normal distribution of IQ:
“IQ is fixed and the curve is destiny.” The bell curve describes population-level distribution at a single point in time. It says nothing about individual potential for change. Individual IQ scores can shift by 10–15 points or more over a lifetime due to education, health, and environmental factors.
“A 15-point difference always matters.” While 15 points represents one full standard deviation at the population level, the practical significance depends on where on the curve the difference falls. The difference between IQ 85 and 100 has more functional implications than the difference between 130 and 145, because the former spans the threshold where everyday cognitive demands become challenging.
“Different IQ scales are directly comparable.” Not all IQ tests use SD = 15. The Stanford-Binet 5 uses SD = 15, matching the Wechsler scales, but older versions used SD = 16. The Cattell Culture Fair test uses SD = 24. A “Cattell IQ” of 148 is therefore equivalent to a “Wechsler IQ” of 130: both sit two standard deviations above the mean, at roughly the 98th percentile, despite the 18-point numerical gap. A conversion sketch follows this list.
“The bell curve proves group differences are innate.” The normal distribution within any group tells us nothing about the causes of differences between groups. As the evolutionary geneticist Richard Lewontin famously argued, high heritability within groups is perfectly compatible with entirely environmental explanations for between-group differences.
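Converting between scales simply means holding the z-score, the position on the curve, constant. A minimal sketch (the function name is illustrative):

```python
def convert_iq(score: float, sd_from: float, sd_to: float,
               mean: float = 100.0) -> float:
    """Convert an IQ score between scales with different SDs by
    keeping the z-score (position on the bell curve) fixed."""
    z = (score - mean) / sd_from
    return mean + z * sd_to

# Cattell scale (SD 24) to Wechsler scale (SD 15):
print(convert_iq(148, sd_from=24, sd_to=15))   # 130.0, both +2 SD
```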
Why does the normal distribution matter for test interpretation?
The practical value of the normal distribution lies in its ability to convert raw scores into meaningful comparisons:
- Percentile ranks: Telling a parent their child scored at the 84th percentile (IQ 115) is more informative than saying they got 47 out of 60 items correct
- Confidence intervals: The normal distribution allows clinicians to calculate the probability that a person’s true score falls within a given range — critical for diagnostic decisions
- Discrepancy analysis: Clinicians compare a person’s scores across different cognitive domains. The normal distribution provides the statistical framework to determine whether a difference between, say, verbal and nonverbal IQ is statistically unusual (a worked sketch of this test follows the list)
- Cross-test comparison: Because all major IQ tests are normed to the same distribution (mean = 100, SD = 15), scores from different tests can be meaningfully compared
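As one worked example of that framework, the discrepancy analysis mentioned above typically compares an observed difference between two index scores against a critical value built from each score’s SEM, assuming their measurement errors are roughly independent. The SEM values of 3 points below are illustrative assumptions:

```python
import math
from scipy.stats import norm

def critical_difference(sem_a: float, sem_b: float, level: float = 0.95) -> float:
    """Smallest difference between two index scores that is statistically
    reliable, given each score's standard error of measurement."""
    z = norm.ppf(0.5 + level / 2)   # 1.96 at the 95% level
    return z * math.sqrt(sem_a**2 + sem_b**2)

# Illustrative: verbal vs. nonverbal index, each with an SEM of 3 points.
print(f"{critical_difference(3, 3):.1f} points")   # about 8.3
```

On these assumptions, a verbal-nonverbal gap smaller than about 8 points could plausibly be measurement noise rather than a real ability difference.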
As Schmidt and Hunter (1998) showed in their landmark meta-analysis, IQ scores predict job performance, educational achievement, and other life outcomes; it is the standardized scoring framework the normal distribution provides that makes such findings comparable across different tests, samples, and studies.
The bottom line
The bell curve is the mathematical backbone of IQ testing. It transforms raw performance into a standardized language that allows meaningful comparison across individuals, age groups, and tests. But it’s essential to remember that this elegant mathematical framework is a tool for measurement — not a deterministic decree about human potential. Every IQ score sits within a confidence interval, every distribution is anchored to a specific normative sample and historical moment, and every individual is more than a single point on a curve.
Understanding the normal distribution empowers you to read IQ scores with the sophistication they demand: appreciating what they reveal about relative standing while recognizing the uncertainty and context they inevitably carry.