Each dot below is one person or one event. The red dot is the one that "happens."
1 in 10 — e.g. chance of getting a cold in any given month
1 in 100 — e.g. chance of a serious car accident in a year of driving
1 in 1,000 — e.g. roughly a 30-year-old's chance of dying of any cause this year (UK/Western Europe) — showing 200 dots as a sample
Finding the red dot in this grid takes effort — and this is only a fifth of the full crowd. In the real group of 1,000, it would be even harder.
The crowd test: "1 in 1,000" means: imagine a small arena holding 1,000 people. One of them. That's it. "1 in a million" is one person across 20 full 50,000-seat stadiums. "1 in a billion" is one person across 20,000 such stadiums, more people than live in any city on Earth.
Part B · the probability scale — real anchors
~1 in 2
Lifetime cancer diagnosis (Western countries)
Most cancers are treatable. This high number surprises people who think cancer is rare.
~1 in 4
Dying of heart disease (lifetime risk)
The single largest cause of death in most developed countries.
~1 in 100
Being involved in a serious car accident in any given year
People drive daily and feel safe — but across a lifetime of driving, the cumulative risk is substantial.
~1 in 240
Dying in a road accident (lifetime, UK)
Higher in countries with less safe roads. Note this is a lifetime figure: the risk in any single year is far smaller.
~1 in 11,000
Dying in a plane crash (lifetime risk)
Flying is ~50–100× safer per km than driving. Yet people fear it far more.
~1 in 1,000,000
Being struck by lightning in a year
Lifetime risk (~1 in 15,000) is much higher — which is why "1 in a million" feels wrong for lightning.
~1 in 14,000,000
Winning a major lottery jackpot (UK National Lottery)
You are ~14× more likely to be struck by lightning this year than to win the jackpot with a single ticket.
The probability spectrum — logarithmic scale
Each step to the left is 10× more likely. On this scale, equal distances mean equal ratios: the lottery is only just over one step rarer than lightning, while lightning is four full steps rarer than a serious car accident. Intuitively, all of them simply register as "rare."
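To see those ratios concretely, here is a minimal Python sketch (the anchor values are the ones quoted above; the labels are ours) that converts each "1 in N" figure to its position on a log10 scale, where one unit of distance is one factor of 10:

```python
import math

# Anchors from the scale above, written as "1 in N" odds.
anchors = {
    "lifetime cancer diagnosis":       2,
    "dying of heart disease":          4,
    "serious car accident (per year)": 100,
    "plane crash (lifetime)":          11_000,
    "lightning strike (per year)":     1_000_000,
    "lottery jackpot (per ticket)":    14_000_000,
}

# One unit on the log10 axis = one factor of 10 in likelihood.
for name, n in anchors.items():
    print(f"{name:34s} 1 in {n:>10,}   log10 = {math.log10(n):5.2f}")
```

Lightning lands at 6.00 and the lottery at 7.15, barely more than one step apart; lightning and the car accident (2.00) are four steps apart.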
Part C · the biggest trap — relative vs absolute risk
Interactive: see how the same fact can be presented in two very different ways
The rule to remember forever
Whenever you hear a relative risk ("X% more likely"), always ask: "more likely than what?" — what is the base rate?
A 100% increase sounds catastrophic. If the base rate is 0.001%, doubling it gives 0.002% — almost nothing.
A 10% increase sounds small. If the base rate is 30%, adding 3 percentage points matters a lot.
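As a sanity check you can run yourself, here is that arithmetic as a small Python sketch; the function name absolute_effect is ours, not a standard API:

```python
def absolute_effect(base_rate: float, relative_change: float) -> tuple[float, float]:
    """Turn a headline relative change into absolute terms.

    base_rate:       your risk before, as a fraction (0.00001 = 0.001%)
    relative_change: the headline figure (+1.0 = "100% more likely",
                     -0.5 = "cuts risk by 50%")
    Returns (new_rate, absolute_difference).
    """
    new_rate = base_rate * (1 + relative_change)
    return new_rate, new_rate - base_rate

# "100% more likely" on a tiny base rate: 0.001% -> 0.002%
print(absolute_effect(0.00001, 1.0))  # ~(0.00002, 0.00001): +0.001 percentage points
# "10% more likely" on a large base rate: 30% -> 33%
print(absolute_effect(0.30, 0.10))    # ~(0.33, 0.03): +3 percentage points
```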
Part D · cognitive biases — why our probability intuition fails
Availability bias
Vivid = likely
We judge probability by how easily we can imagine it. Plane crashes make the news; car crashes don't. So we fear the wrong things. Sharks kill ~5 people/year worldwide. Vending machines kill more.
Base rate neglect
Ignoring the denominator
A medical test is "99% accurate." You test positive. What's the chance you actually have the disease? If only 1 in 10,000 people have it, about 1%, because false positives vastly outnumber true positives. See Part E.
Gambler's fallacy
Coins have no memory
After 10 heads in a row, the next flip is still 50/50. The coin doesn't "owe" you tails. Each independent event resets to its base probability. Casinos are built on this misunderstanding.
Conjunction fallacy
Specific ≠ more likely
"She is a feminist bank teller" feels more probable than "she is a bank teller" — but it can't be, because the specific includes the general. Adding detail always reduces probability, never increases it.
Neglect of probability
0.01% feels like 0%
People treat very small probabilities as zero and very large ones as certainty. A 1% chance of catastrophe deserves serious attention — but emotionally, it registers as "won't happen." This is why people underinsure for tail risks.
Optimism bias
"It won't happen to me"
Most people rate their personal risk as below average for almost every negative event: car accidents, divorce, illness. A majority cannot all sit below the middle of the distribution. We are not special; we are averages.
How we perceive vs. how things actually are
Humans don't perceive probability linearly. We overweight very small probabilities (lottery tickets feel plausible; shark attacks feel imminent) and underweight moderate ones. This is why both insurance and gambling industries are profitable — they exploit opposite ends of the same bias.
Part E · the false positive problem — the most surprising result in probability
Adjust the sliders — what does a positive test really mean?
Defaults: prevalence 1 in 100 · test accuracy 99%
How to think about it: the frequency tree
Imagine 10,000 people tested for a disease that affects 1 in 100, with a 99% accurate test.
The key insight: Among all 198 positive tests, only 99 are genuine — exactly 50%. The false positives equal the true positives because the 1% error rate applied to 9,900 healthy people generates as many wrong flags as the 99% hit rate finds in 100 sick ones. Prevalence drives everything.
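The same tree in a few lines of Python, under the usual simplifying assumption that "99% accurate" means the test is right 99% of the time on both sick and healthy people (the function name is ours):

```python
def positive_predictive_value(prevalence: float, accuracy: float,
                              population: int = 10_000) -> float:
    """Frequency-tree form of Bayes' rule."""
    sick = population * prevalence
    healthy = population - sick
    true_positives = sick * accuracy            # sick people correctly flagged
    false_positives = healthy * (1 - accuracy)  # healthy people wrongly flagged
    return true_positives / (true_positives + false_positives)

print(positive_predictive_value(0.01, 0.99))    # 0.5   -> a positive is a coin flip
print(positive_predictive_value(0.0001, 0.99))  # ~0.01 -> under 1% for a rarer disease
```

Lowering prevalence while keeping the test fixed collapses the answer; accuracy alone tells you almost nothing.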
Part F · the birthday paradox — probability defies intuition
How many people in a room before two share a birthday?
Drag to find where the probability crosses 50% — most people guess far too high.
(The crossover comes at just 23 people.)
The full curve — probability of a shared birthday
The curve is deceptively steep. It climbs rapidly because each new person can match any of the people already in the room — the number of possible pairs grows quadratically.
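If you would rather check the curve than trust the widget, the exact probability is a short product. This sketch assumes 365 equally likely birthdays and ignores leap years:

```python
def p_shared_birthday(n: int) -> float:
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365  # person k+1 avoids all earlier birthdays
    return 1 - p_all_distinct

for n in (10, 23, 41, 70):
    print(n, round(p_shared_birthday(n), 3))  # 23 is the first n above 50%
```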
Part G · risk in real life — the numbers that actually matter
Lifetime risk of dying in a car (UK/EU)
~1 in 240
Yet most people drive daily without conscious fear. We accept familiar risks far more readily than unfamiliar ones — even when unfamiliar risks are lower.
Lifetime risk of dying in a plane crash
~1 in 11,000
In lifetime terms, about 45× safer than driving. Per km travelled, flying is ~50–100× safer. Fear of flying is one of the most statistically unjustified common fears.
Risk of a serious side effect from a common vaccine
~1 in 100,000
The disease being vaccinated against typically carries 100–10,000× greater risk of the same outcome. Vaccine risk must always be weighed against disease risk, not against zero.
Risk of dying from surgery (routine, healthy adult)
~1 in 100,000
General anaesthesia alone: ~1 in 100,000. Surgical risk rises sharply with age, obesity, and pre-existing conditions.
Annual death risk per 100,000 people — visual comparison
Bar width is proportional to risk on a linear scale within this comparison. Heart disease dominates everything else. The "all causes" bar at ~830/100k shows total mortality; most deaths cluster around a few causes.
Part H · expected value — probability × impact
What is expected value?
Expected value (EV) is the probability-weighted average of all possible outcomes. It answers: "If I repeated this decision many times, what would the average outcome be?" It is the foundation of rational decision-making under uncertainty.
Formula: EV = Σᵢ (pᵢ × xᵢ), where pᵢ is the probability of outcome i and xᵢ its value
Build your own expected value table
When EV is not enough: the role of variance
Two choices can have identical expected values but feel very different. A 1% chance of winning £1,000 and a 100% chance of winning £10 both have EV = £10 — but they are not equivalent decisions. When a bad outcome is unacceptable (bankruptcy, death, irreversible harm), expected value undercounts the downside. This is why people buy insurance at negative expected value: they are paying to reduce variance, not to maximise EV.
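A small Python sketch makes this concrete: the two gambles below share an EV of £10, but their spreads could hardly be more different (the helper functions are ours):

```python
def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """EV = sum of probability * outcome over all possible outcomes."""
    return sum(p * x for p, x in outcomes)

def variance(outcomes: list[tuple[float, float]]) -> float:
    """Probability-weighted squared deviation from the EV."""
    ev = expected_value(outcomes)
    return sum(p * (x - ev) ** 2 for p, x in outcomes)

lottery    = [(0.01, 1000.0), (0.99, 0.0)]  # 1% chance of £1,000
sure_thing = [(1.00, 10.0)]                 # guaranteed £10

for name, gamble in [("lottery", lottery), ("sure thing", sure_thing)]:
    print(name, expected_value(gamble), variance(gamble))
# lottery:    EV £10, variance ~9900
# sure thing: EV £10, variance 0
```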
Real-world expected value examples
Part I · the Monty Hall problem — why switching wins
Play the game — then run 1,000 simulations
You're on a game show. Behind one door is a car, behind the other two are goats. You pick a door. The host (who knows where the car is) opens a different door revealing a goat. Should you switch?
Simulation — run it yourself
Why switching is correct (the intuition)
When you first pick a door, you have a 1/3 chance of being right. That means there's a 2/3 chance the car is behind one of the other two doors. The host, who knows where the car is, then eliminates a losing door — but crucially, this doesn't change your initial 1/3 probability. The entire 2/3 probability "concentrates" onto the remaining unchosen door. Switching captures that 2/3. Staying keeps the original 1/3. The simulation above proves it empirically.
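You can reproduce the simulation in a dozen lines of Python. One modelling note: when your first pick happens to be the car, the host can open either goat door; which one he opens doesn't change the stay/switch statistics, so this sketch simply takes the first legal door:

```python
import random

def monty_hall(trials: int = 10_000) -> None:
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car  = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither your pick nor the car.
        opened   = next(d for d in range(3) if d != pick and d != car)
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins   += (pick == car)
        switch_wins += (switched == car)
    print(f"stay:   {stay_wins / trials:.3f}")    # ~0.333
    print(f"switch: {switch_wins / trials:.3f}")  # ~0.667

monty_hall()
```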
Part J · the law of large numbers — why randomness evens out
Flip a coin many times — watch the proportion of heads converge
After just a few flips, the proportion swings wildly. But with thousands of flips, it converges inexorably toward 50%. This is the law of large numbers — not magic, but mathematics.
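A minimal version of that experiment in Python; the exact numbers vary with the seed, but the narrowing toward 0.5 does not:

```python
import random

random.seed(1)  # any seed will do; convergence is the point
heads = 0
for flips in range(1, 100_001):
    heads += random.random() < 0.5  # True counts as 1
    if flips in (10, 100, 1_000, 10_000, 100_000):
        print(f"{flips:>7} flips: proportion of heads = {heads / flips:.4f}")
```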
Law of Large Numbers
Results converge
With enough trials, the sample average converges to the true expected value. This is why casinos never lose in the long run: their edge is small (2–5%) but applied across millions of bets, it is mathematically certain to accumulate.
Law of Small Numbers (fallacy)
Small samples lie
People expect small samples to be representative. If you flip a coin 6 times and get 5 heads, you don't conclude the coin is biased — but in other domains (crime rates in small towns, drug trials with 20 patients), this error is made constantly.
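How often does a fair coin actually produce a streak that lopsided? The exact binomial answer takes two lines of Python:

```python
from math import comb

# P(at least 5 heads in 6 fair flips) = [C(6,5) + C(6,6)] / 2^6
print(sum(comb(6, k) for k in (5, 6)) / 2**6)  # 7/64 ≈ 0.109
```

Roughly one honest coin in nine will look "biased" across six flips, which is exactly why small samples can't be trusted.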
Part K · conditional probability — when events are not independent
The multiplication trap: independent vs. dependent events
When events are independent, you multiply probabilities. When they are not, you must condition on what has already happened. This distinction causes more probability errors than almost anything else.
P(rain Saturday) = 50% · P(rain Sunday) = 50%
Visualising probability: Venn diagrams
Independent: P(A∩B) = P(A)×P(B)
Mutually exclusive: P(A∩B) = 0
Common error: "What's the chance of rain on both Saturday and Sunday?" People often add (50% + 50% = 100%) rather than multiply (50% × 50% = 25%). Addition answers a different question (the chance of rain on at least one day), and even then it is only valid for mutually exclusive events; for "both", independent probabilities multiply.
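The same arithmetic in Python, using the 50/50 figures from the sliders above and assuming the two days really are independent:

```python
p_sat, p_sun = 0.5, 0.5

p_both   = p_sat * p_sun           # 0.25: independent events multiply
p_either = p_sat + p_sun - p_both  # 0.75: "at least one day" needs inclusion-exclusion
p_wrong  = p_sat + p_sun           # 1.0: plain addition, absurd here; only
                                   # valid when events are mutually exclusive
print(p_both, p_either, p_wrong)
```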
Part L · test yourself
1. A headline reads: "New drug cuts heart attack risk by 50%." Should you be impressed?
Not necessarily — you need the base rate. If your annual risk of a heart attack was 2%, a 50% relative reduction brings it to 1%. That's a real 1 percentage point benefit — meaningful. But if your base risk was 0.2%, the drug brings it to 0.1% — an absolute reduction of just 0.1 percentage points. The drug would need to treat 1,000 people to prevent one heart attack. Whether that justifies the cost, side effects, and daily pill-taking depends entirely on that absolute number, not the 50% headline.
2. You flip a fair coin and get 7 heads in a row. What is the probability the next flip is tails?
Exactly 50%. Each flip is independent — the coin has no memory of previous outcomes. The probability of 7 heads in a row happening was 1/128 (~0.78%), which was unlikely, but it happened. Now that it has happened, you are simply at flip #8, and the probability of tails is 50%. This is the gambler's fallacy: the feeling that "tails is due" is a cognitive error. Casinos thrive on it. Roulette wheels display recent results precisely to feed this illusion.
3. A disease affects 1 in 1,000 people. A test for it is 99% accurate. You test positive. What is the approximate probability you actually have the disease?
About 9%. This is the false positive paradox. Test 100,000 people: 100 actually have the disease, and the 99% test catches 99 of them (true positives). But 99,900 don't have it, and the 1% error rate gives 999 false positives. So among all ~1,098 positive results, only 99 are real — that's 99/1,098 ≈ 9%. This is why medical screening for rare diseases is complex: even very accurate tests produce mostly false positives when the disease is uncommon. Doctors follow up positive screening tests with confirmatory tests precisely because of this.
4. You are choosing between two routes to work. Route A has a 10% chance of making you 10 minutes late. Route B has a 1% chance of making you 60 minutes late. Which is riskier in terms of expected delay?
They are almost equal. Expected delay = probability × impact. Route A: 10% × 10 min = 1 minute expected delay per trip. Route B: 1% × 60 min = 0.6 minutes expected delay per trip. Route B is actually slightly better in pure expected value — but Route A's frequent small delays might be more manageable than Route B's rare catastrophic ones. This illustrates that expected value isn't always the right metric: variance (how unpredictable the outcome is) matters too, especially when a bad outcome is truly unacceptable.
5. "Smokers are 15–30× more likely to get lung cancer than non-smokers." Is this a relative or absolute risk? What does it mean in practice?
It's a relative risk, but the base rate is high enough that it translates into a large absolute risk too. Roughly 0.01% of never-smokers develop lung cancer per year; a 20× increase puts long-term heavy smokers in the region of 0.2% per year, which compounds over decades of smoking to a lifetime risk of roughly 1 in 10 to 1 in 6. That's an enormous absolute risk, and it's before counting heart disease, stroke, and the other cancers that smoking also dramatically increases. This is one of the clearest examples in medicine where the relative risk (20×) and the absolute risk (~15% lifetime) are both genuinely large and alarming.
6. On the Monty Hall problem: after the host reveals a goat, is the probability of winning really 2/3 if you switch — or is it 50/50?
It is genuinely 2/3 if you switch, not 50/50. The key is that the host's action is not random — the host always opens a losing door and always knows where the car is. This asymmetry is what makes it non-obvious. When you picked your original door, you had 1/3 probability of being right. The other two doors together held 2/3. The host's reveal collapses the two doors into one — but the 2/3 stays attached to the "other side." If this feels wrong, run the simulation above: after 10,000 trials, switchers win ~67% and stayers win ~33%. The mathematics is unambiguous; the intuition misleads almost everyone, including professional mathematicians when the problem was first published.
7. A financial advisor says: "Our fund has outperformed the market 8 years in a row." Is this evidence of skill?
It depends entirely on how many funds you're looking at. If a fund has roughly a 50% chance of beating the market in any given year, the probability of any one fund doing it 8 years in a row by luck is (0.5)⁸ = 1/256, or about 0.4%. That seems impressive, but if there are 5,000 active funds, you'd expect 5,000 × 0.004 ≈ 20 funds to have 8-year streaks purely by chance. This is survivorship bias, with the law of large numbers working against us: the funds that went to zero aren't in the advertisement. Before attributing skill, ask: how many funds were there in total, and what is the track record after the advertisement was made?
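You can simulate that universe of funds directly. A sketch under the stated assumptions (5,000 funds, each a pure coin flip against the market every year):

```python
import random

random.seed(7)
N_FUNDS, YEARS, TRIALS = 5_000, 8, 100

streaks_per_universe = []
for _ in range(TRIALS):
    lucky = sum(
        all(random.random() < 0.5 for _ in range(YEARS))  # a perfect 8-year streak
        for _ in range(N_FUNDS)
    )
    streaks_per_universe.append(lucky)

print(sum(streaks_per_universe) / TRIALS)  # ~19.5 streaks per universe, skill-free
```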