Probability Calculator
Seven modes: single-event, P(A and B), P(A or B), at least one in n trials, binomial, conditional P(A | B), and Bayes' theorem. Every result is paired with a plain-language "what this means" caption and a step-by-step derivation.
What this probability really means
"1 in 4" is a long-run average, not a schedule. In any small batch of 20 trials, the event might happen 0 times, or 5 times, or 10; there's no guarantee of "exactly 1 per 4". The actual frequency converges to 25.0% only over many, many trials.
This matters in real life: a 1-in-100 chance of failure per launch doesn't mean "fine for 99 launches and broken on the 100th." It can happen on the very first launch, or never in 500. The probability is the limit, not the schedule.
What do these terms mean?
- Probability: a number between 0 and 1 saying how often we expect an event to happen. 0 = never; 1 = always; 0.5 = half the time.
- Complement, P(not A): the probability that the event does NOT happen. P(not A) = 1 − P(A). Useful when "the event happens" is hard to count directly but "the event doesn't happen" is easy.
- "1 in N": a friendlier way of writing a small probability. P = 0.04 → "1 in 25" → the event happens roughly once per 25 trials on average over the LONG RUN. It is NOT a schedule.
Show your work
- Formula: P(event) = favorable outcomes ÷ total outcomes (classical probability: every outcome equally likely)
- Substitute: 1 ÷ 4 = 0.25
- Complement (P of NOT happening): 1 − 0.25 = 0.75
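If you want to verify the derivation yourself, the same three steps fit in a few lines of Python (a minimal sketch using exact fractions):

```python
from fractions import Fraction

def classical_probability(favorable: int, total: int) -> Fraction:
    # Classical probability: every outcome equally likely.
    return Fraction(favorable, total)

p = classical_probability(1, 4)   # substitute: 1 favorable of 4 outcomes
complement = 1 - p                # probability the event does NOT happen

print(float(p))           # 0.25
print(float(complement))  # 0.75
```

`Fraction` keeps the arithmetic exact, so 1 − 1/4 is 3/4 rather than a rounded float.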
Probability Calculator: seven modes for the high-school / intro-college syllabus
Pick a mode and enter the inputs: single-event probability, P(A and B), P(A or B), at least one in n trials, binomial (exactly / at most / at least k successes), conditional P(A | B), and Bayes' theorem.
Every result is paired with a smart "what this means" caption, a per-mode visual (outcome grid, Venn-style slice, trial row, or conditional Venn), a real-life example ("where you actually see this"), and a step-by-step derivation.
What probability really means: long-run frequency, not a schedule
The most common probability misconception: "P = 0.25 means it will happen exactly once in every 4 attempts." It does NOT. P = 0.25 is a long-run average. In any small batch (say 20 attempts) the event might happen 0 times, or 5, or 10. The actual rate converges to 25% only over a very large number of trials.
This matters in real life. A 1-in-100 chance of failure per launch does NOT mean "we're safe for 99 launches and the 100th will fail." It can fail on the very first launch, or never in 500. The probability is a limit, not a schedule. When you read a probability, read it as "in the long run, this fraction", not as a guarantee about timing.
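A quick simulation makes the point concrete. This sketch (plain Python, seeded for reproducibility) counts hits of a P = 0.25 event in small batches of 20 trials, then in one large run:

```python
import random

random.seed(0)
P = 0.25

def batch_hits(n_trials: int) -> int:
    # Number of times the event occurs in n_trials independent attempts.
    return sum(random.random() < P for _ in range(n_trials))

# Small batches scatter widely around the "expected" 5 hits per 20.
print([batch_hits(20) for _ in range(5)])

# One large run: the observed rate settles near 0.25.
n = 100_000
print(batch_hits(n) / n)
```

The small batches will vary run to run; only the large-sample rate is close to 0.25.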
Are the events independent? The question that decides the formula
Two events are independent when one doesn't change the probability of the other. Test it: does knowing A happened tell you anything about B?
- Independent: two coin flips, two dice rolls, two days of weather (roughly), drawing a card and putting it back before drawing another. Use the AND / OR formulas on this page.
- NOT independent: drawing two cards WITHOUT replacement, picking 2 students from a class. Use conditional probability: P(A and B) = P(A) × P(B|A).
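To see the difference in numbers, compare drawing two aces with and without replacement (a sketch using exact fractions):

```python
from fractions import Fraction

# With replacement the draws are independent: P(A and B) = P(A) x P(B).
with_replacement = Fraction(4, 52) * Fraction(4, 52)

# Without replacement the second draw depends on the first:
# P(A and B) = P(A) x P(B | A), and only 3 aces remain in 51 cards.
without_replacement = Fraction(4, 52) * Fraction(3, 51)

print(with_replacement)     # 1/169
print(without_replacement)  # 1/221
```

Putting the first card back makes both draws 4/52; leaving it out shifts the second draw to 3/51.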
Conditional probability and Bayes' theorem
Conditional: P(A | B) = P(A and B) ÷ P(B). Read the bar as "given." Restrict to the slice of world where B is true, then ask what fraction of THAT slice also has A.
Bayes' theorem: P(A | B) = P(B | A) × P(A) ÷ P(B). It flips the direction. The classic example: a medical test is 99% sensitive (P(positive | disease) = 0.99), the disease has 1% prevalence (P(disease) = 0.01), and the overall positive rate is 5.94% (counting false positives). What's P(disease | positive)? Just 16.7%, much lower than most people guess. The base rate matters.
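The medical-test numbers can be checked directly. This sketch assumes a 5% false-positive rate, which is what produces the quoted 5.94% overall positive rate:

```python
prior = 0.01            # P(disease): 1% prevalence
sensitivity = 0.99      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease) -- assumed here

# Law of total probability: overall positive rate.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' theorem flips the conditional.
p_disease_given_positive = sensitivity * prior / p_positive

print(round(p_positive, 4))                # 0.0594
print(round(p_disease_given_positive, 3))  # 0.167
```

Most of the positives come from the healthy 99% of the population, which is why the posterior stays low despite the excellent sensitivity.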
The "complement trick" for at-least-one problems
When the question is "what's the probability of at least one X in n trials?", summing all the cases (1 X, 2 Xs, …, n Xs) is tedious. The trick: compute the probability the event NEVER happens, which is the single product (1 − p)^n, and subtract it from 1: P(at least one) = 1 − (1 − p)^n.
One product, one subtraction, done. Example: at least one six in 4 dice rolls = 1 − (5/6)^4 = 1 − 625/1296 ≈ 51.77%. This is also the famous de Méré problem from 17th-century gambling that helped birth modern probability theory.
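In code the trick is one line; a minimal sketch with exact fractions reproduces the de Méré number:

```python
from fractions import Fraction

def p_at_least_one(p, n: int):
    # Complement trick: 1 minus the probability of zero occurrences in n trials.
    return 1 - (1 - p) ** n

de_mere = p_at_least_one(Fraction(1, 6), 4)
print(de_mere)         # 671/1296
print(float(de_mere))  # ~0.5177
```

Passing a `Fraction` keeps the answer exact; a plain float like `1/6` works too if an approximation is fine.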
Formula reference at a glance
| Mode | Formula | Use it for |
|---|---|---|
| Single-event | P = favorable ÷ total | Coin, die, drawing a card |
| Independent AND | P(A ∩ B) = P(A) × P(B) | Two coins both heads |
| Independent OR | P(A ∪ B) = P(A) + P(B) − P(A ∩ B) | Coin OR die showing 6 |
| At least one | 1 − (1 − p)^n | ≥1 six in 4 rolls (de Méré) |
| Binomial | C(n, k) · p^k · (1 − p)^(n − k) | Exactly k in n trials |
| Conditional | P(A \| B) = P(A ∩ B) ÷ P(B) | Probability given evidence |
| Bayes | P(A \| B) = P(B \| A) × P(A) ÷ P(B) | Medical tests, spam filters |
Frequently asked questions
What is probability, in plain language?
A number between 0 and 1 saying how often we expect an event to happen. 0 = never, 1 = always, 0.5 = half the time. We compute it by dividing favorable outcomes by total outcomes (when each outcome is equally likely): flipping a coin to get heads is 1 favorable ÷ 2 total = 0.5.
Does P = 0.25 mean it will happen once in every 4 trials?
No, and this is the most common misconception. P = 0.25 is a LONG-RUN average. In any small batch (say 20 attempts) it might happen 0 times, or 5 times, or 10. The actual rate converges to 25% only over many, many trials. "1 in 4" is a frequency, not a schedule. A 1-in-100 risk per launch can hit on the very first launch, or never in 500.
When can I use P(A and B) = P(A) × P(B)?
Only when A and B are independent: when one event doesn't change the probability of the other. Two coin flips are independent. Two dice rolls are independent. Drawing two cards WITHOUT putting the first back is NOT independent (the deck shrinks). For dependent events you need the conditional formula P(A and B) = P(A) × P(B|A).
Why do we subtract P(A and B) in P(A or B) = P(A) + P(B) − P(A and B)?
Because the case where BOTH A and B happen is counted once by P(A) and a second time by P(B): adding them double-counts it. We subtract the overlap once to fix the count. This is the inclusion-exclusion principle in its simplest form. For mutually exclusive events (where P(A and B) = 0), the formula collapses to P(A) + P(B).
What is the "at least one" complement trick?
Instead of summing many cases ("the event happens on trial 1, OR trial 2, OR trial 3, …"), compute the probability it NEVER happens: that's a single product, (1 − p)^n. Then subtract from 1. Example: at least one six in 4 dice rolls = 1 − (5/6)^4 ≈ 51.8%. The complement trick is faster and easier than enumerating cases.
When do I use the binomial formula?
Whenever you want the probability of EXACTLY k successes (or at-most-k, or at-least-k) in n independent trials, each with the same success probability p. Examples: exactly 3 heads in 5 flips, at most 2 defects in 10 items, at least 7 correct on a 10-question quiz. Formula: P(X = k) = C(n, k) × p^k × (1 − p)^(n − k). Sum these terms across a range for the at-most/at-least cumulative.
What is conditional probability?
"The probability of A GIVEN that B happened", written P(A | B). Read the bar (|) as "given." Formula: P(A | B) = P(A and B) ÷ P(B). It restricts attention to the slice of world where B is already true and asks what fraction of THAT slice also has A. Example: P(face card | red card): given a red card was drawn, what's the probability it's a face card?
What is Bayes' theorem and when do I use it?
Bayes' theorem flips a known forward conditional. If you know P(B | A) (e.g., test sensitivity: 99% of sick people test positive), the prior P(A) (1% of people are sick), and the total P(B) (overall positive rate), Bayes gives you the inverse P(A | B): given a positive test, what's the actual probability of being sick? The famous answer for these inputs is ~17%, much lower than people intuitively expect. Critical for medical diagnosis, spam filters, criminal forensics, anywhere you update a belief after evidence.
"1 in N": friendlier or more confusing?
It's friendlier as long as you remember it's a long-run average. P = 0.04 → "1 in 25" → the event happens roughly once per 25 trials on average. But "average" doesn't guarantee timing: see the misconception above. Use it for intuition, not for prediction.
I need combinations or permutations to set up the problem. Where do I do that?
Use the dedicated calculators. • Combinations C(n, r) for unordered selections (poker hands, lottery). • Permutations P(n, r) for ordered arrangements (race medals, PIN codes). • Factorial n! for arranging all items in a row. Compute the count there, then plug favorable ÷ total here.
Common probability problems: worked examples
The 10 most-googled probability questions, with the formula route spelled out. If your homework problem looks like one of these, the mode it maps to is in parentheses.
Probability of drawing a red ball (single-event)
Setup: Bag with 3 red and 5 white balls. Draw one at random.
Math: P(red) = 3 ÷ 8 = 0.375
Answer: 37.5% – about 3 in 8 draws on average.
Probability of at least one head in 5 coin flips (at-least-one)
Setup: Five fair coin flips. We want AT LEAST one to land heads.
Math: P(≥1 H) = 1 − (1/2)⁵ = 31/32
Answer: 96.875% – almost certain.
Probability of rolling at least one six in 4 dice rolls (at-least-one)
Setup: The classic de Mรฉrรฉ problem from 17th-century gambling.
Math: P(≥1 six) = 1 − (5/6)⁴
Answer: ≈ 51.77% – slightly better than even odds.
Probability of two events both happening (AND)
Setup: P(rain on Saturday) = 0.3, P(rain on Sunday) = 0.3, treat as independent.
Math: P(both rainy) = 0.3 × 0.3 = 0.09
Answer: 9% – under 1 in 11.
Probability of at least one of two events (OR)
Setup: P(rain Saturday) = 0.3, P(rain Sunday) = 0.3 (independent).
Math: P(at least one rainy) = 0.3 + 0.3 − 0.09 = 0.51
Answer: 51% – slightly more likely than not.
Probability of exactly 3 heads in 5 flips (binomial)
Setup: Five fair coin flips. Want EXACTLY 3 heads.
Math: C(5, 3) × 0.5³ × 0.5² = 10/32
Answer: 31.25% – tied with 2 heads as the most likely outcome.
Probability of passing a 10-question true/false quiz by guessing (binomial)
Setup: 10 questions, 50% guess rate per question, need ≥ 7 correct to pass.
Math: Σ C(10, k) × 0.5¹⁰ for k = 7..10 = 176/1024
Answer: ≈ 17.2% – guessing is a bad strategy.
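The quiz sum is easy to verify with Python's exact binomial coefficient (`math.comb`); a minimal sketch:

```python
from math import comb

def binomial_pmf(n: int, k: int, p: float) -> float:
    # P(X = k): exactly k successes in n independent trials.
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# P(pass) = P(at least 7 correct) = sum of the k = 7..10 terms.
p_pass = sum(binomial_pmf(10, k, 0.5) for k in range(7, 11))
print(p_pass)  # 0.171875  (exactly 176/1024)
```

With p = 0.5 every term is a power of two over 1024, so the float result is exact.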
Birthday paradox โ at least one shared birthday (at-least-one)
Setup: 23 people in a room. Probability at least two share a birthday?
Math: 1 − (365 × 364 × … × 343) / 365²³
Answer: ≈ 50.7% – surprisingly likely.
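The birthday product is a short loop; a sketch that ignores leap years, matching the formula above:

```python
def p_shared_birthday(n: int) -> float:
    # 1 - P(all n birthdays are distinct), assuming 365 equally likely days.
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1.0 - p_all_distinct

print(round(p_shared_birthday(23), 3))  # 0.507
```

Try other group sizes: the probability passes 50% at 23 people and exceeds 99% around 57.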
Lottery odds โ single ticket, 6 of 49 (single-event)
Setup: C(49, 6) = 13,983,816 distinct tickets. One ticket = one favorable outcome.
Math: P(jackpot) = 1 ÷ 13,983,816
Answer: ≈ 0.0000072% – about 1 in 14 million.
Medical test โ P(disease | positive) (Bayes)
Setup: Disease prevalence = 1%. Test sensitivity = 99%. Total positive rate = 5.94%.
Math: P(D | +) = (0.99 × 0.01) ÷ 0.0594
Answer: ≈ 16.7% – much lower than people guess.
Need to count outcomes first?
Probability problems often start with "how many ways…": that's combinatorics. Compute the count there, then plug it into this calculator.