Conditional Probability Calculator
Calculate conditional probability from the definition P(A|B) = P(A ∩ B) ÷ P(B), from Bayes' theorem, and from the law of total probability, with step-by-step results.
About
Conditional probability quantifies the likelihood of event A occurring given that event B has already occurred. The fundamental definition is P(A|B) = P(A ∩ B) ÷ P(B), valid only when P(B) > 0. Misapplying this formula leads to the base rate fallacy, a documented cause of diagnostic errors in medicine and false-positive inflation in screening tests. This calculator implements the general definition, Bayes' theorem for inverse conditioning, and the law of total probability for partitioned sample spaces.
The tool assumes events are defined on a standard probability space where all inputs fall within [0, 1]. It does not handle fuzzy logic or subjective priors beyond numeric specification. Pro tip: when working with medical diagnostics, always verify that P(B) accounts for both true positives and false positives across the entire population, not just the symptomatic subgroup.
Formulas
The primary computation uses the definition of conditional probability: P(A|B) = P(A ∩ B) ÷ P(B), valid only when P(B) > 0.
When the joint probability P(A ∩ B) is not directly known but the reverse conditional P(B|A) is available, Bayes' theorem applies: P(A|B) = P(B|A) ⋅ P(A) ÷ P(B).
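The inversion step can be sketched in a few lines of Python; the function name `bayes_posterior` and its validation checks are illustrative, not part of the calculator itself:

```python
def bayes_posterior(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Return P(A|B) = P(B|A) * P(A) / P(B); requires P(B) > 0."""
    if not 0 < p_b <= 1:
        raise ValueError("P(B) must lie in (0, 1]")
    # Consistency: the implied joint P(A ∩ B) = P(B|A) * P(A) cannot exceed P(B).
    if p_b_given_a * p_a > p_b:
        raise ValueError("inconsistent inputs: P(A ∩ B) cannot exceed P(B)")
    return p_b_given_a * p_a / p_b

# Example: P(T+|D) = 0.9, P(D) = 0.01, P(T+) = 0.1
# P(D|T+) = 0.9 * 0.01 / 0.1 = 0.09
```

The consistency check catches a common input error: supplying a P(B) smaller than the implied joint probability, which would yield a "probability" above 1.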
For the law of total probability, when the sample space is partitioned into n mutually exclusive, exhaustive events {A1, A2, …, An}: P(B) = P(B|A1) ⋅ P(A1) + P(B|A2) ⋅ P(A2) + … + P(B|An) ⋅ P(An).
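Summing over a partition is a one-line fold once the inputs are validated; a minimal sketch (the helper name `total_probability` is an assumption for illustration):

```python
def total_probability(cond_probs: list[float], priors: list[float]) -> float:
    """P(B) = sum of P(B|Ai) * P(Ai) over a partition {Ai} of the sample space."""
    if len(cond_probs) != len(priors):
        raise ValueError("need one conditional probability per partition event")
    # A partition's priors must cover the whole sample space.
    if abs(sum(priors) - 1.0) > 1e-9:
        raise ValueError("partition priors must sum to 1")
    return sum(pb_ai * p_ai for pb_ai, p_ai in zip(cond_probs, priors))

# Two-event partition (disease / no disease):
# P(B) = 0.9 * 0.01 + 0.05 * 0.99 = 0.0585
```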
Derived quantities computed by the tool include the joint probability P(A ∩ B), the inverse conditional P(B|A), and the complement probabilities. Here P(A) is the probability of event A, P(B) the probability of event B, P(A ∩ B) the joint probability that both events occur, P(A|B) the probability of A given that B has occurred, and P(B|A) the probability of B given that A has occurred.
Reference Data
| Concept | Formula | Condition | Common Application |
|---|---|---|---|
| Conditional Probability | P(A|B) = P(A ∩ B) ÷ P(B) | P(B) > 0 | General event dependence |
| Bayes' Theorem | P(A|B) = P(B|A) ⋅ P(A) ÷ P(B) | P(B) > 0 | Medical diagnostics, spam filters |
| Law of Total Probability | P(B) = ∑ P(B|Ai) ⋅ P(Ai), summed over i = 1…n | {Ai} is a partition | Risk assessment, decision trees |
| Multiplication Rule | P(A ∩ B) = P(A|B) ⋅ P(B) | Always valid | Sequential event chains |
| Independence Test | P(A ∩ B) = P(A) ⋅ P(B) | If true, events are independent | Quality control, A/B testing |
| Complement Rule | P(Aᶜ) = 1 − P(A) | Always valid | Survival analysis |
| Union (Inclusion-Exclusion) | P(A ∪ B) = P(A) + P(B) − P(A ∩ B) | Always valid | Insurance, compound events |
| Mutual Exclusivity | P(A ∩ B) = 0 | Events cannot co-occur | Dice outcomes, card suits |
| Odds Form (Bayes) | P(A|B) ÷ P(Aᶜ|B) = [P(B|A) ÷ P(B|Aᶜ)] ⋅ [P(A) ÷ P(Aᶜ)] | P(B) > 0, 0 < P(A) < 1 | Bayesian updating, forensics |
| Conditional Independence | P(A ∩ B|C) = P(A|C) ⋅ P(B|C) | Given C | Naive Bayes classifiers |
| Chain Rule | P(A1 ∩ … ∩ An) = ∏ P(Ak|A1 ∩ … ∩ Ak−1), over k = 1…n | All conditional probs defined | Markov chains, NLP |
| Posterior Predictive | P(x|data) = ∫ P(x|θ) P(θ|data) dθ | Continuous parameter space | Bayesian prediction |
| Sensitivity (True Positive Rate) | P(T+|D+) | Disease present | Medical screening |
| Specificity (True Negative Rate) | P(T−|D−) | Disease absent | Medical screening |
| Positive Predictive Value | P(D+|T+) | Test positive | Clinical decision-making |
| Negative Predictive Value | P(D−|T−) | Test negative | Clinical decision-making |
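The four screening quantities in the last rows combine Bayes' theorem with the law of total probability in the denominator. A sketch of the positive predictive value, using illustrative numbers, shows the base rate fallacy mentioned above in action:

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """P(D+|T+): true positives over all positives across the whole population."""
    p_true_pos = sensitivity * prevalence                # P(T+|D+) * P(D+)
    p_false_pos = (1 - specificity) * (1 - prevalence)   # P(T+|D-) * P(D-)
    # Denominator is P(T+) by the law of total probability.
    return p_true_pos / (p_true_pos + p_false_pos)

# Rare disease: 99% sensitive, 95% specific test, 0.1% prevalence
# PPV is roughly 0.019 -- a positive result still means under a 2% chance
# of disease, because false positives from the healthy 99.9% dominate.
```

This is exactly the pitfall flagged in the pro tip above: P(T+) must count false positives from the entire healthy population, not just the symptomatic subgroup.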