Bayesian inference
3 key takeaways
- Bayesian inference updates the probability of a hypothesis based on new data using Bayes’ Theorem.
- It incorporates prior beliefs or information into the analysis, resulting in a posterior distribution.
- This approach allows for making probabilistic statements and decisions under uncertainty.
What is Bayesian inference?
Bayesian inference is a statistical method that applies Bayes’ Theorem to draw conclusions about parameters, models, or hypotheses based on observed data. Unlike traditional (frequentist) methods that rely solely on the data at hand, Bayesian inference incorporates prior beliefs or information about the parameters and updates these beliefs in light of new evidence.
The goal of Bayesian inference is to obtain a posterior distribution, which represents the updated probability of the parameters after considering the new data. This posterior distribution can be used to make probabilistic statements, predictions, and decisions.
Key components of Bayesian inference
- Prior distribution: Represents the initial beliefs or information about the parameters before observing the data. Priors can be based on previous studies, expert opinions, or other relevant sources.
- Likelihood function: Reflects the probability of the observed data given the parameters. It represents the information provided by the data.
- Posterior distribution: Combines the prior distribution and the likelihood function using Bayes’ Theorem. It represents the updated beliefs about the parameters after considering the data.
- Bayes’ Theorem: The formula for updating the prior distribution with the likelihood to obtain the posterior distribution is given by:
$$P(\theta|y) = \frac{P(y|\theta) \cdot P(\theta)}{P(y)}$$
where $\theta$ represents the parameters, $y$ represents the observed data, $P(\theta|y)$ is the posterior distribution, $P(y|\theta)$ is the likelihood, $P(\theta)$ is the prior distribution, and $P(y)$ is the marginal likelihood.
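As a small illustration of the formula, the sketch below applies Bayes’ Theorem to a discrete pair of hypotheses. The fair-versus-biased coin setup, the `bayes_update` helper, and all of its numbers are assumptions made for this example, not part of the definition above.

```python
# A minimal sketch of Bayes' Theorem over a discrete set of hypotheses.
# The two-hypothesis coin setup and its numbers are illustrative assumptions.

def bayes_update(priors, likelihoods):
    """Return posteriors: likelihood * prior, normalised by the
    marginal likelihood P(y) = sum of likelihood * prior."""
    marginal = sum(l * p for l, p in zip(likelihoods, priors))
    return [l * p / marginal for l, p in zip(likelihoods, priors)]

# Hypotheses: the coin is fair (P(heads) = 0.5) or biased (P(heads) = 0.8),
# with equal prior belief in each. We then observe a single head.
priors = [0.5, 0.5]
likelihoods = [0.5, 0.8]   # P(heads | fair), P(heads | biased)

print(bayes_update(priors, likelihoods))  # [0.3846..., 0.6153...]
```

A single observed head shifts belief toward the biased coin (from 50% to roughly 62%), which is exactly the prior-to-posterior update the formula describes.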
How does Bayesian inference work?
- Specify the prior distribution: Define the prior beliefs about the parameters based on existing knowledge or assumptions.
- Define the likelihood function: Construct the likelihood function based on the observed data and the assumed model.
- Compute the posterior distribution: Use Bayes’ Theorem to combine the prior distribution and the likelihood function, resulting in the posterior distribution.
- Make inferences: Draw conclusions, make predictions, and test hypotheses based on the posterior distribution. This can involve calculating credible intervals, posterior means, and probabilities of hypotheses, as in the sketch after this list.
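Here is a minimal sketch of these four steps using the standard Beta-Binomial conjugate pair: with a Beta prior and binomial data, the posterior has the closed form Beta(a + k, b + n - k). The Beta(2, 2) prior and the 7-successes-in-10-trials data are illustrative assumptions.

```python
from scipy.stats import beta

# Step 1: specify the prior. Beta(2, 2) is an illustrative assumption,
# weakly centred on 0.5.
a, b = 2, 2

# Step 2: define the likelihood. The data are binomial: k successes
# in n trials (again, made-up numbers).
k, n = 7, 10

# Step 3: compute the posterior. For a Beta prior with binomial data,
# Bayes' Theorem gives the closed form Beta(a + k, b + n - k).
post = beta(a + k, b + (n - k))

# Step 4: make inferences from the posterior distribution.
print("posterior mean:", post.mean())         # 9/14, about 0.643
lo, hi = post.interval(0.95)                   # 95% credible interval
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
print("P(theta > 0.5):", 1 - post.cdf(0.5))    # probability the rate exceeds 0.5
```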
Examples of Bayesian inference usage
1. Medical diagnosis
- Hypothesis: A patient has a particular disease.
- Evidence: The result of a medical test.
- Application: Bayesian inference can update the probability of the patient having the disease based on the test result, considering the test’s accuracy and the disease’s prevalence.
2. Machine learning
- Hypothesis: A data point belongs to a particular class.
- Evidence: Features or attributes of the data point.
- Application: Bayesian inference is used in algorithms like Naive Bayes classifiers to classify data points based on their features and prior probabilities of different classes, as sketched below.
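As a rough illustration, here is a tiny categorical Naive Bayes classifier in pure Python. The toy weather features and labels are invented for the sketch; real implementations (for example, scikit-learn’s naive_bayes module) handle this far more robustly.

```python
import math
from collections import Counter, defaultdict

# A toy categorical Naive Bayes sketch. The weather data below are
# invented purely for illustration.

def fit(X, y):
    """Count class frequencies (priors) and per-class feature-value
    frequencies (likelihoods)."""
    class_counts = Counter(y)
    value_counts = defaultdict(Counter)   # (class, feature_index) -> value counts
    feature_values = defaultdict(set)     # feature_index -> distinct values seen
    for xi, yi in zip(X, y):
        for j, v in enumerate(xi):
            value_counts[(yi, j)][v] += 1
            feature_values[j].add(v)
    return class_counts, value_counts, feature_values, len(y)

def predict(x, class_counts, value_counts, feature_values, n):
    """Pick the class maximising log prior + summed log likelihoods,
    with Laplace (add-one) smoothing for unseen feature values."""
    best_class, best_score = None, -math.inf
    for c, nc in class_counts.items():
        score = math.log(nc / n)
        for j, v in enumerate(x):
            k = len(feature_values[j])
            score += math.log((value_counts[(c, j)][v] + 1) / (nc + k))
        if score > best_score:
            best_class, best_score = c, score
    return best_class

X = [("sunny", "hot"), ("sunny", "cool"), ("rainy", "cool"), ("rainy", "hot")]
y = ["play", "play", "stay", "stay"]
model = fit(X, y)
print(predict(("sunny", "hot"), *model))   # "play"
```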
3. Financial modeling
- Hypothesis: A stock’s future price will increase.
- Evidence: Historical price data and market indicators.
- Application: Bayesian inference can update the probability of the stock’s price increase based on new market data, helping investors make informed decisions.
Importance of Bayesian inference
- Incorporates prior knowledge: Allows the inclusion of prior information, making it useful in situations with limited or uncertain data.
- Probabilistic inferences: Provides a framework for making probabilistic statements about parameters and hypotheses, offering a more nuanced understanding of uncertainty.
- Flexibility: Can be applied to a wide range of problems in various fields, including medicine, finance, engineering, and machine learning.
Real-world application
Example: A doctor is assessing the likelihood of a patient having a rare disease based on a positive test result.
Prior distribution: The doctor uses prior information about the disease’s prevalence (e.g., 1 in 1,000 people).
Likelihood function: The doctor considers the accuracy of the test (e.g., 99% sensitivity and 95% specificity).
Posterior distribution: Using Bayes’ Theorem, the doctor updates the prior probability with the test result to calculate the posterior probability of the patient having the disease.
Inference: The doctor uses the posterior probability to make a more informed diagnosis and decide on further testing or treatment.
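Plugging the numbers above into Bayes’ Theorem makes the update concrete; the short sketch below reproduces the calculation (the variable names are ours):

```python
# Worked version of the example above: prevalence 1 in 1,000,
# 99% sensitivity, 95% specificity, and a positive test result.
prior = 1 / 1000                 # P(disease)
sensitivity = 0.99               # P(positive | disease)
false_positive_rate = 1 - 0.95   # P(positive | no disease) = 1 - specificity

# Marginal likelihood of a positive result, P(positive):
marginal = sensitivity * prior + false_positive_rate * (1 - prior)

# Bayes' Theorem: P(disease | positive)
posterior = sensitivity * prior / marginal
print(f"P(disease | positive) = {posterior:.4f}")   # about 0.0194
```

Despite the positive result, the posterior probability is only about 1.9%, because the disease is rare: most positive results come from the much larger healthy population. This is why the doctor may order further testing before deciding on treatment.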