Quantitative Modeling

Introduction to Quantitative Modeling: Probabilistic Models

Peter Foy


In this article we'll discuss a subset of quantitative modeling: probabilistic models.

In the previous article on linear models, we discussed deterministic models that have no uncertainty in either the inputs or outputs of the model. Probabilistic models, on the other hand, are commonly used in practice as there's often uncertainty involved in business and finance.

This article is based on notes from this course on the Fundamentals of Quantitative Modeling and is organized as follows:

  • What are probabilistic models?
  • Random variables and probability distributions
  • Examples of probabilistic models
  • Probability distributions: mean, variance, and standard deviation
  • Special random distributions: Bernoulli, binomial, and normal
  • The Empirical Rule


What are Probabilistic Models?

Unlike deterministic models, probabilistic models incorporate random variables and probability distributions. Random variables represent the potential outcomes of an uncertain event.

One way to think about a random variable is as an event that you know is going to happen but hasn't yet. For example, if you're about to roll a die, you know a number from 1 to 6 will come up, but you don't know which.

Probability distributions assign a probability to these various potential outcomes.

Probabilistic models are commonly used in practice as realistic decision making must recognize and incorporate uncertainty into the equation.

By explicitly incorporating uncertainty into the model we can then calculate the uncertainty associated with each potential output. For example, we can calculate a range of potential outcomes for a particular forecast.

Incorporating uncertainty in the modeling process is synonymous with understanding and quantifying risk, which ideally leads to better decision making.

Examples of Probabilistic Models

A few examples of probabilistic models that are commonly used in practice include:

  • Regression models
  • Probability trees
  • Monte Carlo simulation
  • Markov models

Below we'll discuss each of these probabilistic models in more detail.

Regression Models

A regression model is not deterministic; instead, it uses data as an input to reverse engineer a realistic description of a process.

For example, if we have the price and weights of diamonds, we could use a regression model to find the best fitting line to the data.

If the prices follow a roughly linear trend, we can use a regression model to obtain a prediction interval, or a band around the range of likely prices. Contrast this with a deterministic linear model, which would simply output a straight line given the prices and weights.

To summarize, regression models use data to estimate the relationship between the mean value of the outcome ($Y$) and a predictor variable ($X$).

The intrinsic variation in the raw data is incorporated into the regression model's forecasts. The less noise there is in the underlying data, the more precise the regression model's forecasts will be.
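As a sketch of this idea, the following fits a least-squares line to synthetic diamond data (the prices, weights, and noise level are made up for illustration) and forms a rough 95% prediction interval from the residual variation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: diamond weights (carats) and prices with random noise
weights = rng.uniform(0.2, 1.0, 200)
prices = 3000 * weights + 200 + rng.normal(0, 150, 200)

# Fit a least-squares regression line: price ≈ slope * weight + intercept
slope, intercept = np.polyfit(weights, prices, 1)

# The residual standard deviation quantifies the noise around the line
residuals = prices - (slope * weights + intercept)
sigma = residuals.std(ddof=2)

# An approximate 95% prediction interval for a hypothetical 0.5-carat diamond
x = 0.5
point = slope * x + intercept
lower, upper = point - 2 * sigma, point + 2 * sigma
print(f"predicted price: {point:.0f}, interval: ({lower:.0f}, {upper:.0f})")
```

Note how the width of the interval is driven by the noise in the data: the less scatter around the fitted line, the narrower the prediction band.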

Probability Trees

Probability trees are commonly used when a process moves through several stages, as they allow you to propagate probabilities through a sequence of events.

Probability trees have two main parts: the branches and the ends. Graphically, the probability of each branch is written on the branch and the outcome is written at the end of the branch.
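To make this concrete, here is a minimal two-stage tree (the scenario and probabilities are invented for illustration): multiplying the probabilities along each branch gives the probability of each end outcome, and the leaf probabilities must sum to 1.

```python
# Two-stage probability tree: a product launch succeeds (p = 0.6) or fails;
# given success, demand is high (p = 0.7) or low (p = 0.3).
# Each leaf probability is the product of the probabilities along its branch.
branches = {
    ("success", "high_demand"): 0.6 * 0.7,   # 0.42
    ("success", "low_demand"):  0.6 * 0.3,   # 0.18
    ("failure",):               0.4,
}

# Sanity check: the probabilities at the ends of the tree sum to 1
total = sum(branches.values())
print(round(branches[("success", "high_demand")], 4))  # 0.42
print(round(total, 4))  # 1.0
```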

Monte Carlo Simulation

Monte Carlo simulations are useful for modeling complex scenarios. They work by modeling the probability of different outcomes in a process that can't easily be predicted due to the presence of random variables.

As Investopedia describes:

It is a technique used to understand the impact of risk and uncertainty in prediction and forecasting models.

Monte Carlo simulations are similar to a scenario analysis, although you're looking at thousands or millions of scenarios generated from inputs, where the inputs are drawn from probability distributions.
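As a minimal sketch, the simulation below generates many one-year price paths by drawing daily returns from a Normal distribution (the mean and volatility figures are illustrative assumptions, not real market parameters) and summarizes the resulting distribution with percentiles:

```python
import random

random.seed(0)

# Monte Carlo simulation of a one-year price path, assuming daily returns
# are drawn from a Normal distribution (illustrative parameters)
n_scenarios = 2_000
outcomes = []
for _ in range(n_scenarios):
    value = 100.0  # starting price
    for _ in range(252):  # trading days in a year
        daily_return = random.gauss(0.0005, 0.01)  # assumed mean and std
        value *= 1 + daily_return
    outcomes.append(value)

# Summarize the simulated distribution with a range of potential outcomes
outcomes.sort()
p5 = outcomes[int(0.05 * n_scenarios)]
p95 = outcomes[int(0.95 * n_scenarios)]
print(f"5th percentile: {p5:.1f}, 95th percentile: {p95:.1f}")
```

Each scenario is one draw from the input distributions; the percentile band is the kind of forecast range a deterministic model cannot provide.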

Markov Models

A Markov model is a dynamic probabilistic model used to describe transitions between states in discrete time. In other words, a Markov model models the probability of transitioning between states.

There are four main types of Markov models, depending on whether each sequential state is fully observable and whether the system can be controlled based on observations:

  • Markov chains: state is fully observable, system is autonomous
  • Markov decision processes: state is fully observable, system is controlled
  • Hidden Markov models: state is partially observable, system is autonomous
  • Partially observable Markov decision process: state is partially observable, system is controlled
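The simplest case, a Markov chain, can be sketched in a few lines. Here a two-state chain (the "bull"/"bear" market labels and transition probabilities are invented for illustration) propagates a probability distribution forward one step at a time:

```python
# A two-state Markov chain with illustrative transition probabilities.
# Each row (outgoing probabilities from a state) sums to 1.
P = {
    "bull": {"bull": 0.9, "bear": 0.1},
    "bear": {"bull": 0.3, "bear": 0.7},
}

def step(dist):
    """Propagate a probability distribution one step through the chain."""
    out = {s: 0.0 for s in P}
    for state, p in dist.items():
        for nxt, q in P[state].items():
            out[nxt] += p * q
    return out

dist = {"bull": 1.0, "bear": 0.0}  # start in the bull state with certainty
for _ in range(50):
    dist = step(dist)
# After many steps the distribution converges to the chain's
# stationary distribution, here {bull: 0.75, bear: 0.25}
print(dist)
```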

Building Blocks of Probability Models

As mentioned, the building blocks of probability models are random variables and probability distributions.

Random variables, which can be either discrete or continuous, represent the potential outcomes of an uncertain event.

An example of a discrete random variable is the roll of a die. The probability of each outcome lies between 0 and 1 inclusive, and the probabilities always add up to 1.

An example of a continuous random variable is the percent change in a stock's price from one day to the next, or the daily return. Technically, a daily return can take on any value between -100% and infinity.

The probabilities of a continuous random variable are computed from areas under its probability density function.

Instead of just showing the shape of the probability density function, we also want to be able to summarize it. A few common summaries of these probability distributions include:

  • Mean ($\mu$): measures the centrality of the distribution
  • Variance ($\sigma^2$): one of the main measures of how spread out the distribution is
  • Standard deviation ($\sigma$): another common measure of spread

We'll now look at several special probability distributions, starting with the Bernoulli distribution.

The Bernoulli Distribution

The Bernoulli distribution is a foundational probability distribution that models a random variable that can take on only one of two values:

  • $P(X = 1) = p$
  • $P(X = 0) = 1 - p$

A Bernoulli distribution is often used when an experiment can result in only two outcomes, for example heads and tails, where heads = 1 and tails = 0.

A few summaries of the Bernoulli distribution include:

  • $\mu = E(X) = 1 \times p + 0 \times (1 - p) = p$
  • $\sigma^2 = E(X - \mu)^2 = (1 - p)^2 p + (0 - p)^2 (1 - p) = p(1 - p)$
  • $\sigma = \sqrt{p(1 - p)}$

For example, if $p = 0.5$, then $\mu = 0.5$, $\sigma^2 = 0.25$, and $\sigma = 0.5$.
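These formulas translate directly into code. A small sketch that computes the Bernoulli summaries for any $p$:

```python
import math

def bernoulli_summary(p):
    """Mean, variance, and standard deviation of a Bernoulli(p) variable."""
    mu = 1 * p + 0 * (1 - p)                          # E(X) = p
    var = (1 - p) ** 2 * p + (0 - p) ** 2 * (1 - p)   # p(1 - p)
    return mu, var, math.sqrt(var)

# A fair coin (p = 0.5) gives mu = 0.5, variance = 0.25, std = 0.5
print(bernoulli_summary(0.5))  # (0.5, 0.25, 0.5)
```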

The Binomial Distribution

The Binomial distribution occurs when you perform a set of Bernoulli trials.

In other words, a binomial random variable is the number of successes in $n$ independent Bernoulli trials. Independence of two events means that $P(A \text{ and } B) = P(A) \times P(B)$.

Independence means that knowing that $A$ has occurred provides no information about the occurrence of $B$.

Independence is a common assumption to simplify probabilistic models and make their construction and calculation easier.

In terms of summarizing binomial probability distributions:

$P(X = x) = \binom{n}{x} p^x (1 - p)^{n - x}$, where $\binom{n}{x}$ is the binomial coefficient $\frac{n!}{x!(n - x)!}$

$\mu = E(X) = np$, $\sigma^2 = E(X - \mu)^2 = np(1 - p)$
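The formulas above can be checked numerically. This sketch computes the binomial pmf directly and verifies that the probabilities sum to 1 and that the mean and variance match $np$ and $np(1-p)$:

```python
import math

def binomial_pmf(x, n, p):
    """P(X = x) for a Binomial(n, p) random variable."""
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

n, p = 10, 0.5  # e.g. the number of heads in 10 fair coin flips

# The pmf over all possible outcomes sums to 1
total = sum(binomial_pmf(x, n, p) for x in range(n + 1))

# Mean and variance computed from the pmf match np and np(1 - p)
mu = sum(x * binomial_pmf(x, n, p) for x in range(n + 1))
var = sum((x - mu) ** 2 * binomial_pmf(x, n, p) for x in range(n + 1))
print(total, mu, var)  # 1.0 5.0 2.5
```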

The Normal Distribution

The Normal distribution, commonly referred to as the Bell Curve, is one of the most important modeling distributions as many disparate processes can be approximated with it.

There are several mathematical theorems, such as the central limit theorem, that tell us that Normal distributions should be expected in many situations.

A Normal distribution is characterized by its mean $\mu$ and standard deviation $\sigma$, and it is symmetric about its mean.

The Normal distribution is also often used as a distributional assumption in Monte Carlo simulations.

In contrast to the Bernoulli and Binomial distributions, the Normal distribution is continuous, meaning it can theoretically take on any value.

A Normal distribution with $\mu = 0$ and $\sigma = 1$ is referred to as the standard normal distribution.
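As a small sketch, the Normal density can be written out directly from $\mu$ and $\sigma$, and its symmetry about the mean checked numerically:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of a Normal(mu, sigma) distribution."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# The standard normal (mu = 0, sigma = 1) peaks at the mean...
print(round(normal_pdf(0.0), 4))  # 0.3989
# ...and is symmetric about it
print(normal_pdf(-1.0) == normal_pdf(1.0))  # True
```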

The Empirical Rule

The Empirical Rule is a useful rule of thumb that allows you to approximate probabilities of events when the underlying data is Normally distributed.

The Empirical Rule states that:

  • There is an approximately 68% chance that an observation falls within one standard deviation of the mean.
  • There is an approximately 95% chance that an observation falls within two standard deviations of the mean.
  • There is an approximately 99.7% chance that an observation falls within three standard deviations of the mean.
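These three figures can be verified from the standard normal cumulative distribution function, which Python's standard library exposes via the error function:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Probability of falling within k standard deviations of the mean
probs = {k: normal_cdf(k) - normal_cdf(-k) for k in (1, 2, 3)}
for k, p in probs.items():
    print(f"within {k} sd: {p:.4f}")
# within 1 sd: 0.6827
# within 2 sd: 0.9545
# within 3 sd: 0.9973
```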

Summary: Probabilistic Models

In this article we introduced probabilistic models, whose defining feature is that they explicitly incorporate uncertainty. This uncertainty can be propagated to the model's output, providing a range of potential outcomes for a forecast.

Probabilistic models are essential to finance as they allow you to capture risk. We looked at four common probabilistic models used in practice, including:

  • Regression models
  • Probability trees
  • Monte Carlo simulations
  • Markov models

We then looked at the building blocks of probabilistic models, including random variables and probability distributions. A few key probability distribution building blocks are the mean, variance, and standard deviation.

We then discussed several special random variables including Bernoulli, Binomial, and Normal distributions.

Finally, we discussed the Empirical Rule, which allows you to approximate the probabilities of events given the underlying data is Normally distributed.

