Statistical economics/Common Probability Density Functions


Probability distributions can be broken up into categories based on the continuity of the variables involved. This page breaks up these distributions by their use of discrete or continuous random variables.

General Notation

Here are some notations used throughout this page:

  • P(X=x) is the probability that the random variable X takes on the value x.
  • E[X] is the expected value of the random variable X, and is equal to Σ_x x·p(x) for discrete random variables and ∫ x·f(x) dx for continuous variables.
  • var(X) is the variance of the random variable X, and is a measure of the spread of the possible values of the variable. If X takes on a large range of values, then the variance is larger than if X only takes on values relatively close to each other.

Probability Distributions of Discrete Random Variables

Some outcomes only take on discrete values, commonly (but not always) integers. These outcomes can be modeled by discrete random variables that take on only those values, with probabilities determined by the probability distribution that characterizes the random variable. Examples of discrete outcomes are the number of cars passing a landmark each day, the number of customers spending more than $50 each day, or simply the number of heads that show up during a coin-tossing contest.

Notes for Discrete Random Variables:

  • p(x) is the probability that X takes on the value x (i.e., P(X=x)). This is commonly referred to as the Probability Mass Function (PMF) for discrete random variables.

The Bernoulli Distribution

This is possibly the simplest probability distribution, with only two possible outcomes: success or failure. A common example of a Bernoulli random variable is a coin toss. A Bernoulli distribution is described by the parameter p, which is the probability of success. Naturally, 1 − p is the probability of failure (if we were hoping for the coin to land on heads, this would be the probability that it lands on tails instead).

Notation

If a random variable X follows a Bernoulli distribution, it can be denoted by X ~ Bernoulli(p), where p is the probability of success or the favorable outcome. Conventionally, since this is a binary variable, we say that the result is a success when X = 1, and a failure if X = 0.

Distribution Details

  • p(1) = p, p(0) = 1 − p
  • X ∈ {0, 1}
  • E[X] = p
  • var(X) = p(1 − p)
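As a sketch, the mean and variance above can be checked by enumerating the two outcomes directly (the value p = 0.3 is just an illustrative choice):

```python
def bernoulli_pmf(x, p):
    """P(X = x) for X ~ Bernoulli(p); x must be 0 or 1."""
    return p if x == 1 else 1 - p

p = 0.3
# E[X] = sum over the support of x * p(x); should equal p
mean = sum(x * bernoulli_pmf(x, p) for x in (0, 1))
# var(X) = sum of (x - E[X])^2 * p(x); should equal p(1 - p)
var = sum((x - mean) ** 2 * bernoulli_pmf(x, p) for x in (0, 1))
```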


The Binomial Distribution

A Binomial random variable X can take on integer values from 0 to n, where n is the total number of independent, identical trials, and X is the number of successful trials. This means that X can be understood as a sum of n Bernoulli random variables, each with probability p of success, where each Bernoulli random variable records the success or failure (1 or 0) of an individual trial. A common example of a Binomial random variable is the number of heads that result from flipping a coin n times, with the probability p of landing on heads.

Notation

If a random variable X follows a Binomial distribution, it can be denoted by X ~ Binomial(n, p) or X ~ B(n, p), where n is the number of trials and p is the probability of success on each trial.

Distribution Details

  • p(x) = C(n, x) p^x (1 − p)^(n − x), where C(n, x) = n!/(x!(n − x)!) is the binomial coefficient
  • X ∈ {0, 1, 2, ..., n}
  • E[X] = np
  • var(X) = np(1 − p)
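A minimal sketch of this PMF using Python's built-in binomial coefficient; summing x·p(x) over the support {0, ..., n} recovers E[X] = np (the values n = 10, p = 0.5 are illustrative):

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) for X ~ Binomial(n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.5
# A valid PMF sums to 1 over its support
total = sum(binomial_pmf(x, n, p) for x in range(n + 1))
# E[X] computed by enumeration; should equal n * p
mean = sum(x * binomial_pmf(x, n, p) for x in range(n + 1))
```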


The Geometric Distribution

Notation

A Geometric random variable Y counts the number of independent Bernoulli(p) trials up to and including the first success. If a random variable Y follows a Geometric distribution, it can be denoted by Y ~ Geometric(p), where p is the probability of success on each trial.

Distribution Details

  • p(y) = (1 − p)^(y − 1) p
  • Y ∈ {1, 2, 3, ...}
  • E[Y] = 1/p
  • var(Y) = (1 − p)/p^2
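The support here is infinite, but the tail probabilities shrink geometrically, so a truncated sum approximates the moments above to high accuracy. A sketch with the illustrative value p = 0.25, where E[Y] = 4 and var(Y) = 12:

```python
def geometric_pmf(y, p):
    """P(Y = y): first success occurs on trial y, for y in {1, 2, ...}."""
    return (1 - p) ** (y - 1) * p

p = 0.25
ys = range(1, 2000)  # truncation; the neglected tail is negligible here
mean = sum(y * geometric_pmf(y, p) for y in ys)            # ~ 1/p
var = sum((y - mean) ** 2 * geometric_pmf(y, p) for y in ys)  # ~ (1-p)/p^2
```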


The Negative Binomial Distribution

Notation

A Negative Binomial random variable Y counts the number of trials needed to obtain r successes. If a random variable Y follows a Negative Binomial distribution, it can be denoted by Y ~ NB(r, p), where r is the number of desired successes and p is the probability of success on each trial.

Distribution Details

  • p(y) = C(y − 1, r − 1) p^r (1 − p)^(y − r)
  • Y ∈ {r, r + 1, r + 2, ...}
  • E[Y] = r/p
  • var(Y) = r(1 − p)/p^2
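As with the Geometric distribution, a truncated sum over the infinite support checks the mean. A sketch with the illustrative values r = 3, p = 0.5, where E[Y] = r/p = 6:

```python
from math import comb

def neg_binomial_pmf(y, r, p):
    """P(Y = y): the r-th success occurs on trial y, for y >= r."""
    return comb(y - 1, r - 1) * p**r * (1 - p) ** (y - r)

r, p = 3, 0.5
# Truncated expectation; the neglected tail beyond y = 500 is negligible
mean = sum(y * neg_binomial_pmf(y, r, p) for y in range(r, 500))
```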


The Poisson Distribution

Notation

If a random variable Y follows a Poisson distribution, it can be denoted by Y ~ Pois(λ), where λ is the mean of the variable.

Distribution Details

  • p(y) = (λ^y / y!) e^(−λ)
  • Y ∈ {0, 1, 2, ...}
  • E[Y] = λ
  • var(Y) = λ
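The Poisson distribution is unusual in that its mean and variance are both λ, which a truncated sum over the support makes easy to verify (λ = 4 is an illustrative choice; terms beyond y = 60 are negligible at this rate):

```python
from math import exp, factorial

def poisson_pmf(y, lam):
    """P(Y = y) for Y ~ Pois(lam)."""
    return lam**y / factorial(y) * exp(-lam)

lam = 4.0
ys = range(60)  # truncation; the tail beyond 60 is negligible for lam = 4
mean = sum(y * poisson_pmf(y, lam) for y in ys)               # ~ lam
var = sum((y - mean) ** 2 * poisson_pmf(y, lam) for y in ys)  # also ~ lam
```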


The Hypergeometric Distribution

Notation

If a random variable Y follows a Hypergeometric distribution, it can be denoted by Y ~ Hypergeometric(N, n, r), where N is the total size of the population, n is the size of the sample taken (without replacement), and r is the number of favorable objects in the population. Y counts the number of favorable objects that appear in the sample.

Distribution Details

  • p(y) = C(r, y) C(N − r, n − y) / C(N, n), subject to y ≤ r and n − y ≤ N − r
  • Y ∈ {0, 1, ..., n}
  • E[Y] = nr/N
  • var(Y) = n (r/N) ((N − r)/N) ((N − n)/(N − 1))


The Uniform Distribution

Notation

If a random variable X follows a Uniform distribution, it can be denoted by X ~ Unif(N), where N is the total number of outcomes. A uniform distribution has equal probability over all possible outcomes, which is simply p = 1/N.

Distribution Details

  • X{set of all possible outcomes}

If the set of outcomes contains only the consecutive integers 1 through N, then:

  • E[X] = (N + 1)/2
  • var(X) = (N^2 − 1)/12
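These closed forms can be checked by enumerating the outcomes 1 through N with equal weight 1/N, as in this sketch (N = 6, a fair die, is an illustrative choice):

```python
def uniform_mean_var(N):
    """Mean and variance of a uniform variable on {1, ..., N}, by enumeration."""
    p = 1 / N
    mean = sum(x * p for x in range(1, N + 1))             # (N + 1) / 2
    var = sum((x - mean) ** 2 * p for x in range(1, N + 1))  # (N^2 - 1) / 12
    return mean, var
```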


Probability Distributions of Continuous Random Variables

Notes for Continuous Random Variables:

  • f(x) is the probability density of X at the value x. This is commonly referred to as the Probability Density Function (PDF) for continuous random variables. For continuous variables, since any range of real numbers contains infinitely many numbers, the probability of X taking on any single number is 0. For continuous random variables, we instead ask about the probability that X falls within some range of numbers. To get this probability, we define f(x) as a density function that gives us a probability when integrated. In other words, the probability that X is between the real numbers a and b is P(a < X < b) = P(a ≤ X ≤ b) = ∫_a^b f(x) dx.
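The integral above can be approximated numerically for any valid density. As a sketch, the Exponential(1) density f(x) = e^(−x) for x ≥ 0 is used purely for illustration; its integral from 0 to 1 has the known closed form 1 − e^(−1):

```python
from math import exp

def prob_between(f, a, b, steps=100_000):
    """Approximate P(a <= X <= b) = integral of density f over [a, b]
    using the midpoint rule."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# P(0 <= X <= 1) for the Exponential(1) density; exact value is 1 - e^(-1)
p = prob_between(lambda x: exp(-x), 0.0, 1.0)
```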

The Uniform Distribution

The Normal Distribution

The Poisson Scatter Distribution

The Exponential Distribution

The Gamma Distribution

The Chi-square Distribution

The Beta Distribution

The Distribution