Random Processes
The random variables in a stochastic process do not have to be independent and identically distributed; in fact, when they are not, we gain additional modeling power.
Stationarity is often a good simplifying assumption for systems that have been running for a long period of time.
Discrete Time Markov Chains
Definition 49
A sequence of random variables $(X_n)_{n \geq 0}$ is a Markov Chain if each $X_n$ takes values in a discrete set $\mathcal{X}$ (the state space), and

$$\Pr\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0\} = \Pr\{X_{n+1} = j \mid X_n = i\}.$$

In words, a Markov Chain is a sequence of random variables satisfying the Markov Property: the probability of being in a state during the next time step depends only on the current state.
Definition 50
A temporally homogeneous Markov Chain is one where the transition probabilities $p_{ij} = \Pr\{X_{n+1} = j \mid X_n = i\}$ are the same for all $n$ and all states $i, j \in \mathcal{X}$.
Temporally homogeneous Markov Chains don't change their transition probabilities over time. Since the $p_{ij}$ are conditional probabilities, they must satisfy

$$p_{ij} \geq 0 \quad \text{and} \quad \sum_{j \in \mathcal{X}} p_{ij} = 1 \text{ for all } i \in \mathcal{X}.$$
The transition matrix $P$, with entries $P_{ij} = p_{ij}$, encodes the one-step transition probabilities of the Markov Chain.
Theorem 27 (Chapman-Kolmogorov Equation)
The $n$-step transition probabilities (i.e., starting in state $i$ and ending in state $j$ exactly $n$ steps later) of the Markov Chain are given by the entries of the matrix power $P^n$:

$$\Pr\{X_n = j \mid X_0 = i\} = (P^n)_{ij}.$$
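As a quick illustration, here is a minimal sketch with a hypothetical two-state transition matrix; the $n$-step probabilities are just entries of the matrix power $P^n$:

```python
import numpy as np

# Hypothetical two-state chain: state 0 = sunny, state 1 = rainy.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Rows are conditional distributions, so each row sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# The n-step transition probabilities are the entries of P^n.
n = 3
Pn = np.linalg.matrix_power(P, n)
print(Pn[0, 1])  # Pr{X_3 = rainy | X_0 = sunny}
```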
One useful quantity we can compute with a Markov Chain is when the chain first enters a particular state: the hitting time $T_A = \min\{n \geq 0 : X_n \in A\}$ of a set of states $A$.
Computing the expected hitting time is an example of a broader type of Markov Chain analysis called First Step Analysis. In First Step Analysis, we use the Markov property to set up a system of equations that only looks at the first transition in the chain. For the expected hitting time $\beta(i) = \mathbb{E}[T_A \mid X_0 = i]$, these equations look like
For $i \in A$, $\beta(i) = 0$.
For $i \notin A$, $\beta(i) = 1 + \sum_{j \in \mathcal{X}} p_{ij}\, \beta(j)$.
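To make this concrete, here is a small sketch (with a hypothetical 3-state chain where state 2 is the target): the equations for $i \notin A$ form a linear system $(I - P_{\text{sub}})\beta = \mathbf{1}$ over the non-target states.

```python
import numpy as np

# Hypothetical 3-state chain; we want expected hitting times of state 2.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.0, 0.0, 1.0]])
target = 2

# First step analysis: for i != target,
#   beta[i] = 1 + sum_j P[i, j] * beta[j],  with beta[target] = 0.
# Restricting to non-target states gives (I - P_sub) beta_sub = 1.
others = [i for i in range(P.shape[0]) if i != target]
P_sub = P[np.ix_(others, others)]
beta_sub = np.linalg.solve(np.eye(len(others)) - P_sub, np.ones(len(others)))
print(dict(zip(others, beta_sub)))  # expected number of steps to hit state 2
```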
Properties of Markov Chains
We say that states $i$ and $j$ communicate (written $i \leftrightarrow j$) if each is reachable from the other. By convention, we say that $i \leftrightarrow i$. It turns out that $\leftrightarrow$ is an equivalence relation on the state space $\mathcal{X}$. An equivalence relation means that the relation is reflexive ($i \leftrightarrow i$), symmetric ($i \leftrightarrow j$ implies $j \leftrightarrow i$), and transitive ($i \leftrightarrow j$ and $j \leftrightarrow k$ imply $i \leftrightarrow k$).
This means that $\leftrightarrow$ partitions the state space $\mathcal{X}$ into equivalence classes (i.e., classes of communicating states).
Definition 56
An irreducible Markov Chain is reversible if and only if there exists a probability vector $\pi$ that satisfies the Detailed Balance Equations

$$\pi_i p_{ij} = \pi_j p_{ji} \quad \text{for all } i, j \in \mathcal{X}.$$

Markov Chains which satisfy the detailed balance equations are called reversible because if $X_0 \sim \pi$, then the random vectors $(X_0, X_1, \ldots, X_n)$ and $(X_n, X_{n-1}, \ldots, X_0)$ are equal in distribution.
Theorem 28
If the graph of a Markov Chain (transform the state transition diagram by making edges undirected, removing self-loops, and removing multiple edges) is a tree, then the Markov Chain is reversible.
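Birth-death chains are a canonical example: their undirected graph is a path, hence a tree, so by Theorem 28 they are reversible, and the detailed balance equations can be solved directly. A minimal sketch with made-up transition probabilities:

```python
import numpy as np

# Hypothetical birth-death chain on {0, 1, 2, 3}: only moves to neighbors
# (remaining probability mass in each row is a self-loop).
p_up = [0.6, 0.5, 0.4]    # p_{i, i+1}
p_down = [0.3, 0.3, 0.5]  # p_{i+1, i}

# Detailed balance pi_i * p_{i,i+1} = pi_{i+1} * p_{i+1,i} gives a recursion.
pi = [1.0]
for up, down in zip(p_up, p_down):
    pi.append(pi[-1] * up / down)
pi = np.array(pi) / sum(pi)  # normalize into a probability vector
print(pi)
```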
Class Properties
A class property is a property where if one element of a class has the property, all elements of the class have the property. Markov Chains have several of these properties which allow us to classify states.
Recurrence means that we will visit a state infinitely often in the future if we start in that state, while transience means we will only visit the state finitely many times. Recurrence and transience can be easily identified from the transition diagram.
Any finite communicating class which has no edges leaving the class is recurrent
If a state has an edge leading outside its communicating class, then it is transient
If a state is recurrent, then any state it can reach is recurrent
We can further break recurrence down if we modify the definition of hitting time to be $T_i = \min\{n \geq 1 : X_n = i\}$ (the first time the chain returns to state $i$).
A recurrent state $i$ is positive recurrent if $\mathbb{E}[T_i \mid X_0 = i] < \infty$ and null recurrent otherwise. Positive recurrence means we visit a recurrent state so frequently that we spend a positive fraction of time in that state. Null recurrence means we visit a recurrent state so infrequently (but still infinitely many times) that we spend virtually no time in that state.
The period of a state $i$ is $d(i) = \gcd\{n \geq 1 : (P^n)_{ii} > 0\}$. If we start in state $i$, then revisits to $i$ can only occur at integer multiples of the period.
All of the above properties are class properties.
Long-Term Behavior of Markov Chains
Since the transition probabilities $p_{ij}$ completely characterize the Markov Chain, we can also describe what happens to the chain in the limit.
Definition 63
A distribution $\pi$ over the states is a stationary distribution if $\pi = \pi P$.
It is called a stationary distribution because the distribution over states is then invariant with time. A Markov Chain is at stationarity if and only if it has been started from the stationary distribution. The relationship $\pi = \pi P$ can be expanded for the $j$th element to show that any stationary distribution must satisfy the Global Balance Equations:

$$\pi_j = \sum_{i \in \mathcal{X}} \pi_i p_{ij} \quad \Longleftrightarrow \quad \pi_j \sum_{i \neq j} p_{ji} = \sum_{i \neq j} \pi_i p_{ij}.$$
Note that if a distribution $\pi$ satisfies the detailed balance equations from Definition 56, then $\pi$ also satisfies Definition 63: summing $\pi_i p_{ij} = \pi_j p_{ji}$ over all $i$ gives $\sum_i \pi_i p_{ij} = \pi_j \sum_i p_{ji} = \pi_j$.
Both the global balance equations and detailed balance equations can be conceptualized as statements of flow. If each $\pi_i$ indicates how much mass is placed on state $i$, then the global balance equations tell us the mass leaving the node (going to each neighbor $j$ in proportion to $p_{ij}$) is equal to the mass entering the node (which must sum to $\pi_i$ since it is a stationary distribution). Rather than looking at the flow of the whole chain, the detailed balance equations look at the flow between two states: the mass $i$ gives to $j$ is equal to the mass $j$ gives to $i$.
Theorem 30
If an irreducible Markov Chain is at stationarity, then the flow-in equals flow-out relationship holds across any cut of the Markov Chain, where a cut is a partition of the states into two disjoint subsets.
Theorem 30 is one useful result that can help solve for stationary distributions.
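As a numerical sketch (reusing the hypothetical two-state matrix from above), we can solve $\pi P = \pi$ together with the normalization constraint $\sum_j \pi_j = 1$ as an overdetermined linear system:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Solve pi P = pi, i.e. (P^T - I) pi^T = 0, together with sum(pi) = 1.
# Append the normalization constraint as an extra equation (least squares).
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # [5/6, 1/6] for this matrix
```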
Theorem 31 (Big Theorem for Markov Chains)
Let $(X_n)_{n \geq 0}$ be an irreducible Markov Chain. Then exactly one of the following is true.
Either all states are transient or all states are null recurrent; in this case, no stationary distribution exists, and $\lim_{n \to \infty} (P^n)_{ij} = 0$ for all $i, j$.
All states are positive recurrent; in this case, the stationary distribution $\pi$ exists, is unique, and satisfies

$$\pi_j = \lim_{n \to \infty} \frac{1}{n} \sum_{m=0}^{n-1} \mathbb{1}\{X_m = j\} = \frac{1}{\mathbb{E}[T_j \mid X_0 = j]}.$$
If the Markov Chain is aperiodic, then additionally $\lim_{n \to \infty} \Pr\{X_n = j\} = \pi_j$ for all $j$, regardless of the initial distribution.
One consequence of Theorem 31 is that the stationary distribution of a reversible Markov Chain is unique. This makes solving the detailed balance equations a good technique for finding the stationary distribution. If a stationary distribution exists, then we can also say when the chain will converge to it.
Theorem 32 (Convergence Theorem)
If a chain is irreducible, positive recurrent, and aperiodic with stationary distribution $\pi$, then the distribution at time $n$ converges to $\pi$: $\lim_{n \to \infty} \Pr\{X_n = j\} = \pi_j$ for all $j$, regardless of the initial distribution.
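A quick numerical illustration of convergence (same hypothetical matrix as above): every row of $P^n$ approaches the stationary distribution.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
# For this chain pi = [5/6, 1/6]; both rows of P^50 are close to pi.
print(np.linalg.matrix_power(P, 50))
```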
Continuous Time Markov Chains
Definition 64
A process $(X_t)_{t \geq 0}$ taking values in a countable state space $\mathcal{X}$ is a temporally homogeneous Continuous Time Markov Chain (CTMC) if it satisfies the Markov Property

$$\Pr\{X_{t+s} = j \mid X_s = i,\ X_u,\ 0 \leq u < s\} = \Pr\{X_t = j \mid X_0 = i\}.$$
To characterize how a CTMC functions, we need to define some additional quantities.
$q_i$ is the transition rate of state $i$
$p_{ij}$ is the transition probability between states $i$ and $j$
Every time a CTMC enters a state $i$, it will hold in that state for an $\text{Exponential}(q_i)$ amount of time before transitioning to the next state $j$ with probability $p_{ij}$.
Definition 65
The jump chain is a DTMC which describes the transition probabilities $p_{ij}$ between states in the CTMC.
Note that the jump chain cannot have self-loops ($p_{ii} = 0$) because otherwise the amount of time spent in state $i$ would not be exponentially distributed. An alternative interpretation of a CTMC is
Define jump rates $q_{ij} = q_i p_{ij}$.
On entering state $i$, jump to state $\arg\min_{j \neq i} T_j$ where $T_j \sim \text{Exponential}(q_{ij})$ for all $j \neq i$ and the $T_j$ are independent of each other.
Essentially, every time we enter a state, we set an alarm clock for all other states, and then jump to the state whose alarm clock goes off first. This equivalent interpretation allows us to summarize a CTMC using the rate matrix $Q$, where $Q_{ij} = q_{ij}$ for $i \neq j$ and $Q_{ii} = -q_i$.
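The alarm-clock interpretation translates directly into a simulation. Below is a minimal sketch, assuming a made-up rate matrix $Q$ with no absorbing states:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rate matrix: Q[i, j] = q_ij for i != j, Q[i, i] = -q_i.
Q = np.array([[-2.0, 1.5, 0.5],
              [1.0, -1.0, 0.0],
              [0.5, 0.5, -1.0]])

def simulate_ctmc(Q, state, t_end):
    """Race Exponential(q_ij) alarm clocks; jump to whichever rings first."""
    t, path = 0.0, [(0.0, state)]
    while True:
        targets = np.flatnonzero(Q[state] > 0)             # reachable states
        clocks = rng.exponential(1.0 / Q[state, targets])  # one clock each
        winner = int(np.argmin(clocks))
        t += clocks[winner]
        if t > t_end:
            return path
        state = int(targets[winner])
        path.append((t, state))

print(simulate_ctmc(Q, state=0, t_end=5.0))
```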
Following from the first interpretation, all off-diagonal entries of $Q$ are non-negative, and the rows of $Q$ must sum to 0. One useful quantity which we can define is how long it takes to come back to a particular state.
Since a CTMC is essentially a DTMC where we hold in each state for an exponential amount of time, we can apply First Step Analysis in essentially the same way that we do for DTMCs. In fact, hitting probabilities will look exactly the same since we can just use the jump chain to compute the transition probabilities. The only differences arise when we consider time-dependent quantities. For hitting times $\beta(i) = \mathbb{E}[T_A \mid X_0 = i]$ (how long it takes to enter a set of states $A$ from state $i$),
If $i \in A$, $\beta(i) = 0$.
If $i \notin A$, $\beta(i) = \frac{1}{q_i} + \sum_{j \in \mathcal{X}} p_{ij}\, \beta(j)$.
Class Properties
Just like in DTMCs, we can classify states in the CTMC.
Definition 68
State $i$ is transient if, given $X_0 = i$, the process enters $i$ finitely many times with probability 1. Otherwise, it is recurrent.
Definition 69
A state $i$ is positive recurrent if its expected time to first re-entry is finite, and null recurrent otherwise.
Long Term Behavior of CTMCs
CTMCs also have stationary distributions: a distribution $\pi$ is stationary for a CTMC if it satisfies $\pi Q = 0$.
The stationary distribution of the CTMC is also related to the jump chain, but we need to normalize for the hold times.
Theorem 33
If $\pi$ is a stationary distribution for a CTMC, then the stationary distribution $\tilde{\pi}$ of the jump chain is given by

$$\tilde{\pi}_i = \frac{\pi_i q_i}{\sum_{j} \pi_j q_j}.$$
To describe how a CTMC behaves over time, first define the first re-entry time $T_i = \min\{t \geq 0 : X_t = i \text{ and } X_s \neq i \text{ for some } s < t\}$ and its mean $M_i = \mathbb{E}[T_i \mid X_0 = i]$.
Theorem 34 (Big Theorem for CTMCs)
For an irreducible CTMC, exactly one of the following is true.
All states are transient or null recurrent, no stationary distribution exists, and $\lim_{t \to \infty} \Pr\{X_t = j\} = 0$ for all $j$.
All states are positive recurrent, a unique stationary distribution exists, and the stationary distribution satisfies

$$\pi_j = \lim_{t \to \infty} \frac{1}{t} \int_0^t \mathbb{1}\{X_s = j\}\, ds = \frac{1}{q_j M_j}.$$
Uniformization
Let $P_t$ denote the matrix of transition probabilities at time $t$, i.e., $(P_t)_{ij} = \Pr\{X_t = j \mid X_0 = i\}$. By the Markov property, we know that $P_{t+s} = P_t P_s$. For small $h$, $P_h \approx I + hQ$. This approximation allows us to compute the derivative of $P_t$:

$$\frac{d}{dt} P_t = P_t Q.$$
Theorem 35
The transition probabilities of a CTMC are given by $P_t = e^{tQ} = \sum_{n=0}^{\infty} \frac{(tQ)^n}{n!}$.
Theorem 35 tells us that $Q$ determines the transition probabilities $P_t$ for all $t \geq 0$. This is why $Q$ is sometimes called the generator matrix: it generates the transition probabilities. However, matrix exponentials are difficult to compute. Instead, we can turn to Uniformization, which allows us to estimate $P_t$ by simulating the CTMC through a DTMC.
Definition 71
Given a CTMC with rates $q_i$ such that $q_i \leq M$ for all $i$, fix a $\gamma \geq M$; the uniformized chain is a DTMC with transition probabilities $\tilde{p}_{ij} = \frac{q_{ij}}{\gamma}$ for $i \neq j$ and $\tilde{p}_{ii} = 1 - \frac{q_i}{\gamma}$.
It turns out that

$$P_t = e^{tQ} \approx \left(I + \frac{Q}{\gamma}\right)^{\gamma t}$$

when $\frac{1}{\gamma}$ is small. This means that we can approximate the transition probabilities of the CTMC using the uniformized chain. Observe that uniformization also helps in finding the stationary distribution, since the stationary distribution of the uniformized chain is identical to that of the original chain.
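As a quick numerical check, here is a sketch reusing the hypothetical $Q$ from earlier, with scipy's matrix exponential as the exact answer:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0, 1.5, 0.5],
              [1.0, -1.0, 0.0],
              [0.5, 0.5, -1.0]])

t = 1.0
gamma = 100.0                # any gamma >= max_i q_i works; here max q_i = 2
P_u = np.eye(3) + Q / gamma  # uniformized DTMC transition matrix

exact = expm(t * Q)          # P_t = e^{tQ}
approx = np.linalg.matrix_power(P_u, int(gamma * t))
print(np.abs(exact - approx).max())  # shrinks as 1/gamma gets smaller
```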
Poisson Processes
Definition 72
A counting process is a non-decreasing, continuous-time, integer-valued random process which has right-continuous sample paths.
There are two important metrics which describe counting processes: the arrival times $T_i$ (the time of the $i$th arrival) and the inter-arrival times $S_i = T_i - T_{i-1}$.
Definition 75
A rate $\lambda$ Poisson Process is a counting process with independent and identically distributed inter-arrival times $S_i \sim \text{Exponential}(\lambda)$.
The name Poisson comes from the distribution of each variable in the process: $N_t \sim \text{Poisson}(\lambda t)$.
A Poisson Process is a special case of a CTMC where the transition rates are $q_i = \lambda$ and the transition probabilities $p_{ij}$ are 1 if $j = i + 1$ and 0 otherwise. Since the inter-arrival times are memoryless and i.i.d., Poisson Processes have many useful properties.
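A minimal simulation sketch: sum i.i.d. $\text{Exponential}(\lambda)$ inter-arrival times and count how many land in $[0, t]$; the mean count should be close to $\lambda t$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t = 2.0, 10.0

def count_arrivals(lam, t):
    """N_t: number of Exponential(lam) inter-arrival sums landing in [0, t]."""
    arrival_times = np.cumsum(rng.exponential(1.0 / lam, size=int(10 * lam * t)))
    return np.searchsorted(arrival_times, t)

counts = [count_arrivals(lam, t) for _ in range(10_000)]
print(np.mean(counts))  # close to lam * t = 20
```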
Theorem 37
If $(N_t)_{t \geq 0}$ is a rate $\lambda$ Poisson Process, then $(N_{t+s} - N_s)_{t \geq 0}$ is also a rate $\lambda$ Poisson Process for all $s \geq 0$ and is independent of the original process up to time $s$.
Poisson Processes are the only counting processes with these particular properties.
It turns out that Poisson Processes can be connected with the Order Statistics of Uniform Random Variables.
Theorem 39 (Conditional Distribution of Arrivals)
Conditioned on $N_t = n$, the random vector of arrival times $(T_1, T_2, \ldots, T_n)$ has the same distribution as the order statistics of $n$ i.i.d. random variables $U_i \sim \text{Uniform}(0, t)$.
What Theorem 39 says is that given $n$ arrivals up to time $t$ occur, the distribution of the arrival times is equivalent to taking $n$ i.i.d. $\text{Uniform}(0, t)$ random variables and sorting them.
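Theorem 39 gives a second, often more convenient way to simulate a Poisson process on a fixed window: first draw the count $N_t \sim \text{Poisson}(\lambda t)$, then scatter that many arrivals uniformly. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t = 2.0, 10.0

# Theorem 39 in reverse: draw N_t ~ Poisson(lam * t), then the arrival
# times are n sorted i.i.d. Uniform(0, t) random variables.
n = rng.poisson(lam * t)
arrival_times = np.sort(rng.uniform(0.0, t, size=n))
print(n, arrival_times[:5])
```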
Two other useful properties of Poisson Processes involve combining and separating them.
Theorem 40 (Poisson Merging)
If $N_{1,t}$ and $N_{2,t}$ are independent Poisson Processes with rates $\lambda_1$ and $\lambda_2$, then $N_{1,t} + N_{2,t}$ is a Poisson Process with rate $\lambda_1 + \lambda_2$.
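Merging is easy to sanity-check by simulation (a sketch with assumed rates): superimposing the arrival times of two independent processes looks like a single process with the summed rate.

```python
import numpy as np

rng = np.random.default_rng(0)
t, lam1, lam2 = 100.0, 1.5, 0.5

def arrivals(lam, t):
    """Arrival times in [0, t] from i.i.d. Exponential(lam) inter-arrivals."""
    times = np.cumsum(rng.exponential(1.0 / lam, size=int(10 * lam * t)))
    return times[times <= t]

merged = np.sort(np.concatenate([arrivals(lam1, t), arrivals(lam2, t)]))
# The merged process should look Poisson with rate lam1 + lam2 = 2.0.
print(len(merged) / t)  # empirical rate, close to 2.0
```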