Markov Chains in Julia
Introduction
I am currently working on a legal research series in which I apply statistical analysis and machine learning models to legal datasets. My intention is to model the behavior of courts, predict the outcomes of cases, and build a pipeline capable of identifying relevant case law by issue area.
That dataset is nearly complete, but I have not decided which models to apply to it. This is where Julia comes into play.
I plan to perform the statistical analysis, and possibly the machine learning workload, with Julia.
In this post, I share an algorithm for simulating Markov chains.
Markov Chains
I don't understand them completely, and I am grateful to the text for helping me better grasp how they operate.
As far as my understanding goes, they model the transition between states, possibly over infinite time, according to a probability distribution. I recommend reading this source for an authoritative treatment.
My Translation of Markov Chains
Let's start with a finite set of elements that we call S. In CS terms, think of this as an array in the C language: it must be defined before any operation is performed on it. I believe Markov chains can be defined over unbounded sets as well, but I am not at that level yet.
This set is called the state space. Each value within it is considered an individual state.
Markov chains are sequences of states that satisfy the Markov property: the probability that the model occupies a given state at the next point in time depends only on its current state, not on the states that came before.
Thus, the probability of going from x to y in one step of unit time can be computed. If we collect these one-step probabilities into a stochastic matrix, we can then determine the overall probability of arriving at any state within the system.
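As a minimal sketch of that idea (the two-state matrix below is illustrative, not taken from any dataset), raising a stochastic matrix to the n-th power gives the n-step transition probabilities:

using LinearAlgebra

P = [0.9 0.1;   # from state 1: stay with probability 0.9, move to state 2 with 0.1
     0.3 0.7]   # from state 2: move to state 1 with probability 0.3, stay with 0.7

P2 = P^2        # entry [x, y] of P^2 is the probability of going from x to y in two steps
P2[1, 2]        # probability of moving from state 1 to state 2 in two unit-time steps: 0.16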
Review the formal definition.
Application to My Work
I will be using this model to determine the probability of a justice transitioning between emotional states. The central thesis of my understanding of judicial behavior is that justices develop attitudinal states towards issue areas, legal provisions, states, religions, and objects in general. I will first identify those attitudinal states, then calculate the Bayesian probability of state transitions across time. Finally, those distributions will be fed into Markov chain models.
Markov Simulation the Hard Way
The work below is not my own; I attribute it to julia.quantecon.org/.
The algorithm takes a stochastic probability matrix (square in this case) and creates a discrete random distribution from each row of probabilities.
The simulation then repeatedly draws a value at random from the distribution belonging to the current state, and each drawn value is stored in the output. As I understand it currently, the output is a sample path of the Markov chain.
The comments in the code are mine.
using LinearAlgebra, Statistics
using Distributions, Plots, Printf, QuantEcon, Random
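A minimal sketch of such a simulation follows, assuming one Categorical distribution per row of the matrix; the function name mc_sample_path, the example matrix values, and the sample size are illustrative choices of mine rather than verbatim lecture code.

function mc_sample_path(P; init = 1, sample_size = 1000)
    N = size(P, 1)                               # number of states; P is assumed square
    dists = [Categorical(P[i, :]) for i in 1:N]  # one discrete distribution per row
    X = fill(0, sample_size)                     # preallocate the output path
    X[1] = init                                  # start the chain in the initial state
    for t in 2:sample_size
        X[t] = rand(dists[X[t - 1]])             # draw the next state from the current state's row distribution
    end
    return X
end

P = [0.4 0.6; 0.2 0.8]                           # illustrative stochastic matrix
X = mc_sample_path(P; sample_size = 100_000)     # one long sample path of the chain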
Markov Simulation the Easier Way
Given a stochastic probability matrix, the MarkovChain constructor will produce a chain object.
The simulate method will then simulate the chain across n steps.
We can then take the mean of an indicator on the output to determine the average fraction of time spent in a state; in this case, the fraction of time spent in state 1, which should correspond to unemployment.
using LinearAlgebra, Statistics
using Distributions, Plots, Printf, QuantEcon, Random
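Building on the imports above, here is a minimal sketch of the easier way; the matrix values are illustrative, with state 1 read as unemployed and state 2 as employed.

P = [0.4 0.6;                # state 1 (unemployed): stay with probability 0.4, find work with 0.6
     0.2 0.8]                # state 2 (employed): lose work with probability 0.2, stay with 0.8

mc = MarkovChain(P)          # build the chain object from the stochastic matrix
X = simulate(mc, 100_000)    # simulate a sample path of 100,000 steps
mean(X .== 1)                # fraction of time spent in state 1, i.e. unemployment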