As a refresher on Markov Models, I’ve been watching Professor Scott E. Page’s excellent videos on YouTube. He does all the computation by hand – and now I’ve written some code to perform it faster.
In video 2, we are given a transition matrix for a hypothetical student in a classroom, switching between alertness and boredom:
|            | Alert: t | Bored: t |
|------------|----------|----------|
| Alert: t+1 | 0.8      | 0.25     |
| Bored: t+1 | 0.2      | 0.75     |
This can be represented in Python as:
```python
matrix = [[0.8, 0.25], [0.2, 0.75]]
```
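As a quick sanity check (my own addition, not from the video): every column of the transition matrix should sum to 1, because the student must end up in *some* state at t+1. A minimal sketch:

```python
# Each column of the transition matrix holds the probabilities of moving
# from one state to every possible state, so each column must sum to 1.
matrix = [[0.8, 0.25], [0.2, 0.75]]

for j in range(len(matrix[0])):
    column_sum = sum(row[j] for row in matrix)
    assert abs(column_sum - 1.0) < 1e-9, "column {0} sums to {1}".format(j, column_sum)
```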
The vector of students is, again, a simple list:

```python
student_vector = [1, 0]
```
Let’s calculate one stage:
```python
def markov_stage(matrix, vector):
    """Calculate one iteration of the Markov process."""
    v1 = []  # blank list to store the next stage
    for x in range(len(matrix)):  # for each row of the matrix
        # multiply each element of the row by each element of the vector
        # and sum; the "zip" function is great for this
        v1.append(sum(a * b for a, b in zip(matrix[x], vector)))
    print(v1)
    return v1
```
Calling this produces:
```python
>>> student_vector = markov_stage(matrix, student_vector)
[0.8, 0.2]
>>> student_vector = markov_stage(matrix, student_vector)
[0.6900000000000002, 0.31000000000000005]
```
Or, let’s try to find the equilibrium state by looping markov_stage until the values basically stop changing (to a chosen number of decimal places):
```python
def markov_equilibrium(matrix, vector, acc):
    """Try to find the equilibrium of a Markov process.

    Accepts a matrix [i, j], a vector [v1, v2, ...] and an accuracy,
    given as a number of decimal places.
    """
    # "its" = iterations
    # We must have at least 2 iterations performed in order to judge
    # whether it looks like it is stabilising (we just compare the last two)
    its = [markov_stage(matrix, vector)]       # input first iteration
    its.append(markov_stage(matrix, its[-1]))  # append another
    # now loop with the condition that, when rounded, the vector is not
    # the same as the one before
    while [round(x, acc) for x in its[-1]] != [round(x, acc) for x in its[-2]]:
        its.append(markov_stage(matrix, its[-1]))  # reuse the function from before
    result = "Markov process took {0} stages to reach {1} d.p. equilibrium"
    print(result.format(len(its), acc))  # output a nice result
```
This produces, with the accuracy set at 2 decimal places:
```python
>>> markov_equilibrium(matrix, student_vector, 2)
[0.8, 0.2]
[0.6900000000000002, 0.31000000000000005]
[0.6295000000000002, 0.37050000000000005]
[0.5962250000000002, 0.4037750000000001]
[0.5779237500000002, 0.42207625000000015]
[0.5678580625000003, 0.4321419375000002]
[0.5623219343750003, 0.4376780656250002]
[0.5592770639062503, 0.44072293609375024]
Markov process took 8 stages to reach 2 d.p. equilibrium
```
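Incidentally, a 2×2 chain like this has a closed-form equilibrium (this isn't from the video, and the variable names below are my own): at equilibrium, the flow of probability Alert→Bored must equal the flow Bored→Alert, which pins down the alert share directly.

```python
# For a 2x2 transition matrix the equilibrium can be solved exactly:
# at equilibrium, alert_to_bored * p_alert == bored_to_alert * p_bored.
matrix = [[0.8, 0.25], [0.2, 0.75]]
alert_to_bored = matrix[1][0]  # 0.2
bored_to_alert = matrix[0][1]  # 0.25

p_alert = bored_to_alert / (alert_to_bored + bored_to_alert)
p_bored = 1 - p_alert
print(p_alert, p_bored)  # approximately 0.556 and 0.444
```

This matches where the iterated vectors above are heading: (5/9, 4/9).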
I probably could have written this in NumPy, which would calculate faster using less memory (and has built-in functions for the vector-matrix multiplication), but it was fun just doing it by hand. I’ll try to extend markov_equilibrium to give some more detailed stats, such as the “churn” mentioned by Prof. Page.
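For completeness, here is a rough NumPy sketch of what that might look like (assuming NumPy is installed; none of this is from the video). One stage is just a matrix-vector product, and the equilibrium is the eigenvector of the transition matrix with eigenvalue 1:

```python
import numpy as np

matrix = np.array([[0.8, 0.25], [0.2, 0.75]])
student_vector = np.array([1.0, 0.0])

# one stage of the Markov process is a matrix-vector product
next_stage = matrix @ student_vector
print(next_stage)  # [0.8 0.2]

# the equilibrium is the eigenvector associated with eigenvalue 1
eigenvalues, eigenvectors = np.linalg.eig(matrix)
idx = np.argmin(np.abs(eigenvalues - 1.0))
equilibrium = eigenvectors[:, idx]
equilibrium = equilibrium / equilibrium.sum()  # normalise so entries sum to 1
print(equilibrium)  # approximately [0.556 0.444]
```

The normalisation step is needed because `np.linalg.eig` returns unit-length eigenvectors, not probability vectors.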