Initial seeding of assorted likelihood and viterbi decoding demos
Edward Ocampo-Gooding committed Jul 22, 2008
1 parent 4a50327 commit 982ea16
Showing 7 changed files with 385 additions and 0 deletions.
@@ -0,0 +1 @@
.DS_Store
@@ -0,0 +1,28 @@
Hidden Markov Model Notes

This repo currently contains a few clips of notes on Hidden Markov Models.

HMMs are essentially regular expressions with mushiness: regexes that, rather than answering yes/no, match a pattern and return the probability of the match.

This property makes them really good at matching sequences of symbols where a given position might sometimes be one symbol, but other times another. Examples include matching genomic sequences, music and voice recognition, and several other fields.

Usages

Typically, HMMs are used to answer one of three questions:

- How good/probable a match is this sequence (to an HMM)? [Matching, or Likelihood]
- For a sequence of observed events, what’s the most likely sequence of hidden events? [Decoding]
- What’s the HMM that best matches this set of sequences? [Learning]

(The Wikipedia version of the above follows.)

There are three canonical problems associated with HMMs:

- Given the parameters of the model, compute the probability of a particular output sequence, and the probabilities of the hidden state values given that output sequence. This problem is solved by the forward-backward algorithm.
- Given the parameters of the model, find the most likely sequence of hidden states that could have generated a given output sequence. This problem is solved by the Viterbi algorithm.
- Given an output sequence or a set of such sequences, find the most likely set of state transition and output probabilities; in other words, discover the parameters of the HMM given a dataset of sequences. This problem is solved by the Baum-Welch algorithm.
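
For concreteness, “the parameters of the model” are just three tables: start, transition, and emission probabilities. A minimal sketch in Ruby (the same toy weather model the Viterbi demo in this repo uses):

    hidden_states = %w(Rainy Sunny)

    # P(first hidden state)
    start_probability = {'Rainy' => 0.6, 'Sunny' => 0.4}

    # P(next hidden state | current hidden state)
    transition_probabilities = {
      'Rainy' => {'Rainy' => 0.7, 'Sunny' => 0.3},
      'Sunny' => {'Rainy' => 0.4, 'Sunny' => 0.6},
    }

    # P(observed symbol | hidden state)
    emission_probabilities = {
      'Rainy' => {'walk' => 0.1, 'shop' => 0.4, 'clean' => 0.5},
      'Sunny' => {'walk' => 0.6, 'shop' => 0.3, 'clean' => 0.1},
    }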
@@ -0,0 +1,54 @@
# This demos an HMM answering the question “Given a sequence, how probable is
# it?”, or “how well does this match?”

# The story is that you’re in a casino, and from just watching the dealer
# you’ve seen that the bastard uses both a fair coin and a biased one. Not
# only that, but you’ve determined the probabilities of each coin’s observable
# states: how often it comes up heads or tails.

# Your job is to take this HMM (really just a table of the hidden, unknown
# states and their observable probabilities) and, given a sequence of coin
# flips, decide if the dealer has switched coins on you so you can rightfully
# beat the crap out of him.

require "pp"
include Math

def flip_fair_coin
  rand < 0.5 ? "H" : "T"
end

def flip_biased_coin
  rand < 0.75 ? "H" : "T"
end

class String
  def is_biased?
    log_odds_ratio(self) < 0
  end
end

# By taking log2(probability under a fair coin) - log2(probability under a
# biased coin), we can make a good guess at which coin was used: if the
# ratio is below 0, the list was probably produced using the biased coin.
def log_odds_ratio(flip_list)
  flip_list.length - flip_list.count("H") * (log(3) / log(2))
end
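
# Where that formula comes from (a sketch): for n flips with h heads,
#
#   P(flips | fair)   = (1/2)^n
#   P(flips | biased) = (3/4)^h * (1/4)^(n - h)
#
# so log2 P(fair) - log2 P(biased) = -n - (h*log2(3) - 2n) = n - h*log2(3),
# which is exactly what log_odds_ratio computes (log(3)/log(2) == log2(3)).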

puts "Welcome to the Fair Bet casino! We only use fair coins around here..."

# generate a string of 10 flips using a fair coin
true_flips = String.new
10.times { true_flips << flip_fair_coin }

pp true_flips

biased_flips = String.new
10.times { biased_flips << flip_biased_coin }

pp biased_flips

puts "Log odds ratio for the fair list: #{log_odds_ratio(true_flips)}"

true_flips.is_biased? ? puts("This list is biased") : puts("This list is fair")

puts "Log odds ratio for the biased list: #{log_odds_ratio(biased_flips)}"

biased_flips.is_biased? ? puts("This list is biased") : puts("This list is fair")
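
# A typical run prints something like this (yours will differ, since the
# flips are random; the ratio 10 - h*log2(3) stays positive while the heads
# count h is 6 or fewer):
#
#   "HTTHTHHTTH"
#   "HHHTHHTHHT"
#   Log odds ratio for the fair list: 2.075187496394219
#   This list is fair
#   Log odds ratio for the biased list: -1.0947375050480932
#   This list is biased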
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,105 @@
# From http://en.wikipedia.org/wiki/Viterbi_algorithm (ported from the Python version)

# The key idea here is that the Viterbi algorithm is used when you know the
# parameters of the HMM and a string of observation symbols, but you don’t
# know the sequence of hidden states that produced those observations. The
# algorithm guesses at what that sequence of hidden states was. That sequence
# is also known as the Viterbi path.

# Note that the Forward and Viterbi algorithms are different, but closely
# related enough that we can roll them into one:
#   Forward: “What’s the probability of a sequence of observed events?”
#   Viterbi: “What’s the most likely sequence of hidden states?”

# The function forward_viterbi takes the following arguments: observations is
# the sequence of observations, e.g. %w(walk shop clean); hidden_states is the
# set of hidden states; start_probability holds the start probabilities; and
# transition_probabilities and emission_probabilities hold the transition and
# emission probabilities.
#
# The algorithm works on the mappings t and u. Each maps a state to a triple
# (prob, v_path, v_prob), where prob is the total probability of all paths
# from the start to the current state, v_path is the Viterbi path up to the
# current state, and v_prob is the probability of the Viterbi path up to the
# current state. The mapping t holds this information for a given point in
# time, and the main loop constructs u, which holds similar information for
# the next point in time. Because of the Markov property, information about
# any earlier point in time is not needed.
#
# The algorithm begins by initializing t to the start probabilities: the
# total probability for a state is just the start probability of that state;
# the Viterbi path to a start state is the singleton path consisting only of
# that state; and the probability of the Viterbi path is the same as the
# start probability.
#
# The main loop considers the observations in sequence. Its loop invariant is
# that t contains the correct information up to, but excluding, the point in
# time of the current observation. The algorithm then computes the triple
# (prob, v_path, v_prob) for each possible next state.
#
# The total probability of a given next state, total, is obtained by adding
# up the probabilities of all paths reaching that state: the algorithm
# iterates over all possible source states, and for each one multiplies the
# total probability carried in t by the emission probability of the current
# observation and the transition probability from the source state to the
# next state, accumulating the result into total.
#
# The probability of the Viterbi path is computed in a similar fashion,
# except that instead of summing across all paths one takes a discrete
# maximum. Initially the maximum value valmax is zero. For each source state,
# the probability of the Viterbi path to that state is multiplied by the same
# emission and transition probabilities, and replaces valmax if it is
# greater. The Viterbi path itself is the corresponding argmax of that
# maximization: the path leading to the best source state, extended with the
# next state. The triple (prob, v_path, v_prob) computed this way is stored
# in u, and once u has been computed for all possible next states it replaces
# t, so the loop invariant holds at the end of the iteration.
#
# At the end, another summation/maximization is performed over the final
# states (this could also be done inside the main loop by adding a
# pseudo-observation after the last real observation).

hidden_states = %w(Rainy Sunny)
observations = %w(walk shop clean)
start_probability = {'Rainy' => 0.6, 'Sunny' => 0.4}

transition_probabilities = {
  'Rainy' => {'Rainy' => 0.7, 'Sunny' => 0.3},
  'Sunny' => {'Rainy' => 0.4, 'Sunny' => 0.6},
}

emission_probabilities = {
  'Rainy' => {'walk' => 0.1, 'shop' => 0.4, 'clean' => 0.5},
  'Sunny' => {'walk' => 0.6, 'shop' => 0.3, 'clean' => 0.1},
}

def forward_viterbi(observations, hidden_states, start_probability,
                    transition_probabilities, emission_probabilities)
  # Both t and u are mappings of a state to a triple (prob, v_path, v_prob)
  # where prob:   total probability of all paths from the start to the current state
  #       v_path: path of hidden states up to the current state
  #       v_prob: probability of the Viterbi path up to the current state
  #
  # t starts out at the first point in time, so give it the start probabilities
  t = {}
  hidden_states.each do |state|
    #           prob.                    V. path  V. prob
    t[state] = [start_probability[state], [state], start_probability[state]]
  end

  # Find the next state by computing u for each possible next state
  # (Look through each observable emission)
  observations.each do |emission|
    u = {}
    hidden_states.each do |next_state|
      # total probability of a given next state (sum of probabilities of
      # paths leading there)
      total = 0
      argmax = nil
      valmax = 0

      # Get the total prob. of next_state by adding up the probabilities of
      # all paths reaching that state. (Prob so far is carried in t[source_state])
      hidden_states.each do |source_state|
        prob, v_path, v_prob = t[source_state]
        p = emission_probabilities[source_state][emission] *
            transition_probabilities[source_state][next_state]
        prob *= p
        v_prob *= p
        total += prob
        if v_prob > valmax
          argmax = v_path + [next_state]
          valmax = v_prob
        end
      end
      u[next_state] = [total, argmax, valmax]
    end
    t = u
  end

  # apply sum/max to the final states
  total = 0
  argmax = nil
  valmax = 0
  hidden_states.each do |state|
    prob, v_path, v_prob = t[state]
    total += prob
    if v_prob > valmax
      argmax = v_path
      valmax = v_prob
    end
  end
  [total, argmax, valmax]
end

puts forward_viterbi(observations, hidden_states, start_probability,
                     transition_probabilities, emission_probabilities).inspect

# This reveals that the total probability of ['walk', 'shop', 'clean'] is
# 0.033612 and that the Viterbi path is ['Sunny', 'Rainy', 'Rainy', 'Rainy'].
# The Viterbi path contains four states because the third observation was
# generated by the third state and a transition to the fourth state. In other
# words, given the observed activities, it was most likely sunny when your
# friend went for a walk, and then it started to rain the next day and kept
# on raining.
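
# As a sanity check (my arithmetic, not from the original page): the Viterbi
# path’s own probability, the third element of the returned triple, unrolls to
#
#   0.4 * (0.6 * 0.4) * (0.4 * 0.7) * (0.5 * 0.7) = 0.009408
#
# i.e. start in Sunny, then at each step multiply the source state’s emission
# probability for the observation by the transition probability to the next
# state. So the full printed result is
# [0.033612, ["Sunny", "Rainy", "Rainy", "Rainy"], 0.009408].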
@@ -0,0 +1,197 @@
# I don’t remember if I (Edward Ocampo-Gooding) wrote this or not. If I didn’t,
# please give me a shoutout.
# This code is massaged out of Sean R. Eddy’s “What is a hidden Markov
# model?”, which I’ll throw in this repo.

# When you run this thing, the key idea is that we’re taking a genomic sequence
# (the top sequence), and trying to decode it to find where the end of the
# gene is (or at least, that’s what I loosely remember).
#
# The end of a gene (again, I’m probably wrong, but whatever) corresponds to
#
#   [some E states] [5' (five-prime) splice site] [some I states (for intron)]
#
# An E state is an exon, or stuff that gets genetically expressed.
# A 5' state (pronounced “five-prime”) is the splice site (between E and I).
# An I state is an intron, representing intragenic regions which don’t get
# genetically expressed.

# Note that this is just an exercise in demonstrating Viterbi decoding.

# Represents states in a Markov model
#
# Example usage:
#
#   e = State.new("E")
#   five = State.new("5")
#   i = State.new("I")
#
#   e.emissions = {"A" => 0.25, "C" => 0.25, "G" => 0.25, "T" => 0.25}
#   e.transitions = {:E => 0.9, :"5" => 0.1}
#   puts e.emit
#   puts e.transmit
#
#   five.emissions = {"A" => 0.05, "C" => 0.0, "G" => 0.95, "T" => 0.0}
#   five.transitions = {:I => 1.0}
#
#   i.emissions = {"A" => 0.4, "C" => 0.1, "G" => 0.1, "T" => 0.4}
#   i.transitions = {:end => 0.1, :I => 0.9}

# TODO: Move the check for prob = 1.0 to initialize, emissions(), and
# transitions()

class State
  def initialize(name, emissions = nil, transitions = nil)
    @name = name.to_sym
    @emissions = emissions
    @transitions = transitions
  end
  attr_accessor :name, :emissions, :transitions

  # Based on the emission probabilities, emit something
  def emit
    probs_choose(@emissions)
  end

  # Based on the transition probabilities, pick the next state
  def transmit
    probs_choose(@transitions)
  end

  def to_s
    @name.to_s
  end

  private

  # Takes a probabilities hash +h+ and returns one of its keys, chosen with
  # the given probabilities
  def probs_choose(h)
    # Seriously lame but simple way to choose: use an array of size 100
    p = []
    h.each do |key, val|
      # insert each key in proportion to its probability
      # (round, not truncate: 0.95 * 100 comes out as 94.999... in floating point)
      (val * 100).round.times { p << key }
    end
    raise "Probabilities don't add up to 100" if p.size != 100
    p[rand(100)]
  end
end
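
# (A less lame alternative to the 100-slot array, sketched here under the same
# assumptions: draw once, then walk the hash subtracting each probability
# until the draw is used up.
#
#   def probs_choose(h)
#     r = rand
#     h.each { |key, val| return key if (r -= val) <= 0 }
#     h.keys.last # guard against floating-point round-off
#   end
# )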

# Represents a Markov model
class MModel
  def initialize(state_list)
    @state_list = state_list
    sanity_check
  end

  # Step through the machine, transitioning according to each state's
  # probabilities; used to generate a string of states
  def transit_and_emit
    # Contains an array of tuples [observed symbol, hidden state]
    chains = []

    s_list = @state_list

    # print "States: "
    # s_list.each {|s| print s.to_s + " "}
    # print "\n"

    current_state = s_list.first
    loop do
      # puts "Current state: #{current_state}"

      emitted = current_state.emit
      # puts "Emitted symbol: #{emitted}"

      chains << [emitted, current_state.name]

      next_state = current_state.transmit

      # puts "Transitioning to " + next_state.inspect

      break if next_state == :end

      if next_state.nil?
        raise "Transition to non-existent state; no can do"
      end

      current_state = s_list.find do |s|
        s.name == next_state
      end
    end
    chains
  end

  # # Given a list of observable emissions, calculate its probability/
  # # likelihood by multiplying the probabilities of each emission from each
  # # matching (hidden) state
  # def forward(emissions)
  #   emissions = emissions.split(//) if emissions.is_a?(String)
  #   probs = []
  #   emissions.each do |emission|
  #     state_list.each do |state|
  #     end
  #   end
  # end

  # Align a sequence of emissions to the model by matching each emission with
  # the most likely (hidden) state.
  #
  # (Note: this is a greedy, left-to-right approximation — at each step it
  # just picks the single best-scoring next state — not a full
  # dynamic-programming Viterbi decode.)
  def viterbi(emissions)
    emissions = emissions.split(//) if emissions.is_a?(String)
    viterbi_path = []
    viterbi_probability = 1.0

    previous = nil
    emissions.each do |emission|
      # Choose the most likely next state by multiplying the transition and
      # emission probabilities of each possible next state and taking the
      # best scorer (the first emission is scored on emission alone)
      best = @state_list.max_by do |candidate|
        trans = previous ? (previous.transitions[candidate.name] || 0.0) : 1.0
        trans * (candidate.emissions[emission] || 0.0)
      end

      trans = previous ? (previous.transitions[best.name] || 0.0) : 1.0
      viterbi_probability *= trans * (best.emissions[emission] || 0.0)
      viterbi_path << best.name
      previous = best
    end

    [viterbi_path, viterbi_probability]
  end

  private

  # Check that each state's emission and transition probabilities add up to
  # 1.0 and that there's at least one :end symbol in a transition somewhere
  def sanity_check
    # Check probabilities (within a small tolerance for floating-point error)
    @state_list.each do |s|
      if (s.emissions.values.inject { |sum, p| sum + p } - 1.0).abs > 1e-9
        raise "Emission probabilities don't add up to 1.0"
      end

      if (s.transitions.values.inject { |sum, p| sum + p } - 1.0).abs > 1e-9
        raise "Transition probabilities don't add up to 1.0"
      end
    end

    # Check for :end
    unless @state_list.find { |s| s.transitions[:end] }
      raise "Cannot find a transition to a final state."
    end
  end
end

e = State.new("E")
five = State.new("5")
i = State.new("I")

e.emissions = {"A" => 0.25, "C" => 0.25, "G" => 0.25, "T" => 0.25}
e.transitions = {:E => 0.9, :"5" => 0.1}

five.emissions = {"A" => 0.05, "C" => 0.0, "G" => 0.95, "T" => 0.0}
five.transitions = {:I => 1.0}

i.emissions = {"A" => 0.4, "C" => 0.1, "G" => 0.1, "T" => 0.4}
i.transitions = {:end => 0.1, :I => 0.9}

m = MModel.new([e, five, i])

answer = m.transit_and_emit

# Print the emitted sequence on one line and the hidden-state sequence
# underneath it
def print_answer(answer)
  answer.each { |o| print o[0].to_s + " " }; print "\n"
  answer.each { |h| print h[1].to_s + " " }
  print "\n"
end

print_answer(answer)
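
# One possible run (the output is random, so yours will differ; E states emit
# uniformly, the 5 state almost always emits G, and I states favor A and T):
#
#   C T G A G G A T T A A T
#   E E E E E 5 I I I I I I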