∫ Hidden Markov models: baseline algorithms
Kernel:
- A review of baseline algorithms for hidden Markov model inference and learning.
In this blog post, I will review some of the baseline algorithms for hidden Markov model (HMM) inference and learning. There are tons of explanations and tutorials out there, but I find that most of them use inconsistent terminology and notation. For example, did you know that there are two Viterbi algorithms? Or that the forward-backward algorithm is sometimes called the Baum-Welch algorithm, while Wikipedia treats them as two different algorithms? This is wild! Hopefully, I can make things clearer.
∂ Hidden Markov models
Hidden Markov models are a class of probabilistic graphical models that are widely used in speech recognition, bioinformatics, and other fields. They model sequences of observations that are assumed to be generated by a sequence of hidden states. The hidden states are not observed; each observation depends only on the current hidden state, and the next hidden state depends only on the current one, hence the name Markov.
A hidden Markov model is defined by the following parameters:
- \( X_t \) is a random variable of the hidden state at time \( t \).
- \( Y_t \) is a random variable of the observation at time \( t \).
- \( T \) is a positive integer specifying a sequence length.
- \( \Pr(X_{t+1} | X_{t}) \) is a parameter that represents the transition probability from state \( X_t \) to state \( X_{t+1} \).
- \( \Pr(Y_t | X_t) \) is a parameter that represents the observation probability of observation \( Y_t \) given state \( X_t \).
In general, there are four types of problems that we want to solve with hidden Markov models:
- Given HMM parameters, sample some observations (see the sketch after this list).
- Given a sequence of observations and HMM parameters, what is the most likely sequence of hidden states that generated the observations? This is known as the decoding problem.
- Given a sequence of observations and HMM parameters, what is the probability of a sequence of observations? This is known as the evaluation problem.
- Given a sequence of observations, how can we learn the parameters of the model? This is known as the learning problem.
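To make the setup concrete, here is a minimal sketch in NumPy of the parameters as probability tables, and of the first problem, sampling. The names `pi`, `A`, `B` and the toy numbers are purely illustrative, and I assume an initial state distribution \( \Pr(X_1) \), which the parameter list above leaves implicit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy HMM with 2 hidden states and 3 observation symbols (made-up numbers).
pi = np.array([0.6, 0.4])            # Pr(X_1)
A = np.array([[0.7, 0.3],            # A[i, j] = Pr(X_{t+1} = j | X_t = i)
              [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1],       # B[i, k] = Pr(Y_t = k | X_t = i)
              [0.1, 0.3, 0.6]])

def sample(pi, A, B, T):
    """Sample a hidden state sequence and an observation sequence of length T."""
    states, obs = [], []
    x = rng.choice(len(pi), p=pi)
    for _ in range(T):
        states.append(x)
        obs.append(rng.choice(B.shape[1], p=B[x]))
        x = rng.choice(A.shape[1], p=A[x])
    return np.array(states), np.array(obs)

states, obs = sample(pi, A, B, T=10)
print(states, obs)
```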
∂ Forward algorithm
The forward algorithm is used to solve the evaluation problem for the latest hidden state given the sequence of observations so far.
One nice thing about inference algorithms for HMMs is that they can usually be arranged in a recursive form. The recursive form of the forward algorithm is as follows:
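With \( \alpha_t(x) := \Pr(X_t = x, Y_{1:t}) \) (a shorthand of my own that I will reuse below, and with \( \Pr(X_1) \) the initial state distribution),

$$ \begin{align*} \alpha_1(x) &= \Pr(Y_1 | X_1 = x) \, \Pr(X_1 = x), \\ \alpha_{t+1}(x) &= \Pr(Y_{t+1} | X_{t+1} = x) \sum_{x'} \Pr(X_{t+1} = x | X_{t} = x') \, \alpha_{t}(x'). \end{align*} $$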
Then in order to solve the evaluation problem, we can use the following equation:
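In the \( \alpha \) shorthand above, the filtered posterior that matches the objective in the summary table is

$$ \begin{align*} \Pr(X_t = x | Y_{1:t}) = \frac{\alpha_t(x)}{\sum_{x'} \alpha_t(x')}, \end{align*} $$

and summing out the last state, \( \sum_{x} \alpha_T(x) = \Pr(Y_{1:T}) \), gives the likelihood of the whole observation sequence.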
Because of this property, the forward algorithm can be used to solve evaluation problems where we want to know the most likely state at the current time given the sequence of observations so far. We often find uses for this algorithm in real-time inference tasks (Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257-286; Welch, G., & Bishop, G. (1995). An introduction to the Kalman filter; Djuric, P. M., Kotecha, J. H., Zhang, J., Huang, Y., Ghirmai, T., Bugallo, M. F., & Miguez, J. (2003). Particle filtering. IEEE Signal Processing Magazine, 20(5), 19-38).
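A minimal sketch of the forward pass in code, using the same illustrative `pi`, `A`, `B`, `obs` conventions as the sampling sketch above:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward pass: alpha[t, i] = Pr(X_t = i, Y_{1:t} = obs[:t+1]).

    pi: (S,) initial state distribution, A: (S, S) with A[i, j] = Pr(X_{t+1}=j | X_t=i),
    B: (S, K) with B[i, k] = Pr(Y_t=k | X_t=i), obs: (T,) integer observations.
    (For long sequences you would normalize each alpha[t] to avoid underflow.)
    """
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        # Sum over the previous state, then weight by the current observation.
        alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
    return alpha

# Example with the toy pi, A, B, obs from the sampling sketch:
#   alpha = forward(pi, A, B, obs)
#   filtered = alpha[-1] / alpha[-1].sum()   # Pr(X_T | Y_{1:T})
```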
∂ Forward-backward algorithm
What if you want to evaluate the hidden state at any time, with the benefit of hindsight from the entire sequence of observations? This is where the forward-backward algorithm comes in.
The forward-backward algorithm has two passes: the forward pass and the backward pass. The forward pass is the same as the forward algorithm. Let me show you how the backward pass is computed:
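With \( \beta_t(x) := \Pr(Y_{t+1:T} | X_t = x) \) (again my shorthand, with the base case \( \beta_T(x) = 1 \)),

$$ \begin{align*} \beta_{t}(x) = \sum_{x'} \Pr(X_{t+1} = x' | X_{t} = x) \, \Pr(Y_{t+1} | X_{t+1} = x') \, \beta_{t+1}(x'). \end{align*} $$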
Now we combine the forward and backward passes to evaluate the hidden state at any time given the entire sequence of observations.
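In the \( \alpha, \beta \) shorthand, the combination is simply

$$ \begin{align*} \Pr(X_t = x, Y_{1:T}) = \alpha_t(x) \, \beta_t(x). \end{align*} $$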
If you want to compute the probability, you can use the following equation:
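Normalizing the product above (the denominator is just \( \Pr(Y_{1:T}) \)) gives

$$ \begin{align*} \Pr(X_t = x | Y_{1:T}) = \frac{\alpha_t(x) \, \beta_t(x)}{\sum_{x'} \alpha_t(x') \, \beta_t(x')}. \end{align*} $$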
∂ Viterbi algorithm
Next stop is the Viterbi algorithm. We use it to solve the decoding problem: finding the most likely sequence of hidden states given the sequence of observations.
Same as the forward algorithm, the Viterbi algorithm can be arranged in a recursive form:
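With \( \delta_t(x) \) denoting the probability of the best state path that ends in state \( x \) at time \( t \) together with the observations \( Y_{1:t} \) (my shorthand again),

$$ \begin{align*} \delta_1(x) &= \Pr(Y_1 | X_1 = x) \, \Pr(X_1 = x), \\ \delta_{t+1}(x) &= \Pr(Y_{t+1} | X_{t+1} = x) \, \max_{x'} \, \Pr(X_{t+1} = x | X_{t} = x') \, \delta_{t}(x'). \end{align*} $$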
Then in order to solve the decoding problem, we can use the following equation:
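In the \( \delta \) shorthand, we pick the best final state and backtrack:

$$ \begin{align*} X_T^{*} &= \arg\max_{x} \, \delta_T(x), \\ X_t^{*} &= \arg\max_{x} \, \Pr(X_{t+1} = X_{t+1}^{*} | X_t = x) \, \delta_t(x) \quad \text{for } t = T-1, \dots, 1. \end{align*} $$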
You may notice that the Viterbi algorithm is very similar to the forward algorithm, but instead of summing over all possible previous states, we take the maximum over all possible previous states. The similarity is not a coincidence: it comes from the way we choose to arrange the probability terms by pulling out the time \( t \) terms first, just as we did in the forward algorithm. (If you want to challenge yourself, try to solve the decoding problem backward and see what you come up with.)
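A minimal sketch in code, using the same illustrative `pi`, `A`, `B`, `obs` conventions as the earlier sketches and working in log space for numerical stability:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state sequence arg max_{x_{1:T}} Pr(X_{1:T} = x_{1:T} | Y_{1:T} = obs)."""
    T, S = len(obs), len(pi)
    log_A, log_B = np.log(A), np.log(B)
    delta = np.zeros((T, S))            # delta[t, i]: best log joint prob ending in state i
    back = np.zeros((T, S), dtype=int)  # back[t, i]: best previous state
    delta[0] = np.log(pi) + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    # Backtrack from the best final state.
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```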
∂ Viterbi training algorithm
Do not confuse this with the Viterbi decoding algorithm. The Viterbi training algorithm is a training algorithm for hidden Markov models, where the goal is to learn the parameters of the model given sequences of observations.
The Viterbi training algorithm uses the Viterbi decoding algorithm to find the most likely sequence of states, and then uses it to update the parameters of the model. It is a special case of the Expectation-Maximization (EM) algorithm (Bishop, C. (2006). Pattern Recognition and Machine Learning. Springer), which has to be run iteratively until the parameters converge:
- Compute the most likely sequence of states using the Viterbi algorithm.
- Update the observation parameters $$ \begin{align*} \Pr(Y=y | X=x) = \frac{\text{frequency of observing both} \, y \, \text{and} \, x \, \text{from all steps}}{\text{frequency of} \, x \, \text{from all steps in the sequence}} \end{align*} $$
- Update the transition parameters $$ \begin{align*} \forall t \, \Pr(X_{t+1} = x' | X_{t} = x) = \frac{\text{frequency of} \, x \, \text{then} \, x' \, \text{from all steps}}{\text{frequency of} \, x \, \text{from all steps in the sequence}} \end{align*} $$
- Repeat until the parameters converge.
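Once a decoded path is fixed, the two update steps above are plain counting. Here is a minimal sketch, assuming `states` is a path produced by a Viterbi decoder (such as the sketch in the previous section) and `obs` is the observation sequence; the small constant is my own guard for states that never appear in the decoded path:

```python
import numpy as np

def viterbi_update(states, obs, S, K):
    """Re-estimate transition and observation tables by counting along one decoded path.

    states: (T,) decoded hidden states, obs: (T,) observations,
    S: number of hidden states, K: number of observation symbols.
    """
    A_counts = np.zeros((S, S))
    B_counts = np.zeros((S, K))
    for t in range(len(obs)):
        B_counts[states[t], obs[t]] += 1
        if t + 1 < len(obs):
            A_counts[states[t], states[t + 1]] += 1
    # Normalize rows into probabilities.
    A_new = (A_counts + 1e-12) / (A_counts + 1e-12).sum(axis=1, keepdims=True)
    B_new = (B_counts + 1e-12) / (B_counts + 1e-12).sum(axis=1, keepdims=True)
    return A_new, B_new
```

In the full algorithm, you would alternate this counting step with re-decoding until the parameters stop changing.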
∂ Baum-Welch algorithm
This is another training algorithm for hidden Markov models. It uses the forward-backward algorithm to update the parameters of the model instead of relying on statistical counting. In practice, this is more stable than the Viterbi training algorithm (Lam, T. Y., & Meyer, I. M. (2010). Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training. Algorithms for Molecular Biology, 5, 1-16).
- \( Y_{1:T} \setminus Y_{t} \) means the sequence of all observations except \( Y_t \).
- If you see the form \( \sum_{A} P(A) K \), it means that we are computing the expectation of \( K \) under the distribution of \( A \). If \( A = Y_{1:T} \), it is the distribution of the observed sequences.
This is the original Baum-Welch algorithm. Notice that equation (4) reuses the current parameters of the model to compute the expected statistics. And we run this iteratively until the parameters converge.
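For reference, in the \( \alpha, \beta \) shorthand from the forward-backward section, the expected statistics that this pincer estimation computes under the current parameters are usually written as

$$ \begin{align*} \gamma_t(x) &:= \Pr(X_t = x | Y_{1:T}) = \frac{\alpha_t(x) \, \beta_t(x)}{\Pr(Y_{1:T})}, \\ \xi_t(x, x') &:= \Pr(X_t = x, X_{t+1} = x' | Y_{1:T}) = \frac{\alpha_t(x) \, \Pr(X_{t+1} = x' | X_t = x) \, \Pr(Y_{t+1} | X_{t+1} = x') \, \beta_{t+1}(x')}{\Pr(Y_{1:T})}, \end{align*} $$

and the parameter updates replace the hard counts of Viterbi training with these soft counts, e.g. \( \Pr(X_{t+1} = x' | X_t = x) \leftarrow \sum_{t=1}^{T-1} \xi_t(x, x') / \sum_{t=1}^{T-1} \gamma_t(x) \).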
One might wonder whether we can simplify the equation like this:
This simplified version uses only the forward-backward algorithm at each time \( t \), while the original Baum-Welch algorithm uses a pincer estimation of both \( X_{t+1} \) from the backward path (3) and \( X_t \) from the forward path (1), bridged by the current transition probability and the observation probability (4).
However, this derivation will not work, because it is the exact equation! The principle of Expectation-Maximization is to first compute the expectation of the estimated statistics given the current parameters. The estimated statistics in this case are 1) \( \Pr(Y_{t} | X_{t}) \) and 2) \( \Pr(X_{t+1} | X_t, Y_{1:T}) \), which are computed inside the Baum-Welch algorithm but not in the simplified version.
∂ Conclusion
To recap, we have reviewed some of the baseline algorithms for hidden Markov model inference and learning.
Algorithm | Problem | Objective |
---|---|---|
Forward | Evaluate the most likely state at the current time given the sequence of observations so far. | \( \arg\max_{X_t} \, \Pr(X_t | Y_{1:t}) \) |
Forward-backward | Evaluate the most likely state at any time given the entire sequence of observations. | \( \arg\max_{X_t} \, \Pr(X_t | Y_{1:T}) \) |
Viterbi | Decode the most likely sequence of hidden states given the sequence of observations. | \( \arg\max_{X_{1:t}} \, \Pr( X_{1:t}| Y_{1:t}) \) |
Viterbi training | Learn the parameters of the model given sequences of observations. | \( \arg\max_{\theta} \, \Pr(Y_{1:T} | \theta) \) |
Baum-Welch | Learn the parameters of the model given sequences of observations. | \( \arg\max_{\theta} \, \Pr(Y_{1:T} | \theta) \) |
That is it! These are important algorithms for hidden Markov models that we should know!