# Computational Genomics Lecture 7c: Hidden Markov Models (HMMs)

## Outline: Finite, or Discrete, Markov Models

• Q1: Compute the probability of a given sequence of observations.

• A1: The Forward-Backward dynamic programming algorithm (Baum-Welch).

• Q2: Compute the most probable sequence of (hidden) states, given a sequence of observations.

• A2: The Viterbi dynamic programming algorithm.

• Q3: Learn the best model, given a sequence of observations.

• A3: The Expectation Maximization (EM) heuristic.

## A Discrete (Finite) System

• N distinct states.
• Begins (at time t=1) in some initial state(s).
• At each time step (t=1,2,…) the system moves from the current state to the next state (possibly the same as the current state), according to transition probabilities associated with the current state.
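The discrete system described above can be sketched in a few lines of Python. The weather states and transition probabilities below are illustrative assumptions, not values from the lecture:

```python
import random

# Hypothetical states for a toy weather chain (illustrative only).
STATES = ["rainy", "cloudy", "sunny"]

# P[i][j] = probability of moving from state i to state j.
# Each row sums to 1 (assumed example values).
P = [
    [0.4, 0.3, 0.3],
    [0.2, 0.6, 0.2],
    [0.1, 0.1, 0.8],
]

def simulate(start, steps, seed=0):
    """Run the chain for `steps` transitions from state index `start`."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        current = path[-1]
        # The next state depends only on the current state's row of P
        # (the Markov property).
        nxt = rng.choices(range(len(STATES)), weights=P[current])[0]
        path.append(nxt)
    return [STATES[i] for i in path]

print(simulate(start=2, steps=5))
```

Note that the next state is sampled from the current state's row alone; no earlier history enters the choice, which is exactly the Markov property discussed next.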

## After Andrei Andreyevich Markov (1856-1922)

## Biological Relevance

• The probability to start in a given state i is πᵢ. The vector π represents these start probabilities.

## Markov Property

## Markov Chains

## Simple Minded Weather Example

## Coke vs. Pepsi (a central cultural dilemma)

• The stationarity condition must hold, yielding p0 = 2/3, p1 = 1/3.
• A chain is irreducible if every state is reachable from every state.
• If the Markov chain is positive recurrent, there exists a stationary distribution. If it is positive recurrent and irreducible, there exists a unique stationary distribution; furthermore, the process constructed by taking the stationary distribution as the initial distribution is ergodic (defined shortly). Then the average of a function f over samples of the Markov chain is equal to the average with respect to the stationary distribution, which satisfies Pπ = π.
• In this case, the stationary distribution π is an eigenvector of the transition matrix, associated with the eigenvalue 1.
• Problem: given that the weather on day 1 (t=1) is sunny (3), what is the probability for the observation O?
• The answer is the product of the transition probabilities along the observed sequence O.
• A Markov chain can be drawn as a weighted directed graph.

## Third Example: The Friendly Gambler
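The relation Pπ = π can be checked numerically. The sketch below uses the equivalent row-vector convention π := πP, with assumed Coke/Pepsi switching probabilities (0.9/0.1 and 0.2/0.8) chosen to be consistent with the stated result p0 = 2/3, p1 = 1/3; they are not taken verbatim from the slides:

```python
# Assumed two-state transition matrix for the Coke vs. Pepsi example.
P = [
    [0.9, 0.1],  # from Coke:  stay with Coke / switch to Pepsi
    [0.2, 0.8],  # from Pepsi: switch to Coke / stay with Pepsi
]

def step(pi, P):
    """One step of pi := pi * P (row-vector convention)."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Power iteration: repeatedly apply P; the distribution converges to
# the unique stationary distribution (eigenvector for eigenvalue 1).
pi = [0.5, 0.5]
for _ in range(200):
    pi = step(pi, P)

print(pi)  # approaches [2/3, 1/3]
```

Convergence here is fast because the second eigenvalue of P is 0.7, so the distance to the stationary distribution shrinks by a factor of 0.7 per step.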