Lecture 4: Structured Prediction Models. Kai-Wei Chang, CS @ UCLA. [email protected] Course webpage: https://uclanlp.github.io/CS269-17/ ML in NLP 1

Previous Lecture Binary linear classification: Perceptron, SVMs, logistic regression, Naïve Bayes. Output: a binary label. Multi-class classification: multiclass Perceptron, multiclass SVM. Output: one of K classes. ML in NLP

2 What we have seen: multiclass Reducing multiclass to binary: one-against-all & one-vs-one, error-correcting codes. Extension: learning to search. Training a single classifier: multiclass Perceptron (Kesler's construction), multiclass SVM (Crammer & Singer formulation), multinomial logistic regression. Extension: graphical models. ML in NLP 3

This lecture What is structured output? Multiclass as structure Sequence as structure General graph structure

ML in NLP 4 Global decisions Understanding is a global decision: several local decisions play a role, and there are mutual dependencies among their outcomes. It is essential to make coherent decisions: joint, global inference. ML in NLP 5

Inference with Constraints [Roth & Yih 04, 07, ...]

Bernie's wife, Jane, is a native of Brooklyn. Entities: E1 (Bernie), E2 (Jane), E3 (Brooklyn); relations: R12 (between E1 and E2), R23 (between E2 and E3). Models could be learned separately/jointly; constraints may come up only at decision time. ML in NLP 6

Inference with Constraints [Roth & Yih 04, 07, ...] Local scores recovered from the slide figure:
  Entity labels      E1     E2     E3
    other            0.05   0.10   0.05
    per              0.85   0.60   0.50
    loc              0.10   0.30   0.45
  Relation labels    R12    R23
    irrelevant       0.05   0.10
    spouse_of        0.45   0.05
    born_in          0.50   0.85
Models could be learned separately/jointly; constraints may come up only at decision time. ML in NLP 8

Structured output is a predefined structure that can be represented as a graph. ML in NLP

9 Sequential tagging The process of assigning a part-of-speech tag to each word in a collection (sentence): the/DET koala/N put/V the/DET keys/N on/P the/DET table/N ML in NLP 10

Don't worry! There is no problem with your eyes or computer. Let's try: a/DT dog/NN is/VBZ chasing/VBG a/DT cat/NN ./. a/DT fox/NN is/VBZ running/VBG ./. a/DT boy/NN is/VBZ singing/VBG ./. a/DT happy/JJ bird/NN What is the POS tag sequence of the following sentence? a happy cat was singing . ML in NLP 12

How do you predict the tags? Two types of information are useful: relations between words and tags, and relations between tags and tags (DT NN, DT JJ NN). "Fed" in "The Fed" is a noun because it follows a determiner. ML in NLP 13

Combinatorial optimization problem: input x, model parameters w, output space Y. Inference/Test: given (w, x), solve argmax over y in Y of the model score f(x, y; w). Learning/Training: find a good w. ML in NLP 14

Challenges with structured output We cannot train a separate weight vector for each possible inference outcome (why?). For multi-class we train one weight vector for each class. We cannot enumerate all possible structures for inference; inference for multiclass was easy. ML in NLP 15

Deal with combinatorial output Decompose the output into parts that are labeled. Define a graph to represent how the parts interact with each other. These labeled interacting parts are scored; the total score for the graph is the sum of the scores of each part. We need an inference algorithm to assign labels to all the parts (see the sketch below). ML in NLP 16
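To connect this decomposition with the linear models used later in the lecture, here is the generic scoring setup written out. Treat it as a sketch: the notation (w for weights, φ for a feature vector) follows the later slides, and the choice of parts is problem-dependent.

\[
\text{score}(x, y; w) \;=\; \sum_{p \in \text{parts}(y)} w \cdot \phi(x, p),
\qquad
y^{*} \;=\; \arg\max_{y \in \mathcal{Y}(x)} \text{score}(x, y; w)
\]

For sequence tagging, the parts will be emissions (x_i, y_i) and transitions (y_{i-1}, y_i), which is exactly the structure the HMM and CRF sections below exploit.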

A history-based model Each token is dependent on all the tokens that came before it (simple conditioning). Each P(x_i | x_1, ..., x_{i-1}) is a multinomial probability distribution over the tokens. ML in NLP 17

Example: A language model "It was a bright cold day in April." Probability of a word starting a sentence; probability of a word following "It"; probability of a word following "It was"; probability of a word following "It was a". ML in NLP 18

A history-based model What is the problem here? How many parameters do we have? The number grows with the size of the sequence! ML in NLP 19

Solution: Lose the history. Discrete Markov Process A system can be in one of K states at a time; the state at time t is x_t. First-order Markov assumption: the state of the system at any time is independent of the full sequence history given the previous state. Defined by two sets of probabilities: initial state distribution P(x_1 = S_j) and state transition probabilities P(x_i = S_j | x_{i-1} = S_k). ML in NLP 20

Example: Another language model "It was a bright cold day in April." Probability of a word starting a sentence; probability of a word following "It"; probability of a word following "was"; probability of a word following "a". If there are K tokens/states, how many parameters do we need? O(K^2). (A small sketch follows.) ML in NLP 21
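To make the chain factorization and the O(K^2) parameter count concrete, here is a minimal Python sketch of a first-order Markov chain. The state names follow the weather example on the next slide, but the probability values are invented for illustration, since the real ones live only in the slide figure.

```python
# A minimal first-order Markov chain sketch. The states mirror the weather
# example; every probability value below is an assumed placeholder.

initial = {"rain": 0.3, "cloudy": 0.4, "sunny": 0.3}          # P(x1 = s)
transition = {                                                 # P(x_t = s' | x_{t-1} = s)
    "rain":   {"rain": 0.5, "cloudy": 0.3, "sunny": 0.2},
    "cloudy": {"rain": 0.3, "cloudy": 0.4, "sunny": 0.3},
    "sunny":  {"rain": 0.1, "cloudy": 0.3, "sunny": 0.6},
}

def sequence_probability(states):
    """P(x_1, ..., x_n) = P(x_1) * prod_t P(x_t | x_{t-1})."""
    prob = initial[states[0]]
    for prev, curr in zip(states, states[1:]):
        prob *= transition[prev][curr]
    return prob

print(sequence_probability(["cloudy", "sunny", "sunny", "rain"]))
# With the assumed numbers: 0.4 * 0.3 * 0.6 * 0.1 = 0.0072
```

Storing `initial` takes K numbers and `transition` takes K^2, which is where the O(K^2) parameter count comes from.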

Example: The weather Three states: rain, cloudy, sunny. State transitions: shown in the slide figure. Observations are Markov chains, e.g.: cloudy → sunny → sunny → rain. Probability of the sequence = P(cloudy) × P(sunny | cloudy) × P(sunny | sunny) × P(rain | sunny), i.e., the initial probability times the transition probabilities. These probabilities define the model; with them we can find P(any sequence). ML in NLP 23

Outline Sequence models Hidden Markov models Inference with HMM

Learning Conditional Models and Local Classifiers Global models Conditional Random Fields Structured Perceptron for sequences ML in NLP 24 Hidden Markov Model Discrete Markov Model: States follow a Markov chain Each state is an observation Hidden Markov Model: States follow a Markov chain States are not observed Each state stochastically emits an observation ML in NLP

25 Toy part-of-speech example The Fed raises interest rates. The model has states Determiner, Noun, Verb (plus a start state) with initial, transition, and emission probabilities; the slide figure shows the state sequence start → Determiner → Noun → Verb → Noun → Noun over the words "The Fed raises interest rates". Example emissions: P(The | Determiner) = 0.5, P(A | Determiner) = 0.3, P(An | Determiner) = 0.1, P(Fed | Determiner) = 0, P(Fed | Noun) = 0.001, P(raises | Noun) = 0.04, P(interest | Noun) = 0.07, P(The | Noun) = 0. ML in NLP 26

Joint model over states and observations Notation: number of states = K, number of observations = M. π: initial probability over states (K-dimensional vector). A: transition probabilities (K×K matrix). B: emission probabilities (K×M matrix). Probability of states and observations, denoting states by y_1, y_2, ... and observations by x_1, x_2, ...: P(x, y) = P(y_1) P(x_1 | y_1) ∏_{i=2}^{n} P(y_i | y_{i-1}) P(x_i | y_i) = π_{y_1} B_{y_1, x_1} ∏_{i=2}^{n} A_{y_{i-1}, y_i} B_{y_i, x_i}. (A small sketch follows.) ML in NLP 27
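Here is a minimal Python sketch of this joint probability on the toy example. The emission numbers that appear on the slide are used as-is; the initial and transition values, and the tiny probability for unseen state-word pairs, are placeholders invented for the sketch, since the real ones live only in the slide figure.

```python
# Joint probability P(x, y) of an observation sequence x and state sequence y
# under an HMM. Values marked "from the slide" are given in the toy example;
# everything marked "assumed" is a placeholder for this sketch only.

pi = {"Determiner": 1.0, "Noun": 0.0, "Verb": 0.0}                    # assumed
A = {                                                                  # assumed
    "Determiner": {"Determiner": 0.0, "Noun": 0.9, "Verb": 0.1},
    "Noun":       {"Determiner": 0.0, "Noun": 0.5, "Verb": 0.5},
    "Verb":       {"Determiner": 0.3, "Noun": 0.7, "Verb": 0.0},
}
B = {
    ("Determiner", "The"): 0.5,   # from the slide
    ("Noun", "Fed"): 0.001,       # from the slide
    ("Noun", "raises"): 0.04,     # from the slide
    ("Noun", "interest"): 0.07,   # from the slide
}

def joint_probability(xs, ys, unseen=1e-4):
    """P(x, y) = pi[y1] * B[y1, x1] * prod_i A[y_{i-1}, y_i] * B[y_i, x_i]."""
    prob = pi[ys[0]] * B.get((ys[0], xs[0]), unseen)
    for i in range(1, len(xs)):
        prob *= A[ys[i - 1]][ys[i]] * B.get((ys[i], xs[i]), unseen)
    return prob

print(joint_probability("The Fed raises interest rates".split(),
                        ["Determiner", "Noun", "Verb", "Noun", "Noun"]))
```

Under the assumed numbers the result is just a product of five emission terms and four transition terms; the point is the factorization, not the values.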

Other applications Speech recognition: input is a speech signal, output is a sequence of words. NLP applications: information extraction, text chunking. Computational biology: aligning protein sequences, labeling nucleotides in a sequence as exons, introns, etc. Questions? ML in NLP 28

Three questions for HMMs [Rabiner 1999] 1. Given an observation sequence x_1, x_2, ..., x_n and a model (π, A, B), how to efficiently calculate the probability of the observation? 2. Given an observation sequence x_1, x_2, ..., x_n and a model (π, A, B), how to efficiently calculate the most probable state sequence? (Inference) 3. How to calculate (π, A, B) from observations? (Learning) ML in NLP 29

Outline Sequence models Hidden Markov models Inference with HMM Learning Conditional Models and Local Classifiers Global models Conditional Random Fields Structured Perceptron for sequences

ML in NLP 30 Most likely state sequence Input: a hidden Markov model (π, A, B) and an observation sequence x = (x_1, x_2, ..., x_n). Output: the state sequence y = (y_1, y_2, ..., y_n) that maximizes P(y | x): maximum a posteriori (MAP) inference. Computationally: a combinatorial optimization problem. ML in NLP 31

MAP inference We want argmax_y P(y | x). We have defined P(x, y). But P(y | x) = P(x, y) / P(x), and we don't care about P(x) since we are maximizing over y. So argmax_y P(y | x) = argmax_y P(x, y). ML in NLP 32

How many possible sequences? The Fed raises interest rates. List of allowed tags for each word:
  The:      Determiner             (1)
  Fed:      Verb, Noun             (2)
  raises:   Verb, Noun             (2)
  interest: Verb, Noun             (2)
  rates:    Verb, Noun             (2)
In this simple case, 16 sequences (1×2×2×2×2). ML in NLP 33

Naïve approaches 1. Try out every sequence: score each sequence y as P(y | x, π, A, B) and return the highest scoring one. What is the problem? Correct, but slow: O(K^n). 2. Greedy search: construct the output left to right; for each i, select the best y_i using y_{i-1} and x_i. What is the problem? Incorrect but fast: O(nK). ML in NLP 34

Solution: Use the independence assumptions Recall the first-order Markov assumption: the state at token i is only influenced by the

previous state, the next state, and the token itself. Given the adjacent labels, the others do not matter. This suggests a recursive algorithm. ML in NLP 35

Jason's ice cream (# of cones): best tag sequence for P(1, 2, 1)? The model, recovered from the slide tables:
  Emissions:        p(1 | ·)  p(2 | ·)  p(3 | ·)
    C               0.5       0.4       0.1
    H               0.1       0.2       0.7
  Transitions:      p(C | ·)  p(H | ·)
    from C          0.8       0.2
    from H          0.2       0.8
    from START      0.5       0.5
(The slide also shows the two-state trellis over the three days. The likely states and transitions on each day can be used to re-estimate these probabilities; that is the "forward-backward", or Baum-Welch, algorithm.) ML in NLP 36

Deriving the recursive algorithm The joint probability factors into an initial probability, transition probabilities, and emission probabilities over the chain y_1, ..., y_n with observations x_1, ..., x_n: P(x, y) = π_{y_1} B_{y_1, x_1} ∏_{i=2}^{n} A_{y_{i-1}, y_i} B_{y_i, x_i}. Collect the only terms that depend on y_1 and abstract away the score for all decisions up to that point into a score table; then collect the only terms that depend on y_2 and abstract them away the same way, and so on down the chain. Repeating this gives the recursive algorithm. ML in NLP 37-43

Viterbi algorithm Max-product algorithm for first-order sequences, with π: initial probabilities, A: transitions, B: emissions.
1. Initial: for each state s, calculate score_1(s) = π_s B_{s, x_1}.
2. Recurrence: for i = 2 to n, for every state s, calculate score_i(s) = max_{s'} score_{i-1}(s') A_{s', s} B_{s, x_i}.
3. Final state: calculate max_s score_n(s).
This only calculates the max. To get the final answer (the argmax), keep track of which state corresponds to the max at each step and build the answer using these back pointers (a sketch follows). ML in NLP Questions? 44
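Here is a minimal Python sketch of the algorithm, run on the ice cream HMM recovered a few slides back; it is meant to illustrate the recurrence and the back pointers, not to be production code.

```python
# A minimal Viterbi sketch for the ice cream HMM above (states C and H,
# observations are cone counts); the numbers are the ones from the slide tables.

pi = {"C": 0.5, "H": 0.5}                                            # p(state | START)
A = {"C": {"C": 0.8, "H": 0.2}, "H": {"C": 0.2, "H": 0.8}}           # transitions
B = {"C": {1: 0.5, 2: 0.4, 3: 0.1}, "H": {1: 0.1, 2: 0.2, 3: 0.7}}   # emissions

def viterbi(xs):
    """Return (best probability, best state sequence) for observations xs."""
    states = list(pi)
    # score[s] = probability of the best state sequence ending in s at this position
    score = {s: pi[s] * B[s][xs[0]] for s in states}
    back = []                        # one dict of back pointers per later position
    for x in xs[1:]:
        new_score, pointers = {}, {}
        for s in states:
            prev = max(states, key=lambda sp: score[sp] * A[sp][s])
            pointers[s] = prev
            new_score[s] = score[prev] * A[prev][s] * B[s][x]
        back.append(pointers)
        score = new_score
    # follow the back pointers from the best final state
    best = max(states, key=lambda s: score[s])
    path = [best]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return score[best], list(reversed(path))

print(viterbi([1, 2, 1]))   # -> (0.032, ['C', 'C', 'C']) with these numbers
```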

General idea Dynamic programming: the best solution for the full problem relies on the best solutions to sub-problems; memoize partial computation. Examples: the Viterbi algorithm, Dijkstra's shortest-path algorithm. ML in NLP 45

Complexity of inference Complexity parameters: input sequence length n, number of states K. Memory: storing the table takes O(nK) (scores for all states at each position). Runtime: at each step, go over pairs of states, so O(nK^2). ML in NLP 46

Outline Sequence models Hidden Markov models Inference with HMM Learning Conditional Models and Local Classifiers Global models Conditional Random Fields

ML in NLP 47 Learning HMM parameters Assume we know the number of states in the HMM. Two possible scenarios: 1. We are given a data set D = {(x_i, y_i)} of sequences labeled with states, and we have to learn the parameters of the HMM (π, A, B): supervised learning with complete data. 2. We are given only a collection of sequences D = {x_i}, and we have to learn the parameters of the HMM (π, A, B): unsupervised learning, with incomplete data. ML in NLP 48

Supervised learning of HMM We are given a dataset D = {(x_i, y_i)}, where each x_i is a sequence of observations and y_i is a sequence of states that corresponds to x_i. Goal: learn the initial, transition, and emission distributions (π, A, B). How do we learn the parameters of the probability distribution? The maximum likelihood principle. (Where have we seen this before?) And we know how to write the likelihood in terms of the parameters of the HMM. ML in NLP 49

Supervised learning details π, A, B can be estimated separately just by counting, which makes learning simple and fast. [Exercise: derive the following using derivatives of the log likelihood; requires Lagrange multipliers.]
Initial probabilities: π_s = (number of instances where the first state is s) / (number of examples).
Transition probabilities: A_{s,s'} = (number of transitions from s to s') / (number of transitions out of s).
Emission probabilities: B_{s,o} = (number of times state s emits observation o) / (number of occurrences of state s).
A small counting sketch follows. ML in NLP 50
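Here is a minimal Python sketch of those counts; the tiny tagged "dataset" at the bottom is made up purely to show the expected input format (a list of (observations, states) pairs).

```python
from collections import Counter, defaultdict

def estimate_hmm(data):
    """data: list of (xs, ys) pairs, where xs are observations and ys are states.
    Returns maximum-likelihood estimates of (pi, A, B) as nested dicts."""
    init, trans, emit = Counter(), defaultdict(Counter), defaultdict(Counter)
    for xs, ys in data:
        init[ys[0]] += 1                        # first state counts
        for prev, curr in zip(ys, ys[1:]):
            trans[prev][curr] += 1              # transition counts
        for x, y in zip(xs, ys):
            emit[y][x] += 1                     # emission counts
    pi = {s: c / len(data) for s, c in init.items()}
    A = {s: {s2: c / sum(row.values()) for s2, c in row.items()}
         for s, row in trans.items()}
    B = {s: {o: c / sum(row.values()) for o, c in row.items()}
         for s, row in emit.items()}
    return pi, A, B

# Toy usage on two tiny tagged fragments (made-up data, just to show the shapes):
data = [("the koala put".split(), ["DET", "N", "V"]),
        ("the keys".split(),      ["DET", "N"])]
print(estimate_hmm(data))
```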

Outline Sequence models Hidden Markov models Inference with HMM Learning Conditional Models and Local Classifiers Global models Conditional Random Fields ML in NLP 52

Modeling next-state directly Instead of modeling the joint distribution P(x, y), only focus on P(y | x), which is what we care about eventually anyway. For sequences, there are different formulations: Maximum Entropy Markov Model [McCallum et al., 2000], Projection-based Markov Model [Punyakanok and Roth, 2001] (other names: discriminative / conditional Markov model, ...). ML in NLP 53

Generative vs. discriminative models Generative models learn P(x, y): they characterize how the data is generated (both inputs and outputs). E.g.: Naïve Bayes, Hidden Markov Model. A generative model tries to characterize the distribution of the inputs; a discriminative model doesn't care. Discriminative models learn P(y | x): they directly characterize the decision boundary only. E.g.: logistic regression, conditional models (several names). ML in NLP 54-55

Another independence assumption In an HMM, y_t depends on y_{t-1} and emits x_t; in the conditional model, y_t depends on y_{t-1} and on x_t directly. This assumption lets us write the conditional probability of the output as P(y | x) = ∏_i P(y_i | y_{i-1}, x_i). ML in NLP 56

Modeling P(y_i | y_{i-1}, x_i) Different approaches are possible: 1. Train a log-linear classifier. 2. Or, ignore the fact that we are predicting a probability; we only care about maximizing some score, so train any classifier (e.g., the perceptron algorithm). For both cases: use rich features that depend on the input and the previous state. We can increase the dependency to arbitrary neighboring x_i's, e.g., neighboring words influence this word's POS tag. ML in NLP 57

Log-linear models for multiclass Consider multiclass classification. Inputs: x. Output: y ∈ {1, 2, ..., K}. Feature representation: φ(x, y). (We have seen this before.) Define the probability of an input x taking a label y as P(y | x) = exp(w · φ(x, y)) / Σ_{y'} exp(w · φ(x, y')). Interpretation: a score for the label, converted to a well-formed probability distribution by exponentiating and normalizing.

A generalization of logistic regression to multiclass. ML in NLP 58

Training a log-linear model (multi-class) Given a data set D = {(x_i, y_i)}, apply the maximum likelihood principle: minimize the negative log-likelihood, min_w Σ_i -log P(y_i | x_i; w), maybe with a regularizer (e.g., adding λ ||w||²). ML in NLP 59

Training a log-linear model Gradient-based methods to minimize the objective; usual stochastic gradient descent: initialize w, iterate through the examples for multiple epochs, for each example take a gradient step for the loss at that example and update w, then return w. A sketch follows. ML in NLP 60
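Here is a minimal Python sketch of this training loop for a generic multiclass log-linear model. The label set, the feature map `phi`, and the four-example toy dataset are all invented for illustration; only the objective and the update rule follow the slides.

```python
import math, random
from collections import defaultdict

LABELS = ["DET", "N", "V"]                     # assumed label set for the toy example

def phi(x, y):
    """Joint feature map: conjoin simple input features with the label (assumed)."""
    return {("word=" + x, y): 1.0, ("suffix=" + x[-2:], y): 1.0}

def score(w, x, y):
    return sum(w[f] * v for f, v in phi(x, y).items())

def probs(w, x):
    s = {y: math.exp(score(w, x, y)) for y in LABELS}
    z = sum(s.values())
    return {y: s[y] / z for y in LABELS}

def train(data, epochs=20, lr=0.1):
    w = defaultdict(float)
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            p = probs(w, x)
            # gradient of -log P(y|x): expected features minus observed features
            for y2 in LABELS:
                for f, v in phi(x, y2).items():
                    w[f] -= lr * p[y2] * v
            for f, v in phi(x, y).items():
                w[f] += lr * v
    return w

data = [("the", "DET"), ("koala", "N"), ("put", "V"), ("keys", "N")]
w = train(data)
print(max(LABELS, key=lambda y: score(w, "koala", y)))   # -> "N" after training
```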

Back to sequences The next-state model: in an HMM, y_t depends on y_{t-1} and emits x_t; in the conditional model, y_t depends on y_{t-1} and on x_t. This assumption lets us write the conditional probability of the output as P(y | x) = ∏_i P(y_i | y_{i-1}, x_i); we need to learn this function. ML in NLP 61

Maximum Entropy Markov Model Goal: compute P(y | x). Running example (from the slide figure): start → Determiner → Noun → Verb → Noun → Noun over the words "The Fed raises interest rates". The prediction task: using the entire input and the current label, predict the next label. ML in NLP 62

To model the probability, we first need to define features for the current classification problem, e.g. (recovered from the slide table):
  word       Caps   -es   Previous
  The        Y      N     start
  Fed        Y      N     Determiner
  raises     N      Y     Noun
  interest   N      N     Verb
  rates      N      N     Noun
These feed the feature functions φ(x, 0, start, y_0), φ(x, 1, y_0, y_1), φ(x, 2, y_1, y_2), φ(x, 3, y_2, y_3), φ(x, 4, y_3, y_4). Can get very creative here. Compare to HMM: the HMM only depends on the word and the previous tag. (A feature-function sketch follows.) ML in NLP Questions? 63-67
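Here is a minimal Python sketch of such a feature function for the next-state classifier. The specific features (word identity, capitalization, an "-es" suffix check, previous tag, and a peek at the next word) mirror the table above but are otherwise an arbitrary choice.

```python
def memm_features(words, i, prev_tag, tag):
    """Features phi(x, i, y_{i-1}, y_i) for predicting the tag at position i.
    Conjoins properties of the whole input and the previous tag with the candidate tag."""
    w = words[i]
    base = {
        "word=" + w: 1.0,
        "caps=" + str(w[0].isupper()): 1.0,
        "suffix_es=" + str(w.endswith("es")): 1.0,
        "prev_tag=" + prev_tag: 1.0,
        # unlike an HMM, we can freely look at neighboring words too:
        "next_word=" + (words[i + 1] if i + 1 < len(words) else "</s>"): 1.0,
    }
    # conjoin every input feature with the candidate tag
    return {(name, tag): value for name, value in base.items()}

words = "The Fed raises interest rates".split()
print(memm_features(words, 2, "Noun", "Verb"))
```

Plugging features like these into the multiclass log-linear trainer sketched earlier yields the locally normalized next-state classifier P(y_i | y_{i-1}, x).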

Using MEMM Training: train the next-state predictor locally as maximum likelihood, similar to any maximum entropy classifier. Prediction/decoding: modify the Viterbi algorithm for the new independence assumptions (HMM recurrence vs. conditional Markov model recurrence). ML in NLP 68

Generalization: Any multiclass classifier For Viterbi decoding we only need a score for each decision. So far we used probabilistic classifiers; in general, use any learning algorithm to get a score for the label y_i given y_{i-1} and x, e.g. multiclass versions of perceptron or SVM. Just like the MEMM, these allow arbitrary features to be defined. Exercise: Viterbi needs to be re-defined to work with a sum of scores rather than a product of probabilities. ML in NLP 69

Comparison to HMM What we gain: 1. A rich feature representation for inputs: helps generalize better by thinking about properties of the input tokens rather than the entire tokens. E.g.: if a word ends with -es, it might be a present-tense verb (such as "raises"); this could be a feature, and an HMM cannot capture it. 2. A discriminative predictor: model P(y | x) rather than P(y, x); joint vs. conditional. ML in NLP

70 Outline Sequence models Hidden Markov models Inference with HMM Learning Conditional Models and Local Classifiers Global models Conditional Random Fields ML in NLP 71 Outline Conditional models for predicting sequences Log-linear models for multiclass classification Maximum Entropy Markov Models The Label Bias Problem

ML in NLP 72 The next-state model for sequences In an HMM, y_t depends on y_{t-1} and emits x_t; in the conditional model, y_t depends on y_{t-1} and x_t. This assumption lets us write the conditional probability of the output as P(y | x) = ∏_i P(y_i | y_{i-1}, x_i): we need to train local multiclass classifiers that predict the next state given the previous state and the input. ML in NLP 73

Label bias problem (local classifiers!) Let's look at the independence assumption. Next-state classifiers are locally normalized. E.g., part-of-speech tagging the sentence "The robot wheels are round", where (as in the slide lattice) these are the only state transitions allowed: The/D → robot/N with probability 1; from N, wheels/N with probability 0.8 or wheels/V with probability 0.2; each branch then continues with probability-1 transitions (are/V → round/A on the first branch, are/N → round/R on the second). Example based on [Wallach 2002].
Option 1: P(D | The) P(N | D, robot) P(N | N, wheels) P(V | N, are) P(A | V, round)
Option 2: P(D | The) P(N | D, robot) P(V | N, wheels) P(N | V, are) P(R | N, round)
ML in NLP 74-76

Label bias problem (local classifiers) The path scores are the same. Even if the word "Fred" is never observed as a verb in the data,

it will be predicted as one; the input "Fred" does not influence the output at all. ML in NLP 77

Example: Label bias problem The slide shows a table of locally normalized next-state probabilities, with rows indexed by the previous state and the current observation (e.g. (C, 0), (C, 1), (C, 5), (H, 1), (H, 2)) and columns for the possible next states, including rows for rare events such as "Very hot" and a "Dinosaur attack". In an MEMM, even if you've never seen this event, you still need to make this row sum up to 1. For a CRF, there is no such restriction. ML in NLP 78

Label bias States with a single outgoing transition effectively ignore their input; states with lower-entropy next-state distributions are less influenced by observations. Why? Each of the next-state classifiers is locally normalized: if a state has fewer next states, each of those gets a higher probability mass and is hence preferred. Side note: surprisingly, this doesn't affect some tasks, e.g., part-of-speech tagging. ML in NLP 79

Summary: Local models for sequences Conditional models use rich features in the model, but possibly suffer from the label bias problem. ML in NLP 80

Outline Sequence models Hidden Markov models Inference with HMM Learning Conditional Models and Local Classifiers Global models Conditional Random Fields ML in NLP 81-82

So far Hidden Markov models. Pros: decomposition of the total probability with tractable inference. Cons: doesn't allow the use of features for representing inputs; also, it is a generative model (not really a downside, but we may get better performance with conditional models if we care only about predictions). Local, conditional Markov models. Pros: conditional model, allows features to be used. Cons: label bias problem. ML in NLP

83 Global models Train the predictor globally, instead of training local decisions independently. Normalize globally: make each edge in the model undirected, associated not with a probability but just with a score. Recall the difference between local vs. global training for multiclass. ML in NLP 84

HMM vs. a local model vs. a global model HMM (generative): P(y_t | y_{t-1}) and P(x_t | y_t). Conditional model (discriminative, local): P(y_t | y_{t-1}, x_t), where P is locally normalized to add up to one for each t. Global model (discriminative): scores f_T(y_t, y_{t-1}) and f_E(y_t, x_t) that are not normalized. ML in NLP 85-87

Conditional Random Field The model is a chain of random variables y_0, y_1, y_2, y_3 connected to the input x. Each node is a random variable; we observe some nodes, and the rest are unobserved. The goal: to characterize a probability distribution over the unobserved variables, given the observed ones. ML in NLP 88

Each node is a random variable; we observe some nodes and need to assign the rest. Each clique is associated with a score: score(x, y_0, y_1), score(x, y_1, y_2), score(x, y_2, y_3), written with features as φ(x, y_0, y_1), φ(x, y_1, y_2), φ(x, y_2, y_3). ML in NLP 89-90

Conditional Random Field: Factor graph Each clique's score is attached to a factor. A different factorization is possible: recall the decomposition of structures into parts; it is the same idea. Each node is a random variable, we observe some nodes and need to assign the rest, and each factor is associated with a score. ML in NLP 91-93

Conditional Random Field for sequences P(y | x) = (1/Z) ∏_i exp(w · φ(x, y_{i-1}, y_i)) = (1/Z) exp(Σ_i w · φ(x, y_{i-1}, y_i)), where Z is the normalizing constant, a sum over all sequences: Z = Σ_{y'} exp(Σ_i w · φ(x, y'_{i-1}, y'_i)). ML in NLP 94

CRF: A different view Input: x; output: y; both are sequences (for now). Define a feature vector for the entire input and output sequence, Φ(x, y), and a giant log-linear model P(y | x) parameterized by w: P(y | x) = (1/Z) exp(w · Φ(x, y)). Just like any other log-linear model, except: the space of y is the set of all possible sequences of the correct length, and the normalization constant sums over all sequences. (In an MEMM, probabilities were locally normalized.) ML in NLP 95

Global features The feature function decomposes over the sequence: Φ(x, y) = Σ_i φ(x, y_{i-1}, y_i). ML in NLP 96

Prediction Goal: to predict the most probable sequence y for an input x, argmax_y P(y | x) = argmax_y w · Φ(x, y). But the score decomposes as w · Φ(x, y) = Σ_i w · φ(x, y_{i-1}, y_i), so prediction is via Viterbi (with a sum of scores instead of a product of probabilities). ML in NLP 97

Training a chain CRF Input: a dataset with labeled sequences, D = {(x_i, y_i)}, and a definition of the feature function. How do we train? Maximize the (regularized) log-likelihood. Recall: empirical loss minimization. The objective and its gradient are sketched below. ML in NLP 98
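To make "maximize the (regularized) log-likelihood" concrete, here is the usual form of the chain-CRF objective and its gradient in the notation above; λ is an assumed regularization weight, and this is the standard textbook expression rather than a formula taken from the slides.

\[
\max_{w} \;\; \sum_{i} \Big[\, w \cdot \Phi(x_i, y_i) - \log Z(x_i) \,\Big] \;-\; \lambda \lVert w \rVert^2,
\qquad
Z(x_i) = \sum_{y'} \exp\!\big(w \cdot \Phi(x_i, y')\big)
\]

\[
\nabla_w \;=\; \sum_{i} \Big[\, \Phi(x_i, y_i) \;-\; \mathbb{E}_{y' \sim P(\cdot \mid x_i)}\, \Phi(x_i, y') \,\Big] \;-\; 2\lambda w
\]

The expectation term is exactly why "training involves inference": computing it (and log Z) requires summing over all sequences, which the next slide reduces to a Viterbi-like dynamic program.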

Training with inference Many methods for training: numerical optimization, using a gradient- or Hessian-based method, or simple gradient ascent. Training involves inference! A different kind than what we have seen so far: summing over all sequences is just like Viterbi, with summation instead of maximization. A sketch of this sum-product recurrence follows. ML in NLP 99-100
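Here is a minimal Python sketch of that summation: the forward (sum-product) recurrence for log Z of a chain model. The `score` argument stands in for w · φ(x, i, y_{i-1}, y_i); the demo call at the bottom uses a dummy scorer that returns 0 for everything.

```python
import math

def forward_log_partition(xs, labels, score):
    """log Z(x) for a chain model, by the forward (sum-product) recurrence.

    `score(xs, i, prev, curr)` should return w . phi(x, i, y_{i-1}, y_i);
    it is a placeholder for whatever features and weights you use.
    Position 0 uses prev = "start"."""
    # alpha[s] = log sum of exp(total score) over all prefixes ending in state s
    alpha = {s: score(xs, 0, "start", s) for s in labels}
    for i in range(1, len(xs)):
        alpha = {
            s: _logsumexp([alpha[sp] + score(xs, i, sp, s) for sp in labels])
            for s in labels
        }
    return _logsumexp(list(alpha.values()))

def _logsumexp(vals):
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

# Toy usage with a dummy uniform scorer: log Z is then n * log(K).
print(forward_log_partition(["The", "Fed", "raises"], ["D", "N", "V"],
                            lambda xs, i, prev, curr: 0.0))
# -> 3 * log(3) ~= 3.2958
```

Replacing the log-sum-exp with a max recovers exactly the score-based Viterbi recursion from earlier, which is the sense in which "summing over all sequences is just like Viterbi".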

CRF summary An undirected graphical model. Decompose the score over the structure into a collection of factors; each factor assigns a score to an assignment of the random variables it is connected to. Training and prediction: the final prediction is via argmax_y w · Φ(x, y); train by maximum (regularized) likelihood (which also needs inference). Relation to other models: effectively a linear classifier; a generalization of logistic regression to structures; an instance of a Markov Random Field with some random variables observed (we will see this soon). ML in NLP 101

From generative models to CRF [Figure from Sutton and McCallum, 05] ML in NLP 102

General CRFs The graph need not be a chain: for example, variables y_1, y_2, y_3 over inputs x_1, x_2, x_3 with factors φ(x_1, y_1), φ(y_1, y_2, y_3), φ(x_3, y_2, y_3), and φ(x_1, x_2, y_2), so that Φ(x, y) = φ(x_1, y_1) + φ(y_1, y_2, y_3) + φ(x_3, y_2, y_3) + φ(x_1, x_2, y_2). ML in NLP 103-104

Computational questions 1. Learning: given a training set {(x_i, y_i)}, train via maximum likelihood (typically regularized); we need to compute the partition function during training. 2. Prediction: go over all possible assignments to the y's and find the one with the highest probability/score. ML in NLP 105-107

Inference in graphical models In general, compute the probability of a subset of states, P(x_A), for some subset of random variables x_A (the slide shows a small example graph over five nodes). Exact inference: Variable elimination: marginalize by summing out variables in a good order (think about what we did for Viterbi; what makes an ordering good?). Belief propagation (exact only for graphs without loops): nodes pass messages to each other about their estimate of what the neighbors' states should be; generally efficient for trees and sequences (and maybe other graphs too). Exact inference is NP-hard in general, but works for simple graphs. Approximate inference: Markov Chain Monte Carlo (Gibbs sampling / Metropolis-Hastings); variational algorithms (frame inference as an optimization problem, perturb it to an approximate one, and solve the approximate problem); loopy belief propagation (run BP and hope it works!). ML in NLP 108-109
