
A Short Tutorial on the Expectation-Maximization Algorithm


Detlef Prescher
Institute for Logic, Language and Computation, University of Amsterdam
prescher@science.uva.nl

1 Introduction

The paper gives a brief review of the expectation-maximization algorithm (Dempster, Laird, and Rubin 1977) in the comprehensible framework of discrete mathematics. In Section 2, two prominent estimation methods, relative-frequency estimation and maximum-likelihood estimation, are presented. Section 3 is dedicated to the expectation-maximization algorithm and a simpler variant, the generalized expectation-maximization algorithm. In Section 4, two loaded dice are rolled. Enjoy!

2 Estimation Methods

A statistics problem is a problem in which a corpus (see Footnote 1) that has been generated in accordance with some unknown probability distribution must be analyzed and some type of inference about the unknown distribution must be made. In other words, in a statistics problem there is a choice between two or more probability distributions which might have generated the corpus. In practice, there is often an infinite number of different possible distributions (statisticians bundle these into one single probability model) which might have generated the corpus. By analyzing the corpus, an attempt is made to learn about the unknown distribution. So, on the basis of the corpus, an estimation method selects one instance of the probability model, thereby aiming at finding the original distribution. In this section, two common estimation methods, relative-frequency estimation and maximum-likelihood estimation, are presented.

Footnote 1: Statisticians use the term sample, but computational linguists prefer the term corpus.

Corpora
Definition 1 Let X be a countable set. A real-valued function f : X → R is called a corpus if f's values are non-negative numbers

f(x) ≥ 0 for all x ∈ X

Each x ∈ X is called a type, and each value of f is called a type frequency. The corpus size (see Footnote 2) is defined as

|f| = Σ_{x∈X} f(x)

Finally, a corpus is called non-empty and finite if 0 < |f| < ∞.

In this definition, type frequencies are defined as non-negative real numbers. The reason for not taking natural numbers is that some statistical estimation methods define type frequencies as weighted occurrence frequencies (which are not natural but non-negative real numbers). Later on, in the context of the EM algorithm, this point will become clear. Note also that a finite corpus might consist of an infinite number of types with positive frequencies. The following definition shows that Definition 1 covers the standard notion of the term corpus (used in Computational Linguistics) and of the term sample (used in Statistics).

Definition 2 Let x1, ..., xn be a finite sequence of type instances from X. Each xi of this sequence is called a token. The occurrence frequency of a type x in the sequence is defined as the following count

f(x) = |{ i | xi = x }|

Obviously, f is a corpus in the sense of Definition 1, and it has the following properties: The type x does not occur in the sequence if f(x) = 0; in any other case there are f(x) tokens in the sequence which are identical to x. Moreover, the corpus size |f| is identical to n, the number of tokens in the sequence.
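As a small illustration of Definition 2 (my own sketch, not part of the original paper, using Python), the occurrence frequencies of a token sequence can be counted as follows; the assertions restate the listed properties.

```python
from collections import Counter

def corpus_from_tokens(tokens):
    """Occurrence frequencies f(x) = |{ i | x_i = x }| of the types in a token sequence."""
    return Counter(tokens)

tokens = ["a", "b", "b", "c", "c", "c"]
f = corpus_from_tokens(tokens)
assert f["b"] == 2 and f["d"] == 0     # a type not occurring in the sequence has frequency 0
assert sum(f.values()) == len(tokens)  # the corpus size |f| equals the number of tokens n
```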

Relative-Frequency Estimation
Let us first present the notion of probability that we use throughout this paper.

Definition 3 Let X be a countable set of types. A real-valued function p : X → R is called a probability distribution on X if p has two properties: First, p's values are non-negative numbers

p(x) ≥ 0 for all x ∈ X

and second, p's values sum to one

Σ_{x∈X} p(x) = 1

Readers familiar with probability theory will certainly note that we use the term probability distribution in a sloppy way (Duda et al. (2001), page 611, introduce the term probability mass function instead). Standardly, probability distributions allocate a probability value p(A) to subsets A ⊆ X, so-called events of an event space X, such that three specific axioms are satisfied (see e.g. DeGroot (1989)):

Axiom 1 p(A) ≥ 0 for any event A.

Axiom 2 p(X) = 1.

Axiom 3 p(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ p(A_i) for any infinite sequence of disjoint events A1, A2, A3, ...

Footnote 2: Note that the corpus size |f| is well-defined: the order of summation is not relevant for the value of the (possibly infinite) series Σ_{x∈X} f(x), since the types are countable and the type frequencies are non-negative numbers.

[Figure 1: Maximum-likelihood estimation and relative-frequency estimation. Maximum-likelihood estimation takes as input a corpus of data and a probability model, and outputs an instance of the probability model maximizing the corpus probability. Relative-frequency estimation takes as input only a corpus of data, and outputs the probability distribution comprising the relative frequencies of the corpus types.]

Now, however, note that the probability distributions introduced in Definition 3 induce rather naturally the following probabilities for events A ⊆ X

p(A) := Σ_{x∈A} p(x)

Using the properties of p(x), we can easily show that the probabilities p(A) satisfy the three axioms of probability theory. So, Definition 3 is justified and thus, for the rest of the paper, we are allowed to put axiomatic probability theory out of our minds.

Definition 4 Let f be a non-empty and finite corpus. The probability distribution p̃ : X → [0, 1] where

p̃(x) = f(x) / |f|

is called the relative-frequency estimate on f.

Relative-frequency estimation is the most comprehensible estimation method and has some nice properties, which will be discussed in the context of the more general maximum-likelihood estimation. For now, however, note that p̃ is well defined, since both |f| > 0 and |f| < ∞. Moreover, it is easy to check that p̃'s values sum to one:

Σ_{x∈X} p̃(x) = Σ_{x∈X} (1/|f|) · f(x) = (1/|f|) · Σ_{x∈X} f(x) = (1/|f|) · |f| = 1
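The relative-frequency estimate of Definition 4 is easy to compute; the following sketch (my own illustration, not from the paper) normalizes a corpus given as a dictionary from types to frequencies and checks that the resulting values sum to one.

```python
def relative_frequency_estimate(f):
    """p~(x) = f(x) / |f| for a non-empty and finite corpus f (a dict mapping types to frequencies)."""
    size = sum(f.values())                 # the corpus size |f|
    if not 0 < size < float("inf"):
        raise ValueError("the corpus must be non-empty and finite")
    return {x: freq / size for x, freq in f.items()}

p_tilde = relative_frequency_estimate({"a": 2, "b": 3, "c": 5})
assert abs(sum(p_tilde.values()) - 1.0) < 1e-12
assert p_tilde["c"] == 0.5
```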

Maximum-Likelihood Estimation
Maximum-likelihood estimation was introduced by R. A. Fisher in 1912, and will typically yield an excellent estimate if the given corpus is large. Most notably, maximum-likelihood estimators fulfill the so-called invariance principle and, under certain conditions which are typically satisfied in practical problems, they are even consistent estimators (DeGroot 1989). For these reasons, maximum-likelihood estimation is probably the most widely used estimation method. Now, unlike relative-frequency estimation, maximum-likelihood estimation is a fully-fledged estimation method that aims at selecting an instance of a given probability model which might have originally generated the given corpus. By contrast, the relative-frequency estimate is defined on the basis of a corpus only (see Definition 4). Figure 1 reveals the conceptual difference between the two estimation methods. In what follows, we will pay some attention to the single setting in which we are exceptionally allowed to mix up both methods (see Theorem 1). Let us start, however, by presenting the notion of a probability model.

Definition 5 A non-empty set M of probability distributions on a set X of types is called a probability model on X. The elements of M are called instances of the model M. The unrestricted probability model is the set M(X) of all probability distributions on the set of types

M(X) = { p : X → [0, 1] | Σ_{x∈X} p(x) = 1 }

A probability model M is called restricted in all other cases: M ⊆ M(X) and M ≠ M(X).

In practice, most probability models are restricted since their instances are often defined on a set X comprising multi-dimensional types such that certain parts of the types are statistically independent (see Examples 4 and 5). Here is another side note: We already checked that the relative-frequency estimate is a probability distribution, meaning in terms of Definition 5 that the relative-frequency estimate is an instance of the unrestricted probability model. So, from an extreme point of view, relative-frequency estimation might also be regarded as a fully-fledged estimation method exploiting a corpus and a probability model (namely, the unrestricted model). In the following, we define maximum-likelihood estimation as a method that aims at finding an instance of a given model which maximizes the probability of a given corpus. Later on, we will see that maximum-likelihood estimates have an additional property: They are the instances of the given probability model that have a "minimal distance" to the relative frequencies of the types in the corpus (see Theorem 2). So, indeed, maximum-likelihood estimates can be intuitively thought of in the intended way: They are the instances of the probability model that might have originally generated the corpus.

Definition 6 Let f be a non-empty and finite corpus on a countable set X of types. Let M be a probability model on X. The probability of the corpus allocated by an instance p of the model M is defined as

L(f; p) = Π_{x∈X} p(x)^{f(x)}

An instance p̂ of the model M is called a maximum-likelihood estimate of M on f if and only if the corpus f is allocated a maximum probability by p̂

L(f; p̂) = max_{p∈M} L(f; p)

(Based on continuity arguments, we use the convention that p^0 = 1 and 0^0 = 1.)
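For concreteness, the corpus probability L(f; p) of Definition 6 can be evaluated in log-space, which avoids numerical underflow for large corpora. This is a sketch of my own (not the paper's code); types with zero frequency contribute the factor p(x)^0 = 1 and are skipped.

```python
import math

def log2_corpus_probability(f, p):
    """log2 L(f; p) = sum_x f(x) * log2 p(x); returns -inf if an observed type has probability zero."""
    total = 0.0
    for x, freq in f.items():
        if freq == 0:
            continue                      # convention p(x)**0 = 1
        prob = p.get(x, 0.0)
        if prob == 0.0:
            return float("-inf")          # convention 0**f(x) = 0 for f(x) > 0
        total += freq * math.log2(prob)
    return total

f = {"a": 2, "b": 3, "c": 5}
uniform = {x: 1 / 3 for x in f}
p_tilde = {"a": 0.2, "b": 0.3, "c": 0.5}  # the relative-frequency estimate on f
assert log2_corpus_probability(f, p_tilde) > log2_corpus_probability(f, uniform)
```

As Theorem 1 below guarantees, the relative-frequency estimate allocates the corpus a higher probability than the uniform distribution does.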

[Figure 2: Maximum-likelihood estimation and relative-frequency estimation yield for some "exceptional" probability models the same estimate (p̃ = p̂). These models are lightly restricted or even unrestricted models M ⊆ M(X) that contain an instance comprising the relative frequencies of all corpus types (left-hand side). In practice, however, most probability models will not behave like that. So, maximum-likelihood estimation and relative-frequency estimation yield in most cases different estimates (p̃ ≠ p̂). As a further and more serious consequence, the maximum-likelihood estimates then have to be searched for by genuine optimization procedures (right-hand side).]

By looking at this definition, we recognize that maximum-likelihood estimates are the solutions of a quite complex optimization problem. So, some nasty questions about maximum-likelihood estimation arise:

Existence Is there for any probability model and any corpus a maximum-likelihood estimate of the model on the corpus?

Uniqueness Is there for any probability model and any corpus a unique maximum-likelihood estimate of the model on the corpus?

Computability For which probability models and corpora can maximum-likelihood estimates be efficiently computed?

For some probability models M, the following theorem gives a positive answer.

Theorem 1 Let f be a non-empty and finite corpus on a countable set X of types. Then:

(i) The relative-frequency estimate p̃ is the unique maximum-likelihood estimate of the unrestricted probability model M(X) on f.

(ii) The relative-frequency estimate p̃ is a maximum-likelihood estimate of a (restricted or unrestricted) probability model M on f if and only if p̃ is an instance of the model M. In this case, p̃ is the unique maximum-likelihood estimate of M on f.

Proof Ad (i): Combine Theorems 2 and 3. Ad (ii): "⇒" is trivial; "⇐" follows from (i). q.e.d.

At first glance, proposition (ii) seems to be more general than proposition (i), since proposition (i) is about one single probability model, the unrestricted model, whereas proposition (ii) gives some insight about the relation of the relative-frequency estimate to a maximum-likelihood estimate of arbitrary restricted probability models (see also Figure 2). Both propositions, however, are equivalent. As we will show later on, proposition (i) is equivalent to the famous information inequality of information theory, for which various proofs have been given in the literature.

Example 1 On the basis of the following corpus

f(a) = 2, f(b) = 3, f(c) = 5

we shall calculate the maximum-likelihood estimate of the unrestricted probability model M({a, b, c}), as well as the maximum-likelihood estimate of the restricted probability model

M = { p ∈ M({a, b, c}) | p(a) = 0.5 }

The solution is instructive, but is left to the reader.

The Information Inequality of Information Theory
Definition 7 The relative entropy D(p || q) of the probability distribution p with respect to the probability distribution q is defined by

D(p || q) = Σ_{x∈X} p(x) log ( p(x) / q(x) )

(Based on continuity arguments, we use the convention that 0 log (0/q) = 0, p log (p/0) = ∞, and 0 log (0/0) = 0. The logarithm is calculated with respect to the base 2.)

Connecting maximum-likelihood estimation with the concept of relative entropy, the following theorem gives the important insight that the relative entropy of the relative-frequency estimate is minimal with respect to a maximum-likelihood estimate.

Theorem 2 Let p̃ be the relative-frequency estimate on a non-empty and finite corpus f, and let M be a probability model on the set X of types. Then: An instance p̂ of the model M is a maximum-likelihood estimate of M on f if and only if the relative entropy of p̃ is minimal with respect to p̂

D(p̃ || p̂) = min_{p∈M} D(p̃ || p)

Proof First, the relative entropy D(p̃ || p) is simply the difference of two further entropy values, the so-called cross-entropy

H(p̃; p) = − Σ_{x∈X} p̃(x) log p(x)

and the entropy of the relative-frequency estimate

H(p̃) = − Σ_{x∈X} p̃(x) log p̃(x)

that is,

D(p̃ || p) = H(p̃; p) − H(p̃)

(Based on continuity arguments and in full agreement with the convention used in Definition 7, we use here that p log 0 = −∞ and 0 log 0 = 0.) It follows that minimizing the relative entropy is equivalent to minimizing the cross-entropy (as a function of the instances p of the given probability model M). The cross-entropy, however, is proportional to the negative log-probability of the corpus f

H(p̃; p) = − (1/|f|) · log L(f; p)

So, finally, minimizing the relative entropy D(p̃ || p) is equivalent to maximizing the corpus probability L(f; p) (see Footnote 3).

Together with Theorem 2, the following theorem, the so-called information inequality of information theory, proves Theorem 1. The information inequality states simply that the relative entropy is a non-negative number, which is zero if and only if the two probability distributions are equal.

Theorem 3 (Information Inequality) Let p and q be two probability distributions. Then

D(p || q) ≥ 0

with equality if and only if p(x) = q(x) for all x ∈ X.

Proof See, e.g., Cover and Thomas (1991), page 26.
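The identities used in the proof of Theorem 2 are easy to check numerically. The sketch below is my own illustration (under the stated 0 log 0 = 0 convention): it computes the cross-entropy H(p̃; p), the relative entropy D(p̃ || p), and verifies that H(p̃; p) = −(1/|f|) · log2 L(f; p).

```python
import math

def cross_entropy(p_tilde, p):
    """H(p~; p) = - sum_x p~(x) * log2 p(x), with the convention 0 * log 0 = 0."""
    return -sum(w * math.log2(p[x]) for x, w in p_tilde.items() if w > 0)

def relative_entropy(p_tilde, p):
    """D(p~ || p) = H(p~; p) - H(p~)."""
    return cross_entropy(p_tilde, p) - cross_entropy(p_tilde, p_tilde)

f = {"a": 2, "b": 3, "c": 5}
size = sum(f.values())
p_tilde = {x: v / size for x, v in f.items()}            # relative-frequency estimate on f
p = {"a": 0.25, "b": 0.25, "c": 0.5}                     # some other model instance

log2_L = sum(v * math.log2(p[x]) for x, v in f.items())  # log2 of the corpus probability L(f; p)
assert abs(cross_entropy(p_tilde, p) - (-log2_L / size)) < 1e-12
assert relative_entropy(p_tilde, p) >= 0                 # the information inequality (Theorem 3)
assert relative_entropy(p_tilde, p_tilde) == 0           # ... with equality exactly for p = p~
```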

*Maximum-Entropy Estimation
Readers only interested in the expectation-maximization algorithm are encouraged to omit this section. For completeness, however, note that the relative entropy is asymmetric; that means, in general,

D(p || q) ≠ D(q || p)

It is easy to check that the triangle inequality is not valid either. So, the relative entropy D(.||.) is not a "true" distance function. On the other hand, D(.||.) has some of the properties of a distance function. In particular, it is always non-negative and it is zero if and only if p = q (see Theorem 3). So far, however, we aimed at minimizing the relative entropy with respect to its second argument, filling the first argument slot of D(.||.) with the relative-frequency estimate p̃. Obviously, these observations raise the question whether it is also possible to derive other "good" estimates by minimizing the relative entropy with respect to its first argument. So, in terms of Theorem 2, it might be interesting to ask for model instances p* ∈ M with

D(p* || p̃) = min_{p∈M} D(p || p̃)

Footnote 3: For completeness, note that the perplexity of a corpus f allocated by a model instance p is defined as perp(f; p) = 2^{H(p̃;p)}. This yields perp(f; p) = (1/L(f; p))^{1/|f|} and L(f; p) = (1/perp(f; p))^{|f|}, as well as the common interpretation that the perplexity value measures the complexity of the given corpus from the model instance's view: the perplexity is equal to the size of an imaginary word list from which the corpus can be generated by the model instance, assuming that all words on this list are equally probable. Moreover, the equations state that minimizing the corpus perplexity perp(f; p) is equivalent to maximizing the corpus probability L(f; p).

For at least two reasons, however, this initial approach of relative-entropy estimation is too simplistic. First, it is tailored to probability models that lack any generalization power. Second, it does not provide deeper insight when estimating constrained probability models. Here are the details:

7

• A closer look at Definition 7 reveals that the relative entropy D(p || p̃) is finite only for those model instances p ∈ M that fulfill

p̃(x) = 0 ⇒ p(x) = 0

So, the initial approach would lead to model instances that are completely unable to generalize, since they are not allowed to allocate positive probabilities to at least some of the types not seen in the training corpus.

• Theorem 2 guarantees that the relative-frequency estimate p̃ is a solution to the initial approach of relative-entropy estimation whenever p̃ ∈ M. Now, Definition 8 introduces the constrained probability models M_constr, and indeed, it is easy to check that p̃ is always an instance of these models. In other words, estimating constrained probability models by the approach above does not result in interesting model instances.

Clearly, all the mentioned drawbacks are due to the fact that the relative-entropy minimization is performed with respect to the relative-frequency estimate. As a remedy, we simply switch to a more convenient reference distribution, thereby formally generalizing the initial problem setting. So, as the final request, we ask for model instances p* ∈ M with

D(p* || p0) = min_{p∈M} D(p || p0)

In this setting, the reference distribution p0 ∈ M(X) is a given instance of the unrestricted probability model; from what we have seen so far, p0 should allocate all types of interest a positive probability, and moreover, p0 should not itself be an instance of the probability model M. Indeed, this request will lead us to the interesting maximum-entropy estimates. Note first that

D(p || p0) = H(p; p0) − H(p)

So, minimizing D(p || p0) as a function of the model instances p is equivalent to minimizing the cross entropy H(p; p0) and simultaneously maximizing the model entropy H(p). Now, simultaneous optimization is a hard task in general, and this gives reason to focus firstly on maximizing the entropy H(p) in isolation. The following definition presents maximum-entropy estimation in terms of the well-known maximum-entropy principle (Jaynes 1957). Sloppily formulated, the maximum-entropy principle recommends to maximize the entropy H(p) as a function of the instances p of certain "constrained" probability models.

Definition 8 Let f1, ..., fd be a finite number of real-valued functions on a set X of types, the so-called feature functions (see Footnote 4). Let p̃ be the relative-frequency estimate on a non-empty and finite corpus f on X. Then, the probability model constrained by the expected values of f1, ..., fd on f is defined as

M_constr = { p ∈ M(X) | E_p f_i = E_p̃ f_i for i = 1, ..., d }

Here, each E_p f_i is the model instance's expectation of f_i

E_p f_i = Σ_{x∈X} p(x) f_i(x)

constrained to match E_p̃ f_i, the observed expectation of f_i

E_p̃ f_i = Σ_{x∈X} p̃(x) f_i(x)

Footnote 4: Each of these feature functions can be thought of as being constructed by inspecting the set of types, thereby measuring a specific property of the types x ∈ X. For example, if working in a formal-grammar framework, then it might be worthwhile to look (at least) at some feature functions f_r directly associated to the rules r of the given formal grammar. The "measure" f_r(x) of a specific rule r for the analyses x ∈ X of the grammar might be calculated, for example, in terms of the occurrence frequency of r in the sequence of those rules which are necessary to produce x. For instance, Chi (1999) studied this approach for the context-free grammar formalism. Note, however, that there is in general no recipe for constructing "good" feature functions: Often, it is really an intellectual challenge to find those feature functions that describe the given data as well as possible (or at least in a satisfying manner).

Furthermore, a model instance p* ∈ M_constr is called a maximum-entropy estimate of M_constr if and only if

H(p*) = max_{p∈M_constr} H(p)

It is well known that the maximum-entropy estimates have some nice properties. For example, as Definition 9 and Theorem 4 show, they can be identified as the unique maximum-likelihood estimates of the so-called exponential models (which are also known as log-linear models).

Definition 9 Let f1, ..., fd be a finite number of feature functions on a set X of types. The exponential model of f1, ..., fd is defined by

M_exp = { p ∈ M(X) | p(x) = (1/Z_λ) · e^{λ1 f1(x) + ... + λd fd(x)} with λ1, ..., λd, Z_λ ∈ R }

Here, the normalizing constant Z_λ (with λ as a short form for the sequence λ1, ..., λd) guarantees that p ∈ M(X), and it is given by

Z_λ = Σ_{x∈X} e^{λ1 f1(x) + ... + λd fd(x)}
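To make Definition 9 concrete, the sketch below (my own illustration; the two feature functions are made up for the example) evaluates an exponential model instance on a small finite set of types.

```python
import math

def exponential_model(X, features, lambdas):
    """p(x) = (1/Z_lambda) * exp(lambda_1*f_1(x) + ... + lambda_d*f_d(x)) on a finite set X."""
    scores = {x: math.exp(sum(l * f(x) for l, f in zip(lambdas, features))) for x in X}
    Z = sum(scores.values())                     # the normalizing constant Z_lambda
    return {x: s / Z for x, s in scores.items()}

# Two hypothetical feature functions on the types {"a", "b", "c"}.
f1 = lambda x: 1.0 if x == "a" else 0.0
f2 = lambda x: float(len(x))
p = exponential_model({"a", "b", "c"}, [f1, f2], [0.7, -0.1])
assert abs(sum(p.values()) - 1.0) < 1e-12
assert p["a"] > p["b"] == p["c"]                 # only f1 separates the types here
```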

Theorem 4 Let f be a non-empty and finite corpus, and let f1, ..., fd be a finite number of feature functions on a set X of types. Then:

(i) The maximum-entropy estimates of M_constr are instances of M_exp, and the maximum-likelihood estimates of M_exp on f are instances of M_constr.

(ii) If p* ∈ M_constr ∩ M_exp, then p* is both the unique maximum-entropy estimate of M_constr and the unique maximum-likelihood estimate of M_exp on f.

Part (i) of the theorem simply suggests the form of the maximum-entropy or maximum-likelihood estimates we are looking for. By combining both findings of (i), however, the search space is drastically reduced for both estimation methods: We simply have to look at the intersection of the involved probability models. In turn, exactly this fact makes the second part of the theorem so valuable. If there is a maximum-entropy or a maximum-likelihood estimate, then it is in the intersection of both models, and thus, according to part (ii), it is a unique estimate; even more, it is both a maximum-entropy and a maximum-likelihood estimate.

[Figure 3: Maximum-likelihood estimation generalizes maximum-entropy estimation, as well as both variants of minimum relative-entropy estimation (where either the first or the second argument slot of D(.||.) is filled by a given probability distribution). The diagram relates maximum-likelihood estimation of arbitrary probability models, minimum relative-entropy estimation minimizing D(p̃||.) (with p̃ the relative-frequency estimate), maximum-entropy estimation of constrained models, and minimum relative-entropy estimation of constrained models minimizing D(.||p0) (with p0 a reference distribution), to exponential models and exponential models with reference distributions.]
Proof See, e.g., Cover and Thomas (1991), pages 266-278. For an interesting alternate proof of (ii), see Ratnaparkhi (1997). Note, however, that the proof of Ratnaparkhi's Theorem 1 is incorrect whenever the set X of types is infinite. Although Ratnaparkhi's proof is very elegant, it relies on the existence of a uniform distribution on X that simply does not exist in this special case. By contrast, Cover and Thomas prove Theorem 11.1.1 without using a uniform distribution on X, and so they indeed achieve the more general result.

Finally, we come back to our request of minimizing the relative entropy with respect to a given reference distribution p0 ∈ M(X). For constrained probability models, the relevant results do not differ much from the results described in Theorem 4. So, let

M_exp·ref = { p ∈ M(X) | p(x) = (1/Z_λ) · e^{λ1 f1(x) + ... + λd fd(x)} · p0(x) with λ1, ..., λd, Z_λ ∈ R }

Then, along the lines of the proof of Theorem 4, it can also be proven that the following propositions are valid.

(i) The minimum relative-entropy estimates of M_constr are instances of M_exp·ref, and the maximum-likelihood estimates of M_exp·ref on f are instances of M_constr.

(ii) If p* ∈ M_constr ∩ M_exp·ref, then p* is both the unique minimum relative-entropy estimate of M_constr and the unique maximum-likelihood estimate of M_exp·ref on f.

All results are displayed in Figure 3.

3 The Expectation-Maximization Algorithm

The expectation-maximization algorithm was introduced by Dempster et al. (1977), who also presented its main properties. In short, the EM algorithm aims at finding maximum-likelihood estimates for settings where this appears to be difficult if not impossible. The trick of the EM algorithm is to map the given data to complete data on which it is well known how to perform maximum-likelihood estimation.

[Figure 4: Input and output of the EM algorithm. The input consists of incomplete data, given as an incomplete-data corpus, together with a symbolic analyzer relating incomplete data to complete data, a complete-data model, and a starting instance. The output is a sequence of instances of the complete-data model aiming at maximizing the probability of the incomplete-data corpus.]
Typically, the EM algorithm is applied in the following setting:

• Direct maximum-likelihood estimation of the given probability model on the given corpus is not feasible, for example because the likelihood function is too complex (e.g., it is a product of sums).

• There is an obvious (but one-to-many) mapping to complete data, on which maximum-likelihood estimation can be easily done. The prototypical example is indeed that maximum-likelihood estimation on the complete data is already a solved problem.

Both relative-frequency and maximum-likelihood estimation are common estimation methods with a two-fold input, a corpus and a probability model (see Footnote 5), such that the instances of the model might have generated the corpus. The output of both estimation methods is simply an instance of the probability model, ideally the unknown distribution that generated the corpus. In contrast to this setting, in which we are almost completely informed (the only thing not known to us is the unknown distribution that generated the corpus), the expectation-maximization algorithm is designed to estimate an instance of the probability model for settings in which we are incompletely informed.

To be more specific, instead of a complete-data corpus, the input of the expectation-maximization algorithm is an incomplete-data corpus together with a so-called symbolic analyzer. A symbolic analyzer is a device assigning to each incomplete-data type a set of analyses, each analysis being a complete-data type. As a result, the missing complete-data corpus can be partly compensated for by the expectation-maximization algorithm: The application of the symbolic analyzer to the incomplete-data corpus leads to an ambiguous complete-data corpus. The ambiguity arises as a consequence of the inherent analytical ambiguity of the symbolic analyzer: the analyzer can replace each token of the incomplete-data corpus by a set of complete-data types (the set of its analyses), but clearly, the symbolic analyzer is not able to resolve the analytical ambiguity.

The expectation-maximization algorithm performs a sequence of runs over the resulting ambiguous complete-data corpus. Each of these runs consists of an expectation step followed by a maximization step. In the E step, the expectation-maximization algorithm combines the symbolic analyzer with an instance of the probability model. The result of this combination is a statistical analyzer which is able to resolve the analytical ambiguity introduced by the symbolic analyzer.
Footnote 5: We associate the relative-frequency estimate with the unrestricted probability model.

In the M step, the expectation-maximization algorithm calculates an ordinary maximum-likelihood estimate on the resolved complete-data corpus. In general, however, a sequence of such runs is necessary. The reason is that we never know which instance of the given probability model leads to a good statistical analyzer, and thus, which instance of the probability model shall be used in the E step. The expectation-maximization algorithm provides a simple but somewhat surprising solution to this serious problem: At the beginning, a randomly generated starting instance of the given probability model is used for the first E step. In further iterations, the estimate of the M step is used for the next E step. Figure 4 displays the input and the output of the EM algorithm. The procedure of the EM algorithm is displayed in Figure 5.

Symbolic and Statistical Analyzers
Definition 10 Let X and Y be non-empty and countable sets. A function A : Y → 2^X is called a symbolic analyzer if the (possibly empty) sets of analyses A(y) ⊆ X are pair-wise disjoint, and the union of all sets of analyses A(y) is complete

X = ∪_{y∈Y} A(y)

In this case, Y is called the set of incomplete-data types, whereas X is called the set of complete-data types. So, in other words, the analyses A(y) of the incomplete-data types y form a partition of the complete-data types X. Therefore, for each x ∈ X there exists a unique y ∈ Y, the so-called yield of x, such that x is an analysis of y

y = yield(x) if and only if x ∈ A(y)

For example, if working in a formal-grammar framework, the grammatical sentences can be interpreted as the incomplete-data types, whereas the grammatical analyses of the sentences are the complete-data types. So, in terms of Definition 10, a so-called parser (a device assigning a set of grammatical analyses to a given sentence) is clearly a symbolic analyzer: The most important thing to check is that the parser does not assign a given grammatical analysis to two different sentences, which is pretty obvious if the sentence words are part of the grammatical analyses.

Definition 11 A pair <A, p> consisting of a symbolic analyzer A and a probability distribution p on the complete-data types X is called a statistical analyzer. We use a statistical analyzer to induce probabilities for the incomplete-data types y ∈ Y

p(y) := Σ_{x∈A(y)} p(x)

Even more important, we use a statistical analyzer to resolve the analytical ambiguity of an incomplete-data type y ∈ Y by looking at the conditional probabilities of the analyses x ∈ A(y)

p(x|y) := p(x) / p(y)   where y = yield(x)

[Figure 5: Procedure of the EM algorithm. An incomplete-data corpus, a symbolic analyzer (a device assigning to each incomplete-data type a set of complete-data types), and a complete-data model are given. In the E step, the EM algorithm combines the symbolic analyzer with an instance q of the probability model. The result of this combination is a statistical analyzer that is able to resolve the ambiguity of the given incomplete data. In fact, the statistical analyzer is used to generate an expected complete-data corpus fq. In the M step, the EM algorithm calculates an ordinary maximum-likelihood estimate of the complete-data model on the complete-data corpus generated in the E step. In further iterations, the estimates of the M steps are used in the subsequent E steps. The output of the EM algorithm is the sequence of estimates produced in the M steps.]

It is easy to check that the statistical analyzer induces a proper probability distribution on the set Y of incomplete-data types

Σ_{y∈Y} p(y) = Σ_{y∈Y} Σ_{x∈A(y)} p(x) = Σ_{x∈X} p(x) = 1

Moreover, the statistical analyzer also induces proper conditional probability distributions on the sets of analyses A(y)

Σ_{x∈A(y)} p(x|y) = Σ_{x∈A(y)} p(x)/p(y) = ( Σ_{x∈A(y)} p(x) ) / p(y) = p(y)/p(y) = 1

Of course, by defining p(x|y) = 0 for y ≠ yield(x), p(.|y) is even a probability distribution on the full set X of analyses.
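Definitions 10 and 11 can be mirrored directly in code. The following sketch (my own illustration) uses the dice-sum analyzer of Section 4 as the symbolic analyzer and shows how a complete-data distribution p induces the incomplete-data probabilities p(y) and the conditional probabilities p(x|y) that resolve the analytical ambiguity.

```python
def analyses(y):
    """Symbolic analyzer A(y): all pairs (x1, x2) with yield(x1, x2) = x1 + x2 = y."""
    return [(x1, y - x1) for x1 in range(1, 7) if 1 <= y - x1 <= 6]

def incomplete_probability(p, y):
    """p(y) = sum of p(x) over the analyses x in A(y)."""
    return sum(p.get(x, 0.0) for x in analyses(y))

def conditional_probability(p, x, y):
    """p(x|y) = p(x) / p(y) for an analysis x in A(y)."""
    return p.get(x, 0.0) / incomplete_probability(p, y)

# With the uniform complete-data distribution, the conditionals are uniform on each A(y).
uniform = {(i, j): 1 / 36 for i in range(1, 7) for j in range(1, 7)}
assert abs(incomplete_probability(uniform, 4) - 3 / 36) < 1e-12
assert abs(conditional_probability(uniform, (1, 3), 4) - 1 / 3) < 1e-12
```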

Input, Procedure, and Output of the EM Algorithm
Definition 12 The input of the expectation-maximization (EM) algorithm is

(i) a symbolic analyzer, i.e., a function A which assigns a set of analyses A(y) ⊆ X to each incomplete-data type y ∈ Y, such that all sets of analyses form a partition of the set X of complete-data types

X = ∪_{y∈Y} A(y)

(ii) a non-empty and finite incomplete-data corpus, i.e., a frequency distribution f on the set of incomplete-data types

f : Y → R such that f(y) ≥ 0 for all y ∈ Y and 0 < |f| < ∞

(iii) a complete-data model M ⊆ M(X), i.e., each instance p ∈ M is a probability distribution on the set of complete-data types

p : X → [0, 1] and Σ_{x∈X} p(x) = 1

(*) implicit input: an incomplete-data model M ⊆ M(Y) induced by the symbolic analyzer and the complete-data model. To see this, recall Definition 11. Together with a given instance of the complete-data model, the symbolic analyzer constitutes a statistical analyzer which, in turn, induces the following instance of the incomplete-data model

p : Y → [0, 1] and p(y) = Σ_{x∈A(y)} p(x)

(Note: For both complete and incomplete data, the same notation symbols M and p are used. The sloppy notation, however, is justified, because the incomplete-data model is a marginal of the complete-data model.)

(iv) a (randomly generated) starting instance p0 of the complete-data model M. (Note: If permitted by M, then p0 should not assign a probability of zero to any x ∈ X.)

Definition 13 The procedure of the EM algorithm is

(1) for each i = 1, 2, 3, ... do
(2)     q := p_{i-1}
(3)     E step: compute the complete-data corpus fq : X → R expected by q
            fq(x) := f(y) · q(x|y)   where y = yield(x)
(4)     M step: compute a maximum-likelihood estimate p̂ of M on fq
            L(fq; p̂) = max_{p∈M} L(fq; p)
        (Implicit pre-condition of the EM algorithm: such an estimate exists!)
(5)     p_i := p̂
(6) end // for each i
(7) print p0, p1, p2, p3, ...

In line (3) of the EM procedure, a complete-data corpus fq(x) has to be generated on the basis of the incomplete-data corpus f(y) and the conditional probabilities q(x|y) of the analyses of y (conditional probabilities are introduced in Definition 11). In fact, this generation procedure is conceptually very easy: according to the conditional probabilities q(x|y), the frequency f(y) has to be distributed among the complete-data types x ∈ A(y). Figure 6 displays the procedure. Moreover, there exists a simple reversed procedure (summation of all frequencies fq(x) with x ∈ A(y)) which guarantees that the original incomplete-data corpus f(y) can be recovered from the generated corpus fq(x). Finally, the size of both corpora is the same: |fq| = |f|.

[Figure 6: The E step of the EM algorithm. A complete-data corpus fq(x) is generated on the basis of the incomplete-data corpus f(y) and the conditional probabilities q(x|y) of the analyses of y. The frequency f(y) is distributed among the analyses x ∈ A(y) according to q(x|y), so that the analyses of each y carry the total frequency f(y). A simple reversed procedure (summing all frequencies fq(x) with x ∈ A(y)) guarantees that the original incomplete-data corpus f(y) can be recovered from the generated corpus fq(x); so the size of both corpora is the same, |fq| = |f|. Memory hook: fq is the "q-omplete" data corpus.]
In line (4) of the EM procedure, it is stated that a maximum-likelihood estimate p̂ of the complete-data model has to be computed on the complete-data corpus fq expected by q. Recall for this purpose that the probability of fq allocated by an instance p ∈ M is defined as

L(fq; p) = Π_{x∈X} p(x)^{fq(x)}

In contrast, the probability of the incomplete-data corpus f allocated by an instance p of the incomplete-data model is much more complex. Using Definition 12.*, we get an expression involving a product of sums

L(f; p) = Π_{y∈Y} ( Σ_{x∈A(y)} p(x) )^{f(y)}

Nevertheless, the following theorem reveals that the EM algorithm aims at finding an instance of the incomplete-data model which possibly maximizes the probability of the incomplete-data corpus.
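Definition 13 translates into a short generic routine. The sketch below is my own illustration (not the paper's code): `analyses` plays the role of the symbolic analyzer, `mle` is a user-supplied M-step performing maximum-likelihood estimation of the complete-data model on a complete-data corpus, and `p0` is the starting instance, which, as noted in Definition 12, should not assign probability zero to any analysis.

```python
def em(f, analyses, mle, p0, iterations=100):
    """Generic EM procedure.

    f        -- incomplete-data corpus, a dict y -> f(y)
    analyses -- symbolic analyzer, a function y -> list of complete-data types A(y)
    mle      -- M-step, mapping a complete-data corpus {x: frequency} to a model instance {x: probability}
    p0       -- starting instance of the complete-data model
    Returns the sequence of EM re-estimates p1, p2, ... as a list.
    """
    q, estimates = p0, []
    for _ in range(iterations):
        # E step: complete-data corpus expected by q, f_q(x) = f(y) * q(x|y) with y = yield(x).
        fq = {}
        for y, freq in f.items():
            q_y = sum(q.get(x, 0.0) for x in analyses(y))   # q(y), assumed positive
            for x in analyses(y):
                fq[x] = fq.get(x, 0.0) + freq * q.get(x, 0.0) / q_y
        # M step: maximum-likelihood estimate of the complete-data model on f_q.
        q = mle(fq)
        estimates.append(q)
    return estimates
```

For the unrestricted complete-data model, the M-step is simply the relative-frequency estimate on fq (Theorem 1); for restricted models such as M_{1/2} in Section 4, a model-specific estimator has to be plugged in.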


Theorem 5 The output of the EM algorithm is a sequence of instances of the complete-data model M, the so-called EM re-estimates,

p0, p1, p2, p3, ...

such that the sequence of probabilities allocated to the incomplete-data corpus is monotonically increasing

L(f; p0) ≤ L(f; p1) ≤ L(f; p2) ≤ L(f; p3) ≤ ...

It is common wisdom that the sequence of EM re-estimates will converge to a (local) maximum-likelihood estimate of the incomplete-data model on the incomplete-data corpus. As proven by Wu (1983), however, the EM algorithm will do this only in specific circumstances. Of course, it is guaranteed that the sequence of corpus probabilities (allocated by the EM re-estimates) must converge. However, we are more interested in the behavior of the EM re-estimates themselves. Now, intuitively, the EM algorithm might get stuck in a saddle point or even a local minimum of the corpus-probability function, while the associated model instances hop around uncontrollably (for example, on a circle-like path in the "space" of all model instances).

Proof See Theorems 6 and 7.

The Generalized Expectation-Maximization Algorithm
The EM algorithm performs a sequence of maximum-likelihood estimations on complete data, resulting in good re-estimates on incomplete data ("good" in the sense of Theorem 5). The following theorem, however, reveals that the EM algorithm might overdo it somehow, since there exist alternative M-steps which can be performed more easily, and which result in re-estimates having the same property as the EM re-estimates.

Definition 14 A generalized expectation-maximization (GEM) algorithm has exactly the same input as the EM algorithm, but an easier M-step is performed in its procedure:

(4) M-step (GEM): compute an instance p̂ of the complete-data model M such that

L(fq; p̂) ≥ L(fq; q)

Theorem 6 The output of a GEM algorithm is a sequence of instances of the complete-data model M, the so-called GEM re-estimates, such that the sequence of probabilities allocated to the incomplete-data corpus is monotonically increasing.

Proof Various proofs have been given in the literature. The first one was presented by Dempster et al. (1977). For other variants of the EM algorithm, the book of McLachlan and Krishnan (1997) is a good source. Here, we present something along the lines of the original proof. Clearly, a proof of the theorem requires somehow that we are able to express the probability of the given incomplete-data corpus f in terms of the probabilities of the complete-data corpora fq which are involved in the M-steps of the GEM algorithm (where both types of corpora are allocated a probability by the same instance p of the model M). A certain entity, which we would like to call the expected cross-entropy on the analyses, plays a major role in solving this task. To be specific, the expected cross-entropy on the analyses is

defined as the expectation of certain cross-entropy values H_{A(y)}(q; p) which are calculated on the different sets A(y) of analyses. Then, of course, the "expectation" is calculated on the basis of the relative-frequency estimate p̃ of the given incomplete-data corpus

H_A(q; p) = Σ_{y∈Y} p̃(y) · H_{A(y)}(q; p)

Now, for two instances q and p of the complete-data model, their conditional probabilities q(x|y) and p(x|y) form proper probability distributions on the set A(y) of analyses of y (see Definition 11). So, the cross-entropy H_{A(y)}(q; p) on the set A(y) is simply given by

H_{A(y)}(q; p) = − Σ_{x∈A(y)} q(x|y) log p(x|y)

Recalling the central task of this proof, a bunch of relatively straightforward calculations leads to the following interesting equation (see Footnote 6)

L(f; p) = 2^{H_A(q;p) · |f|} · L(fq; p)

Using this equation, we can state that

L(f; p) / L(f; q) = 2^{(H_A(q;p) − H_A(q;q)) · |f|} · L(fq; p) / L(fq; q)

In what follows, we will show that, after each M-step of a GEM algorithm (i.e., for p being a GEM re-estimate p̂), both of the factors on the right-hand side of this equation are not less than one. First, an iterated application of the information inequality of information theory (see Theorem 3) yields

H_A(q; p) − H_A(q; q) = Σ_{y∈Y} p̃(y) · ( H_{A(y)}(q; p) − H_{A(y)}(q; q) ) = Σ_{y∈Y} p̃(y) · D_{A(y)}(q || p) ≥ 0

So, the first factor is never (i.e., for no model instance p) less than one

2^{(H_A(q;p) − H_A(q;q)) · |f|} ≥ 1

Footnote 6: It is easier to show that H(p̃; p) = H(p̃_q; p) − H_A(q; p). Here, p̃ is the relative-frequency estimate on the incomplete-data corpus f, whereas p̃_q is the relative-frequency estimate on the complete-data corpus fq. However, by defining an "average perplexity of the analyses", perp_A(q; p) := 2^{H_A(q;p)} (see also Footnote 3), the true spirit of the equation can be revealed:

L(fq; p) = L(f; p) · (1 / perp_A(q; p))^{|f|}

This equation states that the probability of a complete-data corpus (generated by a statistical analyzer) is the product of the probability of the given incomplete-data corpus and |f| times the average probability of the different corpora of analyses (as generated for each of the |f| tokens of the incomplete-data corpus).

Second, by definition of the M-step of a GEM algorithm, the second factor is also not less than one

L(fq; p̂) / L(fq; q) ≥ 1

So, it follows that

L(f; p̂) / L(f; q) ≥ 1

yielding that the probability of the incomplete-data corpus allocated by the GEM re-estimate p̂ is not less than the probability of the incomplete-data corpus allocated by the model instance q (which is either the starting instance p0 of the GEM algorithm or the previously calculated GEM re-estimate)

L(f; p̂) ≥ L(f; q)

Theorem 7 An EM algorithm is a GEM algorithm.

Proof In the M-step of an EM algorithm, a model instance p̂ is selected such that

L(fq; p̂) = max_{p∈M} L(fq; p)

So, in particular,

L(fq; p̂) ≥ L(fq; q)

and the requirements of the M-step of a GEM algorithm are met.

4 Rolling Two Dice

Example 2 We shall now consider an experiment in which two loaded dice are rolled, and we shall compute the relative-frequency estimate on a corpus of outcomes. If we assume that the two dice are distinguishable, each outcome can be represented as a pair of numbers (x1, x2), where x1 is the number that appears on the first die and x2 is the number that appears on the second die. So, for this experiment, an appropriate set X of types comprises the following 36 outcomes:

(x1,x2)   x2=1    x2=2    x2=3    x2=4    x2=5    x2=6
x1=1      (1,1)   (1,2)   (1,3)   (1,4)   (1,5)   (1,6)
x1=2      (2,1)   (2,2)   (2,3)   (2,4)   (2,5)   (2,6)
x1=3      (3,1)   (3,2)   (3,3)   (3,4)   (3,5)   (3,6)
x1=4      (4,1)   (4,2)   (4,3)   (4,4)   (4,5)   (4,6)
x1=5      (5,1)   (5,2)   (5,3)   (5,4)   (5,5)   (5,6)
x1=6      (6,1)   (6,2)   (6,3)   (6,4)   (6,5)   (6,6)

If we throw the two dice 100 000 times, then the following occurrence frequencies might arise:

f(x1,x2)  x2=1   x2=2   x2=3   x2=4   x2=5   x2=6
x1=1      3790   3773   1520   1498   2233   2298
x1=2      3735   3794   1497   1462   2269   2184
x1=3      4903   4956   1969   2035   2883   3010
x1=4      2495   2519   1026   1049   1487   1451
x1=5      3820   3735   1517   1498   2276   2191
x1=6      6369   6290   2600   2510   3685   3673

The size of this corpus is |f| = 100 000. So, the relative-frequency estimate p̃ on f can be easily computed (see Definition 4):

p̃(x1,x2)  x2=1      x2=2      x2=3      x2=4      x2=5      x2=6
x1=1      0.03790   0.03773   0.01520   0.01498   0.02233   0.02298
x1=2      0.03735   0.03794   0.01497   0.01462   0.02269   0.02184
x1=3      0.04903   0.04956   0.01969   0.02035   0.02883   0.03010
x1=4      0.02495   0.02519   0.01026   0.01049   0.01487   0.01451
x1=5      0.03820   0.03735   0.01517   0.01498   0.02276   0.02191
x1=6      0.06369   0.06290   0.02600   0.02510   0.03685   0.03673

Example 3 We shall again consider the experiment of Example 2 in which two loaded dice are rolled, but we shall now compute the relative-frequency estimate on the corpus of outcomes of the first die, as well as on the corpus of outcomes of the second die. If we look at the same corpus as in Example 2, then the corpus f1 of outcomes of the first die can be calculated as f1(x1) = Σ_{x2} f(x1, x2). An analogous summation yields the corpus of outcomes of the second die, f2(x2) = Σ_{x1} f(x1, x2). Obviously, the sizes of all corpora are identical, |f1| = |f2| = |f| = 100 000. So, the relative-frequency estimates p̃1 on f1 and p̃2 on f2 are calculated as follows:

x1   f1(x1)   p̃1(x1)       x2   f2(x2)   p̃2(x2)
1    15112    0.15112      1    25112    0.25112
2    14941    0.14941      2    25067    0.25067
3    19756    0.19756      3    10129    0.10129
4    10027    0.10027      4    10052    0.10052
5    15037    0.15037      5    14833    0.14833
6    25127    0.25127      6    14807    0.14807

Example 4 We shall again consider the experiment of Example 2 in which two loaded dice are rolled, but we shall now compute a maximum-likelihood estimate of the probability model which assumes that the numbers appearing on the first and second die are statistically independent. First, recall the definition of statistical independence (see e.g. Duda et al. (2001), page 613).

Definition 15 The variables x1 and x2 are said to be statistically independent given a joint probability distribution p on X if and only if

p(x1, x2) = p1(x1) · p2(x2)

where p1 and p2 are the marginal distributions for x1 and x2

p1(x1) = Σ_{x2} p(x1, x2)
p2(x2) = Σ_{x1} p(x1, x2)

So, let M_{1/2} be the probability model which assumes that the numbers appearing on the first and second die are statistically independent

M_{1/2} = { p ∈ M(X) | x1 and x2 are statistically independent given p }

In Example 2, we calculated the relative-frequency estimate p̃. Theorem 1 states that p̃ is the unique maximum-likelihood estimate of the unrestricted model M(X). Thus, p̃ is also a candidate for a maximum-likelihood estimate of M_{1/2}. Unfortunately, however, x1 and x2 are not statistically independent given p̃ (see e.g. p̃(1,1) = 0.03790, whereas p̃1(1) · p̃2(1) = 0.0379493). This has two consequences for the experiment in which two (loaded) dice are rolled:

• the probability model which assumes that the numbers appearing on the first and second die are statistically independent is a restricted model (see Definition 5), and

• the relative-frequency estimate is in general not a maximum-likelihood estimate of the standard probability model assuming that the numbers appearing on the first and second die are statistically independent.

Therefore, we now follow Definition 6 to compute the maximum-likelihood estimate of M_{1/2}. Using the independence property, the probability of the corpus f allocated by an instance p of the model M_{1/2} can be calculated as

L(f; p) = ( Π_{x1=1,...,6} p1(x1)^{f1(x1)} ) · ( Π_{x2=1,...,6} p2(x2)^{f2(x2)} ) = L(f1; p1) · L(f2; p2)

Definition 6 states that the maximum-likelihood estimate p̂ of M_{1/2} on f must maximize L(f; p). A product, however, is maximized if and only if its factors are simultaneously maximized. Theorem 1 states that the corpus probabilities L(fi; pi) are maximized by the relative-frequency estimates p̃i. Therefore, the product of the relative-frequency estimates p̃1 and p̃2 (on f1 and f2, respectively) might be a candidate for the maximum-likelihood estimate p̂ we are looking for

p̂(x1, x2) = p̃1(x1) · p̃2(x2)

Now, note that the marginal distributions of p̂ are identical with the relative-frequency estimates on f1 and f2. For example, p̂'s marginal distribution for x1 is calculated as

p̂1(x1) = Σ_{x2} p̂(x1, x2) = Σ_{x2} p̃1(x1) · p̃2(x2) = p̃1(x1) · Σ_{x2} p̃2(x2) = p̃1(x1) · 1 = p̃1(x1)

A similar calculation yields p̂2(x2) = p̃2(x2). Both equations state that x1 and x2 are indeed statistically independent given p̂

p̂(x1, x2) = p̂1(x1) · p̂2(x2)

So, finally, it is guaranteed that p̂ is an instance of the probability model M_{1/2}, as required for a maximum-likelihood estimate of M_{1/2}. Note: p̂ is even the unique maximum-likelihood estimate, since the relative-frequency estimates p̃i are unique maximum-likelihood estimates (see Theorem 1). The relative-frequency estimates p̃1 and p̃2 have already been calculated in Example 3. So, p̂ is calculated as follows:

p̂(x1,x2)  x2=1        x2=2        x2=3        x2=4        x2=5        x2=6
x1=1      0.0379493   0.0378813   0.0153069   0.0151906   0.0224156   0.0223763
x1=2      0.0375198   0.0374526   0.0151337   0.0150187   0.022162    0.0221231
x1=3      0.0496113   0.0495224   0.0200109   0.0198587   0.0293041   0.0292527
x1=4      0.0251798   0.0251347   0.0101563   0.0100791   0.014873    0.014847
x1=5      0.0377609   0.0376932   0.015231    0.0151152   0.0223044   0.0222653
x1=6      0.0630989   0.0629859   0.0254511   0.0252577   0.0372709   0.0372055
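The computations of Examples 3 and 4 (marginal corpora, their relative-frequency estimates, and the product distribution) can be bundled into a small helper; this is my own sketch of the M-step for the model M_{1/2}, not code from the paper.

```python
def independence_mle(f):
    """Maximum-likelihood estimate of M_1/2 on a corpus f over pairs: p^(x1,x2) = p~1(x1) * p~2(x2)."""
    n = sum(f.values())                  # the corpus size |f|
    f1, f2 = {}, {}
    for (x1, x2), freq in f.items():     # marginal corpora as in Example 3
        f1[x1] = f1.get(x1, 0.0) + freq
        f2[x2] = f2.get(x2, 0.0) + freq
    return {(x1, x2): (f1[x1] / n) * (f2[x2] / n) for x1 in f1 for x2 in f2}

# With the corpus of Example 2 this reproduces, e.g., p^(1,1) = 0.15112 * 0.25112 = 0.0379493...
```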

Example 5 We shall again consider the experiment of Example 2 in which two loaded dice are rolled. Now, however, we shall assume that we are incompletely informed: the corpus of outcomes (which is given to us) consists only of the sums of the numbers which appear on the first and second die. Nevertheless, we shall compute an estimate for a probability model on the complete data (x1, x2) ∈ X. If we assume that the corpus which is given to us was calculated on the basis of the corpus given in Example 2, then the occurrence frequency of a sum y can be calculated as f(y) = Σ_{x1+x2=y} f(x1, x2). These numbers are displayed in the following table:

y      2      3      4       5       6       7       8       9      10     11     12
f(y)   3790   7508   10217   10446   12003   17732   13923   8595   6237   5876   3673

For example,

f(4) = f(1,3) + f(2,2) + f(3,1) = 1520 + 3794 + 4903 = 10217

The problem is now whether this corpus of sums can be used to calculate a good estimate of the outcomes (x1, x2) themselves. Hint: Examples 2 and 4 have shown that a unique relative-frequency estimate p̃(x1, x2) and a unique maximum-likelihood estimate p̂(x1, x2) can be calculated on the basis of the corpus f(x1, x2). However, right now, this corpus is not available!

Putting the example in the framework of the EM algorithm (see Definition 12), the set of incomplete-data types is

Y = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}

whereas the set of complete-data types is X. We also know the set of analyses for each incomplete-data type y ∈ Y

A(y) = { (x1, x2) ∈ X | x1 + x2 = y }

As in Example 4, we are especially interested in an estimate of the (slightly restricted) complete-data model M_{1/2}, which assumes that the numbers appearing on the first and second die are statistically independent. So, for this case, a randomly generated starting instance p0(x1, x2) of the complete-data model is simply the product of a randomly generated probability distribution p01(x1) for the numbers appearing on the first die, and a randomly generated probability distribution p02(x2) for the numbers appearing on the second die

p0(x1, x2) = p01(x1) · p02(x2)

The following tables display some randomly generated numbers for p01 and p02:

x1   p01(x1)      x2   p02(x2)
1    0.18         1    0.22
2    0.19         2    0.23
3    0.16         3    0.13
4    0.13         4    0.16
5    0.17         5    0.14
6    0.17         6    0.12

Using the random numbers for p01(x1) and p02(x2), a starting instance p0 of the complete-data model M_{1/2} is calculated as follows:

p0(x1,x2)  x2=1     x2=2     x2=3     x2=4     x2=5     x2=6
x1=1       0.0396   0.0414   0.0234   0.0288   0.0252   0.0216
x1=2       0.0418   0.0437   0.0247   0.0304   0.0266   0.0228
x1=3       0.0352   0.0368   0.0208   0.0256   0.0224   0.0192
x1=4       0.0286   0.0299   0.0169   0.0208   0.0182   0.0156
x1=5       0.0374   0.0391   0.0221   0.0272   0.0238   0.0204
x1=6       0.0374   0.0391   0.0221   0.0272   0.0238   0.0204

For example,

p0(1,3) = p01(1) · p02(3) = 0.18 · 0.13 = 0.0234
p0(2,2) = p01(2) · p02(2) = 0.19 · 0.23 = 0.0437
p0(3,1) = p01(3) · p02(1) = 0.16 · 0.22 = 0.0352

So, we are ready to start the procedure of the EM algorithm.

First EM iteration. In the E step, we shall compute the complete-data corpus fq expected by q := p0. For this purpose, the probability of each incomplete-data type given the starting instance p0 of the complete-data model has to be computed (see Definition 12.*)

p0(y) = Σ_{x1+x2=y} p0(x1, x2)

The numbers displayed above for p0(x1, x2) yield the following instance of the incomplete-data model:

y       2        3        4        5        6        7        8        9        10       11       12
p0(y)   0.0396   0.0832   0.1023   0.1189   0.1437   0.1672   0.1272   0.0867   0.0666   0.0442   0.0204

For example,

p0(4) = p0(1,3) + p0(2,2) + p0(3,1) = 0.0234 + 0.0437 + 0.0352 = 0.1023

So, the complete-data corpus expected by q := p0 is calculated as follows (see line (3) of the EM procedure given in Definition 13):

fq(x1,x2)  x2=1      x2=2      x2=3      x2=4      x2=5      x2=6
x1=1       3790      3735.95   2337.03   2530.23   2104.91   2290.74
x1=2       3772.05   4364.45   2170.03   2539.26   2821      2495.63
x1=3       3515.53   3233.08   1737.39   2714.95   2451.85   1903.39
x1=4       2512.66   2497.49   1792.29   2276.72   1804.26   1460.92
x1=5       3123.95   4146.66   2419.01   2696.47   2228.84   2712
x1=6       3966.37   4279.79   2190.88   2547.24   3164      3673

For example,

fq(1,3) = f(4) · p0(1,3)/p0(4) = 10217 · 0.0234/0.1023 = 2337.03
fq(2,2) = f(4) · p0(2,2)/p0(4) = 10217 · 0.0437/0.1023 = 4364.45
fq(3,1) = f(4) · p0(3,1)/p0(4) = 10217 · 0.0352/0.1023 = 3515.53

(The frequency f(4) of the dice sum 4 is distributed to its analyses (1,3), (2,2), and (3,1), simply in proportion to the current probabilities q = p0 of the analyses.)

In the M step, we shall compute a maximum-likelihood estimate p1 := p̂ of the complete-data model M_{1/2} on the complete-data corpus fq. This can be done along the lines of Examples 3 and 4. Note: This is more or less the trick of the EM algorithm! If it appears to be difficult to compute a maximum-likelihood estimate of an incomplete-data model, then the EM algorithm might solve your problem: it performs a sequence of maximum-likelihood estimations on complete-data corpora. These corpora contain in general more complex data, but nevertheless, it might be well known how to deal with this data.

In detail: On the basis of the complete-data corpus fq (where currently q = p0), the corpus fq1 of outcomes of the first die is calculated as fq1(x1) = Σ_{x2} fq(x1, x2), whereas the corpus of outcomes of the second die is calculated as fq2(x2) = Σ_{x1} fq(x1, x2). The following tables display them:

x1   fq1(x1)       x2   fq2(x2)
1    16788.86      1    20680.56
2    18162.42      2    22257.42
3    15556.19      3    12646.63
4    12344.34      4    15304.87
5    17326.93      5    14574.86
6    19821.28      6    14535.68

For example,

fq1(1) = fq(1,1) + fq(1,2) + fq(1,3) + fq(1,4) + fq(1,5) + fq(1,6)
       = 3790 + 3735.95 + 2337.03 + 2530.23 + 2104.91 + 2290.74 = 16788.86

fq2(1) = fq(1,1) + fq(2,1) + fq(3,1) + fq(4,1) + fq(5,1) + fq(6,1)
       = 3790 + 3772.05 + 3515.53 + 2512.66 + 3123.95 + 3966.37 = 20680.56

The sizes of both corpora are still |fq1| = |fq2| = |f| = 100 000, resulting in the following relative-frequency estimates (p11 on fq1 and p12 on fq2, respectively):

x1   p11(x1)       x2   p12(x2)
1    0.167889      1    0.206806
2    0.181624      2    0.222574
3    0.155562      3    0.126466
4    0.123443      4    0.153049
5    0.173269      5    0.145749
6    0.198213      6    0.145357

So, the following instance is the maximum-likelihood estimate of the model M_{1/2} on fq:

p1(x1,x2)  x2=1        x2=2        x2=3        x2=4        x2=5        x2=6
x1=1       0.0347204   0.0373677   0.0212322   0.0256952   0.0244696   0.0244038
x1=2       0.0375609   0.0404247   0.0229692   0.0277973   0.0264715   0.0264003
x1=3       0.0321711   0.034624    0.0196733   0.0238086   0.022673    0.022612
x1=4       0.0255287   0.0274752   0.0156113   0.0188928   0.0179917   0.0179433
x1=5       0.035833    0.0385651   0.0219126   0.0265186   0.0252538   0.0251858
x1=6       0.0409916   0.044117    0.0250672   0.0303363   0.0288893   0.0288116

For example,

p1(1,1) = p11(1) · p12(1) = 0.167889 · 0.206806 = 0.0347204
p1(1,2) = p11(1) · p12(2) = 0.167889 · 0.222574 = 0.0373677
p1(2,1) = p11(2) · p12(1) = 0.181624 · 0.206806 = 0.0375609
p1(2,2) = p11(2) · p12(2) = 0.181624 · 0.222574 = 0.0404247

So, we are ready for the second EM iteration, in which an estimate p2 is calculated. If we continue in this manner, we finally arrive at the 1584th EM iteration. The estimate which is calculated there is

x1   p1584,1(x1)      x2   p1584,2(x2)
1    0.158396         1    0.239281
2    0.141282         2    0.260559
3    0.204291         3    0.104026
4    0.0785532        4    0.111957
5    0.172207         5    0.134419
6    0.24527          6    0.149758


yielding

p1584(x1,x2)  x2=1        x2=2        x2=3         x2=4         x2=5        x2=6
x1=1          0.0379012   0.0412715   0.0164773    0.0177336    0.0212914   0.0237211
x1=2          0.0338061   0.0368123   0.014697     0.0158175    0.018991    0.0211581
x1=3          0.048883    0.0532299   0.0212516    0.0228718    0.0274606   0.0305942
x1=4          0.0187963   0.0204678   0.00817158   0.00879459   0.0105591   0.011764
x1=5          0.0412059   0.0448701   0.017914     0.0192798    0.0231479   0.0257894
x1=6          0.0586885   0.0639074   0.0255145    0.0274597    0.032969    0.0367312

In this example, further EM iterations result in exactly the same re-estimates, which is a strong reason to quit the EM procedure. Comparing p1584,1 and p1584,2 with the results of Example 3 (where, recall, the complete-data corpus was given to us), we see that the EM algorithm yields pretty similar estimates.
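Example 5 can be reproduced end to end with a few lines of code. The following self-contained sketch is my own illustration (the starting values are exactly the ones used above); it runs the EM iterations for the dice-sum corpus under the independence model M_{1/2}.

```python
def analyses(y):
    """A(y): the pairs (x1, x2) with x1 + x2 = y."""
    return [(x1, y - x1) for x1 in range(1, 7) if 1 <= y - x1 <= 6]

def independence_mle(fq):
    """M-step for M_1/2: product of the relative-frequency estimates of the two marginal corpora."""
    n = sum(fq.values())
    f1, f2 = {}, {}
    for (x1, x2), freq in fq.items():
        f1[x1] = f1.get(x1, 0.0) + freq
        f2[x2] = f2.get(x2, 0.0) + freq
    return {(x1, x2): (f1[x1] / n) * (f2[x2] / n) for x1 in range(1, 7) for x2 in range(1, 7)}

# Incomplete-data corpus of dice sums (Example 5).
f = {2: 3790, 3: 7508, 4: 10217, 5: 10446, 6: 12003, 7: 17732,
     8: 13923, 9: 8595, 10: 6237, 11: 5876, 12: 3673}

# Starting instance p0(x1, x2) = p01(x1) * p02(x2) with the values used above.
p01 = {1: 0.18, 2: 0.19, 3: 0.16, 4: 0.13, 5: 0.17, 6: 0.17}
p02 = {1: 0.22, 2: 0.23, 3: 0.13, 4: 0.16, 5: 0.14, 6: 0.12}
q = {(i, j): p01[i] * p02[j] for i in range(1, 7) for j in range(1, 7)}

for _ in range(2000):                    # the example above reports convergence around iteration 1584
    fq = {}
    for y, freq in f.items():            # E step: distribute f(y) over A(y) according to q(x|y)
        q_y = sum(q[x] for x in analyses(y))
        for x in analyses(y):
            fq[x] = fq.get(x, 0.0) + freq * q[x] / q_y
    q = independence_mle(fq)             # M step

print(round(q[(1, 1)], 7))               # approximately 0.0379, close to p~(1,1) = 0.03790 of Example 2
```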

Acknowledgments
Parts of the paper cover parts of the teaching material of two courses at ESSLLI 2003 in Vienna. One of them has been sponsored by the European Chapter of the Association for Computational Linguistics (EACL), and both have been co-lectured by Khalil Sima'an and me. Various improvements of the paper have been suggested by Wietse Balkema, Gabriel Infante-Lopez, Karin Müller, Mark-Jan Nederhof, Breanndán Ó Nualláin, Khalil Sima'an, and Andreas Zollmann.

References
Chi, Z. (1999). Statistical properties of probabilistic context-free grammars. Computational Linguistics 25(1).

Cover, T. M. and J. A. Thomas (1991). Elements of Information Theory. New York: Wiley.

DeGroot, M. H. (1989). Probability and Statistics (2nd ed.). Addison-Wesley.

Dempster, A. P., N. M. Laird, and D. B. Rubin (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society 39(B), 1-38.

Duda, R. O., P. E. Hart, and D. G. Stork (2001). Pattern Classification (2nd ed.). New York: Wiley.

Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review 106, 620-630.

McLachlan, G. J. and T. Krishnan (1997). The EM Algorithm and Extensions. New York: Wiley.

Ratnaparkhi, A. (1997). A simple introduction to maximum-entropy models for natural language processing. Technical report, University of Pennsylvania.

Wu, C. F. J. (1983). On the convergence properties of the EM algorithm. The Annals of Statistics 11(1), 95-103.
