
Errors in binary images and an Lp version of the Hausdorff metric

A.J. Baddeley

CWI, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands, and Department of Mathematics and Computer Science, University of Leiden, The Netherlands

This paper introduces a metric on the class of binary images, which can be used to assess the quality of image processing algorithms. Traditionally such assessment is done using the `statistical' misclassification error rate, or Pratt's figure of merit. However, we show that these measures have practical weaknesses. Further we show that they have undesirable topological properties in the continuous case. The classical Hausdorff metric generates the desired topology (the myopic topology on compact subsets) but is highly sensitive to noise and thus unsuitable for image processing purposes. We introduce a metric which is an Lp modification of the Hausdorff metric. It generates the myopic topology yet has many features in common with Lp metrics, including better robustness against noise. The various error measures are compared on synthetic image data, including examples of edge detection and the Besag ICM algorithm.

AMS Mathematics Subject Classification (1991 Revision): 68U10, 54E45, 60D05. Key words and Phrases: Average risk, Distance transform, Edge detection, Figure of merit, Hausdorff metric, ICM algorithm, Image classification, Image segmentation, Lp metrics, Mathematical morphology, Misclassification error, Myopic topology, Random compact sets, Stochastic geometry, Symmetric difference, Vietoris topology.

Introduction

Most image processing tasks require us to find an algorithm which approximates or estimates a `true' image as closely as possible. Examples are image reconstruction (de-blurring, tomographic reconstruction, registration), encoding (compression, discretisation, hierarchical representation) and low-level scene analysis (classification, segmentation and edge detection). To assess the performance of such algorithms objectively, one needs a numerical measure Δ(f, g) of the discrepancy between two digital images f, g. In theoretical treatments Δ is typically a function space metric on the class of all possible images, and an optimal filter is one which achieves minimum (expected) error. In computer experiments Δ may be a more general "loss function". Grey-scale images are traditionally compared using the root mean squared difference of corresponding pixel values,

||f − g||_2 = [ (1/N) Σ_{x∈X} (f(x) − g(x))² ]^{1/2}    (1)

where f(x) denotes the brightness value of image f at pixel x, and X is the raster of N pixels. This has many theoretical and computational advantages and is

the basis of the usual optimal linear filtering theory [28, 49, 73]. Alternatives are the L1 metric [22, 34] and Sobolev norms [38]. Error metrics are particularly important in the discussion of segmentation and classification [2, 8, 9, 32, 37, 48, 59, 60, 61, 65, 67] and edge detection in computer vision [1, 14, 25, 27, 29, 30, 45, 46, 64, 66]. Here the pixel values are binary, or unordered labels, and images are often compared using the misclassification error rate

γ(f, g) = (1/N) #{x ∈ X : f(x) ≠ g(x)}.    (2)

However, it is widely agreed that (1)–(2), and other similar Lp metrics, are inadequate to express the human observer's sense of reconstruction fidelity or quality [13, 26, 44, 48, 61, 58, 59]. They are insensitive to errors which severely affect `shape' but involve relatively few pixels. See §3.2 below. In recent work on Bayesian methods of segmentation and classification [8, 9, 23, 24, 32, 37, 48, 47] it has been observed [9, discussion, p. 299], [48, pp. 97, 110] that γ does not adequately detect degenerative effects such as over-smoothing in the ICM algorithm. See §6.4. For binary images, a popular error measure is Pratt's [1, 46] figure of merit. We discuss this in §3.3 below. It too shows a number of undesirable features in practice. A theoretically attractive, but practically unusable, alternative is the Hausdorff metric (e.g. [35, p. 15], [56, p. 72 ff.]), discussed in §2, §3.4 below. This is fundamentally important in mathematical morphology [20, 35, 56] and random set theory [31, 39, 63, 70]. The Hausdorff metric generates the myopic topology [35, p. 12] on the class of compact subsets of X. Serra [56, p. 72 ff.] argues that it is very desirable that image processing operations be continuous with respect to the myopic topology. However, the Hausdorff metric itself is extremely

sensitive to `noise' and even to changes in a single pixel. It cannot be used in practical experiments and its use as the basis of an optimal filtering theory is questionable. In this paper we study the failings of the abovementioned metrics from a practical and theoretical viewpoint, and propose a new metric combining their desirable features. It is topologically equivalent to the Hausdorff metric, satisfying the desiderata of [56, p. 72 ff.]; it is defined as an Lp mean of pixel contributions so that it has an `average risk' interpretation and is reasonably stable to noise. The underlying theory [3] is also applicable to grey-level images, but here we discuss only the binary case; see also [4]. Related work is in [2, 65, 72]. Section 1 lists notation and assumptions. Section 2 introduces the desired topology, the myopic topology, and the associated Hausdorff metric. In Section 3 we study error measures that are in current use, giving some examples of undesirable properties in practice, and some topological characterisations. Section 4 puts forward a list of precise conditions for choosing a suitable metric. Section 5 introduces the new metric and examines its topological properties. Finally in Section 6 we give some practical examples of application to digital images.

A discrete binary image is a function f : X → {0, 1} where X is the image raster. In applications, X is usually a finite subset of a two- or three-dimensional regular square or hexagonal lattice [46, 56]. The value 1 will be interpreted as logical `true' and displayed as black. A binary image b can of course be identified with a subset B ⊆ X, namely the set of black pixels, B = {x ∈ X : b(x) = 1}. Images defined on more general X, such as arbitrary finite graphs, have recently been studied. We follow Serra [56, 57] in allowing X to be a general topological space, including Euclidean space R^d.

1 Setup

Definition 1 Let (X, ρ) be a locally compact, second countable metric space. A binary image is a nonempty compact subset B ⊆ X. The class of all binary images is denoted by K0. An image metric is a metric on the space K0.

In the continuous case X = R^d the metric ρ will usually be Euclidean distance, ρ(x, y) = ||x − y||. In the discrete case where X is a finite subset of a regular square or hexagonal lattice, ρ will be a shortest path length metric [11, 12, 50, 51]. Note that the formulation is not symmetric in `black' and `white', i.e. the set complement of an element of K0 is not in K0. We also need a measure μ on X; in the discrete case this will be μ(B) = n(B) = number of points in B, while in the continuous case X = R^d typically μ will be Lebesgue measure.

Definition 2 (Assumptions) Assume that X is equipped with a Radon measure μ which is Borel regular and satisfies

inf_{x∈X} μ(D(x, r)) > 0    (3)

for any fixed r > 0, where

D(x, r) = {y ∈ X : ρ(x, y) ≤ r}

is the closed ball of radius r > 0 and centre x ∈ X.

2 Hausdorff metric and myopic topology

2.1 Hausdorff metric

Definition 3 Let (X, ρ) be a metric space as in Definition 1. The distance function of a nonempty subset A ⊆ X is

d(x, A) = d_ρ(x, A) = inf{ρ(x, a) : a ∈ A},  x ∈ X,

i.e. d(x, A) is the shortest distance from pixel x ∈ X to a pixel in A.

The distance function has the Lipschitz property

d_ρ(x, A) ≤ d_ρ(y, A) + ρ(x, y)    (4)

for any x, y ∈ X. In particular, d_ρ(·, A) is uniformly continuous. Closed sets are characterised by their distance functions: d_ρ(·, A) ≡ d_ρ(·, B) iff A = B. It is interesting to note that, when X is a two- or three-dimensional rectangular or hexagonal grid and ρ is a shortest path metric, the function d(·, A) can be computed very rapidly by a recursive algorithm [12, 50, 51] based on repeated application of (4).
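The recursive algorithm can be sketched as a standard two-pass chamfer scan; this is a minimal illustration for the city-block (4-neighbour) metric on a rectangular grid, not the weighted masks of the cited papers:

```python
def distance_transform(image):
    """Two-pass city-block distance transform.

    image: 2D list of 0/1 values; returns d(x, A) for A = the set of
    1-pixels, by repeated application of the Lipschitz bound (4):
    d(x, A) <= d(y, A) + rho(x, y) for neighbouring pixels x, y.
    """
    rows, cols = len(image), len(image[0])
    INF = rows + cols  # larger than any achievable city-block distance
    d = [[0 if image[i][j] else INF for j in range(cols)] for i in range(rows)]
    # forward pass: propagate distances from top and left neighbours
    for i in range(rows):
        for j in range(cols):
            if i > 0:
                d[i][j] = min(d[i][j], d[i - 1][j] + 1)
            if j > 0:
                d[i][j] = min(d[i][j], d[i][j - 1] + 1)
    # backward pass: propagate distances from bottom and right neighbours
    for i in range(rows - 1, -1, -1):
        for j in range(cols - 1, -1, -1):
            if i < rows - 1:
                d[i][j] = min(d[i][j], d[i + 1][j] + 1)
            if j < cols - 1:
                d[i][j] = min(d[i][j], d[i][j + 1] + 1)
    return d

A = [[0, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]
print(distance_transform(A))  # [[2, 1, 2], [1, 0, 1], [2, 1, 2]]
```

Two raster scans suffice because every shortest path to A can be split into a monotone `forward' part and a monotone `backward' part.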

Definition 4 The Hausdorff distance (e.g. [35, p. 15], [56, p. 72 ff.]) between two nonempty subsets A, B ⊆ X is

H(A, B) = max{ sup_{a∈A} d(a, B), sup_{b∈B} d(b, A) }    (5)

i.e. this is the maximum distance from a point in one set to the nearest point in the other set.

Important representations of H are the following (see [19, 2.10.21]).

Lemma 1 For any ρ and for nonempty compact A, B ⊆ X,

H(A, B) = sup_{x∈X} |d_ρ(x, A) − d_ρ(x, B)|    (6)

This is trivial using (4). It is an immediate consequence that H is a finite-valued metric on K0.
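For small point sets, Definition 4 can be evaluated directly; a brute-force sketch under the city-block metric (for full rasters one would instead use the distance transform algorithm of [12, 50, 51]):

```python
def hausdorff(A, B):
    """Hausdorff distance (5) between two nonempty point sets,
    with the city-block metric as rho."""
    def d(x, S):  # distance function of Definition 3
        return min(abs(x[0] - s[0]) + abs(x[1] - s[1]) for s in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

A = {(0, 0), (0, 1)}
B = {(0, 0), (3, 0)}
print(hausdorff(A, B))  # 3: (3,0) is 3 away from A, while (0,1) is 1 away from B
```

Note how the single outlying pixel (3, 0) alone determines the value, anticipating the sensitivity discussed in §3.4.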

Lemma 2

H(A, B) = inf{ r ≥ 0 : A ⊆ B^(r), B ⊆ A^(r) }    (7)

where

A^(r) = {x ∈ X : d_ρ(x, A) ≤ r}    (8)

is called the r-envelope of A. In the case X = R^d with ρ(x, y) = ||x − y||, the envelope is familiar as the Minkowski dilation A ⊕ rD of mathematical morphology [56, p. 43], where rD is the ball of radius r centred at the origin in R^d. In case X is a discrete rectangular or hexagonal grid with a translation-invariant metric (cf. [12, 50, 51]) again A^(r) = A ⊕ D_r, where D_r is the ball of radius r in the given metric. Interpretation (7) connects Hausdorff distance with the partial order of set inclusion.
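In a raster implementation the r-envelope of (8) is simply a threshold of the distance function; a small sketch (city-block metric, brute-force distances, finite raster X assumed):

```python
def envelope(A, r, X):
    """r-envelope A^(r) = {x in X : d(x, A) <= r} of eq. (8),
    for a finite raster X of pixel coordinates."""
    def d(x, S):  # distance function, city-block metric
        return min(abs(x[0] - s[0]) + abs(x[1] - s[1]) for s in S)
    return {x for x in X if d(x, A) <= r}

X = {(i, j) for i in range(5) for j in range(5)}
A = {(2, 2)}
print(sorted(envelope(A, 1, X)))  # [(1, 2), (2, 1), (2, 2), (2, 3), (3, 2)]
```

Lemma 2 then reads: H(A, B) is the smallest r for which A fits inside B's r-envelope and vice versa.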

Topologies on spaces of subsets were introduced by Michael [36], Fell [20] and Matheron [35]. See also [6, 7, 15, 21, 33, 71]. Related topologies have been constructed for upper semicontinuous functions [52, 68], Radon measures [10, 31], and capacities in general [39, 40, 41, 42, 43, 69, 70]. The present section collects definitions and important facts from the above and [3].

2.2 Myopic topology

Definition 5 The myopic topology on K0 [36], [35, p. 12] is the weakest topology on K0 generated by all classes

{K ∈ K0 : K ∩ F = ∅} for all F ∈ F,
{K ∈ K0 : K ∩ G ≠ ∅} for all G ∈ G,

where F, G are the classes of all closed and open subsets of X, respectively.

The myopic topology is locally compact, second countable and Hausdorff [35, p. 13], so that Kn → K myopically in K0 iff

K ∩ F = ∅ ⇒ Kn ∩ F = ∅ eventually,
K ∩ G ≠ ∅ ⇒ Kn ∩ G ≠ ∅ eventually,

for all F ∈ F, G ∈ G, where `eventually' means `for all but finitely many n'. The discussion in [35, pp. ix-x, 12-26] and [56, p. 72 ff.] makes it clear that continuity with respect to the myopic topology is a very desirable property for image processing algorithms. The connection with the Hausdorff metric is the following.


Proposition 1 Let (X, ρ) be as in Definition 1. For K, Kn ∈ K0, the following are equivalent:

(a) Kn → K myopically;
(b) d(x, Kn) → d(x, K) uniformly in x ∈ X;
(c) H(Kn, K) → 0.

The equivalence of (b) and (c) is clear from (6); the equivalence of (a) and (c) is proved by Matheron [35, p. 15]. See also [71, thm 3.1], [6, 7], [15, p. 41], [33, p. 41]. The Hausdorff metric thus provides the foundation for mathematical morphology and random set theory [35, pp. 12-26], [56, pp. 63-92].

3 Error measures in current use

In this section we study the error measures for binary images surveyed by Peli & Malah [45] and Van Vliet et al. [66].

3.1 General criteria

Important general principles for error measurement were enunciated by Canny [14] in the context of edge detection. He argued that an edge filter should be considered `good' if it exhibits

1. good detection: low probability of failing to detect an edge, and low probability of incorrectly labelling a background pixel as an edge;
2. good localization: points identified as edge pixels should be as close as possible to the centre of the true edge;
3. unicity: there should be only one response to a single edge.

Canny showed that (in a suitable real-analytic framework) there is an uncertainty principle balancing good detection against good localization.

3.2 Detection performance ("statistical") measures

Let A be the "true" binary image and B the putative or "estimated" image. Pixels that belong to B but not A will be called false positives or Type I errors; pixels that belong to A but not B will be called false negatives or Type II errors. In case X is finite, define the type I error rate [45] by

α(A, B) = n(B \ A) / n(X \ A),    (9)

the type II error rate

β(A, B) = n(A \ B) / n(A),    (10)

and the overall misclassification rate

γ(A, B) = n(A △ B) / n(X),    (11)

where n(S) = number of pixels in S and △ denotes set symmetric difference. Some other quantities derived from α, β are discussed in [17, 45]. Two attractive features of γ are that it is a metric on K0, and that it is linear. Under any stochastic model for A and B the mean value of γ(A, B) equals the average over all pixels x of the disagreement probability:

E γ(A, B) = (1/n(X)) Σ_{x∈X} P{a(x) ≠ b(x)},    (12)

where a(x) = 1 if x ∈ A, and 0 otherwise. Thus γ-optimality has an `average risk' interpretation: the estimate b of an image a which minimises expected γ error is that which maximises the pixelwise likelihood P{b(x) = a(x)} for every pixel x. There is a similar Bayesian statement.

It is widely acknowledged [9, p. 299], [48, pp. 97, 110], [58, 59] that pixel misclassification errors are a poor measure of reconstruction fidelity. Discrepancies between A and B are measured by the number of disagreements, regardless of the pattern. Errors such as the displacement of a boundary, that affect a large number of pixels but do not severely affect `shape', are given high values by γ; while errors such as the deletion of a linear feature, filling-in of small holes or deletion of small islands, that involve only a small number of pixels but severely affect `shape', have low values. An example of an effect which is not well detected by α, β, γ is the over-smoothing of segmented images by iterative algorithms such as ICM and deterministic and stochastic relaxation [9, discussion], [23, 48, 47]. See §6.4 for an example. These comments support Canny's observations.

More rigorous statements of this kind can be obtained by considering topological properties of the continuous counterparts of α, β, γ. For arbitrary (X, ρ) with measure μ as in Definition 2 define

α̃(A, B) = μ(B \ A),  β̃(A, B) = μ(A \ B),  γ̃(A, B) = μ(A △ B).

One cannot normalise α̃ and γ̃ as done in (9), (11) unless μ(X) < ∞.

Lemma 3 For An, A ∈ K0, if An → A myopically then α̃(A, An) → 0. The converse is generally false, and so is the corresponding statement concerning α̃(An, A). The analogous statements for β̃ and γ̃ are also generally false.

Proof: If H(An, A) → 0 then by (7), for every r > 0 we have for all sufficiently large n that An ⊆ A^(r), so that μ(An \ A) ≤ μ(A^(r) \ A). Now A^(r) ↓ A as r ↓ 0, so lim_{r↓0} μ(A^(r)) = μ(A). Thus α̃(A, An) → 0. The other statements mentioned are generally false, since e.g. the finite sets are dense in K0 with the myopic topology and have zero measure under any nonatomic μ. □
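The rates (9)-(11) are immediate to compute when images are represented as sets of pixels; a minimal sketch, with A the true image and B the estimate:

```python
def error_rates(A, B, X):
    """Type I rate (9), type II rate (10) and misclassification
    rate (11) for pixel sets A (true) and B (estimate) in raster X."""
    alpha = len(B - A) / len(X - A)   # false positives / true background
    beta = len(A - B) / len(A)        # false negatives / true foreground
    gamma = len(A ^ B) / len(X)       # symmetric difference / raster size
    return alpha, beta, gamma

X = set(range(100))
A = set(range(10))       # true image: 10 foreground pixels
B = set(range(2, 12))    # estimate: the same feature shifted by 2 pixels
print(error_rates(A, B, X))
```

The example illustrates the criticism above: a pure 2-pixel displacement of a boundary already costs β = 0.2, the same penalty that deleting a fifth of the feature outright would incur.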

The following result states that γ̃ convergence and myopic convergence are equivalent for images without `small' features. Define the inner envelope of A ⊆ X as

A^(−r) = {x ∈ A : d(x, A^c) ≥ r}

for r > 0. Say that A is r-closed if

(A^(r))^(−r) = A

and r-open if

(A^(−r))^(r) = A.

These are analogues of standard definitions in mathematical morphology [56, 57]. If X = R^d and ρ is Euclidean distance, r-closure implies s-closure for every 0 < s ≤ r, and similarly for openings. Every compact convex set in R^d is r-closed for every r > 0.

Lemma 4 Let r > 0 be fixed and let C_r, O_r be the classes of compact subsets of X that are respectively s-closed and s-open for every 0 < s ≤ r.
(a) On C_r, An → A myopically implies γ̃(A, An) → 0.
(b) On O_r, γ̃(A, An) → 0 implies An → A myopically.
Thus γ̃ and H are topologically equivalent on C_r ∩ O_r.

Proof: (a): Suppose H(An, A) → 0 for An, A ∈ C_r. For s ≤ r we have, for all sufficiently large n, by (7) that A ⊆ (An)^(s) and An ⊆ A^(s). Since the inner envelope operation is increasing (A ⊆ B ⇒ A^(−r) ⊆ B^(−r)) we have

A^(−s) ⊆ [(An)^(s)]^(−s) = An,

since An is s-closed. That is,

A^(−s) ⊆ An ⊆ A^(s)

for all sufficiently large n. Thus μ(An △ A) ≤ μ(A^(s) \ A^(−s)). Now as s ↓ 0 we have A^(s) ↓ A and A^(−s) ↑ int(A), the topological interior. Since μ is Borel regular the measures of both these sequences converge to μ(A). Thus γ̃(A, An) → 0.

(b): Given 0 < s ≤ r suppose A, B ∈ O_r are such that H(A, B) ≥ s. Then without loss of generality there is a point x ∈ A with d(x, B) ≥ s. Set t = s/3; then since A is t-open, x belongs to some ball D(y, t) of radius t in the metric ρ contained entirely within A. But since d(x, B) ≥ s we have D(y, t) ∩ B = ∅, so that μ(A △ B) ≥ μ(D(y, t)). The latter is bounded below in y by assumption (Definition 2). Hence

inf{ γ̃(A, B) : H(A, B) ≥ s, A, B ∈ O_r } > 0.

It follows that γ̃(An, A) → 0 implies H(An, A) → 0. □

Part (a) is similar to the proof [35, p. 68] that Lebesgue measure is myopically continuous on the compact convex sets of R^d.

3.3 Localization performance ("distance") measures

Measures of localization performance discussed by Peli and Malah [45] for a discrete raster X were the mean error distance

e(A, B) = (1/n(B)) Σ_{x∈B} d(x, A),    (13)

the mean square error distance

e2(A, B) = (1/n(B)) Σ_{x∈B} d(x, A)²,    (14)

and Pratt's [1, 46] "figure of merit"

FOM(A, B) = (1/max{n(A), n(B)}) Σ_{x∈B} 1/(1 + κ d(x, A)²)    (15)

where κ is a scaling constant, usually set to 1/9 when ρ is normalized so that the smallest nonzero distance between pixel neighbours equals 1. Here A is the true image and B the estimated image; note that FOM(A, B) ≠ FOM(B, A). One has 0 < FOM(A, B) ≤ 1 and FOM(A, B) = 1 iff A = B. FOM is the most popular of these and is widely used [1, 5, 25, 45, 46].

The three error measures are insensitive to type II errors. For example, if all errors are of type II, B ⊆ A, then e = e2 = 0 while FOM(A, B) = n(B)/n(A) = 1 − β(A, B). They are also insensitive to the pattern of type I errors, since they are averages of a loss function f(x, A) over all type I error pixels x. A striking example found by Peli and Malah [45] is shown in Figure 1. If the upper image is taken as the true image A, then the two lower images B1, B2 have the same FOM values, FOM(A, B1) = FOM(A, B2). Indeed they also have the same values of α, β and γ.

Figure 1: Peli-Malah counterexample. True picture A (top) and two error pictures B1, B2 (bottom) with the same value of FOM.

Peli and Malah [45] and Van Vliet et al. [66] observed cases where FOM was large, but the visual quality was bad. When FOM was used as a criterion for choosing parameter values in edge detection algorithms [66, p. 186 and section 6], in the case of the classical Laplacian operator the FOM-optimal images often had sections of the true contour missing, or oscillated around the true contour. This behaviour can be explained by observing that for x ∉ A ∪ B we have FOM(A, B ∪ {x}) < FOM(A, B) if and only if d(x, A) > κ^{−1/2} [FOM(A, B)^{−1} − 1]^{1/2}. If FOM(A, B) > 0.9 and κ = 1/9 then this translates into FOM(A, B ∪ {x}) < FOM(A, B) iff d(x, A) ≥ 1. That is, when FOM is large, preference will always be given to type II errors over type I errors, however innocuous the latter might be.

Similar criticisms apply to e and e2; these have the additional disadvantage that they are highly sensitive to background noise. If the error image B contains even one single pixel x far distant from A, its distance value will drastically elevate the mean distance. This is connected with the statistical phenomenon of non-robustness of the arithmetic mean. Note e and e2 have an average risk interpretation analogous to (12), but FOM does not. Splitting the sum in (15) one has

FOM(A, B) = (1/max{n(A), n(B)}) [ n(A ∩ B) + Σ_{x∈B\A} 1/(1 + κ d(x, A)²) ]

or

1 − FOM(A, B) = (1/max{n(A), n(B)}) Σ_{x∈B\A} κ d(x, A)²/(1 + κ d(x, A)²) + max{0, 1 − n(B)/n(A)}.
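Formula (15) is easy to reproduce for small examples; a sketch in the same set representation, with city-block distances and κ = 1/9 assumed:

```python
def fom(A, B, kappa=1 / 9):
    """Pratt's figure of merit (15) for point sets A (true), B (estimate)."""
    def d(x, S):  # distance function, city-block metric
        return min(abs(x[0] - s[0]) + abs(x[1] - s[1]) for s in S)
    return sum(1 / (1 + kappa * d(b, A) ** 2) for b in B) / max(len(A), len(B))

A = {(0, j) for j in range(10)}         # a horizontal line of 10 pixels
shifted = {(1, j) for j in range(10)}   # the whole line displaced by one pixel
half = {(0, j) for j in range(5)}       # half the line deleted (type II errors only)
print(fom(A, shifted))  # ~0.9: every pixel off by one still scores well
print(fom(A, half))     # 0.5, i.e. n(B)/n(A): deletions enter only via the denominator
```

This reproduces the behaviour criticised above: for B ⊆ A the value degenerates to n(B)/n(A), so the pattern of the deleted pixels is invisible to FOM.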

The three error measures seem difficult to interpret because of the normalisation by a variable denominator n(B) or max{n(A), n(B)}. For example it is not clear how to compare FOM(A, B) for fixed A and different B if n(B) > n(A). Peli and Malah concluded that FOM sometimes gives insufficient information and that a better measure is needed. The author is not aware of any theoretical justification for e, e2 or FOM.

For general X, define analogues of e, e2 and FOM by replacing n(·) by μ, and sums by integrals with respect to μ, in (13)-(15). These apply only when μ(A), μ(B) > 0. The (Lebesgue) integrals are always well-defined since the integrand is uniformly continuous by (4) and the domain of integration B is compact.

Lemma 5 Suppose An → A myopically where An, A ∈ K0 satisfy μ(A) > 0, μ(An) > 0. Then e(A, An) → 0 and e2(A, An) → 0. The converse statements are generally false. The corresponding statements for FOM are also generally false.

Lemma 6 Let C_r be as in Lemma 4. Then on C_r, An → A myopically implies FOM(A, An) → 1, but not conversely.

The proofs are similar to those for Lemmas 3 and 4(a) respectively.

3.4 Hausdorff metric

The Hausdorff metric H defined in §2 is never used in practice (to the author's knowledge) as an error measure for digital images. It has been used in some research on numerical analysis and approximation theory [53, 54, 55, 16]. The practical objection to H is its extreme sensitivity to changes in even a small number of pixels, because of the suprema in (5). This is expressed by the following "minimax property".

Lemma 7 For any sets A1, ..., An and B1, ..., Bm,

H( ∪_{i=1}^n Ai, ∪_{j=1}^m Bj ) = max{ max_i min_j H(Ai, Bj), max_j min_i H(Ai, Bj) }    (16)

This is an application of the definition of H to the relation

d(t, ∪_{i=1}^n Ai) = min_i d(t, Ai).

Hence for example

H(A, A ∪ {x}) = d(x, A),

which is unbounded. Errors such as the addition of a small amount of background noise thus give inappropriately high values of H. The Hausdorff metric is therefore so unstable as to be unusable in this context.

4 Discussion of problem

The objections to existing error measures, discussed above, may be summarised as follows. Firstly, in some cases, errors of a particular kind are completely undetected; for example, type II errors are not detected by e, e2. Such error measures do not have a satisfactory topological interpretation at all. Secondly, in most cases, the relative values given to various possible types of errors are not in the desired relation to each other (cf. the discussion in §3.2-3.3). In the cases considered, this imbalance was extreme, in that the continuous-space versions of these error measures are in conflict with the myopic topology. Thirdly, in some cases (e, e2, H) the error measure is not `robust', in that the alteration of a relatively small number of pixel values can have an unbounded effect on the measured error.

We thus arrive at the following list of desiderata for an improved error measure Δ:

1. The error measure Δ should be a metric on K0;
2. Δ should generate the myopic topology;
3. There should be an `average risk' interpretation of Δ similar to (12);
4. The alteration of a fixed number of pixels should have a bounded effect on Δ, or more generally, sup{Δ(A, A ∪ B) : μ(B) ≤ u} < ∞ for each fixed A ∈ K0 and u < ∞.

See [56, p. 72 ff.] for a discussion of metrics in the context of image processing. While a metric is desirable for theoretical purposes, in particular for developing an optimal filtering theory (e.g. [28, 49, 73]), it is arguable whether the metric property is desirable for practical purposes. The symmetry axiom implies equal treatment of type I and type II errors. The triangle inequality effectively means we cannot normalise the error Δ(A, B) by some measure of the size of A or B (as done in the construction of FOM and of α, β, but not γ). Yet these objections would be unimportant if one could find a metric that behaved well in practical experiments.

It is also important to distinguish topologies and uniformities [18, pp. 200-204], which are not discussed in [56]. Two metrics may generate the same topology, yet not generate the same uniformity. The complaints which we have levelled at existing error metrics can best be understood as differences in the corresponding uniformities in the continuous case. One can change a metric so as to preserve the desired topology but change the undesired uniformity.

A number of authors [62, 65, 72] have proposed modifications of γ of the form

(1/N) Σ_{x∈X} z(x; A, B)

where the contribution z(x; A, B) from pixel x depends on the pixel values of A, B in a neighbourhood of x. These enjoy an `average risk' interpretation similar to (12) for γ, but are also liable to similar criticisms. Desideratum 4 above can be achieved by the standard device of concave transformation.

Lemma 8 Let w be any continuous function on [0, ∞] that is concave,

w(s + t) ≤ w(s) + w(t),

and strictly increasing at 0, with w(t) = 0 iff t = 0. If ρ is a metric on a space S, then ρ_w = w ∘ ρ is also a metric, i.e. ρ_w(x, y) = w(ρ(x, y)). The metrics ρ and ρ_w generate the same topology and the same uniformity.

Examples which transform an unbounded metric to a bounded one include

w(t) = t/(1 + t),    (17)
w(t) = tan^{−1}(t),    (18)
w(t) = min{t, c}    (19)

for a fixed c > 0. The effect of concave transformation on the Hausdorff metric H is particularly simple.

Lemma 9 Let w be a concave function as above. If H_ρ denotes the Hausdorff metric defined by a pixel distance metric ρ, then H_{ρ_w} = w ∘ H_ρ, i.e.

H_{ρ_w}(A, B) = w(H_ρ(A, B)).    (20)

Hence H_{ρ_w} generates the same topology and uniformity as H_ρ.

5 Proposed metric

The proposal is simply to replace the supremum in representation (6) of H by an Lp average. Thus in the discrete case

Δ^p(A, B) = [ (1/n(X)) Σ_{x∈X} |d(x, A) − d(x, B)|^p ]^{1/p}    (21)

for 1 ≤ p < ∞. The result is still a metric, and indeed topologically equivalent to H, since the distance functions d(·, A) are equicontinuous Lipschitz functions by (4). More generally we can introduce a concave transformation as follows.

Definition 6 (Delta metric) Let X, ρ, μ be as in Definition 2. Let w : [0, ∞] → [0, W] be any bounded concave function with w(0) = 0. For 1 ≤ p < ∞ define

Δ^p_w(A, B) = [ ∫_X |w(d(x, A)) − w(d(x, B))|^p dμ(x) ]^{1/p}    (22)

for A, B ∈ K0.

Proposition 2 Assume either that X is compact (so that μ(X) < ∞), or that w is eventually constant, w(t) = W for t ≥ t0 where t0 < ∞. Then Δ^p_w is a finite-valued metric on K0 generating the myopic topology.

Proof: For any K1, K2 ∈ K0 the set of points x where max_i d(x, Ki) ≤ t0 is compact. Thus the integrand of (22) is zero outside a compact region, and since μ is a Radon measure the integral in (22) is finite. It is then clear from standard properties of Lp(μ) that Δ^p_w is a metric.

Let Kn, K ∈ K0 and suppose H(Kn, K) → 0. Then d_ρ(·, Kn) → d_ρ(·, K) uniformly. Hence w ∘ d_ρ(·, Kn) → w ∘ d_ρ(·, K) uniformly, which (since the integrals are bounded) implies convergence in Lp, i.e. Δ^p_w(Kn, K) → 0.

Conversely suppose Kn → K in Δ^p_w, i.e. d_τ(·, Kn) → d_τ(·, K) in Lp, where τ = w ∘ ρ. Then writing

R_{n,ε} = {x ∈ X : |d_τ(x, Kn) − d_τ(x, K)| > ε}

for ε > 0, we have μ(R_{n,ε}) → 0 as n → ∞ for each fixed ε. Now the Lipschitz property (4) gives, for any x, y ∈ X,

|d_τ(x, Kn) − d_τ(x, K)| ≤ |d_τ(y, Kn) − d_τ(y, K)| + 2τ(x, y),

so that x ∈ R_{n,ε} implies D_τ(x, ε/3) ⊆ R_{n,ε/3}.


For fixed ε > 0, the μ-measure of the right-hand set converges to zero; yet since closed balls in τ are closed balls in ρ, (3) implies μ(D_τ(x, ε/3)) is bounded below. Thus R_{n,ε} must be empty for sufficiently large n, i.e. the distance functions converge uniformly on X. □

Implementation is straightforward using the distance transform algorithm of Rosenfeld and Borgefors [12, 50, 51]. In applications we shall always use the cutoff transformation w(t) = min{t, c} for a fixed c > 0. In this case the contributions to the integral in (22) are zero for points x further than c units away from A and B. This has the attractive property that the value of Δ^p_w(A, B) does not change if we change the grid size (embed X in a larger space). The possible values of Δ^p_w(A, B) then range from 0 to c.

The parameters c and p determine the tradeoff between localization error and misclassification error. The value of c effectively controls scale: roughly speaking, a misclassification error is equivalent to an error in localization by distance c. For small c the effect is similar to misclassification error; as c → 0 on a discrete grid, (1/c) Δ^p_w(A, B) → γ(A, B)^{1/p}. The value of p determines the relative importance of large localization errors. For large p the effect is similar to the Hausdorff metric; Δ^∞_w(A, B) = H_{ρ_w}(A, B).

Since d(x, A) = 0 for x ∈ A, the sum in (21) includes contributions Σ_{x∈A} d(x, B)^p and Σ_{x∈B} d(x, A)^p which are analogous to e, e2 and FOM. However, the sum in (21) also includes other terms for x outside A and B. Clearly Δ^p_w has an `expected risk' interpretation analogous to (12): under any stochastic model for A, B,

E[ (Δ^p_w(A, B))^p ] = ∫_X E |w(d(x, A)) − w(d(x, B))|^p dμ(x).    (23)

Another version is as follows. Let B be a random compact set [35] and A ∈ K0 fixed. Then by straightforward calculation

E[ (Δ^p_w(A, B))^p ] = ∫_X q_B(x, d_τ(x, A)) dμ(x)    (24)

where τ = w ∘ ρ and

q_B(x, t) = p ∫_t^∞ (s − t)^{p−1} T_B(D_τ(x, s)) ds + t^p − p ∫_0^t (t − s)^{p−1} T_B(D_τ(x, s)) ds

depends only on d_τ(·, A) and on the avoidance functional [35] of B,

T_B(K) = P{B ∩ K = ∅}.
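The discrete metric (21)-(22) with the cutoff transformation can be sketched directly; a minimal brute-force version (city-block distances on a finite raster assumed; a real implementation would use the fast distance transform cited above):

```python
def delta_metric(A, B, X, p=2, c=5):
    """Delta metric (22) with cutoff transformation w(t) = min(t, c):
    an Lp average over the raster X of |w(d(x,A)) - w(d(x,B))|."""
    def d(x, S):  # distance function, city-block metric
        return min(abs(x[0] - s[0]) + abs(x[1] - s[1]) for s in S)
    def w(t):     # cutoff transformation (19)
        return min(t, c)
    total = sum(abs(w(d(x, A)) - w(d(x, B))) ** p for x in X)
    return (total / len(X)) ** (1 / p)

X = {(i, j) for i in range(8) for j in range(8)}
A = {(3, j) for j in range(8)}   # a horizontal line
noisy = A | {(0, 0)}             # the same line plus one stray noise pixel
# Unlike the Hausdorff metric, whose value jumps to d((0,0), A) = 3,
# one stray pixel perturbs the Delta metric only slightly:
print(delta_metric(A, A, X), delta_metric(A, noisy, X))
```

The contribution of the stray pixel is averaged over the whole raster rather than taken as a supremum, which is exactly the robustness asked for in Desideratum 4.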


6 Examples

Throughout this section we have compared the figure of merit FOM with κ = 1/9 against the Δ metric with p = 2 and w the cutoff transformation (19).

6.1 Peli-Malah example

This example (Figure 1) yields a FOM value of 0.941 for both pictures B1, B2. The corresponding Δ² values, with cutoff c = 5, are 0.323 (left picture) and 0.512 (right picture).

6.2 Artificial data

Figure 2: Synthetic true image A

Figure 3: Images gaps and lost

Figure 4: Images shift, bend and barbs

Figures 2-4 show synthetic images (32 × 32 pixels) deviating in various ways from a straight edge. Table 1 reports the computed values of the type I error α, type II error β, misclassification error γ, Pratt's figure of merit with κ = 1/9, and the Δ² metric with cutoff distance 5. The most dramatic disagreement between these measures is for the gaps image, which scores a very bad grade in FOM, an indifferent grade in γ, and scores better than all other images in Δ². FOM gives roughly comparable, high scores to shift, bend and barbs, while γ and Δ² spread them over a wide range.

Image B    α      β      γ       FOM    Δ²
gaps       0      0.313  0.0098  0.688  0.149
lost       0      0.344  0.0108  0.656  0.682
shift      0.011  0.344  0.0214  0.966  0.319
bend       0.010  0.313  0.0195  0.969  0.291
barbs      0.010  0      0.0097  0.952  0.463

Table 1: Error measures for the synthetic images


6.3 Edge detection

The next experiment is modelled on the standard edge detector test of Haralick [29] (see [27, 30, 66]) and compares optimality under FOM and under Δ².

Figure 5: Chessboard image with additive Gaussian noise (SNR = 2)

Figure 5 shows the test image, a chessboard pattern with additive Gaussian noise at signal-to-noise ratio 2.0. Figure 6 shows the true edge image, computed before adding the noise, and cropped from 256 × 256 pixels to 200 × 200 to standardise the image size for comparisons with filtered images.

Figure 6: True edges of chessboard


The edge detector consisted of Gaussian smoothing with standard deviation σ, followed by the classical 4-connected Laplacian, zero-crossing by thresholding and distance transform [66]; then the Lee-Haralick morphological edge strength detector with a pseudocircular mask [66] of size 3 was applied to the smoothed data and the result multiplied by the zero-crossing image. The resulting edge strength image was thresholded at a fixed value to obtain a binary edge image B. For various values of the smoothing parameter σ, we then compared FOM(A, B) with Δ²(A, B). The FOM parameter κ was set to the usual 1/9 and the cutoff parameter of Δ² was c = 5.

Figure 7: FOM and Δ² errors against smoothing parameter σ

Figure 8: Laplace edge detector with FOM-optimal (left) and Δ-optimal smoothing

The plot of FOM and Δ² values (Figure 7) shows that FOM is almost indifferent to a wide range of σ values near its optimum. This is consistent with our theoretical comments about FOM. The optimal values of σ under the two measures were quite different: FOM chooses σ = 2.32 and Δ² chooses σ = 2.08. The corresponding images are shown in Figure 8. The FOM values were 0.941 and 0.939 respectively; the corresponding Δ² values were 0.663 and 0.617.


6.4 ICM algorithm

Our final experiment studies the behaviour of Besag's ICM (Iterated Conditional Modes) algorithm for classification [9]. The `true' image is the 200 × 125 binary image in Figure 9 (originally obtained by thresholding a camera image of a newspaper).

Figure 9: True image of text

Figure 10: Greyscale data image for ICM algorithm


Figure 11: Pixelwise maximum likelihood classi cation

Figure 12: ICM iterations 1, 4, 7 and 10


The data image in Figure 10 was generated using independent Gaussian pixel values, with mean 100, standard deviation 40 for foreground (text) pixels and mean 150, standard deviation 20 for background pixels. Heteroscedastic data (i.e. with unequal variances) were used to encourage an imbalance of type II over type I errors, the most challenging situation for FOM.
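A data image with these statistics is straightforward to simulate; the following sketch uses the stated means and standard deviations, with a hypothetical rectangular foreground standing in for the text image:

```python
# Simulate the observation model: independent Gaussian pixel values,
# mean 100, sd 40 on foreground (text) pixels; mean 150, sd 20 on
# background.  The rectangular "truth" below is a stand-in for the
# newspaper image of Figure 9.
import numpy as np

def make_data(truth, rng=None):
    """truth: boolean array, True = foreground pixel."""
    rng = np.random.default_rng(rng)
    return np.where(truth,
                    rng.normal(100.0, 40.0, truth.shape),   # foreground
                    rng.normal(150.0, 20.0, truth.shape))   # background

truth = np.zeros((125, 200), dtype=bool)
truth[40:80, 50:150] = True
data = make_data(truth, rng=0)
```

Note the overlap of the two class distributions: pixelwise classification of such data is necessarily noisy, which is what the ICM iterations are meant to repair.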

Figure 13: Error measured by the Δ2 metric (solid lines), 1 − FOM (dashed lines) and misclassification error (dotted lines) for successive ICM iterations (iteration 0 is the MLE)


Figure 14: Error measures normalised by value at iteration 11


Figure 11 shows the result of maximum likelihood classification from the data of Figure 10 (i.e. applied independently to each pixel). We then applied Besag's ICM algorithm [9] (eight-connected neighbourhood, 2 × 2 coding, constant interaction for all neighbours) for 11 iterations, with the interaction parameter beginning at 0.4 and increasing in steps of 0.2 until the value 2.0 was reached. Figure 12 shows a selection of the ICM results. Features to note are the dramatic improvement over maximum likelihood after one iteration, and the slow degeneration of the text shapes beyond iteration 4. Figure 13 shows the error measures Δ2, 1 − FOM and the misclassification error for successive ICM iterations. The cutoff for Δ2 was c = 4. To remove irrelevant scale effects, Figure 14 shows the same data normalised to have value 1 at iteration 11. The initial improvement from MLE to iteration 1 is registered clearly by all three error measures. The degeneration of shape from about iteration 4 onwards is almost undetected by FOM and the misclassification error, but quite clearly detected by Δ2.
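One ICM update under this model can be sketched as follows. This is a simplified illustration, not Besag's code: it updates all pixels synchronously rather than through the 2 × 2 coding sets, and assumes the Gaussian observation model stated above.

```python
# One ICM sweep: each pixel takes the label minimising the Gaussian
# negative log-likelihood minus beta times the number of eight-connected
# neighbours carrying that label.  Synchronous update is a simplification
# of the coding-set scheme used in the paper.
import numpy as np
from scipy.ndimage import convolve

MEANS = np.array([100.0, 150.0])   # label 0 = foreground, 1 = background
SDS = np.array([40.0, 20.0])

def icm_sweep(labels, data, beta):
    """labels: 0/1 integer array; returns the updated labelling."""
    kernel = np.ones((3, 3)); kernel[1, 1] = 0   # eight neighbours
    n1 = convolve((labels == 1).astype(float), kernel, mode="constant")
    n0 = convolve((labels == 0).astype(float), kernel, mode="constant")

    def energy(k, nk):
        # Negative log conditional density, up to a constant.
        return ((data - MEANS[k]) ** 2 / (2 * SDS[k] ** 2)
                + np.log(SDS[k]) - beta * nk)

    return np.where(energy(1, n1) < energy(0, n0), 1, 0)
```

With beta = 0 a sweep reduces to the pixelwise maximum likelihood classification; the experiment above starts from that MLE and repeats sweeps with beta rising from 0.4 to 2.0 in steps of 0.2.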


Acknowledgements

For helpful comments the author is grateful to G Beer, M Berman, J E Besag, L Bischof, T C Brown, A J Cabo, N I Fisher, J Freedman, R D Gill, H J A M Heijmans, M N M van Lieshout, P Nacken, D B Pollard, B D Ripley, D E Robinson, A N Shiryayev, A G Steenbeek, C C Taylor, T R Turner and W Vervaat, and participants in the Workshop on Statistics and Pattern Recognition, Edinburgh, 1985, `StatComp', Melbourne 1987, and ANZAAS Congress, Townsville 1987.

References

[1] I. E. Abdou and W. K. Pratt. Quantitative design and evaluation of enhancement/thresholding edge detectors. Proceedings of the IEEE, 67:753–763, 1979.
[2] A. J. Baddeley. A class of image metrics. In Australian and New Zealand Association for the Advancement of Science, Proceedings 11th Congress, Townsville 1987, 1987.
[3] A. J. Baddeley. Hausdorff metric for capacities. Research report BS-R9127, Centrum voor Wiskunde en Informatica, 1991.
[4] A. J. Baddeley. An error metric for binary images. In W. Forstner and S. Ruwiedel, editors, Robust Computer Vision, pages 59–78, Karlsruhe, 1992. Wichmann.
[5] D. G. Bailey and R. M. Hodgson. Range filters: intensity subrange filters and their properties. Image Vision Computing, 3:99–110, 1985.
[6] G. Beer. On convergence of closed sets in a metric space and distance functions. Bulletin of the Australian Mathematical Society, 31:421–432, 1985.
[7] G. Beer. Metric spaces with nice closed balls and distance functions for closed sets. Bulletin of the Australian Mathematical Society, 35:81–96, 1987.
[8] J. Besag. Discussion of paper by P. Switzer. Bulletin of the International Statistical Institute, 50:422–425, 1983.
[9] J. Besag. On the statistical analysis of dirty pictures (with discussion). Journal of the Royal Statistical Society, series B, 48:259–302, 1986.
[10] P. Billingsley. Convergence of probability measures. John Wiley and Sons, New York, 1968.
[11] G. Borgefors. Distance transformations in arbitrary dimensions. Computer Vision, Graphics and Image Processing, 27:321–345, 1984.

[12] G. Borgefors. Distance transformations in digital images. Computer Vision, Graphics and Image Processing, 34:344–371, 1986.
[13] T. M. Cannon, H. J. Trussell, and B. R. Hunt. Comparison of image restoration methods. Applied Optics (Opt. Soc. Amer.), 17:3384–3390, 1978.
[14] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8:679–698, 1986.
[15] C. Castaing and M. Valadier. Convex analysis and measurable multifunctions. Lecture Notes in Mathematics 580. Springer, 1977.
[16] P. J. Davis, R. A. Vitale, and E. Ben-Sabar. On the deterministic and stochastic approximation of regions. Journal of Approximation Theory, 21:60–88, 1977.
[17] E. S. Deutsch and J. R. Fram. A quantitative study of the orientation bias of some edge detector schemes. IEEE Transactions on Computing, 27:205–213, 1978.
[18] J. Dugundji. Topology. Allyn and Bacon, Boston, Mass., 1966.
[19] H. Federer. Geometric Measure Theory. Springer Verlag, Heidelberg, 1969.
[20] J. M. G. Fell. A Hausdorff topology for the closed subsets of a locally compact non-Hausdorff space. Proc. Amer. Math. Soc., 13:472–476, 1962.
[21] Z. Frolík. Concerning topological convergence of sets. Czechoslovak Mathematical Journal, 10:168–180, 1960.
[22] M. Gabbouj and E. J. Coyle. Minimum mean absolute error stack filtering with structural constraints and goals. IEEE Transactions on Acoustics, Speech and Signal Processing, 38:955–968, 1990.
[23] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.
[24] S. Geman and D. E. McClure. Bayesian image analysis: application to single photon emission tomography. Preprint, Applied Mathematics Dept., Brown University.
[25] J. J. Gerbrands, E. Backer, and W. A. G. van der Hoeven. Quantitative evaluation of edge detection by dynamic programming. In Gelsema and Kanal, editors, Pattern Recognition in Practice II, pages 91–99. Elsevier, North-Holland, New York, 1986.
[26] F. Godtliebsen. Noise reduction using Markov random fields. J. Magnetic Resonance, 92:102–114, 1991.

[27] W. E. L. Grimson and E. C. Hildreth. Comments on "digital step edges from zero crossings of second directional derivatives". IEEE Transactions on Pattern Analysis and Machine Intelligence, 7:121–127, 1985.
[28] R. W. Hamming. Digital filters. Signal Processing Series. Prentice-Hall, Englewood Cliffs, N.J., second edition, 1983.
[29] R. M. Haralick. Digital step edges from zero crossing of second directional derivatives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:58–68, 1984.
[30] R. M. Haralick. Author's reply. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7:127–129, 1985.
[31] O. Kallenberg. Random measures. Akademie Verlag/Academic Press, Berlin/New York, third edition, 1983.
[32] H. Kiiveri and N. Campbell. Allocation of remote sensing data using Markov models for spectral variables and pixel labels. Preprint.
[33] E. Klein and A. C. Thompson. Theory of Correspondences. Canadian Mathematical Society Monographs. Wiley, 1984.
[34] J.-H. Lin, T. M. Sellke, and E. J. Coyle. Adaptive stack filtering under the mean absolute error criterion. IEEE Transactions on Acoustics, Speech and Signal Processing, 38:938–954, 1990.
[35] G. Matheron. Random sets and integral geometry. John Wiley and Sons, New York, 1975.
[36] E. Michael. Topologies on spaces of subsets. Trans. Amer. Math. Soc., 71:152–182, 1951.
[37] E. Mohn, N. Hjort, and G. Storvik. A comparison of some classification methods in remote sensing by a Monte Carlo study. Norwegian Computing Centre Report Kart/03/86, Norsk Regnesentral, Oslo, 1986.
[38] F. Natterer. The mathematics of computerized tomography. Teubner/Wiley, Stuttgart/Chichester, 1986.
[39] T. Norberg. Random capacities and their distributions. Probability Theory and Related Fields, 73:281–297, 1986.
[40] T. Norberg. Existence theorems for measures on continuous posets, with applications to random set theory. Mathematica Scandinavica, 64:15–51, 1989.

[41] T. Norberg. On the convergence of probability measures on continuous posets. Preprint 1990-08 ISSN 0347-2809, Department of Mathematics, Chalmers University of Technology and the University of Goteborg, Goteborg, Sweden, 1990.
[42] T. Norberg and W. Vervaat. Capacities on non-Hausdorff spaces. Preprint 1989-11 ISSN 0347-2809, Department of Mathematics, Chalmers University of Technology and the University of Goteborg, Goteborg, Sweden, 1989.
[43] G. L. O'Brien and W. Vervaat. Capacities, large deviations and loglog laws. In S. Cambanis, G. Samorodnitsky, and M. S. Taqqu, editors, Stable Processes and Related Topics, pages 43–83. Birkhauser, 1991.
[44] W. A. Pearlman. A visual system model and a new distortion measure in the context of image processing. J. Opt. Soc. Amer., 68:374–386, 1978.
[45] T. Peli and D. Malah. A study on edge detection algorithms. Computer Graphics and Image Processing, 20:1–21, 1982.
[46] W. K. Pratt. Digital image processing. John Wiley and Sons, New York, 1977.
[47] B. D. Ripley. Statistical inference for spatial processes. Cambridge University Press, 1988.
[48] B. D. Ripley. Statistics, images and pattern recognition. Canad. J. Statist., 14:83–111, 1986.
[49] A. Rosenfeld and A. C. Kak. Digital picture processing. Academic Press, Orlando, 2nd edition, 1982.
[50] A. Rosenfeld and J. L. Pfalz. Sequential operations in digital picture processing. Journal of the Association for Computing Machinery, 13:471, 1966.
[51] A. Rosenfeld and J. L. Pfalz. Distance functions on digital pictures. Pattern Recognition, 1:33–61, 1968.
[52] G. Salinetti and R. J. B. Wets. On the convergence in distribution of measurable multifunctions (random sets), normal integrands, stochastic processes and stochastic infima. Math. Operations Research, 11:385–419, 1986.
[53] B. Sendov. Hausdorffsche Metrik und Approximation. Numerische Mathematik, 9:214–266, 1966.
[54] B. Sendov. Some questions on the theory of approximation of functions and sets in the Hausdorff metric. Russian Mathematical Surveys, 24:143–183, 1969.

[55] B. Sendov. Hausdorff approximations, volume 50 of Mathematics and its Applications. Kluwer, Dordrecht, Boston, London, 1990.
[56] J. Serra. Image analysis and mathematical morphology. Academic Press, London, 1982.
[57] J. Serra, editor. Image analysis and mathematical morphology, volume 2: Theoretical advances. Academic Press, London, 1988.
[58] L. A. Shepp, S. K. Hilal, and R. A. Schulz. The tuning fork artifact in computerized tomography. Computer Graphics and Image Processing, 10:246–255, 1979.
[59] L. A. Shepp and J. A. Stein. Simulated reconstruction artifacts in computerized X-ray tomography. In M. M. Ter-Pogossian, editor, Reconstruction Tomography in Diagnostic Radiology and Nuclear Medicine, 1976.
[60] L. A. Shepp and Y. Vardi. Maximum likelihood reconstruction for emission tomography. IEEE Transactions on Medical Imaging, 1:113–122, 1982.
[61] B. W. Silverman, M. C. Jones, J. D. Wilson, and D. W. Nychka. A smoothed EM approach to indirect estimation problems, with particular reference to stereology and emission tomography (with discussion). Journal of the Royal Statistical Society, series B, 52:271–324, 1990.
[62] L. J. Spreeuwers and F. van der Heijden. An edge detector evaluation method based on average risk. In W. Forstner and S. Ruwiedel, editors, Robust Computer Vision, pages 79–89, Karlsruhe, 1992. Wichmann.
[63] D. Stoyan, W. S. Kendall, and J. Mecke. Stochastic Geometry and its Applications. John Wiley and Sons, Chichester, 1987.
[64] H. D. Tagare and R. J. P. deFigueiredo. On the localization performance measure and optimal edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:1186–1190, 1990.
[65] C. C. Taylor. Measure of similarity between two images. Technical report, University of Strathclyde, 1988.
[66] L. J. van Vliet, I. T. Young, and A. L. D. Beckers. A nonlinear Laplace operator as edge detector in noisy images. Computer Vision, Graphics and Image Processing, 45:167–195, 1989.
[67] Y. Vardi, L. A. Shepp, and L. Kaufman. A statistical model for positron emission tomography. J. Amer. Statist. Assoc., 80:8–, 1985.
[68] W. Vervaat. Stationary self-similar processes and random semi-continuous functions. In E. Eberlein and M. S. Taqqu, editors, Dependence in Probability and Statistics. Birkhauser, Boston, 1986.

[69] W. Vervaat. Random semicontinuous functions and extremal processes. Research Report MS-R8801, Centrum voor Wiskunde en Informatica, Amsterdam, The Netherlands, 1987.
[70] W. Vervaat. Narrow and vague convergence of set functions. Statistics and Probability Letters, 6:295–298, 1988.
[71] R. Wijsman. Convergence of sequences of convex sets, cones and functions II. Transactions of the American Mathematical Society, 123:32–45, 1966.
[72] B. S. Yandell, C. C. Taylor, and B. D. Ripley. Inference for image reconstructions. Draft manuscript, 1990.
[73] C. K. Yuen and D. Fraser. Digital spectral analysis. CSIRO/Pitman, Melbourne, 1979.

