The Significance of Recall in Automatic Metrics for MT Evaluation

Alon Lavie, Kenji Sagae, and Shyamsundar Jayaraman
Language Technologies Institute Carnegie Mellon University {alavie,sagae,shyamj}@cs.cmu.edu

Abstract. Recent research has shown that a balanced harmonic mean (F1 measure) of unigram precision and recall outperforms the widely used BLEU and NIST metrics for Machine Translation evaluation in terms of correlation with human judgments of translation quality. We show that significantly better correlations can be achieved by placing more weight on recall than on precision. While this may seem unexpected, since BLEU and NIST focus on n-gram precision and disregard recall, our experiments show that correlation with human judgments is highest when almost all of the weight is assigned to recall. We also show that stemming is significantly beneficial not just to simpler unigram precision and recall based metrics, but also to BLEU and NIST.



1 Introduction

Automatic metrics for machine translation (MT) evaluation have been receiving significant attention in the past two years, since IBM's BLEU metric was proposed and made available [1]. BLEU and the closely related NIST metric [2] have been extensively used for comparative evaluation of the various MT systems developed under the DARPA TIDES research program, as well as by other MT researchers. Several other automatic metrics for MT evaluation have been proposed since the early 1990s. These include various formulations of measures of "edit distance" between an MT-produced output and a reference translation [3], [4], and similar measures such as "word error rate" and "position-independent word error rate" [5], [6]. The utility and attractiveness of automatic metrics for MT evaluation have been widely recognized by the MT community. Evaluating an MT system using such automatic metrics is much faster, easier and cheaper than human evaluation, which requires trained bilingual evaluators. In addition to their utility for comparing the performance of different systems on a common translation task, automatic metrics can be applied on a frequent and ongoing basis during system development, in order to guide the development of the system based on concrete performance improvements. In this paper, we present a comparison between the widely used BLEU and NIST metrics and a set of easily computable metrics based on unigram precision and recall. Using several empirical evaluation methods that have been proposed

in the recent literature as concrete means to assess the level of correlation between automatic metrics and human judgments, we show that higher correlations can be obtained with fairly simple and straightforward metrics. While recent researchers [7], [8] have shown that a balanced combination of precision and recall (the F1 measure) has improved correlation with human judgments compared to BLEU and NIST, we claim that even better correlations can be obtained by assigning more weight to recall than to precision. In fact, our experiments show that the best correlations are achieved when recall is assigned almost all the weight. Previous work by Lin and Hovy [9] has shown that a recall-based automatic metric for evaluating summaries outperforms the BLEU metric on that task. Our results show that this is also the case for evaluation of MT. We also demonstrate that stemming both MT output and reference strings prior to their comparison, which allows different morphological variants of a word to be considered as "matches", significantly further improves the performance of the metrics. We describe the metrics used in our evaluation in Section 2, where we also discuss certain characteristics of the BLEU and NIST metrics that may account for the advantage of metrics based on unigram recall. Our evaluation methodology and the data used for our experimentation are described in Section 3. Our experiments and their results are described in Section 4. Future directions and extensions of this work are discussed in Section 5.


2 Evaluation Metrics

The metrics used in our evaluations, in addition to BLEU and NIST, are based on explicit word-to-word matches between the translation being evaluated and each of one or more reference translations. If more than a single reference translation is available, the translation is matched with each reference independently, and the best-scoring match is selected. While this does not allow us to simultaneously match different portions of the translation with different references, it supports the use of recall as a component in scoring each possible match. For each metric, including BLEU and NIST, we examine the case where matching requires that the matched word in the translation and reference be identical (the standard behavior of BLEU and NIST), and the case where stemming is applied to both strings prior to the matching¹. In the second case, we stem both translation and references prior to matching and then require identity on stems. We plan to experiment in the future with less strict matching schemes that will consider matching synonymous words (with some cost), as described in Section 5.
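The per-reference matching scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the function names are hypothetical, raw match count stands in for the metric score used to pick the best reference, and any stemmer can be plugged in through the optional `stem` argument.

```python
from collections import Counter

def match_count(hyp_tokens, ref_tokens):
    # One-to-one exact matches via multiset intersection: each reference
    # word can be consumed by at most one hypothesis word.
    return sum((Counter(hyp_tokens) & Counter(ref_tokens)).values())

def best_reference_match(hypothesis, references, stem=None):
    # Score the hypothesis against each reference independently and keep
    # the reference with the most one-to-one word matches.
    def prep(sentence):
        tokens = sentence.lower().split()
        return [stem(t) for t in tokens] if stem else tokens
    hyp = prep(hypothesis)
    return max(((match_count(hyp, prep(ref)), ref) for ref in references),
               key=lambda pair: pair[0])
```

With stemming supplied, morphological variants count as matches, which is the second matching condition examined in the paper.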

2.1 BLEU and NIST

The main principle behind IBM's BLEU metric [1] is the measurement of the overlap in unigrams (single words) and higher-order n-grams of words between a translation being evaluated and a set of one or more reference translations. The main component of BLEU is n-gram precision: the proportion of the matched n-grams out of the total number of n-grams in the evaluated translation. Precision is calculated separately for each n-gram order, and the precisions are combined via geometric averaging. BLEU does not take recall into account directly. Recall, the proportion of the matched n-grams out of the total number of n-grams in the reference translation, is extremely important for assessing the quality of MT output, as it reflects the degree to which the translation covers the entire content of the translated sentence. BLEU does not use recall because the notion of recall is unclear when simultaneously matching against multiple reference translations (rather than a single reference). To compensate for recall, BLEU uses a Brevity Penalty, which penalizes translations for being "too short". The NIST metric is conceptually similar to BLEU in most aspects, including the weaknesses discussed below:

- Lack of recall: We believe that the brevity penalty in BLEU does not adequately compensate for the lack of recall. Our experimental results strongly support this claim.
- Lack of explicit word matching between translation and reference: N-gram counts do not require an explicit word-to-word matching, but this can result in counting incorrect "matches", particularly for common function words. A more advanced metric that we are currently developing (see Section 4.3) uses the explicit word matching to assess the grammatical coherence of the translation.
- Use of geometric averaging of n-grams: Geometric averaging results in a score of zero whenever one of the component n-gram scores is zero. Consequently, BLEU scores at the sentence level can be meaningless. While BLEU was intended to be used only for aggregate counts over an entire test set (and not at the sentence level), a metric that exhibits high levels of correlation with human judgments at the sentence level would be highly desirable. In experiments we conducted, a modified version of BLEU that uses equal-weight arithmetic averaging of n-gram scores was found to have better correlation with human judgments at both the sentence and system level.

¹ We include BLEU and NIST in our evaluations on stemmed data, but since neither one includes stemming as part of the metric, the resulting BLEU-stemmed and NIST-stemmed scores are not truly BLEU and NIST scores. They serve to illustrate the effectiveness of stemming in MT evaluation.
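The contrast between geometric and arithmetic averaging is easy to see in a small numeric sketch (the precision values below are invented for illustration): a sentence with no matching 4-gram zeroes out the geometric combination entirely, while the arithmetic average degrades gracefully.

```python
import math

def geometric_avg(precisions):
    # BLEU-style combination: the result is zero whenever any
    # component n-gram precision is zero.
    if any(p == 0 for p in precisions):
        return 0.0
    return math.exp(sum(math.log(p) for p in precisions) / len(precisions))

def arithmetic_avg(precisions):
    # Equal-weight arithmetic alternative: a single zero component
    # lowers the score but does not erase it.
    return sum(precisions) / len(precisions)

ngram_precisions = [0.8, 0.5, 0.2, 0.0]  # 1- through 4-gram; no 4-gram matched
```

Here `geometric_avg(ngram_precisions)` is 0.0 while `arithmetic_avg(ngram_precisions)` is 0.375, which is why sentence-level BLEU scores can be meaningless for short sentences.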

2.2 Metrics Based on Unigram Precision and Recall

The following metrics were used in our evaluations:

1. Unigram Precision: As mentioned before, we consider only exact one-to-one matches between words. Precision is calculated as follows:

   P = m / wt

   where m is the number of words in the translation that match words in the reference translation, and wt is the number of words in the translation. This may be interpreted as the fraction of the words in the translation that are present in the reference translation.

2. Unigram Precision with Stemming: Same as above, but the translation and references are stemmed before precision is computed.

3. Unigram Recall: As with precision, only exact one-to-one word matches are considered. Recall is calculated as follows:

   R = m / wr

   where m is the number of matching words, and wr is the number of words in the reference translation. This may be interpreted as the fraction of words in the reference that appear in the translation.

4. Unigram Recall with Stemming: Same as above, but the translation and references are stemmed before recall is computed.

5. F1: The harmonic mean [10] of precision and recall, computed as follows:

   F1 = 2PR / (P + R)

6. F1 with Stemming: Same as above, but using the stemmed version of both precision and recall.

7. Fmean: This is similar to F1, but recall is weighted nine times more heavily than precision. The precise amount by which recall outweighs precision is less important than the fact that most of the weight is placed on recall. The balance used here was estimated using a development set of translations and references (we also report results on a large test set that was not used in any way to determine any parameters in any of the metrics). Fmean is calculated as follows:

   Fmean = 10PR / (9P + R)
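The unstemmed metrics above reduce to a few lines of code. The sketch below is our own illustration (hypothetical function name); it computes P, R, F1 and Fmean for a tokenized translation against a single reference, using multiset intersection for the one-to-one exact matches:

```python
from collections import Counter

def unigram_scores(hyp_tokens, ref_tokens):
    # m: one-to-one exact word matches between translation and reference.
    m = sum((Counter(hyp_tokens) & Counter(ref_tokens)).values())
    P = m / len(hyp_tokens)            # P = m / wt
    R = m / len(ref_tokens)            # R = m / wr
    f1 = 2 * P * R / (P + R) if P + R else 0.0
    fmean = 10 * P * R / (9 * P + R) if P + R else 0.0
    return P, R, f1, fmean
```

For example, scoring ["the", "cat", "sat"] against the reference ["the", "cat", "sat", "down"] gives P = 1.0 and R = 0.75; Fmean sits much closer to R than F1 does, reflecting the 9:1 weighting of recall.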


3 Evaluating MT Evaluation Metrics

We evaluated the metrics described in Section 2 and compared their performance with BLEU and NIST on two large data sets: the DARPA/TIDES 2002 and 2003 Chinese-to-English MT Evaluation sets. The data in both cases consists of approximately 900 sentences with four reference translations each. Both evaluations had corresponding human assessments, with two human judges evaluating each translated sentence. The human judges assign an Adequacy Score and a Fluency Score to each sentence. Each score ranges from one to five (with one being the poorest grade and five the highest). The adequacy and fluency scores of the two judges for each sentence are averaged together, and an overall average adequacy and average fluency score is calculated for each evaluated system. The total human score for each system is the sum of the average adequacy and average fluency scores, and can range from two to ten. The data from the 2002 evaluation contains system output and human evaluation scores for seven systems. The 2003 data includes system output and human evaluation scores for six systems. The 2002 set was used in determining the weights of precision and recall in the Fmean metric.
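The human-score arithmetic described above can be sketched as follows (the function name and the example scores are our own invention, purely for illustration):

```python
def total_human_score(adequacy_pairs, fluency_pairs):
    # Each element of adequacy_pairs/fluency_pairs is a (judge1, judge2)
    # tuple for one sentence, each score in 1..5. The system-level total
    # is the average adequacy plus the average fluency, in the range 2..10.
    def system_average(pairs):
        return sum((a + b) / 2 for a, b in pairs) / len(pairs)
    return system_average(adequacy_pairs) + system_average(fluency_pairs)
```

For instance, two sentences judged (4, 5) and (3, 3) for adequacy and (4, 4) and (2, 3) for fluency yield a total human score of 3.75 + 3.25 = 7.0.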


3.1 Evaluation Methodology

Our goal in the evaluation of the MT scoring metrics is to effectively quantify how well each metric correlates with human judgments of MT quality. Several different experimental methods have been proposed and used in recent work by various researchers. In the experiments reported here, we use two methods of assessment:

1. Correlation of automatic metric scores and human scores at the system level: We plot the automatic metric score assigned to each tested system against the average total human score assigned to the system, and calculate a correlation coefficient between the metric scores and the human scores. Melamed et al. [7], [8] suggest using the Spearman rank correlation coefficient as an appropriate measure for this type of correlation experiment. The rank correlation coefficient abstracts away from the absolute scores and measures the extent to which the two scores (human and automatic) rank the systems similarly. We feel that rank correlation is not a sufficiently sensitive evaluation criterion, since even poor automatic metrics are capable of correctly ranking systems that are very different in quality. We therefore opted to evaluate the correlation using the Pearson correlation coefficient, which takes into account the distances of the data points from an optimal regression curve. This method has been used by various other researchers [6] and also in the official DARPA/TIDES evaluations.

2. Correlation of score differentials between pairs of systems: For each pair of systems we calculate the differentials between the systems for both the human score and the metric score. We then plot these differentials and calculate a Pearson correlation coefficient between them. This method was suggested by Coughlin [11]. It provides significantly more data points for establishing correlation between the MT metric and the human scores. It makes the reasonable assumption that the differentials of automatic metric and human scores should be highly correlated.
This assumption is reasonable if both human scores and metric scores are linear in nature, which is generally true for the metrics we compare here. As mentioned before, the values presented in this paper are Pearson's correlation coefficients, and consequently they range from -1 to 1, with 1 representing a very strong association between the automatic score and the human score. Thus the different metrics are assessed primarily by looking at which metric has a higher correlation coefficient in each scenario. In order to validate the statistical significance of the differences in the scores, we apply a commonly used bootstrap sampling technique [12] to estimate the variability over the test set, and establish confidence intervals for each of the system scores and the correlation coefficients.
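A minimal sketch of this machinery, combining a hand-rolled Pearson coefficient with a percentile bootstrap, is shown below. The resampling scheme, sample count, seed and function names are our assumptions; the paper does not specify its exact bootstrap parameters.

```python
import random

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length score lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    denom = sx * sy
    return cov / denom if denom else 0.0   # guard against degenerate samples

def bootstrap_ci(pairs, n_samples=1000, alpha=0.05, seed=0):
    # Percentile-bootstrap confidence interval for the Pearson coefficient
    # computed over (metric_score, human_score) pairs.
    rng = random.Random(seed)
    stats = []
    for _ in range(n_samples):
        sample = [rng.choice(pairs) for _ in pairs]
        stats.append(pearson([m for m, _ in sample], [h for _, h in sample]))
    stats.sort()
    lo = stats[int(n_samples * alpha / 2)]
    hi = stats[int(n_samples * (1 - alpha / 2)) - 1]
    return lo, hi
```

Resampling at the level of (metric, human) pairs keeps each system's two scores together, so the interval reflects variability in the correlation itself rather than in either score list alone.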

Table 1. Correlation coefficients with human judgments for each metric on the DARPA/TIDES 2002 Chinese data set

Metric       Pearson's Coefficient   Confidence Interval
NIST         0.603                   +/- 0.049
NIST-stem    0.740                   +/- 0.043
BLEU         0.461                   +/- 0.058
BLEU-stem    0.528                   +/- 0.061
P            0.175                   +/- 0.052
P-stem       0.257                   +/- 0.065
R            0.615                   +/- 0.042
R-stem       0.757                   +/- 0.042
F1           0.425                   +/- 0.047
F1-stem      0.564                   +/- 0.052
Fmean        0.585                   +/- 0.043
Fmean-stem   0.733                   +/- 0.044


4 Metric Evaluation

4.1 Correlation of Automatic Metric Scores and Human Scores at the System Level

We first compare the various metrics in terms of the correlation they have with total human scores at the system level. For each metric, we plot the metric and total human scores assigned to each system and calculate the correlation coefficient between the two scores. Tables 1 and 2 summarize the results for the various metrics on the 2002 and 2003 data sets. All metrics show much higher levels of correlation with human judgments on the 2003 data than on the 2002 data. The 2002 data exhibits several anomalies that have been identified and discussed by several other researchers [13]. Three of the 2002 systems have output that contains significantly higher amounts of "noise" (non-ASCII characters) and upper-cased words, which are detrimental to the automatic metrics. The variability within the 2002 set is also much higher than within the 2003 set, as reflected by the confidence intervals of the various metrics. The levels of correlation of the different metrics are quite consistent across both the 2002 and 2003 data sets. Unigram recall and Fmean have significantly higher levels of correlation than BLEU and NIST. Unigram precision, on the other hand, has a poor level of correlation. The performance of F1 is inferior to Fmean on the 2002 data. On the 2003 data, F1 is inferior to Fmean, but stemmed F1 is about equivalent to Fmean. Stemming improves correlations for all metrics on the 2002 data. On the 2003 data, stemming improves correlation for all metrics except recall and Fmean, where the correlation coefficients are already so high that stemming no longer has a statistically significant effect. Recall, Fmean and NIST also exhibit more stability than the other metrics, as reflected by the confidence intervals.

Table 2. Correlation coefficients with human judgments for each metric on the DARPA/TIDES 2003 Chinese data set

Metric       Pearson's Coefficient   Confidence Interval
NIST         0.892                   +/- 0.013
NIST-stem    0.915                   +/- 0.010
BLEU         0.817                   +/- 0.021
BLEU-stem    0.843                   +/- 0.018
P            0.683                   +/- 0.041
P-stem       0.752                   +/- 0.041
R            0.961                   +/- 0.011
R-stem       0.940                   +/- 0.014
F1           0.909                   +/- 0.025
F1-stem      0.948                   +/- 0.014
Fmean        0.959                   +/- 0.012
Fmean-stem   0.952                   +/- 0.013


4.2 Correlation of Score Differentials between Pairs of Systems

We next calculated the score differentials for each pair of systems that were evaluated and assessed the correlation between the automatic score differentials and the human score differentials. The results of this evaluation are summarized in Tables 3 and 4. The results of the system-pair differential correlation experiments are very consistent with the system-level correlation results. Once again, unigram recall and Fmean have significantly higher levels of correlation than BLEU and NIST. The effects of stemming are somewhat less pronounced in this evaluation.
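The pairwise-differential construction is straightforward to sketch (the system labels and scores below are hypothetical; the resulting pairs would then be fed to a Pearson correlation):

```python
from itertools import combinations

def score_differentials(human_scores, metric_scores):
    # human_scores/metric_scores: per-system scores keyed by system id.
    # Returns one (human_diff, metric_diff) pair for every pair of systems.
    diffs = []
    for a, b in combinations(sorted(human_scores), 2):
        diffs.append((human_scores[a] - human_scores[b],
                      metric_scores[a] - metric_scores[b]))
    return diffs

human = {"A": 7.0, "B": 5.0, "C": 4.0}     # invented total human scores
metric = {"A": 0.6, "B": 0.5, "C": 0.3}    # invented metric scores
```

With n systems this yields n(n-1)/2 data points rather than n, which is why the method provides significantly more data for establishing correlation.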

4.3 Discussion

It is clear from these results that unigram recall has a very strong correlation with human assessment of MT quality, and stemming often strengthens this correlation. This follows the intuitive notion that MT system output should contain as much of the meaning of the input as possible. It is perhaps surprising that unigram precision, on the other hand, has such low correlation. It is still important, however, to factor precision into the final score assigned to a system, to prevent systems that output very long translations from receiving inflated scores (as an extreme example, a system that outputs every word in its vocabulary for every translation would consistently score very high in unigram recall, regardless of the quality of the translation). Our Fmean metric is effective in combining precision and recall. Because recall is weighted heavily, the Fmean scores have high correlations. For both data sets tested, recall and Fmean performed equally well (differences were statistically insignificant), even though precision performs much worse. Because we use a weighted harmonic mean, where precision and recall are multiplied, low

Table 3. Correlation coefficients for pairwise system comparisons on the DARPA/TIDES 2002 Chinese data set

Metric       Pearson's Coefficient   Confidence Interval
NIST         0.679                   +/- 0.042
NIST-stem    0.774                   +/- 0.041
BLEU         0.498                   +/- 0.054
BLEU-stem    0.559                   +/- 0.058
P            0.298                   +/- 0.051
P-stem       0.325                   +/- 0.064
R            0.743                   +/- 0.032
R-stem       0.845                   +/- 0.029
F1           0.549                   +/- 0.042
F1-stem      0.643                   +/- 0.046
Fmean        0.711                   +/- 0.033
Fmean-stem   0.818                   +/- 0.032

levels of precision properly penalize the Fmean score (thus disallowing the case of a system scoring high simply by outputting many words). One feature of BLEU and NIST that is not included in simple unigram-based metrics is the approximate notion of word order or grammatical coherence achieved by the use of higher-order n-grams. We have begun development of a new metric that combines the Fmean score with an explicit measure of grammatical coherence. This metric, METEOR (Metric for Evaluation of Translation with Explicit word Ordering), performs a maximal-cardinality match between translations and references, and uses the match to compute a coherence-based penalty. This computation is done by assessing the extent to which the matched words between translation and reference constitute well-ordered, coherent "chunks". Preliminary experiments with METEOR have yielded promising results, achieving levels of correlation similar to (but so far not statistically significantly better than) the simpler measures of Fmean and recall.
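The "chunks" notion can be illustrated with a small sketch. This is our own reconstruction of the idea, not the METEOR implementation (the actual penalty formula is not specified in this paper): a chunk is a maximal run of matched word pairs that is contiguous and identically ordered in both hypothesis and reference, so fewer chunks means a more coherent word order.

```python
def count_chunks(alignment):
    # alignment: list of (hyp_index, ref_index) matched word pairs from a
    # maximal-cardinality word matching. Adjacent pairs that advance by one
    # position in BOTH strings extend the current chunk; any other step
    # starts a new chunk.
    if not alignment:
        return 0
    alignment = sorted(alignment)
    chunks = 1
    for (h1, r1), (h2, r2) in zip(alignment, alignment[1:]):
        if not (h2 == h1 + 1 and r2 == r1 + 1):
            chunks += 1
    return chunks
```

A translation identical to its reference yields a single chunk, while scrambled word order fragments the matching into many chunks, which a coherence penalty can then punish.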


5 Current and Future Work

We are currently in the process of enhancing the METEOR metric in several directions.

Expanding the Matching between Translation and References: Our experiments indicate that stemming already significantly improves the quality of the metric by expanding the matching. We plan to experiment with further expanding the matching to include synonymous words, by using information from synsets in WordNet. Since the reliability of such matches is likely to be somewhat reduced, we will consider assigning such matches a lower confidence that will be taken into account within score computations.

Table 4. Correlation coefficients for pairwise system comparisons on the DARPA/TIDES 2003 Chinese data set

Metric       Pearson's Coefficient   Confidence Interval
NIST         0.886                   +/- 0.017
NIST-stem    0.924                   +/- 0.013
BLEU         0.758                   +/- 0.027
BLEU-stem    0.793                   +/- 0.025
P            0.573                   +/- 0.053
P-stem       0.666                   +/- 0.058
R            0.954                   +/- 0.014
R-stem       0.923                   +/- 0.018
F1           0.881                   +/- 0.024
F1-stem      0.950                   +/- 0.017
Fmean        0.954                   +/- 0.015
Fmean-stem   0.940                   +/- 0.017

Combining Precision, Recall and Sort Penalty: Results so far indicate that recall plays the most important role in obtaining high levels of correlation with human judgments. We are currently exploring alternative ways of combining the components of precision, recall and a coherence penalty, with the goal of optimizing correlation with human judgments, and exploring whether an optimized combination of these factors on one data set also persists in performance across different data sets.

The Utility of Multiple Reference Translations: The metrics described use multiple reference translations in a weak way: we compare the translation with each reference separately and select the reference with the best match. This was necessary in order to incorporate recall in our metric, which we have shown to be highly advantageous. We are in the process of quantifying the utility of multiple reference translations across the metrics by measuring the correlation improvements as a function of the number of reference translations. We will then consider exploring ways in which to improve our matching against multiple references. Recent work by Pang, Knight and Marcu [14] provides a mechanism for producing semantically meaningful additional "synthetic" references from a small set of real references. We plan to explore whether using such synthetic references can improve the performance of our metric.

Matched Words are not Created Equal: Our current metrics treat all matched words between a system translation and a reference equally. It is safe to assume, however, that matching semantically important words should carry significantly more weight than the matching of function words. We plan to explore schemes for assigning different weights to matched words, and investigate whether such schemes can further improve the sensitivity of the metric and its correlation with human judgments of MT quality.

Acknowledgments

This research was funded in part by NSF grant number IIS-0121631.

References

1. Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318, Philadelphia, PA, July.
2. Doddington, George. 2002. Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics. In Proceedings of the Second Conference on Human Language Technology (HLT-2002). San Diego, CA. pp. 128–132.
3. K.-Y. Su, M.-W. Wu, and J.-S. Chang. 1992. A New Quantitative Quality Measure for Machine Translation Systems. In Proceedings of the Fifteenth International Conference on Computational Linguistics (COLING-92). Nantes, France. pp. 433–439.
4. Y. Akiba, K. Imamura, and E. Sumita. 2001. Using Multiple Edit Distances to Automatically Rank Machine Translation Output. In Proceedings of MT Summit VIII. Santiago de Compostela, Spain. pp. 15–20.
5. S. Niessen, F. J. Och, G. Leusch, and H. Ney. 2000. An Evaluation Tool for Machine Translation: Fast Evaluation for Machine Translation Research. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC-2000). Athens, Greece. pp. 39–45.
6. Gregor Leusch, Nicola Ueffing and Hermann Ney. 2003. String-to-String Distance Measure with Applications to Machine Translation Evaluation. In Proceedings of MT Summit IX. New Orleans, LA. Sept. 2003. pp. 240–247.
7. I. Dan Melamed, R. Green and J. Turian. 2003. Precision and Recall of Machine Translation. In Proceedings of HLT-NAACL 2003. Edmonton, Canada. May 2003. Short Papers: pp. 61–63.
8. Joseph P. Turian, Luke Shen and I. Dan Melamed. 2003. Evaluation of Machine Translation and its Evaluation. In Proceedings of MT Summit IX. New Orleans, LA. Sept. 2003. pp. 386–393.
9. Chin-Yew Lin and Eduard Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Co-occurrence Statistics. In Proceedings of HLT-NAACL 2003. Edmonton, Canada. May 2003. pp. 71–78.
10. C. van Rijsbergen. 1979. Information Retrieval. 2nd Edition. Butterworths. London, England.
11. Deborah Coughlin. 2003. Correlating Automated and Human Assessments of Machine Translation Quality. In Proceedings of MT Summit IX. New Orleans, LA. Sept. 2003. pp. 63–70.
12. Bradley Efron and Robert Tibshirani. 1986. Bootstrap Methods for Standard Errors, Confidence Intervals, and Other Measures of Statistical Accuracy. Statistical Science, 1(1). pp. 54–77.
13. George Doddington. 2003. Automatic Evaluation of Language Translation using N-gram Co-occurrence Statistics. Presentation at the DARPA/TIDES 2003 MT Workshop. NIST, Gaithersburg, MD. July 2003.
14. Bo Pang, Kevin Knight and Daniel Marcu. 2003. Syntax-based Alignment of Multiple Translations: Extracting Paraphrases and Generating New Sentences. In Proceedings of HLT-NAACL 2003. Edmonton, Canada. May 2003. pp. 102–109.
