Abstract

Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two texts. This paper presents the results of the STS pilot task in SemEval. The training data contained 2000 sentence pairs from previously existing paraphrase datasets and machine translation evaluation resources. The test data also comprised 2000 sentence pairs from those datasets, plus two surprise datasets: 400 pairs from a different machine translation evaluation corpus and 750 pairs from a lexical resource mapping exercise. The similarity of the sentence pairs was rated on a 0-5 scale (low to high similarity) by human judges using Amazon Mechanical Turk, with high Pearson correlation scores among annotators, around 90%. 35 teams participated in the task, submitting 88 runs. The best results scored a Pearson correlation above 80%, well above a simple lexical baseline that scored only a 31% correlation. This pilot task opens an exciting way ahead, although open issues remain, especially the evaluation metric.
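To make the evaluation setup concrete, the sketch below scores a simple token-overlap similarity against gold judgments on the 0-5 scale using the Pearson product-moment correlation. The overlap function, sentence pairs, and gold scores are illustrative assumptions only; they are not the task's actual baseline system or annotated data.

```python
import math

def token_overlap(s1, s2):
    """Jaccard overlap of lowercase token sets, scaled to the 0-5 STS range.
    An illustrative stand-in for a simple lexical baseline."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    if not a or not b:
        return 0.0
    return 5.0 * len(a & b) / len(a | b)

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical sentence pairs and human judgments (not from the task data).
pairs = [("a cat sits on the mat", "a cat is sitting on a mat"),
         ("the market fell sharply", "stocks dropped quickly"),
         ("he plays the guitar", "she reads a book")]
gold = [4.6, 3.2, 0.4]

system = [token_overlap(s1, s2) for s1, s2 in pairs]
print(round(pearson(system, gold), 3))
```

A run is evaluated by computing this correlation between its scores and the gold scores over all pairs in a dataset; purely lexical scores track the gold ratings only loosely, which is why the baseline in the task scored far below the best systems.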

Keywords

Computer science, Natural language processing, Semantic similarity, Paraphrase, Artificial intelligence, SemEval, Pearson product-moment correlation coefficient, Machine translation, Similarity (geometry), Task (project management), Semantic equivalence, Sentence, Metric (unit), Correlation, Equivalence (formal languages), Evaluation of machine translation, Semantic computing, Statistics, Semantic Web, Example-based machine translation, Mathematics

Publication Info

Year: 2012
Type: article
Volume: 1
Pages: 385-393
Citations: 679
Access: Closed

Citation Metrics

679 (OpenAlex)

Cite This

Eneko Agirre, Daniel Cer, Mona Diab et al. (2012). SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity. 1, 385-393.