Textual entailment

{{Short description|Concept in natural language processing}}

In natural language processing, textual entailment (TE), also known as natural language inference (NLI), is a directional relation between text fragments. The relation holds whenever the truth of one text fragment follows from the other.

Definition

In the TE framework, the entailing and entailed texts are termed text (t) and hypothesis (h), respectively. Textual entailment is not the same as pure logical entailment – it has a more relaxed definition: "t entails h" (t ⇒ h) if, typically, a human reading t would infer that h is most likely true.[http://u.cs.biu.ac.il/~dagan/publications/RTEChallenge.pdf Ido Dagan, Oren Glickman and Bernardo Magnini. The PASCAL Recognising Textual Entailment Challenge, p. 2] {{Webarchive|url=https://web.archive.org/web/20120303215730/http://u.cs.biu.ac.il/~dagan/publications/RTEChallenge.pdf |date=2012-03-03 }} in: Quiñonero-Candela, J.; Dagan, I.; Magnini, B.; d'Alché-Buc, F. (Eds.) Machine Learning Challenges. Lecture Notes in Computer Science, Vol. 3944, pp. 177–190, Springer, 2006. (Alternatively: t ⇒ h if and only if, typically, a human reading t would be justified in inferring the proposition expressed by h from the proposition expressed by t.{{Cite journal|last1=Korman|first1=Daniel Z.|last2=Mack|first2=Eric|last3=Jett|first3=Jacob|last4=Renear|first4=Allen H.|date=2018-03-09|title=Defining textual entailment|journal=Journal of the Association for Information Science and Technology|language=en|volume=69|issue=6|pages=763–772|doi=10.1002/asi.24007|s2cid=46920779|issn=2330-1635|url=https://philpapers.org/rec/KORDTE}}) The relation is directional because even if "t entails h", the reverse "h entails t" is much less certain.[http://u.cs.biu.ac.il/~dagan/publications/ProbabilisticTE_fv07.pdf Dagan, I. and O. Glickman. 'Probabilistic textual entailment: Generic applied modeling of language variability'] {{Webarchive|url=https://web.archive.org/web/20120329195307/http://u.cs.biu.ac.il/~dagan/publications/ProbabilisticTE_fv07.pdf |date=2012-03-29 }} in: PASCAL Workshop on Learning Methods for Text Understanding and Mining (2004) Grenoble.[http://clg.wlv.ac.uk/events/CALP07/papers/6.pdf Tătar, D. e.a. Textual Entailment as a Directional Relation]

Determining whether this relationship holds is an informal task, one which sometimes overlaps with the formal tasks of formal semantics (satisfying a strict condition will usually imply satisfaction of a less strict one); additionally, textual entailment partially subsumes word entailment.

Examples

Textual entailment can be illustrated with examples of three different relations:[http://aclweb.org/aclwiki/index.php?title=Textual_Entailment_Portal Textual Entailment Portal] on the Association for Computational Linguistics wiki

An example of a positive TE (text entails hypothesis) is:

  • text: If you help the needy, God will reward you.

  • hypothesis: Giving money to a poor man has good consequences.

An example of a negative TE (text contradicts hypothesis) is:

  • text: If you help the needy, God will reward you.

  • hypothesis: Giving money to a poor man has no consequences.

An example of a non-TE (text neither entails nor contradicts the hypothesis) is:

  • text: If you help the needy, God will reward you.

  • hypothesis: Giving money to a poor man will make you a better person.
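
This three-way distinction between the pairs above (entailment, contradiction, neutral) is exactly what NLI classifiers are asked to predict for a text–hypothesis pair. The following is a minimal sketch of scoring such a pair with a pretrained NLI model; it assumes the Hugging Face Transformers library and the publicly hosted roberta-large-mnli checkpoint, whose label names may differ from those of other models.

<syntaxhighlight lang="python">
# Minimal sketch: scoring a text–hypothesis pair with a pretrained NLI model.
# Assumes the Hugging Face Transformers library and the "roberta-large-mnli"
# checkpoint; any sequence-classification NLI checkpoint works similarly.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "If you help the needy, God will reward you."
hypothesis = "Giving money to a poor man has good consequences."

# The pair is encoded as a single sequence; the model classifies the relation.
inputs = tokenizer(text, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map class probabilities back to the model's own label names
# (typically ENTAILMENT / NEUTRAL / CONTRADICTION for MNLI-style models).
probs = torch.softmax(logits, dim=-1)[0]
for idx, prob in enumerate(probs):
    print(f"{model.config.id2label[idx]}: {prob.item():.3f}")
</syntaxhighlight>

Applied to the three pairs above, such a model would ideally assign the highest probability to entailment, contradiction, and neutral, respectively.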

Ambiguity of natural language

A characteristic of natural language is that there are many different ways to state what one wants to say: several meanings can be contained in a single text and the same meaning can be expressed by different texts. This variability of semantic expression can be seen as the dual problem of language ambiguity. Together, they result in a many-to-many mapping between language expressions and meanings. The task of paraphrasing involves recognizing when two texts have the same meaning and creating a similar or shorter text that conveys almost the same information. Textual entailment is similar,{{cite journal|last1=Androutsopoulos|first1=Ion|last2=Malakasiotis|first2=Prodromos|title=A Survey of Paraphrasing and Textual Entailment Methods|journal=Journal of Artificial Intelligence Research|volume=38|pages=135–187|year=2010|doi=10.1613/jair.2985|url=https://www.jair.org/media/2985/live-2985-5001-jair.pdf|accessdate=13 February 2017|arxiv=0912.3747|s2cid=9234833|archive-date=9 December 2017|archive-url=https://web.archive.org/web/20171209091513/http://www.jair.org/media/2985/live-2985-5001-jair.pdf|url-status=dead}} but it weakens the relationship to a unidirectional one. Mathematical approaches to establishing textual entailment can exploit the directional property of this relation by comparing directional similarity measures between the texts involved.
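
As a toy illustration of such a directional comparison (not any particular published method), one can measure how much of the hypothesis's vocabulary is covered by the text; swapping the two arguments generally changes the score, which mirrors the directionality of the entailment relation. The sentence pair below is purely illustrative.

<syntaxhighlight lang="python">
# Toy directional similarity: the fraction of hypothesis tokens that also occur in the text.
# This only illustrates directionality; it is not a realistic entailment system.
def directional_overlap(text: str, hypothesis: str) -> float:
    text_tokens = set(text.lower().split())
    hypothesis_tokens = hypothesis.lower().split()
    if not hypothesis_tokens:
        return 0.0
    covered = sum(1 for token in hypothesis_tokens if token in text_tokens)
    return covered / len(hypothesis_tokens)

t = "a soccer game with multiple males playing"
h = "some men are playing a sport"

print(directional_overlap(t, h))  # coverage of the hypothesis by the text
print(directional_overlap(h, t))  # coverage of the text by the hypothesis: a different value
</syntaxhighlight>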

Approaches

{{Update section|date=November 2023}}

Textual entailment measures natural language understanding as it asks for a semantic interpretation of the text, and due to its generality remains an active area of research. Many approaches and refinements of approaches have been considered, such as word embeddings, logical models, graphical models, rule systems, contextual focusing, and machine learning. Practical or large-scale solutions avoid these complex methods and instead use only surface syntax or lexical relationships, but are correspondingly less accurate. {{Asof|2005}}, state-of-the-art systems were still far from human performance; one study found human annotators to agree on entailment judgments 95.25% of the time.{{cite conference |last1=Bos|first1=Johan|last2=Markert|first2=Katja|title=Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing – HLT '05 |chapter=Recognising textual entailment with logical inference |editor1= Raymond Mooney | editor2= Joyce Chai | display-editors= etal|date=6–8 October 2005 | place= Vancouver | publisher= Association for Computational Linguistics |pages=628–635|doi=10.3115/1220575.1220654|s2cid=10202504|doi-access=free}} Algorithms as of 2016 had not yet achieved 90% accuracy.{{cite arXiv|last1=Zhao|first1=Kai|last2=Huang|first2=Liang|last3=Ma|first3=Mingbo|title=Textual Entailment with Structured Attentions and Composition|date=4 January 2017|eprint=1701.01126|class=cs.CL}}
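
As a minimal sketch of the machine-learning family of approaches mentioned above, restricted to surface/lexical features, one can train a standard classifier on hand-crafted features of each text–hypothesis pair. The two features, the tiny training set, and the labels below are purely illustrative assumptions, not a published system.

<syntaxhighlight lang="python">
# Sketch of a machine-learning approach using only surface/lexical features,
# in the spirit of the "practical or large-scale" systems described above.
from sklearn.linear_model import LogisticRegression

def features(text: str, hypothesis: str) -> list[float]:
    t = set(text.lower().split())
    h = set(hypothesis.lower().split())
    overlap = len(t & h) / max(len(h), 1)   # lexical coverage of the hypothesis
    length_ratio = len(h) / max(len(t), 1)  # relative hypothesis length
    return [overlap, length_ratio]

# Toy labelled pairs: 1 = entailment, 0 = no entailment.
pairs = [
    ("the cat sat on the mat", "a cat sat on a mat", 1),
    ("the cat sat on the mat", "the cat is outside", 0),
    ("john bought a new car yesterday", "john bought a car", 1),
    ("john bought a new car yesterday", "john sold his car", 0),
]
X = [features(t, h) for t, h, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression().fit(X, y)

# Predict for an unseen pair; the output depends entirely on the toy data above.
print(clf.predict([features("the dog chased the ball", "a dog chased a ball")]))
</syntaxhighlight>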

Applications

Many natural language processing applications, such as question answering, information extraction, summarization, multi-document summarization, and the evaluation of machine translation systems, need to recognize that a particular target meaning can be inferred from different text variants. Typically, entailment is used as a component of a larger system, for example in a prediction system to filter out trivial or obvious predictions.{{cite news|last1=Shani|first1=Ayelett|title=How Dr. Kira Radinsky Used Algorithms to Predict Riots in Egypt|url=http://www.haaretz.com/israel-news/.premium-1.554263|accessdate=13 February 2017|work=Haaretz|date=25 October 2013|language=en}} Textual entailment also has applications in adversarial stylometry, which has the objective of removing textual style without changing the overall meaning of communication.{{sfn|Potthast|Hagen|Stein|2016|p=11-12}}

Datasets

Some of the available English NLI datasets are listed below (a brief loading sketch follows the list):

  • [https://nlp.stanford.edu/projects/snli/ SNLI]{{cite conference |last1=Bowman |first1=Samuel R. |last2=Angeli |first2=Gabor |last3=Potts |first3=Christopher |last4=Manning |first4=Christopher D. |title=A large annotated corpus for learning natural language inference |journal=In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP) |date=2015 |pages=632–642 |doi=10.18653/v1/D15-1075 |url=http://nlp.stanford.edu/pubs/snli_paper.pdf |publisher=Association for Computational Linguistics}}
  • [https://cims.nyu.edu/~sbowman/multinli/ MultiNLI]{{cite conference |last1=Williams |first1=Adina |last2=Nangia |first2=Nikita |last3=Bowman |first3=Samuel R. |title=A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference |journal=In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers) |date=2018 |pages=1112–1122 |doi=10.18653/v1/N18-1101 |url=https://aclanthology.org/N18-1101.pdf |publisher=Association for Computational Linguistics}}
  • [https://allenai.org/data/scitail SciTail]{{cite journal|last1=Khot |first1=Tushar |last2=Sabharwal |first2=Ashish |last3=Clark |first3=Peter |title= SciTaiL: A Textual Entailment Dataset from Science Question Answering|journal= Proceedings of the AAAI Conference on Artificial Intelligence|date=2018 |volume=32 |issue=1 |doi=10.1609/aaai.v32i1.12022 |url=https://ojs.aaai.org/index.php/AAAI/article/view/12022/11881|doi-access=free }}
  • [https://alt.qcri.org/semeval2014/task1/ SICK]{{cite conference |last1=Marelli |first1=Marco |last2=Bentivogli |first2=Luisa |last3=Baroni |first3=Marco |last4=Bernardi |first4=Raffaella |last5=Menini |first5=Stefano |last6=Zamparelli |first6=Roberto |title=SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment |journal=In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014) |date=2014 |pages=1–8 |doi=10.3115/v1/S14-2001 |url=https://aclanthology.org/S14-2001.pdf |publisher=Association for Computational Linguistics |location=Dublin, Ireland}}
  • [https://jgc128.github.io/mednli/ MedNLI]{{cite conference |last1=Romanov |first1=Alexey |last2=Shivade |first2=Chaitanya |title=Lessons from Natural Language Inference in the Clinical Domain |journal=In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing |date=2018 |pages=1586–1596 |doi=10.18653/v1/D18-1187 |url=http://aclanthology.lst.uni-saarland.de/D18-1187.pdf |publisher=Association for Computational Linguistics |location=Brussels, Belgium}}
  • [https://github.com/kelvinguu/qanli QA-NLI]{{cite arXiv |last1=Demszky |first1=Dorottya |last2=Guu |first2=Kelvin |last3=Liang |first3=Percy |title=Transforming Question Answering Datasets Into Natural Language Inference Datasets |year=2018 |class=cs.CL |eprint=1809.02922 }}
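
Datasets such as these are typically distributed as premise–hypothesis pairs with a categorical label and can be loaded programmatically. The sketch below assumes the Hugging Face datasets library and that SNLI is available under the dataset identifier "snli"; identifiers and label conventions may differ from the official distribution channels linked above.

<syntaxhighlight lang="python">
# Sketch of loading an NLI corpus; assumes the Hugging Face "datasets" library
# and that SNLI is hosted under the identifier "snli".
from datasets import load_dataset

snli = load_dataset("snli")

# Each example pairs a premise with a hypothesis and an integer label
# (conventionally 0 = entailment, 1 = neutral, 2 = contradiction; -1 marks unlabelled pairs).
example = snli["train"][0]
print(example["premise"])
print(example["hypothesis"])
print(example["label"])
</syntaxhighlight>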

In addition, there are several non-English NLI datasets, as follows:

  • [https://cims.nyu.edu/~sbowman/xnli/ XNLI]{{cite conference |last1=Conneau |first1=Alexis |last2=Rinott |first2=Ruty |last3=Lample |first3=Guillaume |last4=Williams |first4=Adina |last5=Bowman |first5=Samuel R. |last6=Schwenk |first6=Holger |last7=Stoyanov |first7=Veselin |title=XNLI: Evaluating Cross-lingual Sentence Representations |journal=In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing |date=2018 |pages=2475–2485 |doi=10.18653/v1/D18-1269 |url=https://aclanthology.org/D18-1269.pdf |publisher=Association for Computational Linguistics |location=Brussels, Belgium}}
  • [https://huggingface.co/maximoss DACCORD, RTE3-FR, SICK-FR]{{cite conference |last1=Skandalis|first1=Maximos|last2=Moot|first2=Richard|last3=Robillard|first3=Simon|last4=Retoré|first4=Christian |title=New Datasets for Automatic Detection of Textual Entailment and of Contradictions between Sentences in French |journal=Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) |date=2024 |pages=12173–12186 |url=https://aclanthology.org/2024.lrec-main.1065.pdf |publisher=ELRA and ICCL |location=Turin, Italy}} for French
  • [https://github.com/dml-qom/FarsTail FarsTail]{{Cite journal |last1=Amirkhani |first1=Hossein |last2=AzariJafari |first2=Mohammad |last3=Faridan-Jahromi |first3=Soroush |last4=Kouhkan |first4=Zeinab |last5=Pourjafari |first5=Zohreh |last6=Amirak |first6=Azadeh |date=2023-07-07 |title=FarsTail: a Persian natural language inference dataset |url=https://doi.org/10.1007/s00500-023-08959-3 |journal=Soft Computing |language=en |doi=10.1007/s00500-023-08959-3 |issn=1433-7479|arxiv=2009.08820 |s2cid=221802461 }} for Farsi
  • [https://github.com/CLUEbenchmark/OCNLI OCNLI]{{cite conference |last1=Hu |first1=Hai |last2=Richardson |first2=Kyle |last3=Xu |first3=Liang |last4=Li |first4=Lu |last5=Kübler |first5=Sandra |last6=Moss |first6=Lawrence |title=OCNLI: Original Chinese Natural Language Inference |journal=In Findings of the Association for Computational Linguistics: EMNLP 2020 |date=2020 |pages=3512–3526 |doi=10.18653/v1/2020.findings-emnlp.314 |url=https://aclanthology.org/2020.findings-emnlp.314.pdf}} for Chinese
  • [https://github.com/gijswijnholds/sick_nl SICK-NL]{{cite conference |last1=Wijnholds |first1=Gijs |last2=Moortgat |first2=Michael |title=SICK-NL: A Dataset for Dutch Natural Language Inference |journal=In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume |date=2021 |pages=1474–1479 |doi=10.18653/v1/2021.eacl-main.126 |url=https://aclanthology.org/2021.eacl-main.126.pdf |publisher=Association for Computational Linguistics}} for Dutch
  • [https://github.com/ir-nlp-csui/indonli IndoNLI]{{cite conference |last1=Mahendra |first1=Rahmad |last2=Aji |first2=Alham Fikri |last3=Louvan |first3=Samuel |last4=Rahman |first4=Fahrurrozi |last5=Vania |first5=Clara |title=IndoNLI: A Natural Language Inference Dataset for Indonesian |journal=In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing |date=2021 |pages=10511–10527 |doi=10.18653/v1/2021.emnlp-main.821 |url=https://aclanthology.org/2021.emnlp-main.821.pdf |publisher=Association for Computational Linguistics}} for Indonesian

See also

References

Bibliography

  • {{ cite conference | url = https://ceur-ws.org/Vol-1609/16090716.pdf | title = Author Obfuscation: Attacking the State of the Art in Authorship Verification | last1 = Potthast | first1 = Martin | last2 = Hagen | first2 = Matthias | last3 = Stein | first3 = Benno | conference = Conference and Labs of the Evaluation Forum | year = 2016 }}