Recognizing Textual Entailment with Attentive Reading and Writing Operations

Liang Liu, Huan Huo, Xiufeng Liu, Vasile Palade, Dunlu Peng, Qingkui Chen

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

    Abstract

    Inferring entailment relations between natural language sentence pairs is fundamental to artificial intelligence. Recently, there has been rising interest in modeling this task with neural attentive models. However, existing models have a major limitation in keeping track of the attention history, because usually only a single vector is used to memorize past attention information. We argue for its importance based on our observation that the potential alignment clues are not always centralized; instead, they may diverge substantially, which can cause long-range dependency problems. In this paper, we propose to complement the conventional attentive reading operations with two sophisticated writing operations - forget and update. Instead of using a single vector to accommodate the attention history, we write past attention information directly into the sentence representations, so that a higher memory capacity for attention history can be achieved. Experiments on the Stanford Natural Language Inference (SNLI) corpus demonstrate the superior efficacy of our proposed architecture.
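
    As a rough illustration of the mechanism described in the abstract, the sketch below shows one way attentive reading could be combined with forget and update writes: each reading step attends over the premise tokens and then rewrites those token representations in place, so attention history is stored in the sentence representation itself rather than in a single summary vector. This is a minimal, hedged sketch, not the authors' implementation; the layer names, gate forms, and dimensions are illustrative assumptions.

        # Illustrative sketch only: gated read/write attention over premise tokens.
        # All module names and the exact gating equations are assumptions, not the
        # paper's released architecture.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class AttentiveReadWrite(nn.Module):
            def __init__(self, dim):
                super().__init__()
                self.attn_score = nn.Linear(2 * dim, 1)     # scores a premise token against a hypothesis token
                self.forget_gate = nn.Linear(2 * dim, dim)  # "forget" write: what to erase per premise token
                self.update_gate = nn.Linear(2 * dim, dim)  # "update" write: new content to add per token

            def forward(self, premise, hypothesis):
                # premise:    (batch, m, dim) token representations that get rewritten
                # hypothesis: (batch, n, dim) tokens that drive the reading steps
                batch, m, dim = premise.shape
                for t in range(hypothesis.size(1)):
                    h_t = hypothesis[:, t, :].unsqueeze(1).expand(-1, m, -1)   # (batch, m, dim)
                    pair = torch.cat([premise, h_t], dim=-1)                   # (batch, m, 2*dim)

                    # Attentive read: soft alignment of the current hypothesis token
                    # against every premise token.
                    alpha = F.softmax(self.attn_score(pair).squeeze(-1), dim=-1)  # (batch, m)
                    a = alpha.unsqueeze(-1)                                       # (batch, m, 1)

                    # Writing operations: erase part of each attended premise token
                    # ("forget"), then add new content ("update"), weighted by attention.
                    f = torch.sigmoid(self.forget_gate(pair))   # (batch, m, dim)
                    u = torch.tanh(self.update_gate(pair))      # (batch, m, dim)
                    premise = premise * (1 - a * f) + a * u

                # The rewritten premise now carries the accumulated attention history.
                return premise

        # Toy usage with random embeddings:
        model = AttentiveReadWrite(dim=8)
        p = torch.randn(2, 5, 8)   # 2 premises, 5 tokens each
        h = torch.randn(2, 4, 8)   # 2 hypotheses, 4 tokens each
        print(model(p, h).shape)   # torch.Size([2, 5, 8])

    The gate formulation above is only one plausible reading of "forget and update" writes; the paper's actual operations may differ in form and in how the attention weights enter the write step.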
    Original language: English
    Title of host publication: Database Systems for Advanced Applications. DASFAA 2018
    Publisher: Springer
    Publication date: 2018
    Pages: 847-860
    DOIs
    Publication status: Published - 2018
    Series: Lecture Notes in Computer Science
    Volume: 10827
    ISSN: 0302-9743
