Do end-to-end speech recognition models care about context?

Lasse Borgholt, Jakob D. Havtorn, Željko Agić, Anders Søgaard, Lars Maaløe, Christian Igel

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

The two most common paradigms for end-to-end speech recognition are connectionist temporal classification (CTC) and attention-based encoder-decoder (AED) models. It has been argued that the latter is better suited for learning an implicit language model. We test this hypothesis by measuring temporal context sensitivity and evaluate how the models perform when we constrain the amount of contextual information in the audio input. We find that the AED model is indeed more context sensitive, but that the gap can be closed by adding self-attention to the CTC model. Furthermore, the two models perform similarly when contextual information is constrained. Finally, in contrast to previous research, our results show that the CTC model is highly competitive on WSJ and LibriSpeech without the help of an external language model.
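The abstract contrasts CTC and attention-based decoding. As a minimal illustration of the standard CTC decoding rule (collapse repeated per-frame labels, then drop blanks), here is a sketch in Python; the function name and the convention of using label 0 as the blank are assumptions for illustration, not taken from the paper:

```python
def ctc_greedy_collapse(frame_labels, blank=0):
    """Collapse a per-frame CTC label sequence into an output sequence.

    Repeated consecutive labels are merged, then blank labels are removed,
    following the standard CTC many-to-one mapping.
    """
    output = []
    prev = None
    for label in frame_labels:
        # Emit a label only when it differs from the previous frame
        # and is not the blank symbol.
        if label != prev and label != blank:
            output.append(label)
        prev = label
    return output


# Example: frames [1, 1, 0, 1, 2, 2, 0] collapse to [1, 1, 2]
# (the blank between the two 1s keeps them as distinct outputs).
print(ctc_greedy_collapse([1, 1, 0, 1, 2, 2, 0]))
```

Because this mapping is applied independently per frame, a plain CTC model has no decoder-side conditioning on previous outputs, which is the motivation for the context-sensitivity comparison in the abstract.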
Original language: English
Title of host publication: Proceedings of the Annual Conference of the International Speech Communication Association
Publication date: 2020
Pages: 4352-4356
DOIs
Publication status: Published - 2020
Event: Interspeech 2020 - Shanghai International Convention Center, Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020
http://www.interspeech2020.org/

Conference

Conference: Interspeech 2020
Location: Shanghai International Convention Center
Country/Territory: China
City: Shanghai
Period: 25/10/2020 - 29/10/2020
Internet address
Series: Proceedings of the Annual Conference of the International Speech Communication Association, Interspeech
ISSN: 1990-9772

Keywords

  • Attention-based encoder-decoder
  • Automatic speech recognition
  • Connectionist temporal classification
  • End-to-end speech recognition
