Abstract
Multilingual NLP models provide potential solutions to the digital language divide, i.e., cross-language performance disparities. Early analyses of such models have indicated good performance across training languages and good generalization to unseen, related languages. This work examines whether, between related languages, multilingual models are equally right for the right reasons, i.e., if interpretability methods reveal that the models put emphasis on the same words as humans. To this end, we provide a new trilingual, parallel corpus of rationale annotations for English, Danish, and Italian sentiment analysis models and use it to benchmark models and interpretability methods. We propose rank-biased overlap as a better metric for comparing input token attributions to human rationale annotations. Our results show: (i) models generally perform well on the languages they are trained on, and align best with human rationales in these languages; (ii) performance is higher on English, even when not a source language, but this performance is not accompanied by higher alignment with human rationales, which suggests that language models favor English, but do not facilitate successful transfer of rationales.
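The abstract proposes rank-biased overlap (RBO) for comparing model token attributions to human rationale annotations. As context, a minimal sketch of truncated RBO (Webber et al., 2010) over two token rankings is given below; the persistence parameter `p` and the simple truncation at the shorter list's depth are illustrative assumptions, not necessarily the paper's exact configuration.

```python
def rbo(ranking_a, ranking_b, p=0.9):
    """Truncated rank-biased overlap between two ranked lists.

    Compares the lists prefix by prefix, so agreement near the top
    (e.g. the most highly attributed tokens) is weighted more heavily
    than agreement further down. Returns a value in [0, 1 - p**depth].
    """
    depth = min(len(ranking_a), len(ranking_b))
    score = 0.0
    for d in range(1, depth + 1):
        # Fraction of items shared by the two length-d prefixes.
        agreement = len(set(ranking_a[:d]) & set(ranking_b[:d])) / d
        # Geometric weight p**(d-1) discounts deeper ranks.
        score += (p ** (d - 1)) * agreement
    return (1 - p) * score


# Tokens ranked by attribution vs. by human rationale (hypothetical data):
model_ranking = ["terrible", "movie", "plot", "the"]
human_ranking = ["terrible", "plot", "movie", "the"]
similarity = rbo(model_ranking, human_ranking)
```

Because the weights decay geometrically, disagreeing on the single most important token costs more than disagreeing on a token further down the ranking, which matches the intuition that the top-attributed words matter most when checking whether a model is "right for the right reasons".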
Original language | English |
---|---|
Title of host publication | BlackboxNLP 2022: Proceedings of the Fifth Workshop on Analyzing and Interpreting Neural Networks for NLP |
Publisher | Association for Computational Linguistics |
Publication date | 2022 |
Pages | 131-141 |
ISBN (Electronic) | 978-1-959429-05-0 |
Publication status | Published - 2022 |
Event | 5th Workshop on Analyzing and Interpreting Neural Networks for NLP - Hybrid event, Abu Dhabi, United Arab Emirates Duration: 8 Dec 2022 → 8 Dec 2022 |
Conference
Conference | 5th Workshop on Analyzing and Interpreting Neural Networks for NLP |
---|---|
Location | Hybrid event |
Country/Territory | United Arab Emirates |
City | Abu Dhabi |
Period | 08/12/2022 → 08/12/2022 |