Abstract
In this work, we take a closer look at the evaluation of two families of methods for enriching information from knowledge graphs: Link Prediction and Entity Alignment. In the current experimental setting, several different scores are employed to assess different aspects of model performance. We analyze the informativeness of these evaluation measures and identify several shortcomings. In particular, we demonstrate that none of the existing scores allows a meaningful comparison of results across different datasets. Therefore, we propose adjustments to the evaluation and demonstrate empirically how this supports a fair, comparable, and interpretable assessment of model performance.
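The core idea behind such an adjustment can be illustrated with a short sketch. The function names and the exact normalization used here (dividing the mean rank by its expectation under a uniformly random ranking, (n + 1) / 2) are illustrative assumptions, not a verbatim reproduction of the paper's definitions:

```python
def mean_rank(ranks):
    """Mean rank (MR): lower is better, but its scale grows with the
    size of the candidate set, so raw MR values are not comparable
    across datasets of different size."""
    return sum(ranks) / len(ranks)


def adjusted_mean_rank(ranks, num_candidates):
    """Illustrative adjusted mean rank: MR divided by its expected value
    under random ordering, (num_candidates + 1) / 2.  Values below 1
    indicate better-than-random performance, independent of dataset size."""
    expected = (num_candidates + 1) / 2
    return mean_rank(ranks) / expected


# A model that ranks uniformly at random scores ~1 regardless of
# candidate-set size, whereas raw MR would differ by orders of magnitude.
print(adjusted_mean_rank([1, 100], num_candidates=100))   # exactly 1.0 here
print(adjusted_mean_rank([1, 1, 1], num_candidates=100))  # well below 1.0
```

This makes the intuition concrete: the normalization removes the dependence on the number of ranking candidates, which is what makes cross-dataset comparison possible.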
Original language | English |
---|---|
Title of host publication | Proceedings of 2020 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT) |
Publisher | IEEE |
Publication date | 2020 |
Pages | 371-374 |
ISBN (Print) | 978-1-6654-3017-3 |
Publication status | Published - 2020 |
Event | 2020 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT) |
Location | Virtual event |
Duration | 14 Dec 2020 → 17 Dec 2020 |
Internet address | http://wi2020.vcrab.com.au/ |