ThoughtSource: A central hub for large language model reasoning data

Simon Ott, Konstantin Hebenstreit, Valentin Liévin, Christoffer Egeberg Hother, Milad Moradi, Maximilian Mayrhauser, Robert Praas, Ole Winther, Matthias Samwald*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Large language models (LLMs) such as GPT-4 have recently demonstrated impressive results across a wide range of tasks. LLMs are still limited, however, in that they frequently fail at complex reasoning, their reasoning processes are opaque, they are prone to ‘hallucinate’ facts, and there are concerns about their underlying biases. Letting models verbalize reasoning steps as natural language, a technique known as chain-of-thought prompting, has recently been proposed as a way to address some of these issues. Here we present ThoughtSource, a meta-dataset and software library for chain-of-thought (CoT) reasoning. The goal of ThoughtSource is to improve future artificial intelligence systems by facilitating qualitative understanding of CoTs, enabling empirical evaluations, and providing training data. This first release of ThoughtSource integrates seven scientific/medical, three general-domain and five math word question answering datasets.
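The abstract describes ThoughtSource as both a meta-dataset and a software library. As a rough illustration, the sketch below shows how one might load and inspect one of the integrated question-answering datasets with the project's Python dataloader. The Collection class and the "worldtree" dataset name follow the project repository (https://github.com/OpenBioLink/ThoughtSource), but the exact API should be treated as an assumption that may differ between releases.

    # A minimal sketch, assuming the Collection API described in the
    # ThoughtSource repository README; names may vary between releases.
    from cot import Collection

    # Load one of the integrated datasets by name into a unified schema.
    collection = Collection(["worldtree"])

    # A collection wraps per-dataset splits; inspect one example together
    # with its chain-of-thought annotations (question, reasoning, answer).
    dataset = collection["worldtree"]
    print(dataset)              # available splits and their sizes
    print(dataset["train"][0])  # a single annotated CoT example

Because every integrated dataset is converted to the same schema, downstream consumers (qualitative inspection tools, evaluation harnesses, fine-tuning pipelines) can, in principle, iterate over all fifteen datasets with the same code.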

Original language: English
Article number: 528
Journal: Scientific Data
Volume: 10
Number of pages: 12
ISSN: 2052-4463
DOIs
Publication status: Published - 2023
