Verifying the Steps of Deductive Reasoning Chains

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

As Large Language Models become increasingly pervasive in everyday life, it is essential to measure the correctness of their output. In this paper, we propose a novel task: the automatic verification of individual reasoning steps in a logical deductive Chain-of-Thought. This task addresses two well-known problems of LLMs: hallucination and incorrect reasoning. We propose a new dataset of logical reasoning chains, in which the individual deduction steps have been manually annotated for soundness, and benchmark several methods on it. We find that LLMs can detect unsound reasoning steps fairly well, but argue that verification should instead be performed by transparent methods. We test symbolic methods but find that they underperform. We therefore develop a neuro-symbolic baseline, called VANESSA, that comes closer to the performance of LLMs.
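To make the step-verification task concrete, the sketch below is a purely illustrative toy, not taken from the paper or from VANESSA: it represents each deduction step as string premises plus a conclusion and accepts a step only if it follows by a single modus ponens application. All names (Step, step_is_sound, verify_chain) are hypothetical.

```python
# Toy illustration (not the paper's method): label each step of a deductive
# chain as sound or unsound with a minimal symbolic check (modus ponens only).
from dataclasses import dataclass

@dataclass
class Step:
    premises: list[str]   # e.g. ["it rains", "it rains -> ground is wet"]
    conclusion: str       # e.g. "ground is wet"

def step_is_sound(step: Step) -> bool:
    """Accept a step iff some premise 'a -> b' has its antecedent 'a' among
    the premises and its consequent 'b' equal to the step's conclusion."""
    facts = set(step.premises)
    for premise in step.premises:
        if "->" in premise:
            antecedent, consequent = (part.strip() for part in premise.split("->", 1))
            if antecedent in facts and consequent == step.conclusion:
                return True
    return False

def verify_chain(steps: list[Step]) -> list[bool]:
    """Label every step independently, as in step-level verification."""
    return [step_is_sound(s) for s in steps]

if __name__ == "__main__":
    chain = [
        Step(premises=["it rains", "it rains -> ground is wet"],
             conclusion="ground is wet"),                          # sound
        Step(premises=["ground is wet", "it rains -> ground is wet"],
             conclusion="it rains"),                               # unsound: affirming the consequent
    ]
    print(verify_chain(chain))  # [True, False]
```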

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle of host publication: ACL 2025
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Publisher: Association for Computational Linguistics (ACL)
Pages: 456-475
Number of pages: 20
ISBN (Electronic): 9798891762565
DOIs
Publication status: Published - 1 Jan 2025
Event: 63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025 - Vienna, Austria
Duration: 27 Jul 2025 - 1 Aug 2025

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X

Conference

Conference: 63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
Country/Territory: Austria
City: Vienna
Period: 27/07/25 - 1/08/25
