Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching
Recent advances in large language models have demonstrated remarkable reasoning capabilities through Chain of Thought (CoT) prompting, but often at the cost of excessive verbosity in their intermediate outputs, which increases computational overhead. We introduce Sketch-of-Thought (SoT), a novel prompting framework that combines cognitive-inspired reasoning paradigms with linguistic constraints to minimize token usage while preserving reasoning accuracy. SoT is designed as a flexible framework that can incorporate any custom reasoning paradigms based on cognitive science, and we instantiate it with three such paradigms - Conceptual Chaining, Chunked Symbolism, and Expert Lexicons - each tailored to different reasoning tasks and selected dynamically via a lightweight routing model. Through comprehensive evaluation across 15 reasoning datasets with multiple languages and multimodal scenarios, we demonstrate that SoT achieves token reductions of 76% with negligible accuracy impact. In certain domains like mathematical and multi-hop reasoning, it even improves accuracy while using significantly fewer tokens. Our code is publicly available: https://www.github.com/SimonAytes/SoT.
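The abstract describes two moving parts: a set of paradigm-specific sketching instructions that constrain the model's intermediate output, and a lightweight router that picks a paradigm per query. A minimal sketch of that flow is below; the paradigm names come from the abstract, but the keyword-based router and the prompt wording are illustrative stand-ins for the paper's trained routing model and actual system prompts.

```python
# Sketch of SoT-style paradigm routing and prompt construction.
# Paradigm names are from the paper; everything else is a hypothetical
# stand-in for illustration.

PARADIGM_PROMPTS = {
    "chunked_symbolism": (
        "Reason in compact symbolic steps: write equations, not sentences "
        "(e.g. 'v = 20 m/s, t = 3 s, d = v*t = 60 m')."
    ),
    "conceptual_chaining": (
        "Reason as a short chain of linked concepts "
        "(e.g. 'Paris -> France -> Euro')."
    ),
    "expert_lexicons": (
        "Reason in dense domain shorthand, using technical abbreviations "
        "an expert would recognize."
    ),
}

def route_paradigm(question: str) -> str:
    """Pick a reasoning paradigm for the question.

    Stand-in heuristic: the paper instead trains a lightweight routing
    model to make this choice dynamically.
    """
    q = question.lower()
    if any(tok in q for tok in ("how many", "calculate", "sum", "+", "%")):
        return "chunked_symbolism"       # math-style queries
    if any(tok in q for tok in ("diagnos", "dosage", "voltage", "protocol")):
        return "expert_lexicons"         # specialized-domain queries
    return "conceptual_chaining"         # default: multi-hop / commonsense

def build_prompt(question: str) -> str:
    """Prepend the paradigm-specific sketching instruction to the query."""
    paradigm = route_paradigm(question)
    return f"{PARADIGM_PROMPTS[paradigm]}\n\nQuestion: {question}"

print(route_paradigm("Calculate the total cost of 3 items at $4 each."))
# -> chunked_symbolism
```

The token savings come from the instruction itself: instead of CoT's full natural-language rationale, the model is told to emit a terse sketch (equations, concept chains, or expert shorthand), which shortens the intermediate output without dropping the reasoning steps.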

By Simon A. Aytes and 2 other authors.
