voyage-context-3: Contextual Retrieval Without the LLM


TL;DR – We’re excited to introduce voyage-context-3, a contextualized chunk embedding model that produces vectors for chunks that capture the full document context without any manual metadata…

voyage-context-3 provides a seamless drop-in replacement for standard, context-agnostic embedding models used in existing RAG pipelines, while offering improved retrieval quality through its ability to capture relevant contextual information. For instance, if a 50-page legal document is vectorized into a single embedding, detailed information—such as the sentence “All data transmissions between the Client and the Service Provider’s infrastructure shall utilize AES-256 encryption in GCM mode”—is likely to be buried or lost in the aggregate. Common workarounds—such as chunk overlaps, context summaries using LLMs (e.g., Anthropic’s contextual retrieval), or metadata augmentation—can introduce extra steps into an already complex AI application pipeline.
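The core idea can be illustrated with a toy sketch. This is not Voyage's actual model or API; the hash-based "embedding" and the blend weight `alpha` are stand-ins invented here purely to show the difference between encoding each chunk in isolation and encoding chunks with document-level context mixed in:

```python
import hashlib
import math

def _text_vector(text, dim=8):
    """Deterministic toy embedding: hash the text into a small unit vector."""
    h = hashlib.sha256(text.encode()).digest()
    v = [b / 255.0 for b in h[:dim]]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def embed_chunks_context_agnostic(chunks):
    """Standard chunk embedding: each chunk is encoded in isolation."""
    return [_text_vector(c) for c in chunks]

def embed_chunks_contextualized(chunks, alpha=0.7):
    """Toy contextualized chunk embedding: blend each chunk's vector with a
    document-level average so every chunk vector carries global context."""
    chunk_vecs = [_text_vector(c) for c in chunks]
    dim = len(chunk_vecs[0])
    doc_vec = [sum(v[i] for v in chunk_vecs) / len(chunk_vecs) for i in range(dim)]
    return [[alpha * v[i] + (1 - alpha) * doc_vec[i] for i in range(dim)]
            for v in chunk_vecs]

doc_a = ["Payment terms are net 30.",
         "All data transmissions shall use AES-256 in GCM mode."]
doc_b = ["The picnic starts at noon.",
         "All data transmissions shall use AES-256 in GCM mode."]

# Context-agnostic: the identical chunk text gets the identical vector
# in both documents, so its surrounding document is invisible at query time.
agnostic_a = embed_chunks_context_agnostic(doc_a)
agnostic_b = embed_chunks_context_agnostic(doc_b)
assert agnostic_a[1] == agnostic_b[1]

# Contextualized: the same chunk text now yields different vectors
# depending on which document it came from.
ctx_a = embed_chunks_contextualized(doc_a)
ctx_b = embed_chunks_contextualized(doc_b)
assert ctx_a[1] != ctx_b[1]
```

The point of the sketch: a contextualized model retrieves the encryption clause as one chunk-sized vector, yet that vector still reflects whether it sits in a legal contract or a picnic memo, which is exactly what chunk-overlap and LLM-summary workarounds try to approximate by hand.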
