In my latest post, I showed how hybrid search can be used to significantly improve the effectiveness of a RAG pipeline. RAG in its basic form, using just semantic search over embeddings, is already very effective, allowing us to bring the power of AI to our own documents. However, semantic search, powerful as it is, can sometimes miss exact matches for the user's query in large knowledge bases, even when they exist in the documents. This weakness of traditional RAG can be addressed by adding a keyword search component, such as BM25, to the pipeline. In this way, hybrid search, combining semantic and keyword search, produces far more comprehensive results and significantly improves the performance of a RAG system.
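As a quick refresher, here is a minimal sketch of what that combination can look like. The libraries (rank_bm25, sentence-transformers), the embedding model, and the fusion weight are assumptions made for illustration, not a prescribed setup.

```python
# Minimal hybrid-search sketch: blend BM25 keyword scores with
# embedding-based semantic similarity. Library and model choices
# here are illustrative assumptions, not a prescribed stack.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

chunks = [
    "Heat the mixture slowly and stir regularly.",
    "As shown in the table, revenue increased by 6%.",
]

# Keyword side: a BM25 index over the tokenized chunks
bm25 = BM25Okapi([c.lower().split() for c in chunks])

# Semantic side: dense embeddings of the same chunks
model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def hybrid_search(query: str, alpha: float = 0.5):
    """Blend normalized BM25 scores with cosine similarities."""
    kw = np.array(bm25.get_scores(query.lower().split()))
    kw = kw / (kw.max() + 1e-9)  # scale keyword scores to [0, 1]
    sem = chunk_vecs @ model.encode(query, normalize_embeddings=True)
    scores = alpha * sem + (1 - alpha) * kw  # simple weighted fusion
    return sorted(zip(chunks, scores), key=lambda pair: -pair[1])

print(hybrid_search("how did revenue change?"))
```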

Be that as it may, even when using RAG with hybrid search, we can still sometimes miss important information that is scattered across different parts of a document. This happens because when a document is broken down into text chunks, the context, that is, the surrounding text that forms part of each chunk's meaning, is often lost. This is especially true for complex text, whose meaning is interconnected and spread across multiple pages and inevitably cannot be wholly contained within a single chunk. Think, for example, of referencing a table or an image across several different text sections without explicitly stating which table we are referring to (e.g., “as shown in the Table, revenue increased by 6%”; which table?). As a result, when the text chunks are retrieved, they arrive stripped of their context, which often leads to the retrieval of irrelevant chunks and the generation of irrelevant responses.
This loss of context has been a major concern for RAG systems for some time, and several not-so-successful solutions have been explored to address it. An obvious attempt is increasing the chunk size, but this often also alters the semantic meaning of each chunk and ends up making retrieval less precise. Another approach is increasing chunk overlap. While this helps preserve context, it also increases storage and computation costs. Most importantly, it does not fully solve the problem: important connections to a chunk can still lie outside the chunk boundaries. More advanced approaches that attempt to solve this issue include Hypothetical Document Embeddings (HyDE) and the Document Summary Index. However, these still fail to provide substantial improvements.
Ultimately, an approach that effectively resolves this and significantly improves the results of a RAG system is contextual retrieval, originally introduced by Anthropic in 2024. Contextual retrieval aims to solve the loss of context by preserving the context of the chunks and, in turn, improving the accuracy of the retrieval step of the RAG pipeline.
. . .
What about context?
Before saying anything about contextual retrieval, let's take a step back and talk a little about what context actually is. Sure, we have all heard about the context of LLMs or context windows, but what are these really about?
To be precise, context refers to all the tokens that are available to the LLM and on which it bases its prediction of the next word (remember, LLMs generate text by predicting it one word at a time). That includes the user prompt, the system prompt, instructions, skills, or any other guidelines influencing how the model produces a response. Importantly, the part of the final response the model has produced so far is also part of the context, since each new token is generated based on everything that came before it.
Naturally, different contexts lead to very different model outputs. For example:
- ‘I went to a restaurant and ordered a‘ might output ‘pizza.‘
- ‘I went to the pharmacy and bought a‘ might output ‘medicine.‘
A fundamental limitation of LLMs is their context window. The context window of an LLM is the maximum number of tokens that can be passed at once as input to the model and taken into account to produce a single response. Different LLMs have larger or smaller context windows. Modern frontier models can handle hundreds of thousands of tokens in a single request, whereas earlier models often had context windows as small as 8k tokens.
In an ideal world, we would simply pass all the information the LLM needs to know in the context and most likely get excellent answers. And this is true to some extent: a frontier model like Opus 4.6 with a 200k-token context window corresponds to roughly 500-600 pages of text. If all the information we need to provide fits within this limit, we can indeed include everything as is in the input to the LLM and get a great answer.
The trouble is that most real-world AI use cases need to draw on some form of knowledge base whose size goes far beyond this threshold; think, for instance, of legal libraries or technical equipment manuals. Since models have these context window limitations, we unfortunately cannot just pass everything to the LLM and let it magically answer. We have to somehow decide which information is most important and should be included in our limited context window. And that is essentially what the RAG methodology is all about: selecting the right information from a large knowledge base so as to effectively answer a user's query. Ultimately, this emerges as an optimization and engineering problem, context engineering: identifying the right information to include in a limited context window so as to produce the best possible responses.
This is the most crucial part of a RAG system: making sure the right information is retrieved and passed as input to the LLM. This can be done with semantic search and keyword search, as already explained. But even when we bring in all the semantically relevant chunks and all the exact matches, there is still a good chance that some important information gets left behind.
But what kind of information would that be? Since we have covered meaning with semantic search and exact matches with keyword search, what other kind of information is there to consider?
Different documents with inherently different meanings may contain parts that are similar or even identical. Imagine a recipe book and a chemical processing manual both instructing the reader to ‘Heat the mixture slowly’. The semantic meaning of such a text chunk and the exact words are very similar, if not identical. In this example, what shapes the meaning of the text and allows us to distinguish cooking from chemical engineering is what we refer to as context.

This, then, is the kind of extra information we aim to preserve. And it is exactly what contextual retrieval does: it preserves the context, the surrounding meaning, of each text chunk.
. . .
What about contextual retrieval?
So, contextual retrieval is a technique used in RAG that aims to preserve the context of each chunk. In this way, when a chunk is retrieved and passed to the LLM as input, we are able to preserve as much of its original meaning as possible: the semantics, the keywords, the context, all of it.
To achieve this, contextual retrieval suggests that we first generate a helper text for each chunk, namely the contextual text, which allows us to situate the text chunk within the original document it comes from. In practice, we ask an LLM to generate this contextual text for each chunk. To do so, we provide the document, along with the specific chunk, in a single request to an LLM and prompt it to “provide the context that situates the specific chunk within the document“. A prompt for generating the contextual text for our Italian Cookbook chunk would look something like this:
{the whole Italian Cookbook document the chunk comes from}

Here is the chunk we want to situate within the whole document:

{the specific chunk}

Provide a brief context that situates this chunk within the overall document to improve search retrieval. Answer only with the concise context and nothing else.
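In code, that call can look something like the sketch below. The Anthropic Python SDK and the model name are assumptions used for illustration; any chat-completion API would work the same way.

```python
# Sketch of generating the contextual text for a single chunk.
# The Anthropic SDK and model name are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def situate_chunk(document: str, chunk: str) -> str:
    """Ask the model for a short context that situates `chunk` in `document`."""
    prompt = (
        f"{document}\n\n"
        "Here is the chunk we want to situate within the whole document:\n"
        f"{chunk}\n\n"
        "Provide a brief context that situates this chunk within the overall "
        "document to improve search retrieval. Answer only with the concise "
        "context and nothing else."
    )
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # a small, cheap model is enough here
        max_tokens=150,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text.strip()
```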
The LLM returns the contextual text, which we combine with our initial text chunk. In this way, for each chunk of our original text, we generate a contextual text that describes how this specific chunk is situated in its parent document. For our example, this would be something like:
Context: Recipe step for simmering homemade tomato pasta sauce.
Chunk: Heat the mixture slowly and stir regularly to prevent it from sticking.
Which is indeed far more informative and specific! Now there is no doubt about what this mysterious mixture is, because all the information needed to identify whether we are talking about tomato sauce or a laboratory starch solution is conveniently included within the same chunk.
From this point on, we treat the initial chunk text and the contextual text as an unbreakable pair. The rest of the RAG-with-hybrid-search pipeline then proceeds essentially as before: for each text chunk, prepended with its contextual text, we create embeddings that are stored in a vector store, and we also add the same combined text to the BM25 index.
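A sketch of that ingestion step is shown below, reusing the illustrative situate_chunk helper from the previous sketch; the embedding model and BM25 library are, again, assumptions made for illustration.

```python
# Sketch of ingestion with contextual retrieval: each chunk is prepended
# with its contextual text, and that combined string feeds both the
# vector index and the BM25 index. Libraries are illustrative choices.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def build_indexes(document: str, chunks: list[str]):
    contextualized = []
    for chunk in chunks:
        context = situate_chunk(document, chunk)  # LLM call from the sketch above
        contextualized.append(f"Context: {context}\nChunk: {chunk}")

    # Dense side: embeddings of the context + chunk pairs (the vector store)
    embeddings = model.encode(contextualized, normalize_embeddings=True)

    # Keyword side: a BM25 index over the same context + chunk pairs
    bm25 = BM25Okapi([text.lower().split() for text in contextualized])

    return contextualized, embeddings, bm25
```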

This approach, as simple as it is, leads to remarkable improvements in the retrieval performance of RAG pipelines. According to Anthropic, contextual retrieval reduces the retrieval failure rate by an impressive 35%.
. . .
Reducing cost with prompt caching
I hear you asking, “But isn't this going to break the bank?“ Surprisingly, no.
Intuitively, we understand that this setup is going to significantly increase the ingestion cost of a RAG pipeline, essentially doubling it, if not more. After all, we have now added a bunch of extra calls to the LLM, haven't we? This is true to some extent: for every chunk, we now make an additional call to the LLM in order to situate it within its source document and obtain the contextual text.
However, this is a cost we pay only once, at the document ingestion stage. Unlike other techniques that attempt to preserve context at runtime, such as Hypothetical Document Embeddings (HyDE), contextual retrieval performs the heavy lifting during document ingestion. Runtime approaches require additional LLM calls for every user query, which can quickly inflate latency and operational costs. In contrast, contextual retrieval shifts the computation to the ingestion phase, meaning that the improved retrieval quality comes with no extra overhead at runtime. On top of this, further techniques can be used to reduce the cost of contextual retrieval. More precisely, prompt caching can be used: the full document is sent and cached once, and each chunk is then situated against the cached document, rather than paying for the full document's tokens again with every call.
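With the Anthropic API, for example, this can look like the sketch below: the document block is marked as cacheable, so repeated chunk-contextualization calls reuse it instead of paying full price for its tokens every time. The model name and prompt wording are assumptions for illustration.

```python
# Sketch of chunk contextualization with prompt caching: the document is
# placed in a cacheable system block, so subsequent calls for other chunks
# of the same document reuse the cached tokens at a reduced cost.
import anthropic

client = anthropic.Anthropic()

def situate_chunk_cached(document: str, chunk: str) -> str:
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # illustrative model choice
        max_tokens=150,
        system=[
            {
                "type": "text",
                "text": f"Here is the full document:\n{document}",
                "cache_control": {"type": "ephemeral"},  # cache the document tokens
            }
        ],
        messages=[
            {
                "role": "user",
                "content": (
                    f"Here is the chunk to situate:\n{chunk}\n\n"
                    "Provide a brief context that situates this chunk within "
                    "the overall document to improve search retrieval. Answer "
                    "only with the concise context and nothing else."
                ),
            }
        ],
    )
    return response.content[0].text.strip()
```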
. . .
On my mind
Contextual retrieval represents a simple yet powerful improvement to traditional RAG systems. By enriching each chunk with a contextual text that pinpoints its semantic place within its source document, we dramatically reduce the ambiguity of each chunk and thus improve the quality of the information passed to the LLM. Combined with hybrid search, this technique allows us to preserve semantics, keywords, and context simultaneously.
Loved this post? Let's be friends! Join me on:
📰 Substack 💌 Medium 💼 LinkedIn ☕ Buy me a coffee!
All images are by the author, unless stated otherwise.

