When (Not) to Use Vector DB

by admin · December 17, 2025 · in Artificial Intelligence

Vector databases solve a real problem, and in many cases they're the right choice for RAG systems. But here's the thing: just because you're using embeddings doesn't mean you need a vector database.

We've seen a growing trend where every RAG implementation starts by plugging in a vector DB. That might make sense for large-scale, persistent knowledge bases, but it's not always the most efficient path, especially when your use case is more dynamic or time-sensitive.

At Planck, we use embeddings to enhance LLM-based systems. However, in one of our real-world applications, we opted to avoid a vector database and instead used a simple key-value store, which turned out to be a much better fit.

Before I dive into that, let's explore a simple, generalized version of our scenario to explain why.

Foo Example

Let's consider a simple RAG-style system. A user uploads a few text files, maybe some reports or meeting notes. We split these files into chunks, generate embeddings for each chunk, and use those embeddings to answer questions. The user asks a handful of questions over the next couple of minutes, then leaves. At that point, both the files and their embeddings are useless and can be safely discarded.

In other words, the data is ephemeral, the user will ask only a few questions, and we want to answer them as fast as possible.

Now pause for a moment and ask yourself:

Where should I store these embeddings?


Most people's instinct is: "I have embeddings, so I need a vector database." But pause for a second and think about what's actually happening behind that abstraction. When you send embeddings to a vector DB, it doesn't just "store" them. It builds an index that speeds up similarity searches. That indexing work is where some of the magic comes from, and also where some of the cost lives.

In a long-lived, large-scale knowledge base, this trade-off makes perfect sense: you pay an indexing cost once (or incrementally as data changes), and then spread that cost over millions of queries. In our Foo example, that's not what's happening. We're doing the opposite: constantly adding small, one-off batches of embeddings, answering a tiny number of queries per batch, and then throwing everything away.

So the real question is not "should I use a vector database?" but "is the indexing work worth it?" To answer that, we can look at a simple benchmark.

Benchmarking: No-Index Retrieval vs. Indexed Retrieval

This section is more technical. We'll look at Python code and explain the underlying algorithms. If the exact implementation details aren't relevant to you, feel free to skip ahead to the Results section.

We want to compare two systems:

  1. No indexing at all: just keep the embeddings in memory and scan them directly.
  2. A vector database, where we pay an indexing cost upfront to make each query faster.

First, consider the "no vector DB" approach. When a query comes in, we compute similarities between the query embedding and all stored embeddings, then pick the top-k. That's just K-Nearest Neighbors (KNN) without any index.

import numpy as np

def run_knn(embeddings: np.ndarray, query_embedding: np.ndarray, top_k: int) -> np.ndarray:
    # Brute-force scan: dot product against every stored embedding, then take the top-k.
    sims = embeddings @ query_embedding
    return sims.argsort()[-top_k:][::-1]

The code uses the dot product as a proxy for cosine similarity (assuming normalized vectors) and sorts the scores to find the best matches. It literally just scans all vectors and picks the closest ones.
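
For instance, here's a minimal usage sketch with made-up data (the chunk count and dimensionality are arbitrary), just to show the normalization step that makes the dot product equal cosine similarity:

# Hypothetical example data: 1,000 chunk embeddings with 1,536 dimensions.
chunks = np.random.rand(1000, 1536).astype('float32')
chunks = chunks / np.linalg.norm(chunks, axis=1, keepdims=True)

query = np.random.rand(1536).astype('float32')
query = query / np.linalg.norm(query)

top_ids = run_knn(chunks, query, top_k=5)  # indices of the 5 most similar chunks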

Now, let's look at what a vector DB typically does. Under the hood, most vector databases rely on an approximate nearest neighbor (ANN) index. ANN methods trade a bit of accuracy for a big boost in search speed, and one of the most widely used algorithms for this is HNSW. We'll use the hnswlib library to simulate the index behavior.

import numpy as np
import hnswlib

def create_hnsw_index(embeddings: np.ndarray, num_dims: int) -> hnswlib.Index:
    # Build an HNSW index over all embeddings (this is the expensive step).
    index = hnswlib.Index(space='cosine', dim=num_dims)
    index.init_index(max_elements=embeddings.shape[0])
    index.add_items(embeddings)
    return index

def query_hnsw(index: hnswlib.Index, query_embedding: np.ndarray, top_k: int) -> np.ndarray:
    # Approximate nearest-neighbor lookup against the prebuilt index.
    labels, distances = index.knn_query(query_embedding, k=top_k)
    return labels[0]
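
One detail worth noting: the code above relies on hnswlib's default construction parameters (M=16, ef_construction=200). The variant below is just a sketch that spells those knobs out (the function name is mine, not from the original); raising them improves recall but makes the already expensive index build even slower.

def create_hnsw_index_tuned(embeddings: np.ndarray, num_dims: int,
                            M: int = 16, ef_construction: int = 200) -> hnswlib.Index:
    # Same as create_hnsw_index, but with the build parameters made explicit.
    # Higher M / ef_construction -> better recall, slower and more memory-hungry build.
    index = hnswlib.Index(space='cosine', dim=num_dims)
    index.init_index(max_elements=embeddings.shape[0], M=M, ef_construction=ef_construction)
    index.add_items(embeddings)
    return index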

To see where the trade-off lands, we can generate some random embeddings, normalize them, and measure how long each step takes:

import time
import numpy as np
import hnswlib
from tqdm import tqdm

def run_benchmark(num_embeddings: int, num_dims: int, top_k: int, num_iterations: int) -> None:
    print(f"Benchmarking with {num_embeddings} embeddings of dimension {num_dims}, retrieving top-{top_k} nearest neighbors.")

    knn_times: list[float] = []
    index_times: list[float] = []
    hnsw_query_times: list[float] = []

    for _ in tqdm(range(num_iterations), desc="Running benchmark"):
        embeddings = np.random.rand(num_embeddings, num_dims).astype('float32')
        embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        query_embedding = np.random.rand(num_dims).astype('float32')
        query_embedding = query_embedding / np.linalg.norm(query_embedding)

        start_time = time.time()
        run_knn(embeddings, query_embedding, top_k)
        knn_times.append((time.time() - start_time) * 1e3)

        start_time = time.time()
        vector_db_index = create_hnsw_index(embeddings, num_dims)
        index_times.append((time.time() - start_time) * 1e3)

        start_time = time.time()
        query_hnsw(vector_db_index, query_embedding, top_k)
        hnsw_query_times.append((time.time() - start_time) * 1e3)

    print(f"BENCHMARK RESULTS (averaged over {num_iterations} iterations)")
    print(f"[Naive KNN] Common search time with out indexing: {np.imply(knn_times):.2f} ms")
    print(f"[HNSW Index] Common index building time: {np.imply(index_times):.2f} ms")
    print(f"[HNSW Index] Common question time with indexing: {np.imply(hnsw_query_times):.2f} ms")

run_benchmark(num_embeddings=50000, num_dims=1536, top_k=5, num_iterations=20)

Results

In this example, we use 50,000 embeddings with 1,536 dimensions (matching OpenAI's text-embedding-3-small) and retrieve the top-5 neighbors. The exact results will vary with different configs, but the pattern we care about is the same.

I encourage you to run the benchmark with your own numbers; it's the best way to see how the trade-offs play out in your specific use case.

On average, the naive KNN search takes 24.54 milliseconds per query. Building the HNSW index for the same embeddings takes around 277 seconds. Once the index is built, each query takes about 0.47 milliseconds.

From this, we can estimate the break-even point. The difference between naive KNN and indexed queries is 24.07 ms per query. That implies you need about 11,510 queries before the time saved on each query compensates for the time spent building the index.
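
As a quick sanity check, the break-even point is just the index build time divided by the per-query saving; a minimal sketch using the numbers above:

index_build_ms = 277_000      # ~277 s to build the HNSW index
naive_query_ms = 24.54        # naive in-memory KNN, per query
indexed_query_ms = 0.47       # HNSW query, per query

saving_per_query_ms = naive_query_ms - indexed_query_ms    # ~24.07 ms saved per query
break_even_queries = index_build_ms / saving_per_query_ms  # ~11,500 (small differences
                                                           # from the figure above are rounding)
print(f"Indexing pays off after ~{break_even_queries:,.0f} queries")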

Generated using the benchmark code: a graph comparing naive KNN and indexed search efficiency

Moreover, even with different values for the number of embeddings and top-k, the break-even point stays in the thousands of queries and remains within a fairly narrow range. You don't get a scenario where indexing starts to pay off after just a few dozen queries.

Generated using the benchmark code: a graph showing break-even points for various embedding counts and top-k settings (image by author)

Now compare that to the Foo example. A user uploads a small set of files and asks just a few questions, not thousands. The system never reaches the point where the index pays off. Instead, the indexing step merely delays the moment when the system can answer the first question, and adds operational complexity.

For this kind of short-lived, per-user context, the simple in-memory KNN approach is not only easier to implement and operate, it is also faster end-to-end.

If in-memory storage is not an option, either because the system is distributed or because we need to preserve the user's state for a few minutes, we can use a key-value store like Redis. We can use a unique identifier for the user's request as the key and store all the embeddings as the value.
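
Here's a minimal sketch of that pattern using redis-py and NumPy; the key naming, the five-minute TTL, and the raw-float32 serialization are assumptions for illustration, not a prescription:

import numpy as np
import redis

r = redis.Redis(host="localhost", port=6379)  # assumed local Redis instance

def store_embeddings(request_id: str, embeddings: np.ndarray, ttl_seconds: int = 300) -> None:
    # Serialize the float32 matrix to raw bytes and let Redis expire it automatically.
    r.set(f"embeddings:{request_id}", embeddings.astype("float32").tobytes(), ex=ttl_seconds)

def load_embeddings(request_id: str, num_dims: int) -> np.ndarray:
    # Fetch and reshape; returns the stored (num_chunks, num_dims) matrix.
    raw = r.get(f"embeddings:{request_id}")
    return np.frombuffer(raw, dtype="float32").reshape(-1, num_dims)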

This gives us a lightweight, low-complexity solution that is well suited to our use case of short-lived, low-query contexts.

Real-World Example: Why We Chose a Key-Value Store

At Planck, we answer insurance-related questions about businesses. A typical request starts with a business name and address, and then we retrieve real-time data about that specific business, including its online presence, registrations, and other public records. This data becomes our context, and we use LLMs and algorithms to answer questions based on it.

The important bit is that every time we get a request, we generate a fresh context. We're not reusing existing data; it's fetched on demand and stays relevant for a few minutes at most.

If you think back to the earlier benchmark, this pattern should already be triggering your "this isn't a vector DB use case" sensor.

Every time we receive a request, we generate fresh embeddings for short-lived data that we'll likely query just a few hundred times. Indexing these embeddings in a vector DB adds unnecessary latency. In contrast, with Redis, we can immediately store the embeddings and run a quick similarity search in the application code with virtually no indexing delay.
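
Concretely, the per-request retrieval step can stay a plain KNN scan over the stored matrix. A sketch, reusing the hypothetical load_embeddings helper from the Redis snippet above and run_knn from the benchmark:

def answer_context_query(request_id: str, query_embedding: np.ndarray,
                         num_dims: int, top_k: int = 5) -> np.ndarray:
    # Pull the request's ephemeral embeddings back out of Redis...
    embeddings = load_embeddings(request_id, num_dims)
    # ...and run the plain in-memory KNN scan; no index is ever built.
    return run_knn(embeddings, query_embedding, top_k)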

That's why we chose Redis instead of a vector database. While vector DBs are excellent at handling large volumes of embeddings and supporting fast nearest-neighbor queries, they introduce indexing overhead, and in our case that overhead is not worth it.

In Conclusion

If you need to store millions of embeddings and support high-query workloads across a shared corpus, a vector DB will be a better fit. And yes, there are definitely use cases out there that truly need and benefit from a vector DB.

But just because you're using embeddings or building a RAG system doesn't mean you should default to a vector DB.

Every database technology has its strengths and trade-offs. The best choice starts with a deep understanding of your data and use case, rather than mindlessly following the trend.

So, the next time you need to choose a database, pause for a moment and ask: am I choosing the right one based on objective trade-offs, or am I just going with the trendiest, shiniest option?

Tags: VectorDB