How Ring scales global customer support with Amazon Bedrock Knowledge Bases

by admin
March 31, 2026
in Artificial Intelligence


This post is cowritten with David Kim and Premjit Singh from Ring.

Scaling self-service support globally presents challenges beyond translation. In this post, we show you how Ring, Amazon’s home security subsidiary, built a production-ready, multi-locale Retrieval-Augmented Generation (RAG)-based support chatbot using Amazon Bedrock Knowledge Bases. By eliminating per-Region infrastructure deployments, Ring reduced the cost of scaling to each additional locale by 21%. At the same time, Ring maintained consistent customer experiences across 10 international Regions.

In this post, you’ll learn how Ring implemented metadata-driven filtering for Region-specific content, separated content management into ingestion, evaluation, and promotion workflows, and achieved cost savings while scaling up. The architecture described in this post uses Amazon Bedrock Knowledge Bases, Amazon Bedrock, AWS Lambda, AWS Step Functions, and Amazon Simple Storage Service (Amazon S3). Whether you’re expanding support operations internationally or looking to optimize your existing RAG architecture, this implementation provides practical patterns you can apply to your own multi-locale support systems.

The support evolution journey for Ring

Customer support at Ring initially relied on a rule-based chatbot built with Amazon Lex. While functional, the system was limited to predefined conversation patterns that couldn’t handle the diverse range of customer inquiries. During peak periods, 16% of interactions escalated to human agents, and support engineers spent 10% of their time maintaining the rule-based system. As Ring expanded across international locales, this approach became unsustainable.

Requirements for a RAG-based support system

Ring faced a challenge: how to provide accurate, contextually relevant support across multiple international locales without creating separate infrastructure for each Region. The team identified four requirements that would inform their architectural approach.

  1. Global content localization

Ring’s global presence required more than translation. Each territory needed Region-specific product information, from voltage specifications to regulatory compliance details, delivered through a unified system. Across the UK, Germany, and eight other locales, Ring needed to handle distinct product configurations and support scenarios for each Region.

  2. Serverless, managed architecture

Ring wanted their engineering team focused on improving customer experience, not managing infrastructure. The team needed a fully managed, serverless solution.

  3. Scalable knowledge management

With hundreds of product guides, troubleshooting documents, and support articles constantly being updated, Ring needed vector search technology that could retrieve precise information from a unified repository. The system had to support automated content ingestion pipelines so that the Ring content team could publish updates that would become available across multiple locales without manual intervention.

  4. Performance and cost optimization

Ring’s average end-to-end latency requirement was 7–8 seconds, and performance analysis revealed that cross-Region latency accounted for less than 10% of total response time. This finding allowed Ring to adopt a centralized architecture rather than deploying separate infrastructure in each Region, which reduced operational complexity and costs.

To address these requirements, Ring implemented metadata-driven filtering with content locale tags. This approach serves Region-specific content from a single centralized system. For their serverless requirements, Ring chose Amazon Bedrock Knowledge Bases and Lambda, which removed the need for infrastructure management while providing automatic scaling.

Solution overview

Ring designed their RAG-based chatbot architecture to separate content management into two core processes: ingestion and evaluation, and promotion. This two-phase approach lets Ring pursue continuous content improvement while keeping production systems stable.

Ingestion and evaluation workflow


Figure 1: Architecture diagram showing the Ring ingestion and evaluation workflow, with Step Functions orchestrating daily knowledge base creation, evaluation, and quality validation using Knowledge Bases and S3 storage.

  1. Content upload – The Ring content team uploads support documentation, troubleshooting guides, and product information to Amazon S3. The team structured the S3 objects with content in encoded format and metadata attributes. For example, a file for the content “Steps to Replace the doorbell battery” has the following structure:
{
	"properties": {
		"slug": "abcde",            # unique identifier
		"contentLocale": "en-GB",   # locale information
		"sourceFormat": "md",
		"metadataAttributes": {
			"group": "Service",
			"slug": "abcde",
			"contentLocale": "en-GB"
		},
		"content": "U3RlcHMgdG8gUmVwbGFjZSB0aGUgZG9vcmJlbGwgYmF0dGVyeTo= 
                VXNlIHRoZSBpbmNsdWRlZCBzZWN1cml0eSBzY3Jld2RyaXZlciB0byByZW1vdmUgd
                GhlIHNlY3VyaXR5IHNjcmV3IGxvY2F0ZWQgb24gdGhlIGJvdHRvbSBvZiB0aGUgZm
                FjZXBsYXRlCgpSZW1vdmUgdGhlIGZhY2VwbGF0ZSBieSBwcmVzc2luZyBpbiBvbiB
                0aGUgc2lkZXMgYW5kIGNhcmVmdWxseSBwdWxsaW5nIGl0IG91dCBhbmQgb2ZmCgpS
                ZW1vdmUgdGhlIGJhdHRlcnkgZnJvbSB0aGUgZG9vcmJlbGwKCkNvbm5lY3QgdGhlI
                GNoYXJnaW5nIGNhYmxlIHRvIHRoZSBiYXR0ZXJ5J3MgY2hhcmdpbmcgcG9ydAoKQ2h
                hcmdlIHVudGlsIG9ubHkgdGhlIGdyZWVuIGxpZ2h0IHJlbWFpbnMgbGl0ICh3aGlsZ
                SBjaGFyZ2luZywgeW91J2xsIHNlZSBib3RoIGEgc29saWQgZ3JlZW4gYW5kIGFtYmV
                yIGxpZ2h0KQoKUmUtaW5zZXJ0IHRoZSBjaGFyZ2VkIGJhdHRlcnkgaW50byB0aGUgZ
                G9vcmJlbGwKCkRlLWF0dGFjaCB0aGUgZmFjZXBsYXRlCgpTZWN1cmUgd2l0aCB0aGU
                gc2VjdXJpdHkgc2NyZXc="    # base64 encoded
	}
}
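The decoding side of such an object takes only a few lines. The following is an illustrative sketch, not Ring’s production code; the field names follow the example object above, and `extract_document` is a hypothetical helper name:

```python
import base64
import json

def extract_document(raw_object: str) -> tuple[dict, str]:
    # Parse one uploaded object and decode its base64 content body
    doc = json.loads(raw_object)
    props = doc["properties"]
    text = base64.b64decode(props["content"]).decode("utf-8")
    return props["metadataAttributes"], text

# Minimal example object (content is "hello" base64-encoded)
raw = json.dumps({
    "properties": {
        "slug": "abcde",
        "contentLocale": "en-GB",
        "sourceFormat": "md",
        "metadataAttributes": {
            "group": "Service",
            "slug": "abcde",
            "contentLocale": "en-GB",
        },
        "content": base64.b64encode(b"hello").decode("ascii"),
    }
})
meta, text = extract_document(raw)
print(meta["contentLocale"], text)  # en-GB hello
```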

  2. Content processing – Ring configured Amazon S3 bucket event notifications with Lambda as the target to automatically process uploaded content.
  3. Raw and processed content storage

    The Lambda function performs two key operations:

    • Copies the raw data to the Knowledge Base Archive Bucket
    • Extracts metadata and content from the raw data, storing them as separate files in the Knowledge Base Source Bucket with contentLocale classification (for example, {locale}/Service.Ring.{Upsert/Delete}.{unique_identifier}.json)

    For the doorbell battery example, the Ring metadata and content files have the following structure:

    {locale}/Service.Ring.{Upsert/Delete}.{unique_identifier}.metadata.json

{
	"metadataAttributes" : {
		"group": "Service",
		"slug": "abcde",
		"contentLocale": "en-GB"
	}
}

{locale}/Service.Ring.{Upsert/Delete}.{unique_identifier}.json

{
	"content": "Steps to Replace the doorbell battery:
	Use the included security screwdriver to remove the security screw located on the bottom of the faceplate
	Remove the faceplate by pressing in on the sides and carefully pulling it out and off
	Remove the battery from the doorbell
	Connect the charging cable to the battery's charging port
	Charge until only the green light remains lit (while charging, you'll see both a solid green and amber light)
	Re-insert the charged battery into the doorbell
	Re-attach the faceplate
	Secure with the security screw"
}
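The split into separate metadata and content files can be sketched as follows. This is an illustrative sketch under the key-naming convention described above; `split_document` and the sample values are hypothetical:

```python
import json

def split_document(props: dict, text: str, action: str = "Upsert") -> dict:
    # Map one decoded document to its two Knowledge Base Source Bucket objects,
    # following the {locale}/Service.Ring.{Upsert/Delete}.{unique_identifier}
    # key convention described above
    base = f"{props['contentLocale']}/Service.Ring.{action}.{props['slug']}"
    return {
        f"{base}.metadata.json": json.dumps(
            {"metadataAttributes": props["metadataAttributes"]}
        ),
        f"{base}.json": json.dumps({"content": text}),
    }

props = {
    "slug": "abcde",
    "contentLocale": "en-GB",
    "metadataAttributes": {"group": "Service", "slug": "abcde", "contentLocale": "en-GB"},
}
objects = split_document(props, "Steps to Replace the doorbell battery: ...")
print(sorted(objects))
# ['en-GB/Service.Ring.Upsert.abcde.json', 'en-GB/Service.Ring.Upsert.abcde.metadata.json']
```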

  4. Daily Data Copy and Knowledge Base Creation

Ring uses AWS Step Functions to orchestrate their daily workflow, which:

  • Copies content and metadata from the Knowledge Base Source Bucket to Data Source (Version)
  • Creates a new Knowledge Base (Version) by indexing the daily bucket as the data source for vector embedding

Each version maintains a separate Knowledge Base, giving Ring independent evaluation capabilities and simple rollback options.
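Assuming each daily version lands under a date-stamped prefix, the indexing step could be kicked off with the Bedrock Agent `start_ingestion_job` API. The prefix naming and helper names here are hypothetical; only the API call itself comes from the AWS SDK:

```python
from datetime import date

def version_prefix(run_date: date) -> str:
    # S3 prefix for one day's Data Source (Version) copy (hypothetical naming)
    return f"versions/{run_date.isoformat()}/"

def sync_version(knowledge_base_id: str, data_source_id: str) -> str:
    # Index the daily copy into the new Knowledge Base (Version);
    # start_ingestion_job is the Bedrock Agent API for syncing a data source
    import boto3  # local import so the sketch runs without AWS configured
    client = boto3.client("bedrock-agent")
    job = client.start_ingestion_job(
        knowledgeBaseId=knowledge_base_id,
        dataSourceId=data_source_id,
    )
    return job["ingestionJob"]["ingestionJobId"]

print(version_prefix(date(2026, 3, 31)))  # versions/2026-03-31/
```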

  5. Daily Evaluation Process

The AWS Step Functions workflow continues, using evaluation datasets to:

  • Run queries across Knowledge Base versions
  • Test retrieval accuracy and response quality to compare performance between versions
  • Publish performance metrics to Tableau dashboards with results organized by contentLocale
  6. Quality Validation and Golden Dataset Creation

Ring uses the Anthropic Claude Sonnet 4 large language model (LLM) as a judge to:

  • Evaluate metrics across Knowledge Base versions to identify the best-performing version
  • Compare retrieval accuracy, response quality, and performance metrics organized by contentLocale
  • Promote the highest-performing version to Data Source (Golden) for production use

This architecture supports rollbacks to previous versions for up to 30 days. Because content is updated roughly 200 times per week, Ring decided not to keep versions beyond 30 days.
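The promotion decision reduces to picking the version with the best weighted metrics. A minimal sketch, assuming the judge model has already scored each version (the metric names, weights, and values below are illustrative, not Ring’s):

```python
def best_version(metrics_by_version: dict, weights: dict) -> str:
    # Weighted score per version; the highest wins promotion to Data Source (Golden)
    def score(m: dict) -> float:
        return sum(weights[name] * m[name] for name in weights)
    return max(metrics_by_version, key=lambda v: score(metrics_by_version[v]))

# Hypothetical per-version metrics produced by the LLM judge
metrics = {
    "2026-03-30": {"retrieval_accuracy": 0.86, "response_quality": 0.82},
    "2026-03-31": {"retrieval_accuracy": 0.91, "response_quality": 0.84},
}
winner = best_version(metrics, {"retrieval_accuracy": 0.6, "response_quality": 0.4})
print(winner)  # 2026-03-31
```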

Promotion workflow: customer-facing


Figure 2: Architecture diagram showing the Ring production chatbot system, where customer queries flow through AWS Lambda to retrieve context from Knowledge Bases and generate responses using foundation models

  1. Customer interaction – Customers initiate support queries through the chatbot interface. For example, a customer query for the battery replacement scenario looks like this:
{
	"text": "How can I replace the doorbell battery?",
	"market": "en-GB"
}

  2. Query orchestration and knowledge retrieval

Ring configured Lambda to process customer queries and retrieve relevant content from Amazon Bedrock Knowledge Bases. The function:

  • Transforms incoming queries for the RAG system
  • Applies metadata filtering with contentLocale tags, using the equals operator for precise Regional content targeting
  • Queries the validated Golden Data Source to retrieve contextually relevant content

Here’s the sample code Ring uses in AWS Lambda:

## Metadata Filtering for Regional Content Targeting

import boto3

num_results = 10
market = "en-GB"
knowledge_base_id = "A2BCDEFGHI"
user_text = "How can I replace the doorbell battery?"

# Configure Regional content filtering
vector_search_config = {"numberOfResults": num_results}
vector_search_config["filter"] = {
	"equals": {
		"key": "contentLocale",
		"value": market
	}
}

# Run the Amazon Bedrock Knowledge Base search
response = boto3.client("bedrock-agent-runtime").retrieve(
	knowledgeBaseId=knowledge_base_id,
	retrievalQuery={"text": user_text},
	retrievalConfiguration={
		"vectorSearchConfiguration": vector_search_config,
	},
)

  3. Response generation

In the Lambda function, the system:

  • Sorts the retrieved content based on relevance score and selects the highest-scoring context
  • Combines the top-ranked context with the original customer query to create an augmented prompt
  • Sends the augmented prompt to an LLM on Amazon Bedrock
  • Configures locale-specific prompts for each contentLocale
  • Generates contextually relevant responses, returned through the chatbot interface
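The ranking and prompt-augmentation steps can be sketched as follows. The result shape mirrors the retrievalResults entries returned by retrieve(); the prompt template and the `build_prompt` helper are illustrative, not Ring’s production prompt:

```python
def build_prompt(query: str, results: list[dict], top_k: int = 1) -> str:
    # Rank retrieved chunks by relevance score and keep the best top_k
    ranked = sorted(results, key=lambda r: r["score"], reverse=True)
    context = "\n\n".join(r["content"]["text"] for r in ranked[:top_k])
    # Combine the top-ranked context with the original customer query
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Shape mirrors retrievalResults entries from the retrieve() call above
results = [
    {"score": 0.42, "content": {"text": "Unrelated troubleshooting document"}},
    {"score": 0.91, "content": {"text": "Steps to Replace the doorbell battery: ..."}},
]
prompt = build_prompt("How can I replace the doorbell battery?", results)

# The augmented prompt would then be sent to a foundation model, for example:
# boto3.client("bedrock-runtime").converse(modelId=model_id, messages=[...])
```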

Other considerations for your implementation

When building your own RAG-based system at scale, consider these architectural approaches and operational requirements beyond the core implementation.

Vector store selection

The Ring implementation uses Amazon OpenSearch Serverless as the vector store for its knowledge bases. However, Amazon Bedrock Knowledge Bases also supports Amazon S3 Vectors as a vector store option. When choosing between these options, consider:

  • Amazon OpenSearch Serverless: Provides advanced search capabilities, real-time indexing, and flexible querying options. Best suited for applications requiring complex search patterns, or when you need additional OpenSearch features beyond vector search.
  • Amazon S3 Vectors: Offers a more cost-effective option for straightforward vector search use cases. S3 vector stores provide automatic scaling and built-in durability, and can be more economical for large-scale deployments with predictable access patterns.

In addition to these two options, AWS supports integrations with other data store options, including Amazon Kendra, Amazon Neptune Analytics, and Amazon Aurora PostgreSQL. Evaluate your specific requirements around query complexity, cost optimization, and operational needs when selecting your vector store. The AWS prescriptive guidance provides a good starting point for evaluating vector stores for your RAG use case.

Versioning architecture considerations

While Ring implemented separate Knowledge Bases for each version, you can consider an alternative approach: separate data sources for each version within a single knowledge base. This strategy uses the x-amz-bedrock-kb-data-source-id filter parameter to target specific data sources during retrieval:

vector_search_config["filter"] = {
	"equals": {
		"key": "x-amz-bedrock-kb-data-source-id",
		"value": ''  # target data source ID
	}
}

# Run the Amazon Bedrock Knowledge Base search
response = boto3.client("bedrock-agent-runtime").retrieve(
	knowledgeBaseId=knowledge_base_id,
	retrievalQuery={"text": user_text},
	retrievalConfiguration={
		"vectorSearchConfiguration": vector_search_config,
	},
)

When choosing between these approaches, weigh these specific trade-offs:

  • Separate knowledge bases per version (the approach that Ring uses): Provides cleaner data source management and rollback capabilities, but requires managing more knowledge base instances.
  • Single knowledge base with multiple data sources: Reduces the number of knowledge base instances to maintain, but introduces complexity in data source routing logic and filtering mechanisms, and requires maintaining separate data stores for each data source ID.

Disaster recovery: Multi-Region deployment

Consider your disaster recovery requirements when designing your RAG architecture. Amazon Bedrock Knowledge Bases are Regional resources. To achieve robust disaster recovery, deploy your full architecture across multiple Regions:

  • Knowledge bases: Create Knowledge Base instances in multiple Regions
  • Amazon S3 buckets: Maintain cross-Region copies of your Golden Data Source
  • Lambda functions and Step Functions workflows: Deploy your orchestration logic in each Region
  • Data synchronization: Implement processes to keep content synchronized across Regions

Ring’s centralized architecture serves its traffic from a single Region, prioritizing cost optimization over multi-Region deployment. Evaluate your own Recovery Time Objective (RTO) and Recovery Point Objective (RPO) requirements to determine whether a multi-Region deployment is necessary for your use case.

Foundation model throughput: Cross-Region inference

Amazon Bedrock foundation models are Regional resources with Regional quotas. To handle traffic bursts and scale beyond single-Region quotas, Amazon Bedrock supports cross-Region inference (CRIS). CRIS automatically routes inference requests across multiple AWS Regions to increase throughput:

CRIS: Routes requests only within specific geographic boundaries (such as within the US or within the EU) to meet data residency requirements. This can provide up to double the default in-Region quotas.

Global CRIS: Routes requests across multiple commercial Regions worldwide, optimizing available resources and providing higher model throughput beyond geographic CRIS capabilities. Global CRIS automatically selects the optimal Region to process each request.

CRIS operates independently from your Knowledge Base deployment strategy. Even with a single-Region Knowledge Base deployment, you can configure CRIS to scale your foundation model throughput during traffic bursts. Note that CRIS applies only to the inference layer: your Knowledge Bases, S3 buckets, and orchestration logic remain Regional resources that require separate multi-Region deployment for disaster recovery.
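In practice, CRIS is used by passing an inference profile ID (a geography-prefixed model ID) wherever a plain model ID is expected. A minimal sketch; the exact prefixes and model IDs are assumptions and should be verified against the Bedrock documentation for your account and Region:

```python
def cris_profile_id(model_id: str, scope: str) -> str:
    # Cross-Region inference profile IDs are the model ID with a geography
    # prefix such as "us.", "eu.", or "global." (verify against the docs)
    return f"{scope}.{model_id}"

profile = cris_profile_id("anthropic.claude-sonnet-4-20250514-v1:0", "eu")
print(profile)  # eu.anthropic.claude-sonnet-4-20250514-v1:0

# The profile ID is then used in place of a plain model ID, for example:
# boto3.client("bedrock-runtime").converse(modelId=profile, messages=[...])
```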

Embedding model selection and chunking strategy

Selecting the appropriate embedding model and chunking strategy is important for RAG system performance because it directly impacts retrieval accuracy and response quality. Ring uses the Amazon Titan Embeddings model with the default chunking strategy, which proved effective for their support documentation.

Amazon Bedrock offers flexibility with multiple options:

Embedding models:

  • Amazon Titan Embeddings: Optimized for text-based content
  • Amazon Nova Multimodal Embeddings: Supports text, image, audio, and video modalities

Chunking strategies:

When ingesting data, Amazon Bedrock splits documents into manageable chunks for efficient retrieval using four strategies:

  • Standard chunking: Fixed-size chunks for uniform documents
  • Hierarchical chunking: For structured documents with clear section hierarchies
  • Semantic chunking: Splits content based on topic boundaries
  • Multimodal content chunking: For documents with mixed content types (text, images, tables)

Evaluate your content characteristics to select the optimal combination for your specific use case.
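As one illustration, a fixed-size chunking configuration is passed when creating the data source. The field names follow the Bedrock Agent `create_data_source` API; the token sizes are illustrative, not Ring’s settings:

```python
# Fixed-size chunking configuration, passed as vectorIngestionConfiguration
# when creating a data source (token sizes below are illustrative)
fixed_size_chunking = {
    "chunkingConfiguration": {
        "chunkingStrategy": "FIXED_SIZE",
        "fixedSizeChunkingConfiguration": {
            "maxTokens": 300,         # tokens per chunk
            "overlapPercentage": 20,  # overlap between adjacent chunks
        },
    }
}

# For example:
# boto3.client("bedrock-agent").create_data_source(
#     knowledgeBaseId=..., name=..., dataSourceConfiguration=...,
#     vectorIngestionConfiguration=fixed_size_chunking)
```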

Conclusion

In this post, we showed how Ring built a production-ready, multi-locale RAG-based support chatbot using Amazon Bedrock Knowledge Bases. The architecture combines automated content ingestion, systematic daily evaluation using an LLM-as-judge approach, and metadata-driven content targeting to achieve a 21% reduction in infrastructure and operational cost per additional locale, while maintaining consistent customer experiences across 10 international Regions.

Beyond the core RAG architecture, we covered key design considerations for production deployments: vector store selection, versioning strategies, multi-Region deployment for disaster recovery, cross-Region inference for scaling foundation model throughput, and embedding model selection and chunking strategies. These patterns apply broadly to any team building multi-locale or high-availability RAG systems on AWS.

Ring continues to evolve their chatbot architecture toward an agentic model with dynamic agent selection and integration of multiple specialized agents. This agentic approach will allow Ring to route customer inquiries to specialized agents for device troubleshooting, order management, and product recommendations, demonstrating the extensibility of RAG-based support systems built on Amazon Bedrock.

To learn more about Amazon Bedrock Knowledge Bases, visit the Amazon Bedrock documentation.


About the authors

Gopinath Jagadesan

Gopinath Jagadesan is a Senior Solutions Architect at AWS, where he works with Amazon to design, build, and deploy well-architected solutions on AWS. He holds a master’s degree in electrical and computer engineering from the University of Illinois at Chicago. Gopinath is passionate about generative AI and its real-world applications, helping customers harness its potential to drive innovation and efficiency. Outside of work, he enjoys playing soccer and spending time with his family and friends.

David Kim

David Kim is a Software Development Engineer at Ring, where he designs and builds AI agents to automate customer service experiences. He’s passionate about conversational AI and multi-agent systems, leveraging Amazon Bedrock to create intelligent, scalable solutions. David also has a deep interest in quantum mechanics, exploring its potential intersections with computing. Outside of work, he enjoys gaming, bouldering, watching TV shows, and traveling with his family.

Premjit Singh

Premjit Singh is a Software Development Manager with the Ring eCommerce platform at Ring. She focuses on enabling Ring customers to discover and purchase Ring products on ring.com. She is passionate about leveraging AWS AI service offerings, including Amazon Bedrock, to build agents, and about exploring Kiro’s spec-driven development paradigm. In her spare time, she enjoys watching TV shows.
