Generative AI is revolutionizing enterprise operations through various applications, including conversational assistants such as Amazon's Rufus and Amazon Seller Assistant. Moreover, some of the most impactful generative AI applications operate autonomously behind the scenes, an essential capability that empowers enterprises to transform their operations, data processing, and content creation at scale. These non-conversational implementations, often in the form of agentic workflows powered by large language models (LLMs), execute specific business objectives across industries without direct user interaction.
Non-conversational applications offer distinct advantages such as higher latency tolerance, batch processing, and caching, but their autonomous nature requires stronger guardrails and exhaustive quality assurance compared to conversational applications, which benefit from real-time user feedback and supervision.
This post examines four diverse examples of such generative AI applications at Amazon.com.
Each case study reveals different aspects of implementing non-conversational generative AI applications, from technical architecture to operational considerations. Throughout these examples, you'll learn how the comprehensive suite of AWS services, including Amazon Bedrock and Amazon SageMaker, is key to success. Finally, we list key learnings commonly shared across these use cases.
Creating high-quality product listings on Amazon.com
Creating high-quality product listings with comprehensive details helps customers make informed purchase decisions. Traditionally, selling partners manually entered dozens of attributes per product. The new generative AI solution, launched in 2024, transforms this process by proactively acquiring product information from brand websites and other sources to improve the customer experience across numerous product categories.
Generative AI simplifies the selling partner experience by enabling information input in various formats such as URLs, product images, or spreadsheets and automatically translating it into the required structure and format. Over 900,000 selling partners have used it, with nearly 80% of generated listing drafts accepted with minimal edits. AI-generated content provides comprehensive product details that help with clarity and accuracy, which can contribute to product discoverability in customer searches.
For new listings, the workflow begins with selling partners providing initial information. The system then generates comprehensive listings using multiple information sources, including titles, descriptions, and detailed attributes. Generated listings are shared with selling partners for approval or editing.
For existing listings, the system identifies products that can be enriched with additional data.
Data integration and processing for a large variety of outputs
The Amazon team built robust connectors for internal and external sources with LLM-friendly APIs using Amazon Bedrock and other AWS services to integrate seamlessly with Amazon.com backend systems.
A key challenge is synthesizing diverse data into cohesive listings across more than 50 attributes, both textual and numerical. LLMs require specific control mechanisms and instructions to accurately interpret ecommerce concepts because they might not perform optimally with such complex, varied data. For example, LLMs might misinterpret "capacity" in a knife block as dimensions rather than number of slots, or mistake "Fit Wear" as a style description instead of a brand name. Prompt engineering and fine-tuning were used extensively to address these cases.
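As a loose illustration of how such control instructions can be supplied, the following Python sketch injects a small attribute glossary into an extraction prompt so the model interprets ambiguous terms correctly. The glossary entries and prompt wording are hypothetical, not the production prompts:

```python
# Hypothetical attribute glossary; the rules below are illustrative examples
# of the kind of disambiguation instructions described in the text.
ATTRIBUTE_DEFINITIONS = {
    "capacity": "For knife blocks, the number of knife slots, not the physical dimensions.",
    "brand": "The brand name exactly as provided; do not treat it as a style descriptor.",
}

def build_extraction_prompt(category: str, source_text: str) -> str:
    """Assemble an attribute-extraction prompt that embeds the glossary."""
    glossary = "\n".join(f"- {name}: {rule}" for name, rule in ATTRIBUTE_DEFINITIONS.items())
    return (
        f"Extract product attributes for a listing in category '{category}'.\n"
        f"Interpret attributes using these definitions:\n{glossary}\n"
        f"Source information:\n{source_text}"
    )

prompt = build_extraction_prompt("knife block", "Fit Wear 6-slot bamboo knife block")
```

The prompt would then be sent to the model of choice, for example through an Amazon Bedrock invocation.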
Generation and validation with LLMs
The generated product listings should be complete and correct. To support this, the solution implements a multistep workflow using LLMs for both generation and validation of attributes. This dual-LLM approach helps prevent hallucinations, which is crucial when dealing with safety hazards or technical specifications. The team developed advanced self-reflection techniques to make sure the generation and validation processes complement each other effectively.
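A minimal sketch of such a generate-then-validate loop follows. Here `generate_attributes` and `validate_attributes` are stand-ins for the two LLM calls (for example, via Amazon Bedrock), and the retry policy is an assumption for illustration, not the production logic:

```python
# Illustrative sketch only: both functions below stand in for LLM calls.

def generate_attributes(source_info: dict) -> dict:
    """Generator LLM: draft listing attributes from source information (stub)."""
    return {"title": source_info.get("raw_title", "").title(), "slots": 6}

def validate_attributes(attributes: dict) -> list[str]:
    """Validator LLM: return a list of issues; empty means the draft passes (stub)."""
    issues = []
    if not attributes.get("title"):
        issues.append("missing title")
    return issues

def create_listing(source_info: dict, max_attempts: int = 3) -> dict:
    """Generate, validate, and regenerate until the draft passes or attempts run out."""
    for _ in range(max_attempts):
        draft = generate_attributes(source_info)
        if not validate_attributes(draft):
            return draft  # passed validation; ready for selling partner review
    raise RuntimeError("listing failed validation after retries")

listing = create_listing({"raw_title": "knife block with 6 slots"})
```

Separating the generator and validator roles means a hallucinated attribute must slip past two independent model calls before reaching a selling partner.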
The following figure illustrates the generation process, with validation also performed by LLMs.

Figure 1. Product listing creation workflow
Multi-layer quality assurance with human feedback
Human feedback is central to the solution's quality assurance. The process includes Amazon.com experts for initial evaluation and selling partner input for acceptance or edits. This provides high-quality output and enables ongoing enhancement of the AI models.
The quality assurance process includes automated testing methods combining ML-, algorithm-, or LLM-based evaluations. Failed listings undergo regeneration, and successful listings proceed to further testing. Using causal inference models, we identify underlying features affecting listing performance and opportunities for enrichment. Ultimately, listings that pass quality checks and receive selling partner acceptance are published, making sure customers receive accurate and comprehensive product information.
The following figure illustrates the workflow of going to production with testing, evaluation, and monitoring of product listing generation.

Figure 2. Product listing testing and human-in-the-loop workflow
Application-level system optimization for accuracy and cost
Given the high standards for accuracy and completeness, the team adopted a comprehensive experimentation approach with an automated optimization system. This system explores various combinations of LLMs, prompts, playbooks, workflows, and AI tools to iterate toward better business metrics, including cost. Through continuous evaluation and automated testing, the product listing generator effectively balances performance, cost, and efficiency while staying adaptable to new AI developments. As a result, customers benefit from high-quality product information, and selling partners have access to cutting-edge tools for creating listings efficiently.
Generative AI-powered prescription processing in Amazon Pharmacy
Building upon the human-AI hybrid workflows discussed in the seller listing example, Amazon Pharmacy demonstrates how these principles can be applied in a Health Insurance Portability and Accountability Act (HIPAA)-regulated industry. Having covered a conversational assistant for patient care specialists in the post Learn how Amazon Pharmacy created their LLM-based chat-bot using Amazon SageMaker, we now focus on automated prescription processing, which you can read about in The life of a prescription at Amazon Pharmacy and the related research paper in Nature.
At Amazon Pharmacy, we developed an AI system built on Amazon Bedrock and SageMaker to help pharmacy technicians process medication directions more accurately and efficiently. This solution integrates human experts with LLMs in creation and validation roles to enhance the precision of medication instructions for our patients.
Agentic workflow design for healthcare accuracy
The prescription processing system combines human expertise (data entry technicians and pharmacists) with AI assistance for direction suggestions and feedback. The workflow, shown in the following diagram, begins with a pharmacy knowledge-based preprocessor standardizing raw prescription text in Amazon DynamoDB, followed by fine-tuned small language models (SLMs) on SageMaker identifying critical components (dosage, frequency).
Figure 3. (a) Data entry technician and pharmacist workflow with two GenAI modules, (b) suggestion module workflow, and (c) flagging module workflow
The system seamlessly integrates experts such as data entry technicians and pharmacists, with generative AI enhancing the overall workflow toward agility and accuracy to better serve our patients. A direction assembly system with safety guardrails then generates instructions for data entry technicians to create their typed directions through the suggestion module. The flagging module flags or corrects errors and enforces further safety measures as feedback provided to the data entry technician. The technician finalizes highly accurate, safe typed directions for pharmacists, who can either provide feedback or execute the directions to the downstream service.
One highlight of the solution is its use of task decomposition, which empowers engineers and scientists to break the overall process into a multitude of steps, with individual modules made of substeps. The team extensively used fine-tuned SLMs. In addition, the process employs traditional ML procedures such as named entity recognition (NER) or estimation of final confidence with regression models. Using SLMs and traditional ML in such contained, well-defined procedures significantly improved processing speed while maintaining rigorous safety standards, thanks to the incorporation of appropriate guardrails at specific steps.
The system comprises multiple well-defined substeps, with each subprocess operating as a specialized component working semi-autonomously yet collaboratively within the workflow toward the overall objective. This decomposed approach, with specific validations at each stage, proved more effective than end-to-end solutions while enabling the use of fine-tuned SLMs. The team used AWS Fargate to orchestrate the workflow given its integration into existing backend systems.
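The decomposition idea can be sketched as a chain of small, individually testable steps. Every function below is a stub standing in for a production component (a fine-tuned SLM, an NER model, or a confidence regressor); the regex heuristics exist only to make the sketch runnable:

```python
import re

def preprocess(raw: str) -> str:
    """Normalize raw prescription text (stub for the knowledge-based preprocessor)."""
    return " ".join(raw.lower().split())

def extract_components(text: str) -> dict:
    """Identify critical components such as dosage and frequency (stub for SLM/NER)."""
    dose = re.search(r"(\d+)\s*tablet", text)
    freq = re.search(r"(\d+)\s*times daily", text)
    return {
        "dose": int(dose.group(1)) if dose else None,
        "frequency": int(freq.group(1)) if freq else None,
    }

def confidence(components: dict) -> float:
    """Stub for a regression model estimating final confidence."""
    found = sum(v is not None for v in components.values())
    return found / len(components)

def process_direction(raw: str) -> dict:
    """Run the decomposed pipeline: preprocess, extract, score."""
    text = preprocess(raw)
    components = extract_components(text)
    return {"components": components, "confidence": confidence(components)}

result = process_direction("Take 1 TABLET by mouth 2 times daily")
```

Because each step has a narrow contract, guardrails and validations can be attached per step rather than to one opaque end-to-end model.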
In their product development journey, the team turned to Amazon Bedrock, which provided high-performing LLMs with ease-of-use features tailored to generative AI applications. SageMaker enabled further LLM choices, deeper customizability, and traditional ML methods. To learn more about this approach, see How task decomposition and smaller LLMs can make AI more affordable, and read the Amazon Pharmacy business case study.
Building a reliable application with guardrails and HITL
To comply with HIPAA standards and protect patient privacy, we implemented strict data governance practices alongside a hybrid approach that combines fine-tuned LLMs using Amazon Bedrock APIs with Retrieval Augmented Generation (RAG) using Amazon OpenSearch Service. This combination enables efficient knowledge retrieval while maintaining high accuracy for specific subtasks.
Managing LLM hallucinations, which is crucial in healthcare, required more than just fine-tuning on large datasets. Our solution implements domain-specific guardrails built on Amazon Bedrock Guardrails, complemented by human-in-the-loop (HITL) oversight to promote system reliability.
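One common way to combine guardrails with HITL oversight is confidence-based routing: outputs that violate a guardrail or fall below a confidence threshold are escalated to a human reviewer. The sketch below illustrates the pattern; the blocked-term check, threshold, and labels are illustrative assumptions, not the production policy:

```python
# Stand-in for a domain guardrail policy; a real system would use a managed
# service such as Amazon Bedrock Guardrails rather than a term list.
BLOCKED_TERMS = {"unknown drug"}

def guardrail_check(direction: str) -> bool:
    """Return True if the direction passes the (stubbed) guardrail."""
    return not any(term in direction.lower() for term in BLOCKED_TERMS)

def route(direction: str, model_confidence: float, threshold: float = 0.9) -> str:
    """Decide whether a generated direction is auto-accepted or sent to a human."""
    if not guardrail_check(direction):
        return "human_review"   # guardrail violation always escalates
    if model_confidence < threshold:
        return "human_review"   # HITL oversight for uncertain outputs
    return "auto_accept"

decision = route("Take 1 tablet by mouth twice daily", model_confidence=0.97)
```

The threshold becomes a tunable dial between automation rate and human workload, which is how risk profiles can differ per prescription type.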
The Amazon Pharmacy team continues to enhance this system through real-time pharmacist feedback and expanded prescription format capabilities. This balanced approach of innovation, domain expertise, advanced AI services, and human oversight not only improves operational efficiency, but makes sure that the AI system properly augments healthcare professionals in delivering optimal patient care.
Generative AI-powered customer review highlights
While our previous example showcased how Amazon Pharmacy integrates LLMs into real-time workflows for prescription processing, this next use case demonstrates how similar techniques (SLMs, traditional ML, and thoughtful workflow design) can be applied to offline batch inferencing at massive scale.
Amazon has launched AI-generated customer review highlights to process over 200 million annual product reviews and ratings. This feature distills shared customer opinions into concise paragraphs highlighting positive, neutral, and negative feedback about products and their features. Shoppers can quickly grasp consensus, while the feature maintains transparency by providing access to related customer reviews and keeping original reviews accessible.
The system enhances purchase decisions through an interface where customers can explore review highlights by selecting specific features (such as picture quality, remote functionality, or ease of installation for a Fire TV). Features are visually coded with green check marks for positive sentiment, orange minus signs for negative, and gray for neutral, so shoppers can quickly identify product strengths and weaknesses based on verified purchase reviews. The following screenshot shows review highlights regarding noise level for a product.

Figure 4. Example review highlights for a product
A recipe for cost-effective use of LLMs for offline use cases
The team developed a cost-effective hybrid architecture combining traditional ML techniques with specialized SLMs. This approach assigns sentiment analysis and keyword extraction to traditional ML while using optimized SLMs for complex text generation tasks, improving both accuracy and processing efficiency. The following diagram shows traditional ML and LLMs working together in the overall workflow.

Figure 5. Use of traditional ML and LLMs in a workflow
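This division of labor can be sketched as follows. The lexicon-based sentiment scorer and frequency-based keyword extractor stand in for the traditional ML components, and `summarize` stands in for the SLM that writes the highlight paragraph; all lexicons and heuristics are illustrative:

```python
import re
from collections import Counter

# Tiny illustrative lexicons, not production sentiment models.
POSITIVE = {"great", "quiet", "easy"}
NEGATIVE = {"loud", "broken", "hard"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def sentiment(review: str) -> str:
    """Lexicon-based stand-in for a traditional ML sentiment classifier."""
    words = set(tokenize(review))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def top_keywords(reviews: list[str], k: int = 3) -> list[str]:
    """Frequency-based stand-in for keyword extraction."""
    counts = Counter(w for r in reviews for w in tokenize(r) if len(w) > 4)
    return [w for w, _ in counts.most_common(k)]

def summarize(reviews: list[str]) -> str:
    """Stub for the SLM that writes the final highlight paragraph."""
    labels = Counter(sentiment(r) for r in reviews)
    majority = labels.most_common(1)[0][0]
    topics = ", ".join(top_keywords(reviews))
    return f"Customers are mostly {majority}; common topics: {topics}."

highlight = summarize(["Great blender, very quiet", "Quiet and easy to clean", "Motor is loud"])
```

Only the final summarization step would incur LLM inference cost; the cheap upstream steps shrink and structure its input.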
The feature employs SageMaker batch transform for asynchronous processing, significantly reducing costs compared to real-time endpoints. To deliver a near zero-latency experience, the solution caches extracted insights alongside existing reviews, reducing wait times and enabling simultaneous access by multiple customers without additional computation. The system processes new reviews incrementally, updating insights without reprocessing the entire dataset. For optimal performance and cost-effectiveness, the feature uses Amazon Elastic Compute Cloud (Amazon EC2) Inf2 instances for batch transform jobs, providing up to 40% better price-performance over alternatives.
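The incremental-processing-plus-caching pattern can be sketched in a few lines: recompute a product's insights only when its review set has grown, and serve the cached copy otherwise. The cache key, the change check, and `compute_insights` are illustrative assumptions, not the production design:

```python
# product_id -> {"review_count": int, "insights": str}; a real system might
# keep this in a datastore such as DynamoDB rather than in memory.
cache: dict[str, dict] = {}

def compute_insights(reviews: list[str]) -> str:
    """Stub for the batch LLM summarization job."""
    return f"summary of {len(reviews)} reviews"

def get_insights(product_id: str, reviews: list[str]) -> tuple[str, bool]:
    """Return (insights, recomputed); recompute only when the review set grew."""
    entry = cache.get(product_id)
    if entry and entry["review_count"] == len(reviews):
        return entry["insights"], False  # cache hit: near zero-latency path
    insights = compute_insights(reviews)
    cache[product_id] = {"review_count": len(reviews), "insights": insights}
    return insights, True

first, recomputed_first = get_insights("B000123", ["great", "ok"])
second, recomputed_second = get_insights("B000123", ["great", "ok"])
```

At serving time every request takes the cache-hit path; the expensive path runs only inside the offline batch job when new reviews arrive.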
By following this comprehensive approach, the team effectively managed costs while handling the massive scale of reviews and products, keeping the solution both efficient and scalable.
Amazon Ads AI-powered creative image and video generation
Having explored largely text-centric generative AI applications in the previous examples, we now turn to multimodal generative AI with Amazon Ads creative content generation for sponsored ads. The solution has capabilities for both image and video generation, the details of which we share in this section. Both capabilities use Amazon Nova creative content generation models at their core.
Working backward from customer needs, a March 2023 Amazon survey revealed that nearly 75% of advertisers struggling with campaign success cited creative content generation as their primary challenge. Many advertisers, particularly those without in-house capabilities or agency support, face significant barriers due to the expertise and cost required to produce quality visuals. The Amazon Ads solution democratizes visual content creation, making it accessible and efficient for advertisers of all sizes. The impact has been substantial: advertisers using AI-generated images in Sponsored Brands campaigns saw nearly 8% higher click-through rates (CTR) and submitted 88% more campaigns than non-users.
Last year, the AWS Machine Learning Blog published a post detailing the image generation solution. Since then, Amazon has adopted Amazon Nova Canvas as its foundation for creative image generation, creating professional-grade images from text or image prompts, with features for text-based editing and controls for color scheme and layout adjustments.
In September 2024, the Amazon Ads team introduced the creation of short-form video ads from product images. This feature uses foundation models available on Amazon Bedrock to give customers control over visual style, pacing, camera motion, rotation, and zooming through natural language, using an agentic workflow to first describe video storyboards and then generate the content for the story. The following screenshot shows an example of creative image generation for product backgrounds in Amazon Ads.

Figure 6. Ads image generation example for a product
As discussed in the original post, responsible AI is at the center of the solution, and Amazon Nova creative models come with built-in controls to support safety and responsible AI use, including watermarking and content moderation.
The solution uses AWS Step Functions with AWS Lambda functions for serverless orchestration of both the image and video generation processes. Generated content is stored in Amazon Simple Storage Service (Amazon S3) with metadata in DynamoDB, and Amazon API Gateway provides customer access to the generation capabilities. The solution now employs Amazon Bedrock Guardrails in addition to maintaining Amazon Rekognition and Amazon Comprehend integration at various steps for added safety checks. The following screenshot shows AI-generated creative videos in the Amazon Ads campaign builder.

Figure 7. Ads video generation for a product
Creating high-quality ad creatives at scale presented complex challenges. The generative AI model needed to produce appealing, brand-appropriate images across diverse product categories and advertising contexts while remaining accessible to advertisers regardless of technical expertise. Quality assurance and improvement are fundamental to both the image and video generation capabilities. The system undergoes continual enhancement through extensive HITL processes enabled by Amazon SageMaker Ground Truth. This implementation delivers a robust tool that transforms advertisers' creative process, making high-quality visual content creation more accessible across diverse product categories and contexts.
This is just the beginning of Amazon Ads using generative AI to empower advertising customers to create the content they need to meet their advertising goals. The solution demonstrates how lowering creative barriers directly increases advertising activity while maintaining high standards for responsible AI use.
Key technical learnings and discussion
Non-conversational applications benefit from higher latency tolerance, enabling batch processing and caching, but require robust validation mechanisms and stronger guardrails due to their autonomous nature. These insights apply to both non-conversational and conversational AI implementations:
- Task decomposition and agentic workflows – Breaking complex problems into smaller components has proven valuable across implementations. This deliberate decomposition by domain experts enables specialized models for specific subtasks, as demonstrated in Amazon Pharmacy prescription processing, where fine-tuned SLMs handle discrete tasks such as dosage identification. This strategy allows for specialized agents with clear validation steps, improving reliability and simplifying maintenance. The Amazon seller listing use case exemplifies this through its multistep workflow with separate generation and validation processes. Additionally, the review highlights use case showcased cost-effective and controlled use of LLMs by using traditional ML for preprocessing and for performing parts that could otherwise be assigned to an LLM.
- Hybrid architectures and model selection – Combining traditional ML with LLMs provides better control and cost-effectiveness than pure LLM approaches. Traditional ML excels at well-defined tasks, as shown in the review highlights system for sentiment analysis and data extraction. Amazon teams have strategically deployed both large and small language models based on requirements, integrating RAG with fine-tuning for effective domain-specific applications like the Amazon Pharmacy implementation.
- Cost optimization strategies – Amazon teams achieved efficiency through batch processing, caching mechanisms for high-volume operations, specialized instance types such as AWS Inferentia and AWS Trainium, and optimized model selection. Review highlights demonstrates how incremental processing reduces computational needs, and Amazon Ads used Amazon Nova foundation models (FMs) to cost-effectively create creative content.
- Quality assurance and control mechanisms – Quality control relies on domain-specific guardrails through Amazon Bedrock Guardrails and multilayered validation combining automated testing with human evaluation. Dual-LLM approaches for generation and validation help prevent hallucinations in Amazon seller listings, and self-reflection techniques improve accuracy. Amazon Nova creative FMs provide inherent responsible AI controls, complemented by continual A/B testing and performance measurement.
- HITL implementation – The HITL approach spans multiple layers, from expert evaluation by pharmacists to end-user feedback from selling partners. Amazon teams established structured improvement workflows, balancing automation and human oversight based on specific domain requirements and risk profiles.
- Responsible AI and compliance – Responsible AI practices include content ingestion guardrails for regulated environments and adherence to regulations such as HIPAA. Amazon teams integrated content moderation for user-facing applications, maintained transparency in review highlights by providing access to source information, and implemented data governance with monitoring to promote quality and compliance.
These patterns enable scalable, reliable, and cost-effective generative AI solutions while maintaining quality and responsibility standards. The implementations demonstrate that effective solutions require not just sophisticated models, but careful attention to architecture, operations, and governance, supported by AWS services and established practices.
Next steps
The examples from Amazon.com shared in this post illustrate how generative AI can create value beyond traditional conversational assistants. We invite you to follow these examples or create your own solution to discover how generative AI can reinvent your business or even your industry. You can visit the AWS generative AI use cases page to start the ideation process.
These examples showed that effective generative AI implementations often benefit from combining different types of models and workflows. To learn which FMs are supported by AWS services, refer to Supported foundation models in Amazon Bedrock and Amazon SageMaker JumpStart Foundation Models. We also suggest you explore Amazon Bedrock Flows, which can ease the path toward building workflows. Additionally, Trainium and Inferentia accelerators provide significant cost savings in these applications.
Agentic workflows, as illustrated in our examples, have proven particularly valuable. We recommend exploring Amazon Bedrock Agents for quickly building agentic workflows.
Successful generative AI implementation extends beyond model selection: it represents a comprehensive software development process from experimentation to application monitoring. To begin building your foundation across these essential services, we invite you to explore Amazon QuickStart.
Conclusion
These examples demonstrate how generative AI extends beyond conversational assistants to drive innovation and efficiency across industries. Success comes from combining AWS services with strong engineering practices and business understanding. Ultimately, effective generative AI solutions focus on solving real business problems while maintaining high standards of quality and responsibility.
To learn more about how Amazon uses AI, refer to Artificial Intelligence in Amazon News.
About the Authors
Burak Gozluklu is a Principal AI/ML Specialist Solutions Architect and lead GenAI Scientist Architect for Amazon.com at AWS, based in Boston, MA. He helps strategic customers adopt AWS technologies, and specifically generative AI solutions, to achieve their business objectives. Burak has a PhD in Aerospace Engineering from METU, an MS in Systems Engineering, and a post-doc in system dynamics from MIT in Cambridge, MA. He maintains his connection to academia as a research affiliate at MIT. Outside of work, Burak is a yoga enthusiast.
Emilio Maldonado is a Senior leader at Amazon responsible for Product Knowledge, oriented at building systems to scale the e-commerce catalog metadata, organize all product attributes, and leverage GenAI to infer precise information that guides Sellers and Shoppers to interact with products. He's passionate about creating dynamic teams and forming partnerships. He holds a Bachelor of Science in C.S. from Tecnologico de Monterrey (ITESM) and an MBA from Wharton, University of Pennsylvania.
Wenchao Tong is a Sr. Principal Technologist at Amazon Ads in Palo Alto, CA, where he spearheads the development of GenAI applications for creative building and performance optimization. His work empowers customers to enhance product and brand awareness and drive sales by leveraging innovative AI technologies to improve creative performance and quality. Wenchao holds a Master's degree in Computer Science from Tongji University. Outside of work, he enjoys hiking, board games, and spending time with his family.
Alexandre Alves is a Sr. Principal Engineer at Amazon Health Services, specializing in ML, optimization, and distributed systems. He helps deliver wellness-forward health experiences.
Puneet Sahni is a Sr. Principal Engineer at Amazon. He works on improving the data quality of all products available in the Amazon catalog. He's passionate about leveraging product data to improve customer experiences. He has a Master's degree in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay. Outside of work, he enjoys spending time with his young kids and traveling.
Vaughn Schermerhorn is a Director at Amazon, where he leads Shopping Discovery and Evaluation, spanning Customer Reviews, content moderation, and site navigation across Amazon's global marketplaces. He manages a multidisciplinary organization of applied scientists, engineers, and product leaders focused on surfacing trustworthy customer insights through scalable ML models, multimodal information retrieval, and real-time system architecture. His team develops and operates large-scale distributed systems that power billions of shopping decisions daily. Vaughn holds degrees from Georgetown University and San Diego State University and has lived and worked in the U.S., Germany, and Argentina. Outside of work, he enjoys reading, travel, and time with his family.
Tarik Arici is a Principal Applied Scientist at Amazon Selection and Catalog Systems (ASCS), working on catalog quality enhancement using GenAI workflows. He has a PhD in Electrical and Computer Engineering from Georgia Tech. Outside of work, Tarik enjoys swimming and biking.