Today, we're announcing structured outputs on Amazon Bedrock, a capability that fundamentally changes how you obtain validated JSON responses from foundation models through constrained decoding for schema compliance.
This represents a paradigm shift in AI application development. Instead of validating JSON responses and writing fallback logic for when they fail, you can move straight to building with the data. With structured outputs, you can build zero-validation data pipelines that trust model outputs, reliable agentic systems that confidently call external functions, and simplified application architectures without retry logic.
In this post, we explore the challenges of traditional JSON generation and how structured outputs solves them. We cover the two core mechanisms, JSON Schema output format and strict tool use, along with implementation details, best practices, and practical code examples. Whether you're building data extraction pipelines, agentic workflows, or AI-powered APIs, you'll learn how to use structured outputs to create reliable, production-ready applications. Our companion Jupyter notebook provides hands-on examples for every feature covered here.
The problem with traditional JSON generation
For years, getting structured data from language models meant crafting detailed prompts, hoping for the best, and building elaborate error-handling systems. Even with careful prompting, developers routinely encounter:
- Parsing failures: Invalid JSON syntax that breaks `json.loads()` calls
- Missing fields: Required data points absent from responses
- Type mismatches: Strings where integers are expected, breaking downstream processing
- Schema violations: Responses that technically parse but don't match your data model
In production systems, these failures compound. A single malformed response can cascade through your pipeline, requiring retries that increase latency and costs. For agentic workflows where models call tools, invalid parameters can break function calls entirely.
Consider a booking system requiring `passengers: int`. Without schema enforcement, the model might return `passengers: "two"` or `passengers: "2"`: syntactically valid JSON, but semantically wrong for your function signature.
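To make this failure mode concrete, here is a short stdlib-only sketch; the booking payload and field names are illustrative:

```python
import json

# A model response that is syntactically valid JSON...
raw = '{"destination": "Paris", "passengers": "two"}'

booking = json.loads(raw)  # parsing succeeds

# ...but violates the type your function signature expects.
valid = isinstance(booking["passengers"], int)
print(valid)  # False: valid JSON, wrong type
```

The parse step gives no hint that anything is wrong; the error only surfaces later, wherever the integer is actually consumed.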
What changes with structured outputs
Structured outputs on Amazon Bedrock isn't an incremental improvement; it's a fundamental shift from probabilistic to deterministic output formatting. Through constrained decoding, Amazon Bedrock constrains model responses to conform to your specified JSON schema. Two complementary mechanisms are available:
| Feature | Purpose | Use case |
|---|---|---|
| JSON Schema output format | Control the model's response format | Data extraction, report generation, API responses |
| Strict tool use | Validate tool parameters | Agentic workflows, function calling, multi-step automation |
These features can be used independently or together, giving you precise control over both what the model outputs and how it calls your functions.
What structured outputs delivers:
- Always valid: No more `JSON.parse()` errors or parsing exceptions
- Type safe: Field types are enforced and required fields are always present
- Reliable: No retries needed for schema violations
- Production ready: Deploy with confidence at enterprise scale
How structured outputs works
Structured outputs uses constrained sampling with compiled grammar artifacts. Here's what happens when you make a request:
- Schema validation: Amazon Bedrock validates your JSON schema against the supported subset of JSON Schema Draft 2020-12
- Grammar compilation: For new schemas, Amazon Bedrock compiles a grammar (the first request may take longer)
- Caching: Compiled grammars are cached for 24 hours, making subsequent requests faster
- Constrained generation: The model generates tokens that produce valid JSON matching your schema
Performance considerations:
- First request latency: Initial compilation may add latency for new schemas
- Cached performance: Subsequent requests with identical schemas have minimal overhead
- Cache scope: Grammars are cached per account for 24 hours from first access
Changing the JSON schema structure or a tool's input schema invalidates the cache, but changing only name or description fields doesn't.
Getting started with structured outputs
The following example demonstrates structured outputs with the Converse API:
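The original code sample is not reproduced here, but a minimal sketch of the request might look like the following. The `outputConfig.textFormat` nesting is an assumption based on the parameter names in this post's API comparison table (with the schema passed as a JSON string), the model ID is a placeholder, and the invoice schema is illustrative; check the Amazon Bedrock documentation for the exact field names in your SDK version.

```python
import json

# Schema for the data we want back; additionalProperties: false is required.
INVOICE_SCHEMA = {
    "type": "object",
    "properties": {
        "vendor": {"type": "string", "description": "Vendor name"},
        "total": {"type": "number", "description": "Invoice total"},
        "currency": {"type": "string", "enum": ["USD", "EUR", "GBP"]},
    },
    "required": ["vendor", "total", "currency"],
    "additionalProperties": False,
}

# Converse request; the outputConfig.textFormat nesting is an assumption
# derived from the parameter names in the API comparison table below.
request = {
    "modelId": "example-model-id",  # placeholder: use a supported model ID
    "messages": [
        {"role": "user", "content": [{"text": "Extract the invoice details from: ..."}]}
    ],
    "outputConfig": {
        "textFormat": {"jsonSchema": {"schema": json.dumps(INVOICE_SCHEMA)}}
    },
}

def extract_invoice(request):
    """Send the request and parse the schema-conforming reply."""
    import boto3  # deferred so the sketch can be read without AWS credentials
    client = boto3.client("bedrock-runtime")
    response = client.converse(**request)
    return json.loads(response["output"]["message"]["content"][0]["text"])
```

Note that the schema is serialized with `json.dumps` because the Converse API expects it as a JSON string, per the comparison table later in this post.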
The response conforms to your schema, with no additional validation required.
Requirements and best practices
To use structured outputs effectively, follow these guidelines:
- Set `additionalProperties: false` on all objects. This is required for structured outputs to work; without it, your schema won't be accepted.
- Use descriptive field names and descriptions. Models use property names and descriptions to understand what data to extract. Clear names like `customer_email` outperform generic names like `field1`.
- Use `enum` for constrained values. When a field has a limited set of valid values, use `enum` to constrain the options. This improves accuracy and produces valid values.
- Start basic, then add complexity. Begin with the minimum required fields and add complexity incrementally. Basic schemas compile faster and are easier to maintain.
- Reuse schemas to benefit from caching. Structure your application to reuse schemas across requests. The 24-hour grammar cache significantly improves performance for repeated queries.
- Check `stopReason` in every response. Two scenarios can produce non-conforming responses: refusals (when the model declines for safety reasons) and token limits (when `max_tokens` is reached before completing). Handle both cases in your code.
- Test with realistic data before deployment. Validate your schemas against production-representative inputs. Edge cases in real data often reveal schema design issues.
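The `stopReason` check from the list above can be sketched as a small guard. The refusal and truncation strings used here are assumptions to verify against the documentation for your API:

```python
import json

def parse_structured_response(response):
    """Return parsed JSON only when the model completed normally.

    The stop-reason strings checked here ("max_tokens" and "refusal")
    are assumptions; confirm the exact values for your API version.
    """
    stop_reason = response.get("stopReason")
    if stop_reason == "max_tokens":
        raise ValueError("response truncated before the JSON was complete")
    if stop_reason == "refusal":
        raise ValueError("model declined to answer; no conforming JSON")
    text = response["output"]["message"]["content"][0]["text"]
    return json.loads(text)

# Usage with a mocked, already-complete Converse-style response:
ok = parse_structured_response({
    "stopReason": "end_turn",
    "output": {"message": {"content": [{"text": '{"status": "ok"}'}]}},
})
print(ok)  # {'status': 'ok'}
```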
Supported JSON Schema features:
- All basic types: `object`, `array`, `string`, `integer`, `number`, `boolean`, `null`
- `enum` (strings, numbers, booleans, or nulls only)
- `const`, `anyOf`, `allOf` (with limitations)
- `$ref`, `$defs`, and `definitions` (internal references only)
- String formats: `date-time`, `time`, `date`, `duration`, `email`, `hostname`, `uri`, `ipv4`, `ipv6`, `uuid`
- Array `minItems` (only values 0 and 1)
Not supported:
- Recursive schemas
- External `$ref` references
- Numerical constraints (`minimum`, `maximum`, `multipleOf`)
- String constraints (`minLength`, `maxLength`)
- `additionalProperties` set to anything other than `false`
Strict tool use for agentic workflows
When building applications where models call tools, set `strict: true` in your tool definition to constrain tool parameters to match your input schema exactly:
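The original tool definition is not shown here; a sketch in the Converse `toolSpec` shape, with `strict` placed per this post's API comparison table and a hypothetical `get_weather` tool, might look like:

```python
# Tool definition with strict: true. The toolSpec/inputSchema nesting
# follows the Converse API; the get_weather tool itself is hypothetical.
weather_tool = {
    "toolSpec": {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "strict": True,  # constrain tool inputs to the schema below
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                    "unit": {"enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location", "unit"],
                "additionalProperties": False,
            }
        },
    }
}
```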
With `strict: true`, structured outputs constrains the output so that:
- The `location` field is always a string
- The `unit` field is always either `celsius` or `fahrenheit`
- No unexpected fields appear in the input
Practical applications across industries
The notebook demonstrates use cases that span industries:
- Financial services: Extract structured data from earnings reports, loan applications, and compliance documents. With structured outputs, every required field is present and correctly typed for downstream processing.
- Healthcare: Parse clinical notes into structured, schema-compliant records. Extract patient information, diagnoses, and treatment plans into validated JSON for EHR integration.
- Ecommerce: Build reliable product catalog enrichment pipelines. Extract specifications, categories, and attributes from product descriptions with consistent, reliable results.
- Legal: Analyze contracts and extract key terms, parties, dates, and obligations into structured formats suitable for contract management systems.
- Customer service: Build intelligent ticket routing and response systems where extracted intents, sentiments, and entities match your application's data model.
Choosing the right approach
Our testing revealed clear patterns for when to use each feature:
Use JSON Schema output format when:
- You need the model's response in a specific structure
- Building data extraction pipelines
- Generating API-ready responses
- Creating structured reports or summaries
Use strict tool use when:
- Building agentic systems that call external functions
- Implementing multi-step workflows with tool chains
- Requiring validated parameter types for function calls
- Connecting AI to databases, APIs, or external services
Use both together when:
- Building complex agents that need validated tool calls and structured final responses
- Creating systems where intermediate tool results feed into structured outputs
- Implementing business workflows requiring end-to-end schema compliance
API comparison: Converse versus InvokeModel
Both the Converse API and the InvokeModel API support structured outputs, with slightly different parameter formats:

| Aspect | Converse API | InvokeModel (Anthropic Claude) | InvokeModel (open-weight models) |
|---|---|---|---|
| Schema location | `outputConfig.textFormat` | `output_config.format` | `response_format` |
| Tool strict flag | `toolSpec.strict` | `tools[].strict` | `tools[].function.strict` |
| Schema format | JSON string in `jsonSchema.schema` | JSON object in `schema` | JSON object in `json_schema.schema` |
| Best for | Conversational workflows | Single-turn inference (Claude) | Single-turn inference (open-weight) |
Note: The InvokeModel API uses different request field names depending on the model type. For Anthropic Claude models, use `output_config.format` for JSON schema outputs. For open-weight models, use `response_format` instead.
Choose the Converse API for multi-turn conversations and the InvokeModel API when you need direct model access with provider-specific request formats.
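As a sketch of the difference, the same schema might be attached to an InvokeModel request body in two ways. The surrounding body fields (`messages`, `max_tokens`) are illustrative, and the nesting inside `output_config.format` and `response_format` is an assumption derived from the table above:

```python
import json

SCHEMA = {
    "type": "object",
    "properties": {"summary": {"type": "string"}},
    "required": ["summary"],
    "additionalProperties": False,
}

# Anthropic Claude via InvokeModel: schema as a JSON object under
# output_config.format (nesting assumed from the comparison table).
claude_body = {
    "messages": [{"role": "user", "content": "Summarize ..."}],
    "max_tokens": 512,
    "output_config": {"format": {"schema": SCHEMA}},
}

# Open-weight models via InvokeModel: schema under response_format,
# as a JSON object in json_schema.schema per the comparison table.
open_weight_body = {
    "messages": [{"role": "user", "content": "Summarize ..."}],
    "max_tokens": 512,
    "response_format": {"json_schema": {"schema": SCHEMA}},
}

# Either body would be serialized and passed as invoke_model(body=...).
payload = json.dumps(claude_body)
```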
Supported models and availability
Structured outputs is generally available in all commercial AWS Regions for select Amazon Bedrock model providers:
- Anthropic
- DeepSeek
- MiniMax
- Mistral AI
- Moonshot AI
- NVIDIA
- OpenAI
- Qwen
The feature works seamlessly with:
- Cross-Region inference: Use structured outputs across AWS Regions without additional setup
- Batch inference: Process large volumes with schema-compliant outputs
- Streaming: Stream structured responses with `ConverseStream` or `InvokeModelWithResponseStream`
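When streaming, the JSON arrives as text deltas that only parse once the stream is complete. Here is a sketch of accumulating a `ConverseStream`-style event stream; the event shape mirrors the Converse delta events, and the sample events are mocked:

```python
import json

def collect_structured_text(stream):
    """Accumulate contentBlockDelta text events into one JSON document.

    With a real client you would pass the iterable from
    client.converse_stream(...)["stream"] here.
    """
    parts = []
    for event in stream:
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        if "text" in delta:
            parts.append(delta["text"])
    return json.loads("".join(parts))

# Mocked events standing in for a real streamed response:
events = [
    {"contentBlockDelta": {"delta": {"text": '{"name": '}}},
    {"contentBlockDelta": {"delta": {"text": '"Ada"}'}}},
    {"messageStop": {"stopReason": "end_turn"}},
]
print(collect_structured_text(events))  # {'name': 'Ada'}
```

Because constrained decoding guarantees the completed document matches your schema, the only streaming-specific concern is waiting for the final event before parsing.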
Conclusion
In this post, you learned how structured outputs on Amazon Bedrock removes the uncertainty of AI-generated JSON through validated, schema-compliant responses. By using JSON Schema output format and strict tool use, you can build reliable data extraction pipelines, robust agentic workflows, and production-ready AI applications without custom parsing or validation logic. Whether you're extracting data from documents, building intelligent automation, or creating AI-powered APIs, structured outputs delivers the reliability your applications demand.
Structured outputs is now generally available on Amazon Bedrock. To use structured outputs with the Converse APIs, update to the latest AWS SDK. To learn more, see the Amazon Bedrock documentation and explore our sample notebook.
What workflows could validated, schema-compliant JSON unlock in your organization? The notebook provides everything you need to find out.
About the authors