The most common interface for interacting with LLMs is the traditional chat UI found in ChatGPT, Gemini, or DeepSeek. The interface is simple: the user inputs a body of text, and the model responds with another body of text, which may or may not follow a particular structure. Since humans can understand unstructured natural language, this interface is suitable and quite effective for the audience it was designed for.
However, the user base of LLMs is much larger than the 8 billion humans living on Earth. It extends to the millions of software programs that could potentially harness the power of such large generative models. Unlike humans, software programs cannot understand unstructured data, which prevents them from exploiting the output of these neural networks.
To address this issue, various techniques have been developed to make LLM outputs follow a predefined schema. This article reviews three of the most popular approaches for generating structured outputs from LLMs. It is written for engineers interested in integrating LLMs into their software applications.
Structured Output Generation
Structured output generation from LLMs involves using these models to produce data that adheres to a predefined schema, rather than producing unstructured text. The schema can be defined in various formats, with JSON and regex being the most common. For example, when using the JSON format, the schema specifies the expected keys and the data types (such as int, string, float, etc.) of each value. The LLM then outputs a JSON object that includes only the defined keys and correctly formatted values.
There are many situations where structured output is required from LLMs. Formatting unstructured bodies of text is one large application area of this technology. You can use a model to extract specific information from large bodies of text or even images (using VLMs). For example, you can use a general-purpose VLM to extract the purchase date, total price, and store name from receipts.
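As a concrete illustration, the receipt schema just mentioned could be expressed as a Pydantic model; the field names and types below are my own illustrative choices:

from datetime import date
from pydantic import BaseModel

# Illustrative receipt schema; field names and types are assumptions.
class Receipt(BaseModel):
    purchase_date: date
    total_price: float
    store_name: str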
There are many ways to generate structured outputs from LLMs. This article will discuss three:
- Relying on API Providers
- Prompting and Reprompting Techniques
- Constrained Decoding
Relying on API Providers’ ‘Magic’
Several LLM API providers, including OpenAI and Google’s Gemini, allow users to define a schema for the model’s output. This schema is usually defined using a Pydantic class and supplied to the API endpoint. If you are using LangChain, you can follow this tutorial to integrate structured outputs into your application.
Simplicity is the greatest strength of this particular approach. You define the required schema in a manner familiar to you, pass it to the API provider, and sit back and relax as the service provider performs all the magic for you.
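As a rough sketch, here is what this looks like with the OpenAI Python SDK’s Pydantic parsing helper (the model name and prompt are illustrative, and the exact interface may change, so check the provider’s docs):

from pydantic import BaseModel
from openai import OpenAI

class Person(BaseModel):
    name: str
    age: int

client = OpenAI()
# The SDK constrains and parses the response into the Pydantic model.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    response_format=Person,
    messages=[{"role": "user", "content": "Extract: John is 30"}],
)
person = completion.choices[0].message.parsed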
Using this approach, however, limits you to the API providers that offer the described service. This restricts the growth and flexibility of your projects, as it shuts the door on using other models, particularly open-source ones. If the API provider suddenly decides to spike the price of the service, you will be forced either to accept the extra costs or to look for another provider.
Moreover, it is not exactly Hogwarts magic that the service provider performs. The provider follows a specific technique to generate the structured output for you. Knowledge of the underlying technology will facilitate app development and accelerate debugging and error understanding. For these reasons, grasping the underlying science is probably worth the effort.
Prompting and Reprompting-Based Techniques
If you have chatted with an LLM before, this approach has probably crossed your mind. If you want a model to follow a certain structure, just tell it to do so! In the system prompt, instruct the model to follow a certain structure, provide a few examples, and ask it not to add any extra text or description.
After the model responds to the user request and the system receives the output, you should use a parser to transform the sequence of bytes into an appropriate representation in the system. If parsing succeeds, congratulate yourself and thank the power of prompt engineering. If parsing fails, your system must recover from the error.
Prompting Is Not Enough
The problem with prompting is unreliability. On its own, prompting is not enough to trust a model to follow a required structure. It might add extra explanation, disregard certain fields, or use an incorrect data type. Prompting can and should be coupled with error-recovery strategies that handle the case where the model defies the schema, which is detected by a parsing failure.
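A minimal sketch of such a recovery loop is shown below; the call_llm helper is hypothetical, and the retry budget is an arbitrary choice:

import json

def generate_structured(prompt: str, max_retries: int = 3) -> dict:
    # Prompt, parse, and reprompt on failure (illustrative sketch).
    for _ in range(max_retries):
        raw = call_llm(prompt)  # hypothetical LLM call
        try:
            return json.loads(raw)  # parsing doubles as validation
        except json.JSONDecodeError as err:
            # Feed the parse error back so the model can correct itself.
            prompt = f"{prompt}\n\nYour last reply was invalid JSON ({err}). Reply with JSON only."
    raise ValueError("No valid JSON within the retry budget")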
Some people might think that a parser acts like a boolean function: it takes a string as input, checks its adherence to predefined grammar rules, and returns a simple ‘yes’ or ‘no’ answer. In reality, parsers are more sophisticated than that and provide much richer information than ‘follows’ or ‘does not follow’ the structure.
Parsers can detect errors and locate incorrect tokens in input text according to grammar rules (Aho et al. 2007, 192–96). This gives us valuable information on the specifics of misalignments in the input string. For example, a parser is what detects a missing-semicolon error when you compile Java code.
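Python’s built-in JSON parser illustrates this: on failure it reports where and why parsing broke, not just that it failed.

import json

try:
    json.loads('{"name": "John", "age": }')  # malformed on purpose
except json.JSONDecodeError as err:
    # The exception carries the failure position, not just a verdict.
    print(err.msg, "at line", err.lineno, "column", err.colno)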
Figure 1 depicts the flow used in prompting-based techniques.

Prompting Tools
One of the most popular libraries for prompt-based structured output generation from LLMs is Instructor. Instructor is a Python library with over 11k stars on GitHub. It supports data definition with Pydantic, integrates with over 15 providers, and provides automatic retries on parsing failure. In addition to Python, the package is also available in TypeScript, Go, Ruby, and Rust.
The beauty of Instructor lies in its simplicity. All you need to do is define a Pydantic class, initialize a client using only the provider’s name and API key (if required), and pass your request. The sample code below, from the docs, displays the simplicity of Instructor.
import instructor
from pydantic import BaseModel
from openai import OpenAI

class Person(BaseModel):
    name: str
    age: int
    occupation: str

client = instructor.from_openai(OpenAI())

person = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=Person,
    messages=[
        {
            "role": "user",
            "content": "Extract: John is a 30-year-old software engineer"
        }
    ],
)

print(person)  # Person(name='John', age=30, occupation='software engineer')
The Cost of Reprompting
As convenient as the reprompting technique can be, it comes at a hefty cost. LLM usage cost, whether service-provider API fees or GPU usage, scales linearly with the number of input tokens and the number of generated tokens.
As mentioned earlier, prompting-based techniques might require reprompting. Each reprompt has roughly the same cost as the original request. Hence, the cost scales linearly with the number of reprompts.
If you are going to use this approach, it is essential to keep the cost problem in mind. No one wants to be surprised by a large bill from an API provider. One idea to help avoid surprising costs is to build emergency brakes into the system by applying a hard-coded limit on the number of allowed reprompts. This lets you put an upper bound on the cost of a single prompt-and-reprompt cycle.
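Instructor exposes exactly this kind of brake. If I recall its API correctly, a max_retries argument caps the reprompt budget per request:

# Capping reprompts bounds the worst-case cost of a single request.
person = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=Person,
    max_retries=3,
    messages=[{"role": "user", "content": "Extract: John is 30"}],
)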
Constrained Decoding
Unlike prompting, constrained decoding does not need retries to produce a valid, structure-following output. Constrained decoding uses computational linguistics techniques and knowledge of the token generation process in LLMs to generate outputs that are guaranteed to follow the required schema.
How It Works
LLMs are autoregressive models: they generate one token at a time, and the generated tokens are fed back as inputs to the same model.
The last layer of an LLM is essentially a logistic regression model that calculates, for each token in the model’s vocabulary, the probability of it following the input sequence. The model computes a logit value for each token; then, using the softmax function, these values are scaled and transformed into probability values.
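As a small numeric sketch of that last step (the three-token vocabulary and logit values are made up):

import numpy as np

logits = np.array([2.0, 1.0, -1.0])            # one logit per vocab token (made up)
probs = np.exp(logits) / np.exp(logits).sum()  # softmax
print(probs)  # ~[0.705  0.260  0.035], summing to 1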
Constrained decoding produces structured outputs by limiting the available tokens at each generation step. The tokens are picked so that the final output obeys the required structure. To figure out how the set of possible next tokens can be determined, we need to visit RegEx.
Regular expressions (RegEx) define specific patterns of text. They are used to check whether a sequence of text matches an expected structure or schema. So basically, RegEx is a language that can be used to define the structures we expect from LLMs. Thanks to its popularity, there is a wide range of tools and libraries that transform other forms of data structure definition, like Pydantic classes and JSON, into RegEx. Because of this flexibility and the wide availability of conversion tools, we can reframe our goal and focus on using LLMs to generate outputs that follow a RegEx pattern.
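For instance, a pattern for a price field and a quick check with Python’s re module (the pattern itself is an illustrative choice):

import re

price_pattern = r"\d+\.\d{2}"  # e.g. "12.50"
print(re.fullmatch(price_pattern, "12.50") is not None)  # True
print(re.fullmatch(price_pattern, "cheap") is not None)  # False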
Deterministic Finite Automata (DFA)
One of the ways a RegEx pattern can be compiled and tested against a body of text is by transforming the pattern into a deterministic finite automaton (DFA). A DFA is simply a state machine that is used to check whether a string follows a certain structure or pattern.
A DFA consists of five components:
- A set of tokens (called the alphabet of the DFA)
- A set of states
- A set of transitions. Each transition connects two states (possibly connecting a state to itself) and is annotated with a token from the alphabet
- A start state (marked with an incoming arrow)
- One or more final states (marked as double circles)
A string is a sequence of tokens. To test a string against the pattern defined by a DFA, you begin at the start state and loop over the string’s tokens, taking the transition corresponding to the current token at each move. If at any point no transition from the current state matches the current token, parsing fails and the string defies the schema. If parsing ends at one of the final states, the string matches the pattern; otherwise it likewise fails.

Figure 2: A DFA with alphabet {a, b}, states {q0, q1, q2}, and a single final state, q2. Generated using Graphviz by the Author.
For example, the string abab matches the pattern in Figure 2 because starting at q0 and following the transitions marked with a, b, a, and b, in this order, lands us at q2, which is a final state.
On the other hand, the string abba does not match the pattern because its path ends at q0, which is not a final state.
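A minimal sketch of this matching procedure in Python follows. Since the figure itself is not reproduced here, the transition table is my own guess, consistent with the behavior described above (abab accepted, abba ending at q0), not necessarily the figure’s exact machine:

# DFA as a (state, token) -> state transition table (illustrative guess).
transitions = {
    ("q0", "a"): "q1",
    ("q1", "a"): "q0",
    ("q1", "b"): "q2",
    ("q2", "a"): "q1",
    ("q2", "b"): "q1",
}
start, finals = "q0", {"q2"}

def matches(string: str) -> bool:
    state = start
    for token in string:
        if (state, token) not in transitions:
            return False  # no matching transition: parsing fails
        state = transitions[(state, token)]
    return state in finals  # accept only if we end in a final state

print(matches("abab"))  # True
print(matches("abba"))  # False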
A great thing about RegEx is that it can be compiled into a DFA; after all, they are just two different ways to specify patterns. A discussion of this transformation is out of scope for this article; the reader can check Aho et al. (2007, 152–66) for two ways to perform it.
DFA for the Valid Next Tokens Set

Figure 3: A DFA for the RegEx pattern a(b|c)*d. Generated using Graphviz by the Author.
Let’s recap what we have achieved so far. We wanted a method to identify the set of valid next tokens that keeps the output on a certain schema. We defined the schema using RegEx and transformed it into a DFA. Now we will show that a DFA tells us the set of possible next tokens at any point during parsing, fitting our requirements.
After building the DFA, we can determine in O(1) the set of valid next tokens while standing at any state: it is the set of tokens annotating the transitions that exit the current state.
Consider the DFA in Figure 3, for example. The following table shows the set of valid next tokens for each state.
| State | Valid Next Tokens |
|---|---|
| q0 | {a} |
| q1 | {b, c, d} |
| q2 | {} |
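Computing this set from a transition table is a one-liner. A sketch using the same dict-of-transitions representation as before, here encoding the Figure 3 DFA:

# Transitions of the Figure 3 DFA for the pattern a(b|c)*d.
transitions = {
    ("q0", "a"): "q1",
    ("q1", "b"): "q1",
    ("q1", "c"): "q1",
    ("q1", "d"): "q2",
}

def valid_next_tokens(state: str) -> set[str]:
    # Tokens annotating any transition that exits the given state.
    return {token for (src, token) in transitions if src == state}

print(valid_next_tokens("q1"))  # {'b', 'c', 'd'}
print(valid_next_tokens("q2"))  # set()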
Applying the DFA to LLMs
Getting back to our structured output problem, we can transform our schema into a RegEx and then into a DFA. The alphabet of this DFA is set to the LLM’s vocabulary (the set of all tokens the model can generate). As the model generates tokens, we move through the DFA, starting at the start state. At each step, we can determine the set of valid next tokens.
The trick happens at the softmax scaling stage. By masking out the logits of all tokens that are not in the valid set (effectively setting them to negative infinity), we calculate probabilities only for valid tokens, forcing the model to generate a sequence of tokens that follows the schema. That way, we can generate structured outputs with zero extra generation cost!
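A small numpy sketch of that masking step (the four-token vocabulary, logits, and valid set are made up):

import numpy as np

logits = np.array([3.0, 1.5, 0.2, -0.5])      # one logit per vocab token (made up)
valid = np.array([True, False, True, False])  # allowed by the DFA's current state

masked = np.where(valid, logits, -np.inf)     # invalid tokens get -inf
probs = np.exp(masked - masked.max())
probs /= probs.sum()                          # softmax over valid tokens only
print(probs)  # invalid tokens end up with exactly zero probability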
Constrained Decoding Tools
One of the most popular Python libraries for constrained decoding is Outlines (Willard and Louf 2023). It is very simple to use and integrates with many LLM providers and backends like OpenAI, Anthropic, Ollama, and vLLM.
You can define the schema using a Pydantic class, for which the library handles the RegEx transformation, or directly using a RegEx pattern.
from pydantic import BaseModel
from typing import Literal
import outlines
import openai

class Customer(BaseModel):
    name: str
    urgency: Literal["high", "medium", "low"]
    issue: str

client = openai.OpenAI()
model = outlines.from_openai(client, "gpt-4o")

customer = model(
    "Alice needs help with login issues ASAP",
    Customer
)
# ✓ Always returns a valid Customer object
# ✓ No parsing, no errors, no retries
The code snippet above, from the docs, displays the simplicity of using Outlines. For more information on the library, you can check the docs and the dottxt blogs.
Conclusion
Structured output generation from LLMs is a powerful tool that expands the possible use cases of LLMs beyond simple human chat. This article discussed three approaches: relying on API providers, prompting and reprompting techniques, and constrained decoding. For most scenarios, constrained decoding is the favored method because of its flexibility and low cost. Moreover, the existence of popular libraries like Outlines simplifies the introduction of constrained decoding into software projects.
If you want to learn more about constrained decoding, I would highly recommend this course from deeplearning.ai and dottxt, the creators of the Outlines library. Using videos and code examples, the course will help you gain hands-on experience generating structured outputs from LLMs using the techniques discussed in this post.
References
[1] Aho, Alfred V., Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman, Compilers: Principles, Techniques, and Tools (2007), Pearson/Addison Wesley
[2] Willard, Brandon T., and Rémi Louf, Efficient Guided Generation for Large Language Models (2023), https://arxiv.org/abs/2307.09702