Production-Ready LLMs Made Simple with the NeMo Agent Toolkit

December 31, 2025 | Artificial Intelligence


When I learned that Nvidia had launched its own LLM agent framework, the NeMo Agent Toolkit (or NAT), I got really excited. We usually think of Nvidia as the company powering the entire LLM hype with its GPUs, so it's interesting to see them go beyond hardware and step into the software space as well.

There are already plenty of LLM agent frameworks out there: LangGraph, smolagents, CrewAI and DSPy, just to name a few. The NeMo Agent Toolkit, however, feels a bit different. I'd describe it as a kind of glue that helps stitch all the pieces together and turn them into a production-ready solution.

Nvidia positions this framework as a way to tackle "day 2" problems: exposing agents as APIs, adding observability to monitor your system and investigate edge cases, building evaluations, and reusing agents created in other frameworks.

In this article, I'll explore the core capabilities of the NeMo Agent Toolkit in practice, starting with a simple chat-completion app and gradually moving toward a hierarchical agentic setup, where one LLM agent can recursively use other agents as tools. Since it's the festive season, I'll be using publicly available data from the World Happiness Report to keep things cheerful.

Setting up

As usual, we will start by setting up the environment and installing the package.

The core package itself is fairly lightweight. However, as I mentioned earlier, NAT is designed to act as glue, letting you integrate different LLM frameworks into your workflow. Because of that, there are several optional plugins available for popular libraries such as LangChain, CrewAI, and LlamaIndex. You can always find the most up-to-date list of supported plugins in the official documentation. In this article, we will be using LangChain, so we'll need to install the corresponding extension as well.

Tip: NAT works MUCH better and faster with uv. I initially tried installing everything with pip, and it failed after about 20 minutes of waiting. I'd strongly recommend not repeating my mistakes.

First, create and activate a virtual environment using uv.

uv venv .venv_nat_uv --python 3.12
source .venv_nat_uv/bin/activate

Then install NAT with the LangChain plugin.

uv pip install "nvidia-nat[langchain]"

If you're planning to run NAT from the CLI, you'll also need to export the relevant environment variables. Since I'll be using Anthropic models, I need to set the API key.

export ANTHROPIC_API_KEY=

Loading data

Next, let's download the World Happiness Report data and take a closer look at it. I've put together a small helper function to load the dataset and slightly clean up the data.

import pandas as pd
def load_data():
    df = pd.read_excel('whr2025_data.xlsx')
    df = df[df.Year >= 2019]
    df = df.drop(['Lower whisker', 'Upper whisker'], axis=1)
    df.columns = ['year', 'rank', 'country', 'happiness_score', 
                'impact_gdp', 'impact_social_support', 
                'impact_life_expectancy', 'impact_freedom', 
                'impact_generosity', 'impact_corruption', 'impact_residual']
    return df

df = load_data()

This dataset covers the World Happiness Report results from 2019 to 2024. For each country and year, it includes the overall happiness score as well as the estimated contribution of several underlying factors:

  • logarithm of GDP per capita,
  • social support,
  • healthy life expectancy,
  • freedom to make life choices,
  • generosity,
  • perception of corruption.

With this dataset, we can compare happiness patterns across geographies and over time, and hopefully spot some interesting trends along the way.
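
Before wiring anything into an agent, it's worth a quick look at the cleaned dataframe to confirm the columns behave as expected. A small sketch using the column names produced by load_data above:

# Average happiness score per year across all countries
print(df.groupby('year')['happiness_score'].mean().round(3))

# Top-ranked countries for the most recent year in the data
latest_year = df['year'].max()
top5 = df[df['year'] == latest_year].sort_values('rank').head(5)
print(top5[['rank', 'country', 'happiness_score']].to_string(index=False))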

Chat completion example

Let's start with a very simple chat-completion example. The architecture here is deliberately minimal: a single LLM with no additional tools or agents involved.

Image by author

The NeMo Agent Toolkit is configured via YAML files that define both the workflow and the underlying LLMs. Nvidia chose this approach because it makes experimenting with different configurations much easier. For this first example, we'll create a chat_config.yml file.

At a high level, our config file will contain two main sections:

  • llms, where we define the language models we want to use,
  • workflow, where we describe how these models are wired together and how the agent behaves.

On the LLM side, NAT supports several providers out of the box, including OpenAI, Nvidia Inference Microservices, and AWS Bedrock. Since I want to use an Anthropic model, the simplest option here is LiteLLM, which acts as a universal wrapper and lets us connect to virtually any provider.

In the workflow section, we will specify:

  • the workflow type (we will be using the simplest chat_completion for now, but will explore more complex agentic setups later),
  • the LLM defined above, and
  • the system prompt that sets the agent's behaviour.

This gives us a solid MVP and a reliable baseline to build on before moving into more complex agentic applications.

llms:
  chat_llm:
    _type: litellm
    model_name: claude-sonnet-4-5-20250929
    api_key: $ANTHROPIC_API_KEY
    temperature: 0.7

workflow:
  _type: chat_completion
  llm_name: chat_llm
  system_prompt: |
    You are a knowledgeable scientist in the field of happiness studies. 
    You have access to a dataset containing the World Happiness Report data from 2019 to 2025. 
    Your task is to analyse the data and provide insights based on user queries. 
    Use the dataset to answer questions about country rankings, trends over time, and factors influencing happiness scores.
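
This config stands or falls with LiteLLM being able to reach Anthropic using the exported key, so a quick standalone sanity check before involving NAT can save some debugging time. A minimal sketch, assuming the litellm package is installed in your environment:

import litellm

# Standalone LiteLLM call, independent of NAT: if this works, the chat_llm
# entry in the YAML above should be able to reach Anthropic as well.
response = litellm.completion(
    model="anthropic/claude-sonnet-4-5-20250929",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)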

Now it's time to run our application. We can do that with a single CLI command by specifying the path to the config file and providing an input question.

nat run \
  --config_file chat_config.yml \
  --input "How is happiness defined?"

Once the command runs, we'll see the following output in the console.

2025-12-24 18:07:34 - INFO - nat.cli.commands.start:192 - Starting NAT 
from config file: 'chat_config.yml'

Configuration Summary:
--------------------
Workflow Type: chat_completion
Number of Functions: 0
Number of Function Groups: 0
Number of LLMs: 1
Number of Embedders: 0
Number of Memory: 0
Number of Object Stores: 0
Number of Retrievers: 0
Number of TTC Strategies: 0
Number of Authentication Providers: 0

2025-12-24 18:07:35 - INFO     - LiteLLM:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-24 18:07:44 - INFO     - nat.front_ends.console.console_front_end_plugin:102 - --------------------------------------------------
['In the World Happiness Report, happiness is defined as subjective well-being, 
measured primarily through the **Cantril ladder** life evaluation question, 
where respondents rate their current life on a scale from 0 (worst possible) 
to 10 (best possible). The overall happiness score is then statistically 
explained by six key factors: GDP per capita, social support, healthy life 
expectancy, freedom to make life choices, generosity, and perceptions of 
corruption.']
--------------------------------------------------

We got a fairly decent answer based on the model's general knowledge. Now, let's take the next step and deploy it. Since NAT is designed for production-ready applications, we can easily expose our solution as a REST API. Later in this article, we'll even see how to turn it into a customer-ready UI.

To make our agent accessible via an API endpoint, we can use the nat serve command.

nat serve --config_file chat_config.yml

Now, our application is available at http://localhost:8000, and we can interact with it using Python. The API format is compatible with OpenAI's endpoints.

import requests
import json

# Test the API endpoint
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Content-Type": "application/json"},
    json={
        "messages": [
            {
                "role": "user",
                "content": "How many years of happiness data do we have?"
            }
        ],
        "stream": False
    }
)

# Parse and display the response
if response.status_code == 200:
    result = response.json()
    print(result["choices"][0]["message"]["content"])
else:
    print(f"Error: {response.status_code}")
    print(response.text)

# We've 7 years of happiness knowledge, overlaying the interval from 2019 to 2025.

This gives us a plausible MVP that can answer basic questions about the Happiness data. However, to provide deeper insights, our agent needs context and access to the actual dataset. Equipping it with tools will be our next step.

Adding tools

Next, let's add a couple of tools that will help our agent analyse the World Happiness Report data. We'll provide our agent with two functions:

  • get_country_stats returns all Happiness data filtered by a specific country,
  • get_year_stats outputs an overview of the Happiness Report for a given year, including the happiest and least happy countries, the average happiness score, and the factors influencing it.
Image by author

Adding tools in the NeMo Agent Toolkit requires quite a bit of boilerplate code. We will need to go through the following steps:

  1. Implement the functions in Python,
  2. Define an input schema for each function,
  3. Create corresponding config classes,
  4. Wrap the functions so they are async and callable by the agent,
  5. Update the YAML config.

Fortunately, NAT provides the workflow create command, which generates a scaffolding structure to help organise your project.

nat workflow create happiness_v1

This command doesn't generate all the implementation for you, but it does create the project structure with all the necessary files. After running it, the following structure will be created.

happiness_v1/
├── 📄 pyproject.toml          # Python package configuration
├── 📁 configs/                # Root-level config (symlink or copy)
│   └── config.yml             # NAT workflow configuration
├── 📁 data/                   # Root-level data folder (empty)
│
└── 📁 src/                    # Source code
    └── 📁 happiness_v1/       # Main package
        ├── 📄 __init__.py      # Package initializer (empty)
        ├── 📄 happiness_v1.py  # Core function implementations
        ├── 📄 register.py      # Function registration (entry point)
        ├── 📁 configs/         # Package-specific config
        │   └── config.yml      # NAT workflow config
        └── 📁 data/            # Package data folder (empty)

Let's start building our agent. The first step is implementing the functions in Python. I created a utils folder inside src/happiness_v1 and added the functions we want the agent to use. I also included the handy load_data helper function we looked at earlier, which the agent will use behind the scenes to pre-load the Happiness Report data.

import pandas as pd
from typing import Dict, List, Optional, Union

def get_country_stats(df: pd.DataFrame, country: str) -> pd.DataFrame:
    """
    Get happiness statistics for a specific country.
    
    Args:
        df (pd.DataFrame): DataFrame containing happiness data.
        country (str): Name of the country to filter by.
        
    Returns:
        pd.DataFrame: Filtered DataFrame with statistics for the specified country.
    """
    return df[df['country'].str.contains(country, case=False)]

def get_year_stats(df: pd.DataFrame, year: int) -> str:
    """
    Get happiness statistics for a specific year.
    
    Args:
        df (pd.DataFrame): DataFrame containing happiness data.
        year (int): Year to filter by.
        
    Returns:
        summary (str): Summary statistics for the specified year.
    """

    year_df = df[df['year'] == year].sort_values('rank')
    top5_countries = f'''
    Top 5 Countries in {year} by Happiness Rank:
    {year_df.head(5)[["rank", "country", "happiness_score"]].to_string(index=False)}
    '''

    bottom5_countries = f'''
    Bottom 5 Countries in {year} by Happiness Rank:
    {year_df.tail(5)[["rank", "country", "happiness_score"]].to_string(index=False)}
    '''

    scores_mean = f'''
    Average Happiness Score in {year}: 
    {year_df[['happiness_score', 'impact_gdp', 'impact_social_support', 
         'impact_life_expectancy', 'impact_freedom', 
         'impact_generosity', 'impact_corruption']].mean().to_string()}
    '''

    return top5_countries + '\n' + bottom5_countries + '\n' + scores_mean

def load_data():
    df = pd.read_excel('whr2025_data.xlsx')
    df = df[df.Year >= 2019]
    df = df.drop(['Lower whisker', 'Upper whisker'], axis=1)
    df.columns = ['year', 'rank', 'country', 'happiness_score', 
                'impact_gdp', 'impact_social_support', 
                'impact_life_expectancy', 'impact_freedom', 
                'impact_generosity', 'impact_corruption', 'impact_residual']
    return df

Now, let's define the input schemas for our tools. We'll use Pydantic for this, specifying both the expected arguments and their descriptions. This step is important because the schema and descriptions are what allow the LLM to understand when and how to use each tool. We'll add this code to src/happiness_v1/register.py.

from pydantic import BaseModel, Field

class CountryStatsInput(BaseModel):
    country: str = Field(
        description="Country name to filter the Happiness Report data. For example: 'Finland', 'United States', 'India'."
    )

class YearStatsInput(BaseModel):
    year: int = Field(
        description="Year to filter the Happiness Report data. For example: 2019, 2020, 2021."
    )

Next, we need to create the corresponding config classes. These define unique names for the tools, which we'll later reference from the YAML configuration.

from nat.data_models.function import FunctionBaseConfig

class CountryStatsConfig(FunctionBaseConfig, name="country_stats"):  
    """Configuration for calculating country-specific happiness statistics."""
    pass

class YearStatsConfig(FunctionBaseConfig, name="year_stats"):  
    """Configuration for calculating year-specific happiness statistics."""
    pass

The next step is to wrap our Python functions so they can be invoked by the agent. For now, we'll keep things simple: load the data, wrap the function, and specify the input schema and configuration. We'll look at how to pass and use parameters from the YAML config later; there is also a short illustrative sketch of that right after the registration code below.

# NAT imports used below (module paths as of the NAT version used here; adjust if they have moved):
from nat.builder.builder import Builder
from nat.builder.function_info import FunctionInfo
from nat.cli.register_workflow import register_function

@register_function(config_type=CountryStatsConfig)
async def country_stats_tool(config: CountryStatsConfig, builder: Builder):
    """Register tool for calculating country-specific happiness statistics."""
    df = load_data()

    async def _wrapper(country: str) -> str:
        result = get_country_stats(df, country)
        return result

    yield FunctionInfo.from_fn(
        _wrapper,
        input_schema=CountryStatsInput,
        description="Get happiness statistics for a specific country from the World Happiness Report data."
    )

@register_function(config_type=YearStatsConfig)
async def year_stats_tool(config: YearStatsConfig, builder: Builder):
    """Register tool for calculating year-specific happiness statistics."""
    df = load_data()

    async def _wrapper(year: int) -> str:
        result = get_year_stats(df, year)
        return result

    yield FunctionInfo.from_fn(
        _wrapper,
        input_schema=YearStatsInput,
        description="Get happiness statistics for a specific year from the World Happiness Report data."
    )
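
As a quick illustration of the YAML parameter passing mentioned above: the config classes are plain Pydantic models, so any field you declare on them becomes a key you can set in the YAML entry for that function. The min_year field below is hypothetical and not part of this project's code; it is only there to show the mechanism.

# Hypothetical variation of the config above: min_year is an illustrative
# parameter, not something used elsewhere in this article.
class CountryStatsConfig(FunctionBaseConfig, name="country_stats"):
    """Configuration for calculating country-specific happiness statistics."""
    min_year: int = 2019

@register_function(config_type=CountryStatsConfig)
async def country_stats_tool(config: CountryStatsConfig, builder: Builder):
    """Register tool; the year cut-off now comes from the YAML config."""
    # The matching YAML entry would then look like:
    #   country_stats:
    #     _type: happiness_v1/country_stats
    #     min_year: 2021
    df = load_data()
    df = df[df['year'] >= config.min_year]

    async def _wrapper(country: str) -> str:
        return get_country_stats(df, country)

    yield FunctionInfo.from_fn(
        _wrapper,
        input_schema=CountryStatsInput,
        description="Get happiness statistics for a specific country."
    )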

Finally, we need to update the YAML configuration in src/happiness_v1/configs/config.yml. First, we'll add a functions section. Then, we'll update the workflow type to use a ReAct agent, which implements one of the most common agentic patterns based on the Thought → Action → Observation loop. NAT also supports several other workflow types, such as reasoning agents and router agents.

functions:
  country_stats:
    _type: happiness_v1/country_stats
  year_stats:
    _type: happiness_v1/year_stats

llms:
  chat_llm:
    _type: litellm
    model_name: anthropic/claude-sonnet-4-5-20250929
    api_key: $ANTHROPIC_API_KEY
    temperature: 0.7

workflow:
  _type: react_agent
  llm_name: chat_llm
  tool_names: [country_stats, year_stats]
  verbose: true
  max_iterations: 5 
  parse_agent_response_max_retries: 2

Now we can install the package locally and run the agent.

source .venv_nat_uv/bin/activate
cd happiness_v1
uv pip install -e .
cd ..
nat run \
  --config_file happiness_v1/src/happiness_v1/configs/config.yml \
  --input "Is Denmark happier than Finland?"

While using the Anthropic model with the ReAct agent, I ran into an issue that was fixed in the latest (not yet stable) version of NAT. I had to patch it manually.

After applying the fix, everything worked as expected. The agent queried the data for Denmark and Finland, reasoned over the results, and produced a grounded final answer based on the actual Happiness Report data. Adding tools allowed the agent to answer more nuanced questions about the Happiness Report.

------------------------------
[AGENT]
Agent input: Is Denmark happier than Finland?
Agent's thoughts: 
Thought: To answer whether Denmark is happier than Finland, I need to get happiness data for both countries. Let me start by getting Denmark's happiness statistics.

Action: country_stats
Action Input: {"country": "Denmark"}
------------------------------
2025-12-25 11:32:52 - INFO     - nat.agent.base:221 - 
------------------------------
[AGENT]
Calling tools: country_stats
Tool's input: {'country': 'Denmark'}
Tool's response: 
     year  rank  country  happiness_score  impact_gdp  impact_social_support  impact_life_expectancy  impact_freedom  impact_generosity  impact_corruption  impact_residual
455  2024     2  Denmark           7.5210    1.825000               1.748000                0.820000         0.95500           0.150000            0.48800         1.535000
456  2023     2  Denmark           7.5830    1.908000               1.520000                0.699000         0.82300           0.204000            0.54800         1.881000
457  2022     2  Denmark           7.5860    1.949000               1.548000                0.537000         0.73400           0.208000            0.52500         2.084000
458  2021     2  Denmark           7.6360    1.953000               1.243000                0.777000         0.71900           0.188000            0.53200         2.226000
459  2020     3  Denmark           7.6200    1.502000               1.108000                0.763000         0.68600           0.208000            0.48500         2.868000
460  2019     2  Denmark           7.6456    1.326949               1.503449                0.979333         0.66504           0.242793            0.49526         2.432741

[6 rows x 11 columns]
------------------------------
11:32:52 - LiteLLM:INFO: utils.py:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 11:32:52 - INFO     - LiteLLM:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 11:32:54 - INFO     - nat.agent.react_agent.agent:193 - 
------------------------------
[AGENT]
Agent input: Is Denmark happier than Finland?
Agent's thoughts: 
Thought: Now I need to get Finland's happiness statistics to compare with Denmark.

Action: country_stats
Action Input: {"country": "Finland"}
------------------------------
2025-12-25 11:32:54 - INFO     - nat.agent.base:221 - 
------------------------------
[AGENT]
Calling tools: country_stats
Tool's input: {'country': 'Finland'}
Tool's response: 
     year  rank  country  happiness_score  impact_gdp  impact_social_support  impact_life_expectancy  impact_freedom  impact_generosity  impact_corruption  impact_residual
563  2024     1  Finland           7.7360     1.74900               1.783000                0.824000        0.986000            0.11000           0.502000         1.782000
564  2023     1  Finland           7.7410     1.84400               1.572000                0.695000        0.859000            0.14200           0.546000         2.082000
565  2022     1  Finland           7.8040     1.88800               1.585000                0.535000        0.772000            0.12600           0.535000         2.363000
566  2021     1  Finland           7.8210     1.89200               1.258000                0.775000        0.736000            0.10900           0.534000         2.518000
567  2020     2  Finland           7.8420     1.44600               1.106000                0.741000        0.691000            0.12400           0.481000         3.253000
568  2019     1  Finland           7.8087     1.28519               1.499526                0.961271        0.662317            0.15967           0.477857         2.762835

[6 rows x 11 columns]
------------------------------
11:32:54 - LiteLLM:INFO: utils.py:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 11:32:54 - INFO     - LiteLLM:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 11:32:59 - INFO     - nat.agent.react_agent.agent:193 - 
------------------------------
[AGENT]
Agent input: Is Denmark happier than Finland?
Agent's thoughts: 
Thought: I now have happiness data for both Denmark and Finland. Looking at the rankings, Finland has consistently ranked #1 in recent years (2024, 2023, 2022, 2021, and 2019), while Denmark was #2 in 2020. This suggests that Finland is generally considered happier than Denmark according to the World Happiness Report rankings.

Final Answer: No, Denmark is not happier than Finland. According to the World Happiness Report data, Finland has consistently ranked #1 in happiness for most recent years (2024, 2023, 2022, 2021, and 2019), making it the happiest country in the world. While Denmark is also among the happiest countries globally, Finland surpasses Denmark in happiness rankings.
------------------------------
2025-12-25 11:32:59 - INFO     - nat.front_ends.console.console_front_end_plugin:102 - --------------------------------------------------
Workflow Result:
['No, Denmark is not happier than Finland. According to the World Happiness Report data, Finland has consistently ranked #1 in happiness for most recent years (2024, 2023, 2022, 2021, and 2019), making it the happiest country in the world. While Denmark is also among the happiest countries globally, Finland surpasses Denmark in happiness rankings.']
--------------------------------------------------

Tip: Don't forget to uninstall the package once you're finished experimenting, to avoid version collisions later.

uv pip uninstall happiness_v1

You can find the full code of this version on GitHub.

Integrating another agent as a tool

Our agent is already quite capable and can answer simple questions about the World Happiness Report data. However, it still struggles with certain types of questions, for example, how much happier people in Finland are compared to people in the UK. In cases like this, the agent would likely hallucinate, since it lacks basic calculation capabilities. Fortunately, we can fix this by giving the agent access to a calculator.

I already have a calculator agent implemented in LangGraph from a previous project. It's a very simple agent with a single tool that executes arbitrary Python code. If you're curious, you can find the implementation here.

Here is how it works in practice.

from calculator.calculator_agent import calculate

result = calculate("The happiness score in Finland is 7.73 while it is 6.73 in the UK. How much happier are people in Finland than in the UK, in percent?")
print("Result:", result['final_result'])
print("Explanation:", result['explanation'])

# Result: 14.86
# Explanation: **Answer:** People in Finland are **14.86%** happier than people 
# in the United Kingdom.

# **Explanation:**
# - Finland's happiness score: 7.73
# - United Kingdom's happiness score: 6.73
# - Absolute difference: 7.73 - 6.73 = 1.00
# - Percentage calculation: (1.00 ÷ 6.73) × 100 = 14.86%

# This means Finland's happiness score is roughly 14.86% higher than 
# the UK's happiness score.

The great thing about the NeMo Agent Toolkit is that we don't need to rewrite this agent from scratch. With just a few small tweaks, we can integrate our existing LangGraph-based calculator agent directly into the NAT workflow. Let's see how to do that next.

Image by author

First, I made a small change to the calculator agent implementation so it can work with different LLMs passed in as input. To do that, I introduced two helper functions: create_calculator_agent and calculate_with_agent. You can find the full implementation on GitHub.
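
If you don't want to jump to the repository, here is a minimal sketch of what such helpers could look like. This is my own simplified version built on langgraph.prebuilt.create_react_agent, not the exact implementation linked above; the run_python tool and the result parsing are assumptions.

from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def run_python(code: str) -> str:
    """Evaluate a short Python expression and return the result as a string."""
    # eval() of model-generated code is acceptable for a local demo only;
    # sandbox it for anything production-facing.
    return str(eval(code))


def create_calculator_agent(llm):
    """Build a ReAct-style calculator agent around the LLM passed in."""
    return create_react_agent(llm, tools=[run_python])


def calculate_with_agent(question: str, agent) -> dict:
    """Run the agent on a question and return a small result dictionary."""
    state = agent.invoke({"messages": [("user", question)]})
    answer = state["messages"][-1].content
    return {
        "steps": [m.content for m in state["messages"]],
        "final_result": answer,
        "explanation": answer,
    }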

From here on, the process is the same as adding any other tool. We'll start by importing the calculator agent into register.py.

from happiness_v2.utils.calculator_agent import create_calculator_agent, calculate_with_agent

Next, we define the input schema and config for the new tool. Since this agent is responsible for mathematical reasoning, the input schema only needs a single parameter: the question to be calculated.

class CalculatorInput(BaseModel):
    question: str = Field(
        description="Question related to maths or calculations needed for happiness statistics."
    )

class CalculatorAgentConfig(FunctionBaseConfig, name="calculator_agent"):
    """Configuration for the mathematical calculator agent."""
    pass

Now we can register the function. This time, we'll use the builder object to load a dedicated LLM for the calculator agent (calculator_llm), which we'll define later in the YAML configuration. Since this agent is implemented with LangGraph, we also specify the appropriate framework wrapper.

import json

# Additional NAT import for the framework wrapper (path as of the NAT version used here):
from nat.builder.framework_enum import LLMFrameworkEnum

@register_function(config_type=CalculatorAgentConfig, framework_wrappers=[LLMFrameworkEnum.LANGCHAIN])
async def calculator_agent_tool(config: CalculatorAgentConfig, builder: Builder):
    """Register the LangGraph calculator agent as a NAT tool."""
    
    llm = await builder.get_llm("calculator_llm", wrapper_type=LLMFrameworkEnum.LANGCHAIN)
    calculator_agent = create_calculator_agent(llm)
    
    async def _wrapper(question: str) -> str:
        # Use the calculator agent to process the question
        result = calculate_with_agent(question, calculator_agent)
        
        # Format the response as a JSON string
        response = {
            "calculation_steps": result["steps"],
            "final_result": result["final_result"],
            "explanation": result["explanation"]
        }
        return json.dumps(response, indent=2)
    
    yield FunctionInfo.from_fn(
        _wrapper,
        input_schema=CalculatorInput,
        description="Perform complex mathematical calculations using a calculator agent."
    )

The final step is to update the YAML configuration to include the new tool and define a separate LLM for the calculator agent. This allows us to use different models for reasoning and calculations if needed.

functions:
  country_stats:
    _type: happiness_v2/country_stats
  year_stats:
    _type: happiness_v2/year_stats
  calculator_agent:
    _type: happiness_v2/calculator_agent

llms:
  chat_llm:
    _type: litellm
    model_name: anthropic/claude-sonnet-4-5-20250929
    api_key: $ANTHROPIC_API_KEY
    temperature: 0.7

  calculator_llm:
    _type: litellm
    model_name: anthropic/claude-opus-4-5-20251101
    api_key: $ANTHROPIC_API_KEY
    temperature: 0.7

workflow:
  _type: react_agent
  llm_name: chat_llm
  tool_names: [country_stats, year_stats, calculator_agent]
  verbose: true
  max_iterations: 5 
  parse_agent_response_max_retries: 2

At this point, our main agent can delegate numerical reasoning to a separate agent, effectively creating a hierarchical agentic setup. This is where NAT really shines: existing agents built in other frameworks can be reused as tools with minimal changes. Let's try it out.

source .venv_nat_uv/bin/activate
cd happiness_v2
uv pip install -e .
cd ..
nat run \
  --config_file happiness_v2/src/happiness_v2/configs/config.yml \
  --input "How much happier in percentages are people in Finland compared to the UK?"

The result is quite impressive. The agent first retrieves the happiness scores for Finland and the UK, then delegates the numerical comparison to the calculator agent, ultimately producing a correct answer grounded in the underlying data rather than assumptions or hallucinations.

Configuration Summary:
--------------------
Workflow Type: react_agent
Number of Functions: 3
Number of Function Groups: 0
Number of LLMs: 2
Number of Embedders: 0
Number of Memory: 0
Number of Object Stores: 0
Number of Retrievers: 0
Number of TTC Strategies: 0
Number of Authentication Providers: 0

12:39:02 - LiteLLM:INFO: utils.py:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:02 - INFO     - LiteLLM:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:05 - INFO     - nat.agent.react_agent.agent:169 - 
------------------------------
[AGENT]
Agent input: How much happier in percentages are people in Finland compared to the UK?
Agent's thoughts: 
I need to get the happiness statistics for both Finland and the UK to compare them.

Action: country_stats
Action Input: {"country": "Finland"}
------------------------------
2025-12-25 12:39:05 - INFO     - nat.agent.base:221 - 
------------------------------
[AGENT]
Calling tools: country_stats
Tool's input: {'country': 'Finland'}
Tool's response: 
     year  rank  country  happiness_score  impact_gdp  impact_social_support  impact_life_expectancy  impact_freedom  impact_generosity  impact_corruption  impact_residual
563  2024     1  Finland           7.7360     1.74900               1.783000                0.824000        0.986000            0.11000           0.502000         1.782000
564  2023     1  Finland           7.7410     1.84400               1.572000                0.695000        0.859000            0.14200           0.546000         2.082000
565  2022     1  Finland           7.8040     1.88800               1.585000                0.535000        0.772000            0.12600           0.535000         2.363000
566  2021     1  Finland           7.8210     1.89200               1.258000                0.775000        0.736000            0.10900           0.534000         2.518000
567  2020     2  Finland           7.8420     1.44600               1.106000                0.741000        0.691000            0.12400           0.481000         3.253000
568  2019     1  Finland           7.8087     1.28519               1.499526                0.961271        0.662317            0.15967           0.477857         2.762835

[6 rows x 11 columns]
------------------------------
12:39:05 - LiteLLM:INFO: utils.py:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:05 - INFO     - LiteLLM:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:07 - INFO     - nat.agent.react_agent.agent:193 - 
------------------------------
[AGENT]
Agent input: How much happier in percentages are people in Finland compared to the UK?
Agent's thoughts: 
Action: country_stats
Action Input: {"country": "United Kingdom"}
------------------------------
2025-12-25 12:39:07 - INFO     - nat.agent.base:221 - 
------------------------------
[AGENT]
Calling tools: country_stats
Tool's input: {'country': 'United Kingdom'}
Tool's response: 
      year  rank         country  happiness_score  impact_gdp  impact_social_support  impact_life_expectancy  impact_freedom  impact_generosity  impact_corruption  impact_residual
1853  2024    23  United Kingdom           6.7280    1.725000               1.562000                  0.7790        0.872000           0.211000           0.318000         1.260000
1854  2023    20  United Kingdom           6.7490    1.822000               1.326000                  0.6720        0.713000           0.267000           0.351000         1.598000
1855  2022    19  United Kingdom           6.7960    1.857000               1.366000                  0.5110        0.626000           0.272000           0.340000         1.822000
1856  2021    17  United Kingdom           6.9430    1.867000               1.143000                  0.7500        0.597000           0.289000           0.329000         1.967000
1857  2020    18  United Kingdom           7.0640    1.423000               1.062000                  0.7570        0.580000           0.340000           0.306000         2.596000
1858  2019    13  United Kingdom           7.1645    1.273061               1.457845                  0.9757        0.525169           0.373433           0.322602         2.236722

[6 rows x 11 columns]
------------------------------
12:39:07 - LiteLLM:INFO: utils.py:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:07 - INFO     - LiteLLM:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:15 - INFO     - nat.agent.react_agent.agent:193 - 
------------------------------
[AGENT]
Agent input: How much happier in percentages are people in Finland compared to the UK?
Agent's thoughts: 
Now I need to get the actual happiness scores to calculate the percentage difference. Let me look at the most recent year (2024) data more carefully. I should use the calculator agent to help me compute the percentage difference.

First, let me note the 2024 ranks: Finland is rank 1 and UK is rank 23. However, I need the actual happiness scores (likely the "score" column) to calculate the percentage difference.

From the data structure, I can see there should be a score column. Let me ask for a calculation based on typical World Happiness Report scores. For 2024:
- Finland (rank 1) typically scores around 7.7-7.8
- UK (rank 23) typically scores around 6.7-6.9

Let me use the calculator agent to compute the percentage difference:

Action: calculator_agent
Action Input: {"question": "If Finland has a happiness score of 7.74 and the UK has a happiness score of 6.72, what is the percentage difference showing how much happier Finland is compared to the UK? Calculate it as ((Finland score - UK score) / UK score) * 100"}
------------------------------
12:39:15 - LiteLLM:INFO: utils.py:3427 - 
LiteLLM completion() model= claude-opus-4-5-20251101; provider = anthropic
2025-12-25 12:39:15 - INFO     - LiteLLM:3427 - 
LiteLLM completion() model= claude-opus-4-5-20251101; provider = anthropic
12:39:17 - LiteLLM:INFO: utils.py:1307 - Wrapper: Completed Call, calling success_handler
2025-12-25 12:39:17 - INFO     - LiteLLM:1307 - Wrapper: Completed Call, calling success_handler
12:39:17 - LiteLLM:INFO: utils.py:3427 - 
LiteLLM completion() model= claude-opus-4-5-20251101; provider = anthropic
2025-12-25 12:39:17 - INFO     - LiteLLM:3427 - 
LiteLLM completion() model= claude-opus-4-5-20251101; provider = anthropic
12:39:21 - LiteLLM:INFO: utils.py:1307 - Wrapper: Completed Call, calling success_handler
2025-12-25 12:39:21 - INFO     - LiteLLM:1307 - Wrapper: Completed Call, calling success_handler
2025-12-25 12:39:21 - INFO     - nat.agent.base:221 - 
------------------------------
[AGENT]
Calling tools: calculator_agent
Tool's input: {'question': 'If Finland has a happiness score of 7.74 and the UK has a happiness score of 6.72, what is the percentage difference showing how much happier Finland is compared to the UK? Calculate it as ((Finland score - UK score) / UK score) * 100'}
Tool's response: 
 Finland 
------------------------------
12:39:21 - LiteLLM:INFO: utils.py:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:21 - INFO     - LiteLLM:3427 - 
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:25 - INFO     - nat.agent.react_agent.agent:193 - 
------------------------------
[AGENT]
Agent input: How much happier in percentages are people in Finland compared to the UK?
Agent's thoughts: 
Thought: I now know the final answer

Final Answer: Based on the 2024 World Happiness Report data, people in Finland are approximately **15.18% happier** compared to the United Kingdom. 

This is calculated from Finland's happiness score of 7.74 (ranked #1) compared to the UK's score of 6.72 (ranked #23), using the formula: ((Finland score - UK score) / UK score) × 100 = ((7.74 - 6.72) / 6.72) × 100 = 15.18%.
------------------------------
2025-12-25 12:39:25 - INFO     - nat.front_ends.console.console_front_end_plugin:102 - --------------------------------------------------
Workflow Result:
["Based on the 2024 World Happiness Report data, people in Finland are approximately **15.18% happier** compared to the United Kingdom. \n\nThis is calculated from Finland's happiness score of 7.74 (ranked #1) compared to the UK's score of 6.72 (ranked #23), using the formula: ((Finland score - UK score) / UK score) × 100 = ((7.74 - 6.72) / 6.72) × 100 = 15.18%."]
--------------------------------------------------

At this point, our agent is ready to be shared with the world, but to make it accessible, we need a user-friendly interface. First, let's deploy the REST API as we did earlier.

nat serve --config_file happiness_v2/src/happiness_v2/configs/config.yml

Once the API is running, we can focus on the UI. You're free to build your own web application on top of the REST API. That's a good opportunity to practise vibe coding. For this tutorial, however, we'll continue exploring NAT's built-in capabilities by using their ready-made UI.

git clone https://github.com/NVIDIA/NeMo-Agent-Toolkit-UI.git
cd NeMo-Agent-Toolkit-UI
npm ci
NEXT_TELEMETRY_DISABLED=1 npm run dev

After running these commands, the agent will be accessible at http://localhost:3000. You can chat with it directly and see not only the answers but also all intermediate reasoning and tool calls. That's an extremely convenient way to inspect the agent's behaviour.

Image by author

You can find the full code of this version on GitHub.

And that's it! We now have a fully functional Happiness Agent with a user-friendly UI, capable of answering nuanced questions and performing calculations based on real data.

Summary

In this article, we explored the NeMo Agent Toolkit (NAT) and its capabilities. Let's wrap things up with a quick recap.

  • NAT is all about building production-ready LLM applications. You can think of it as the glue that holds different pieces together, connecting LLMs, tools, and workflows while giving you options for deployment and observability.
  • What I really liked about NAT is that it delivers on its promises. It doesn't just let you spin up a chat agent; it actually tackles those "day 2" problems that often trip people up, like integrating multiple frameworks, exposing agents as APIs, or keeping an eye on what's happening under the hood.
  • Of course, it's not all good. One of the main pain points I ran into was the boilerplate code. Even with modern code assistants, setting up some components felt a bit heavy compared to other frameworks. Documentation could also be clearer (especially the getting-started guides), and since the community is still small, finding answers online can be tough.

In this article, we focused on building, integrating, and deploying our Happiness Agent. We didn't dive into observability or evaluation, but NAT has some neat features for that as well. We will cover those topics in the next article.

Overall, working with NAT felt like getting a solid toolkit that's designed for the long run. It might take a bit of setup upfront, but once everything is in place, it's really satisfying to see your agent not just answer questions, but reason, calculate, and act in a production-ready workflow.

Thanks for reading. I hope this article was insightful. Remember Einstein's advice: "The important thing is not to stop questioning. Curiosity has its own reason for existing." May your curiosity lead you to your next great insight.

Reference

This article is inspired by the "Nvidia's NeMo Agent Toolkit: Making Agents Reliable" short course from DeepLearning.AI.
