This post is co-written with James Luo from BGL.
Data analysis is emerging as a high-impact use case for AI agents. According to Anthropic’s 2026 State of AI Agents Report, 60% of organizations rank data analysis and report generation as their most impactful agentic AI applications, and 65% of enterprises cite it as a top priority. In practice, businesses face two common challenges:
- Business users without technical knowledge rely on data teams for queries, which is time-consuming and creates a bottleneck.
- Traditional text-to-SQL solutions don’t provide consistent and accurate results.
Like many other businesses, BGL faced similar challenges with its data analysis and reporting use cases. BGL is a leading provider of self-managed superannuation fund (SMSF) administration solutions that help individuals manage the complex compliance and reporting of their own or a client’s retirement savings, serving over 12,700 businesses across 15 countries. BGL’s solution processes complex compliance and financial data through over 400 analytics tables, each representing a specific business domain, such as aggregated customer feedback, investment performance, compliance monitoring, and financial reporting. BGL’s customers and employees need to find insights in that data, for example: Which products had the most negative feedback last quarter? or Show me investment trends for high-net-worth accounts. Working with Amazon Web Services (AWS), BGL built an AI agent using Claude Agent SDK hosted on Amazon Bedrock AgentCore. With the AI agent, business users can retrieve analytic insights through natural language while aligning with the security and compliance requirements of financial services, including session isolation and identity-based access controls.
In this blog post, we explore how BGL built its production-ready AI agent using Claude Agent SDK and Amazon Bedrock AgentCore. We cover three key aspects of BGL’s implementation:
- Why building a strong data foundation is essential for reliable AI agent-based text-to-SQL solutions
- How BGL designed its AI agent using Claude Agent SDK for code execution, context management, and domain-specific expertise
- How BGL used AgentCore to provide stateful execution sessions in production for a more secure, scalable AI agent
Establishing strong data foundations for an AI agent-based text-to-SQL solution
When engineering teams implement an AI agent for analytics use cases, a common anti-pattern is to have the agent handle everything, including understanding database schemas, transforming complex datasets, working out business logic for analyses, and interpreting results. An agent set up this way is likely to produce inconsistent results and fail by joining tables incorrectly, missing edge cases, or producing incorrect aggregations.
BGL used its existing mature big data solution, powered by Amazon Athena and dbt Labs, to process and transform terabytes of raw data across various business data sources. The extract, transform, and load (ETL) process builds analytic tables, and each table answers a specific class of business questions. These tables are aggregated, denormalized datasets (with metrics and summaries) that serve as a business-ready single source of truth for business intelligence (BI) tools, AI agents, and applications. For details on how to build a serverless data transformation architecture with Athena and dbt, see How BMW Group built a serverless terabyte-scale data transformation architecture with dbt and Amazon Athena.
The AI agent’s role is not to perform complex data transformation; that work stays in the data system. Instead, the agent focuses on interpreting the user’s natural language questions and generating SQL SELECT queries against well-structured analytic tables. When needed, the AI agent writes Python scripts to further process results and generate visualizations. This separation of concerns significantly reduces the risk of hallucination and offers several key benefits:
- Consistency: The data system handles complex business logic in a more deterministic way: joins, aggregations, and business rules are validated by the data team ahead of time. The AI agent’s task becomes simple: interpret questions and generate basic SELECT queries against these tables.
- Performance: Analytic tables are pre-aggregated and optimized with proper indexes. The agent runs basic queries rather than complex joins across raw tables, resulting in faster response times even for large datasets.
- Maintainability and governance: Business logic resides in the data system, not in the AI’s context window. This helps ensure the AI agent relies on the same single source of truth as other consumers, such as BI tools. If a business rule changes, the data team updates the data transformation logic in dbt, and the AI agent automatically consumes the updated analytic tables that reflect those changes.
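This division of labor can be sketched in a few lines. The snippet below uses an in-memory SQLite table as a stand-in for an Athena analytic table built by dbt; the table name, column names, and sample data are hypothetical, not BGL’s actual schema.

```python
import sqlite3

# Stand-in for a pre-aggregated analytic table; in BGL's stack this
# would be an Athena table built by dbt. Names and values are invented.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE customer_feedback_summary (
           product TEXT, quarter TEXT, negative_count INTEGER)"""
)
conn.executemany(
    "INSERT INTO customer_feedback_summary VALUES (?, ?, ?)",
    [("CAS 360", "2025-Q4", 12), ("Simple Fund 360", "2025-Q4", 31)],
)

# Because joins and business rules were applied in the ETL, the agent's
# job reduces to a basic SELECT with ranking logic only.
rows = conn.execute(
    """SELECT product, negative_count
       FROM customer_feedback_summary
       WHERE quarter = '2025-Q4'
       ORDER BY negative_count DESC
       LIMIT 1"""
).fetchall()
print(rows)  # the product with the most negative feedback
```

The query the agent emits contains no joins and no business rules; if the definition of “negative feedback” changes, only the dbt model changes, not the agent.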
“Many people assume the AI agent is so powerful that they can skip building the data platform; they want the agent to do everything. But you can’t achieve consistent and accurate results that way. Each layer should solve complexity at the appropriate level.”
– James Luo, BGL Head of Data and AI
How BGL builds AI agents using Claude Agent SDK with Amazon Bedrock
BGL’s development team has been using Claude Code powered by Amazon Bedrock as its AI coding assistant. This integration uses temporary, session-based access to mitigate credential exposure, and integrates with existing identity providers to align with financial services compliance requirements. For integration details, see Guidance for Claude Code with Amazon Bedrock.
Through its daily use of Claude Code, BGL recognized that its core capabilities extend beyond coding: the ability to reason through complex problems, write and execute code, and interact with files and systems autonomously. Claude Agent SDK packages these same agentic capabilities into Python and TypeScript SDKs, so developers can build custom AI agents on top of Claude Code. For BGL, this meant they could build an analytics AI agent with:
- Code execution: The agent writes and runs Python code to process datasets returned from analytic tables and generate visualizations
- Automated context management: Long-running sessions don’t overwhelm token limits
- Sandboxed execution: Production-grade isolation and permission controls
- Modular memory and knowledge: A CLAUDE.md file for project context and Agent Skills for product-line domain-specific expertise
Why code execution matters for data analytics
Analytics queries often return thousands of rows, sometimes megabytes of data. Standard tool-use, function calling, and Model Context Protocol (MCP) patterns typically pass retrieved data directly into the context window, which quickly hits model context window limits. BGL implemented a different approach: the agent writes SQL to query Athena, then writes Python code to process the resulting CSV files directly in its file system. This lets the agent handle large result sets, perform complex aggregations, and generate charts without reaching context window limits. You can learn more about these patterns in Code execution with MCP: Building more efficient agents.
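A minimal sketch of this pattern using only the standard library: the rows stay in a file-like object standing in for a CSV on disk, and only a compact summary would re-enter the model’s context. The CSV contents are invented for illustration.

```python
import collections
import csv
import io

# Hypothetical Athena result file. In production the agent downloads
# the CSV from S3 to its sandbox file system instead of pasting rows
# into the model's context window.
raw_csv = io.StringIO(
    "product,sentiment\n"
    "CAS 360,negative\n"
    "Simple Fund 360,negative\n"
    "Simple Fund 360,positive\n"
    "Simple Fund 360,negative\n"
)

# The agent writes code like this to reduce thousands of rows to a few
# summary lines; only the summary returns to the conversation.
counts = collections.Counter(
    row["product"]
    for row in csv.DictReader(raw_csv)
    if row["sentiment"] == "negative"
)
summary = counts.most_common()
print(summary)
```

Whether the file holds four rows or four million, the tokens that reach the model are bounded by the size of `summary`, not the size of the result set.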
Modular knowledge architecture
To handle BGL’s diverse product lines and complex domain knowledge, the implementation uses a modular approach with two key configuration types that work together seamlessly.
CLAUDE.md (project context)
The CLAUDE.md file provides the agent with global context: the project structure, environment configuration (test, production, and so on), and, critically, how to execute SQL queries. It defines which folders store intermediate results and final outputs, making sure files land in a defined path that users can access. The following diagram shows the structure of a CLAUDE.md file:

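As an illustrative sketch of that structure, a CLAUDE.md for this kind of analytics agent might contain sections like the following. The folder names and environment values are hypothetical, not BGL’s actual configuration.

```markdown
# Project context (hypothetical example)

## Environments
- test: Athena workgroup `analytics-test`
- production: Athena workgroup `analytics-prod`

## Running SQL
- Only SELECT statements are allowed.
- Save raw query results to `./workspace/raw/`.
- Save final outputs (charts, refined CSVs) to `./workspace/output/`.

## Conventions
- Always write results to disk; never paste large result sets into chat.
```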
SKILL.md (product domain expertise)
BGL organizes its agent’s domain knowledge by product line using SKILL.md configuration files. Each skill acts as a specialized data analyst for a specific product. For example, the BGL CAS 360 product has a skill called CAS 360 Data Analyst agent, which handles company and trust administration with ASIC compliance alignment, while BGL’s Simple Fund 360 product has a skill called Simple Fund 360 Data Analyst agent, which is equipped with SMSF administration and compliance-related domain skills. A SKILL.md file defines three things:
- When to trigger: What types of questions should activate this skill
- Which tables to use or map: References to the relevant analytic tables in the data folder (as shown in the preceding figure)
- How to handle complex scenarios: Step-by-step guidance for multi-table queries or specific business questions, if required
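To make these three elements concrete, a SKILL.md along these lines might look like the following. The skill name, table paths, and guidance text are hypothetical, not BGL’s actual configuration.

```markdown
---
name: sf360-data-analyst
description: Data analyst for Simple Fund 360. Use for SMSF
  administration, compliance, and fund-performance questions.
---

# Simple Fund 360 Data Analyst

## When to trigger
Questions about SMSF funds, member balances, or compliance status.

## Tables
- `data/sf360/fund_performance_summary.md`: per-fund metrics
- `data/sf360/compliance_status.md`: breach and lodgment summaries

## Complex scenarios
For multi-fund comparisons, query each summary table separately,
then join the downloaded CSVs in Python rather than in SQL.
```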
By using SKILL.md files, the agent can dynamically discover and load the right skill to gain domain-specific expertise for the task at hand.
- Unified context: When a skill is triggered, Claude Agent SDK dynamically merges its specialized instructions with the global CLAUDE.md file into a single prompt. This lets the agent apply project-wide standards (for example, always save to disk) while using domain-specific knowledge (such as mapping user inquiries to a group of tables).
- Progressive discovery: Not all skills need to be loaded into the context window at once. The agent first reads the query to determine which skill should be triggered, loads that skill’s body and references to learn which analytic tables’ metadata is required, and then explores the corresponding data folders. This keeps context usage efficient while providing comprehensive coverage.
- Iterative refinement: If the AI agent can’t handle a business question because of missing domain knowledge, the team gathers feedback from users, identifies the gaps, and adds new knowledge to existing skills through a human-in-the-loop process, so skills are updated and refined iteratively.
![This technical architecture diagram illustrates an Agent Virtual Machine system designed for AI automation and skill management. The diagram is organized into two main sections: At the top level, the system provides two scripting execution environments: Bash for shell command execution and Python for running Python scripts. These environments enable the agent to perform various computational tasks. The lower section displays the file system architecture, represented by a light blue container. Within this file system, skills are organized using a standardized directory structure following the pattern "skills/[skillname]360/". Three specific skill modules are shown: skills/sf360/ containing a SKILL.md documentation file and a references subdirectory skills/cas360/ containing a SKILL.md documentation file and a references subdirectory skills/smartdocs360/ containing a SKILL.md documentation file and a references subdirectory An ellipsis notation indicates additional skill directories follow the same organizational pattern. Each skill module maintains consistent structure with documentation (SKILL.md) and supporting reference materials stored in dedicated subdirectories. This modular architecture enables the AI agent system to access, execute, and manage multiple capabilities programmatically, with each skill packaged alongside its documentation and resources for efficient automation workflows.](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2026/01/30/ML-20204-image-2.jpg)
As shown in the preceding figure, agent skills are organized per product line. Each product folder contains a SKILL.md definition file and a references directory with additional domain knowledge and supporting materials that the agent loads on demand.
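The progressive-discovery idea behind this layout can be sketched as follows. Claude Agent SDK handles this natively; this hypothetical snippet only illustrates the two-pass approach of listing skill names cheaply, then loading a SKILL.md body on demand.

```python
import pathlib
import tempfile

def discover_skills(root: pathlib.Path) -> list[str]:
    # Cheap pass: only the directory names would enter the context window.
    return sorted(p.parent.name for p in root.glob("*/SKILL.md"))

def load_skill(root: pathlib.Path, name: str) -> str:
    # Expensive pass: read the full skill body only when a query matches.
    return (root / name / "SKILL.md").read_text()

# Build a throwaway skills/ tree mirroring the figure's layout.
with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp) / "skills"
    for name in ("sf360", "cas360"):
        (root / name).mkdir(parents=True)
        (root / name / "SKILL.md").write_text(f"# {name} data analyst\n")

    names = discover_skills(root)
    print(names)
    body = load_skill(root, "sf360")
    print(body.splitlines()[0])
```

The references directory follows the same on-demand logic: its files are read only after the skill body points the agent at them.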
For details about Anthropic Agent Skills, see the Anthropic blog post, Equipping agents for the real world with Agent Skills.
High-level solution architecture
To deliver a more secure and scalable text-to-SQL experience, BGL uses Amazon Bedrock AgentCore to host Claude Agent SDK while keeping data transformation in the existing big data solution.

The preceding figure illustrates the high-level architecture and workflow. The analytic tables are pre-built daily using Athena and dbt, and serve as the single source of truth. A typical user interaction flows through the following stages:
- User request: A user asks a business question in Slack (for example, Which products had the most negative feedback last quarter?).
- Schema discovery and SQL generation: The agent identifies relevant tables using skills and writes SQL queries.
- SQL security validation: To help prevent unintended data modification, a security layer allows only SELECT queries and blocks DELETE, UPDATE, and DROP operations.
- Query execution: Athena executes the query and stores results in Amazon Simple Storage Service (Amazon S3).
- Result download: The agent downloads the resulting CSV file to the file system on AgentCore, completely bypassing the context window to avoid token limits.
- Analysis and visualization: The agent writes Python code to analyze the CSV file and generate visualizations or refined datasets, depending on the business question.
- Response delivery: Final insights and visualizations are formatted and returned to the user in Slack.
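The SQL security validation stage can be sketched as a simple allowlist check. This is a hypothetical illustration, not BGL’s implementation; a production validator would lean on a real SQL parser rather than regular expressions.

```python
import re

# Keywords that indicate data modification or DDL; reject any of them.
BLOCKED = re.compile(
    r"\b(DELETE|UPDATE|DROP|INSERT|ALTER|CREATE|TRUNCATE)\b",
    re.IGNORECASE,
)

def is_safe_query(sql: str) -> bool:
    """Allow only single-statement SELECT (or WITH ... SELECT) queries."""
    statement = sql.strip().rstrip(";")
    if ";" in statement:  # reject multi-statement payloads
        return False
    if not statement.upper().startswith(("SELECT", "WITH")):
        return False
    return not BLOCKED.search(statement)

print(is_safe_query("SELECT product FROM feedback_summary"))  # True
print(is_safe_query("DROP TABLE feedback_summary"))           # False
print(is_safe_query("SELECT 1; DELETE FROM t"))               # False
```

Running the guard before the query reaches Athena means a malformed or adversarial prompt can, at worst, produce a rejected query rather than a data change.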
Why use Amazon Bedrock AgentCore to host Claude Agent SDK
Deploying an AI agent that executes arbitrary Python code raises significant infrastructure considerations. For instance, you need isolation to help ensure there’s no cross-session access to data or credentials. Amazon Bedrock AgentCore provides fully managed, stateful execution sessions; each session has its own isolated microVM with separate CPU, memory, and file system. When a session ends, the microVM terminates entirely and sanitizes memory, helping ensure no remnants persist into future sessions. BGL found the service especially helpful for the following reasons:
- Stateful execution sessions: AgentCore maintains session state for up to 8 hours. Users can have ongoing conversations with the agent, referring back to previous queries without losing context.
- Framework flexibility: It’s framework-agnostic, supporting deployment of AI agents built with Strands Agents SDK, Claude Agent SDK, LangGraph, CrewAI, and others with a few lines of code.
- Aligned with security best practices: It provides session isolation, VPC support, and AWS Identity and Access Management (IAM) or OAuth-based identity to facilitate governed, compliance-aligned agent operations at scale.
- System integration: This is a forward-looking consideration.
“There’s Gateway, Memory, Browser tools, a whole ecosystem built around it. I know AWS is investing in this direction, so everything we build now can integrate with these services in the future.”
– James Luo, BGL Head of Data and AI
BGL is already planning to integrate AgentCore Memory for storing user preferences and query patterns.
Results and impact
For BGL’s more than 200 employees, this represents a significant shift in how they extract business intelligence. Product managers can now validate hypotheses instantly without waiting on the data team. Compliance teams can spot risk trends without learning SQL. Customer success managers can pull account-specific analytics in real time during client calls. This democratization of data access helps transform analytics from a bottleneck into a competitive advantage, enabling faster decision-making across the organization while freeing the data team to focus on strategic initiatives rather than one-off query requests.
Conclusion and key takeaways
BGL’s journey demonstrates how combining a strong data foundation with agentic AI can democratize business intelligence. By using Amazon Bedrock AgentCore and the Claude Agent SDK, BGL built a more secure and scalable AI agent that empowers employees to tap into their data to answer business questions. Here are some key takeaways:
- Invest in a strong data foundation: Accuracy starts with a strong data foundation. By using the data system and data pipeline to handle complex business logic (joins and aggregations), the agent can focus on basic, reliable logic.
- Organize knowledge by domain: Use Agent Skills to encapsulate domain-specific expertise (for example, Tax Law or Investment Performance). This keeps the context window clean and manageable. Additionally, establish a feedback loop: continuously monitor user queries to identify gaps and iteratively update these skills.
- Use code execution for data processing: Avoid having an agent process large datasets inside the large language model (LLM) context. Instead, instruct the agent to write and execute code to filter, aggregate, and visualize data.
- Choose stateful, session-based infrastructure to host the agent: Conversational analytics requires persistent context. Amazon Bedrock AgentCore simplifies this by providing built-in state persistence (up to 8-hour sessions), alleviating the need to build custom state-handling layers on top of stateless compute.
If you’re ready to build similar capabilities for your organization, get started by exploring the Claude Agent SDK and a short demo of deploying Claude Agent SDK on Amazon Bedrock AgentCore Runtime. If you have a similar use case or need help designing your architecture, reach out to your AWS account team.
About the authors
Dustin Liu is a Solutions Architect at AWS, focused on supporting financial services and insurance (FSI) startups and SaaS companies. He has a diverse background spanning data engineering, data science, and machine learning, and he is passionate about leveraging AI/ML to drive innovation and business transformation.
Melanie Li, PhD, is a Senior Generative AI Specialist Solutions Architect at AWS based in Sydney, Australia, where her focus is on working with customers to build solutions leveraging state-of-the-art AI and machine learning tools. She has been actively involved in multiple generative AI initiatives across APJ, harnessing the power of large language models (LLMs). Prior to joining AWS, Dr. Li held data science roles in the financial and retail industries.
Frank Tan is a Senior Solutions Architect at AWS with a special interest in applied AI. Coming from a product development background, he is driven to bridge technology and business success.
James Luo is Head of Data & AI at BGL Corporate Solutions, a world-leading provider of compliance software for accountants and financial professionals. Since joining BGL in 2008, James has progressed from developer to architect to his current leadership role, spearheading the Data Platform and Roni AI Agent initiatives. In 2015, he formed BGL’s BigData team, implementing the first deep learning model in the SMSF industry (2017), which now processes 200+ million transactions annually. He has spoken at Big Data & AI World and AWS Summit, and BGL’s AI work has been featured in multiple AWS case studies.
Dr. James Bland is a technology leader with 30+ years driving AI transformation at scale. He holds a PhD in Computer Science with a machine learning focus and leads strategic AI initiatives at AWS, enabling enterprises to adopt AI-powered development lifecycles and agentic capabilities. Dr. Bland spearheaded the AI-SDLC initiative, authored comprehensive guides on generative AI in the SDLC, and helps enterprises architect production-scale AI solutions that fundamentally transform how organizations operate in an AI-first world.


