
Building web search-enabled agents with Strands and Exa

by admin
May 12, 2026
in Artificial Intelligence


This post is co-written by Ishan Goswami and Nitya Sridhar from Exa.

If you are building web search-enabled AI agents for research, fact-checking, or competitive intelligence, access to current and reliable information is essential. Most general-purpose search APIs are not designed for agent workflows. They return HTML-heavy pages and short snippets optimized for human browsing, not structured data that an agent can directly consume. As a result, developers often need to build additional layers (custom crawlers, parsers, and ranking logic) to transform this content into something usable within an agent workflow.

The Exa integration for the Strands Agents SDK addresses this gap with an AI-native search and retrieval layer built directly into the tool interface. Exa delivers clean, structured content formatted for direct use in LLM context windows, without requiring post-processing to strip markup or reformat output. Combined with the Strands Agents SDK’s model-driven architecture, where the model decides when to invoke tools and how to use their outputs, agents can draw real-time web knowledge into their reasoning loop.

In practice, your agent accesses this integration through two tools: exa_search, which performs semantic search with support for categories like news, research papers, and repositories, and exa_get_contents, which retrieves full content from selected URLs. In this post, you’ll learn how to set up the Exa integration in Strands Agents, understand the two core tools it exposes, and walk through real-world use cases that show how agents use web search to complete multi-step tasks.

Strands Agents

The Strands Agents SDK is an open source framework from AWS for building AI agents using a model-driven approach. Rather than writing hard-coded workflows that dictate every step, developers provide a model, a system prompt, and a list of tools. The model itself decides what to do next: which tools to call, in what order, and when the task is done.

At the core of Strands Agents is the agent loop. On each iteration, the model receives the full conversation history, including every prior tool call and its result. If the model needs more information, it requests a tool; Strands Agents executes it and feeds the result back. The loop continues until the model produces a final answer. This accumulation of context across iterations is what makes agents capable of tackling multi-step tasks that go beyond what a single LLM call can handle.

The Strands Agents SDK ships with over 40 pre-built tools covering file I/O, shell execution, web search, AWS APIs, memory, code execution, and more. It also supports Model Context Protocol (MCP), so tools exposed by MCP servers are available to an agent without additional integration work. Adding new tools, including the Exa web search tools, follows the same pattern: drop them into the `tools=[]` list and the model learns how to use them from their signatures.
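The loop described above can be sketched in a few lines of plain Python. This is an illustrative mock, not the actual Strands Agents internals: MockModel, the TOOLS registry, and the message format are all invented for the example.

```python
# Simplified sketch of a model-driven agent loop. MockModel and the
# TOOLS registry are hypothetical stand-ins, not Strands internals.

def web_search(query: str) -> str:
    """Stand-in tool; a real agent would call exa_search here."""
    return f"results for: {query}"

TOOLS = {"web_search": web_search}

class MockModel:
    """Fake model: requests one tool call, then produces a final answer."""
    def step(self, history):
        if not any(m["role"] == "tool" for m in history):
            return {"tool": "web_search", "args": {"query": "AI agents"}}
        return {"answer": "Done: " + history[-1]["content"]}

def agent_loop(model, user_message):
    history = [{"role": "user", "content": user_message}]
    while True:
        decision = model.step(history)           # model sees full history
        if "answer" in decision:                 # model decides task is done
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])  # execute tool
        history.append({"role": "tool", "content": result})   # feed result back

print(agent_loop(MockModel(), "latest AI agent trends"))
# prints "Done: results for: AI agents"
```

The key property the sketch preserves is that every tool result is appended to the history before the model's next step, which is what lets later decisions build on earlier ones.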

Exa

Exa is a web-scale search engine built specifically for LLMs and AI agents. It understands the meaning of a query, not just its keywords. A query like “startups building climate solutions” returns actual climate startups, even when those pages never use that exact phrase. The model matches on semantic similarity, not string overlap. Results come back as clean, structured content with no ads or SEO noise, ready for an LLM to consume directly.

Strands Agents and Exa: Integration overview

The Exa integration is available through the strands-agents-tools package. It gives your agent two capabilities: searching the web for relevant content and extracting full-page text from specific URLs. The diagram below visualizes the deep research assistant example, which is discussed in depth later in this post.

Strands Agents Deep Research Workflow

Both are optimized for AI consumption, returning structured content that your agent can reason over directly.

  • exa_search: Search the web using multiple modes, including auto, fast, and deep. Your agent can refine results with filters for category, domain, date, and text content.
  • exa_get_contents: Retrieve full-page content from URLs your agent has discovered, whether from a previous search or from its own reasoning. The tool checks for cached results first to speed up repeated requests. If fresh content is needed, it can automatically fall back to live crawling to retrieve the most up-to-date version of the page.

Searching the web with exa_search

The exa_search tool gives your agent control over web search that goes beyond a basic query string. The tool supports four search modes. The default mode, auto, is the recommended starting point for most use cases.

  • Instant (~200ms) – Designed for real-time applications such as autocomplete, live answers, and voice agents.
  • Fast (~450ms) – Optimized for speed while still accessing Exa’s quality index. Suitable for agentic workflows where your agent makes dozens of search calls.
  • Auto (~1s) [Recommended] – Balanced latency with high-quality results. Recommended for most use cases.
  • Deep (~3-6s) – Runs parallel searches across query variations for maximum coverage. Best for research tasks where completeness matters.

Beyond search modes, exa_search gives your agent fine-grained control over how results are filtered and scoped. You can narrow a search to specific content categories such as news articles, company websites, GitHub repositories, PDFs, people profiles, or financial reports. Category filtering is most effective when your agent already knows what kind of source it needs. For example, filtering to research papers when the query is technical, or to news sources when recency is the priority. You can also request content and summaries along with search results, all in a single call:

agent.tool.exa_search(query="recent advances in AI safety research", num_results=10, summary={"query": "key research areas and findings"})

The response includes titles, URLs, and a synthesized summary of each result focused on the query you specified. Your agent can build a foundational understanding of a topic without reading every page in full.
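To make that concrete, here is a sketch of how an agent might distill such a response into a compact context block for the model. The field names (results, title, url, summary) are assumed for illustration; the real tool output may be shaped differently.

```python
# Hypothetical shape of an exa_search response; the exact field
# names in the real tool output may differ.
response = {
    "results": [
        {"title": "AI Safety Survey", "url": "https://example.com/survey",
         "summary": "Covers alignment and interpretability research."},
        {"title": "Red-Teaming LLMs", "url": "https://example.com/redteam",
         "summary": "Methods for adversarial evaluation."},
    ]
}

# Distill each result to one line: title, source URL, and summary
context = "\n".join(
    f"- {r['title']} ({r['url']}): {r['summary']}" for r in response["results"]
)
print(context)
```

Keeping the summary (rather than full page text) in context is what makes the "build understanding without reading every page" pattern cheap in tokens.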

Extracting content with exa_get_contents

Once your agent has found relevant URLs, whether from a previous search or from its own reasoning, the exa_get_contents tool retrieves the full-page content. You pass it a list of URLs, and it returns the extracted text, ready for the agent to process.

Exa maintains a content cache that serves results instantly for pages it has already crawled. For pages that are not in the cache, or when your agent needs the most current version of a page, the tool supports live crawling. You control this behavior with livecrawl modes. A configurable timeout controls how long to wait for live crawls to complete. You can also control how much text is returned. For example, to retrieve up to 5,000 characters of plain text from a page:

agent.tool.exa_get_contents(urls=["https://example.com/blog-post"], text={"maxCharacters": 5000})

Prerequisites

To follow along with the examples in this post, you need:

  • Python 3.10 or later
  • An AWS account with Amazon Bedrock access
  • An Exa API key
  • The strands-agents and strands-agents-tools packages installed:
    • pip install strands-agents strands-agents-tools

Setup

The Exa tools follow the same pattern as every other tool in the Strands Agents framework, so if you have used other Strands tools, the experience is the same. The Strands Agents SDK includes a library of pre-built tools covering file operations, web search, code execution, AWS services, memory management, and more. The Exa tools are part of this library.

Import them and pass them to the Agent constructor through the `tools` parameter. The agent’s underlying LLM then decides when to call each tool as part of its reasoning loop. Because the integration talks to the Exa REST API directly, you don’t need to install or manage a separate SDK. The only new dependency is the `strands-agents-tools` package.

To use Exa with Strands Agents, follow these steps:

1. Set your Exa API key

Exa requires an API key for authenticated access. Set the EXA_API_KEY environment variable with your key before running your agent. You can obtain a key from the Exa dashboard:

export EXA_API_KEY="your_exa_api_key_here"

2. Import and register the tools

In your agent code, import exa_search and exa_get_contents from strands_tools.exa and include them in the agent’s tool list:

from strands import Agent
from strands_tools.exa import exa_search, exa_get_contents
agent = Agent(tools=[exa_search, exa_get_contents])

3. Invoke your agent

Once the tools are registered, your agent can interleave search and content extraction naturally as part of its reasoning flow:

response = agent("Search for the latest trends in AI agents and provide a concise summary of key developments")

With the agent set up, you can start using the Exa tools for different search scenarios.

Example: Building a Deep Research Agent with Exa

To see how both tools work together, the following example builds a deep research assistant that demonstrates both Exa tools in a multi-step workflow. Given a research question, the agent runs four targeted searches across different source types, extracts full content from the most promising results, and synthesizes everything into a structured research brief. The entire workflow executes within a single agent invocation, with multiple tool calls occurring as part of the reasoning loop.

The key design insight is that different source types require different search parameters, but not different tools. The two Exa tools are reused throughout the workflow with different parameter configurations at each step: category to target news, PDFs, or repositories; date filters for recency; JSON schemas for structured extraction; and live crawling for freshness.

Get started

  1. Sign up for an Exa API key on the Exa dashboard
  2. Clone the sample repository and run the deep research assistant
  3. Modify the system prompt to target your domain: swap category filters, date ranges, and JSON schemas to match your use case

Setting up the agent

The setup takes a model, a system prompt, and the two Exa tools:


from strands import Agent
from strands.models.bedrock import BedrockModel
from strands_tools.exa import exa_search, exa_get_contents

def create_research_agent() -> Agent:
    model = BedrockModel(
        model_id="us.anthropic.claude-sonnet-4-6",
        region_name="us-west-2",
        max_tokens=20000,
    )
    return Agent(
        model=model,
        system_prompt=load_system_prompt(),  # loads the prompt described below
        tools=[exa_search, exa_get_contents],
    )

A system prompt defines the research workflow, guiding the agent through six steps: four targeted searches across different source types, a deep-dive content extraction, and a final synthesis pass. The agent decides when and how to call each tool, how to interpret the results, and when to move to the next step as part of its reasoning loop.

The 6-step research workflow

Each step instructs the agent to call the Exa tools with different parameters tuned for that kind of content.

Step 1: Overview search – A broad sweep using auto mode builds foundational understanding. The system prompt instructs the agent to call `exa_search` with these parameters:

- type: "auto"
- num_results: 5
- text: {"maxCharacters": 2000}
- highlights: {"maxCharacters": 4000}
- summary: {"query": "What are the key concepts, facts, and important details?"}
- subpages: 2
- subpage_target: ["overview", "about", "introduction"]
- max_age_hours: 168

Step 2: News search – The focus narrows to news sources within a 30-day date window. The date boundary is computed in Python and injected into the prompt. The max_age_hours parameter sets the maximum acceptable age (in hours) for cached content.

- category: "news"
- num_results: 5
- start_published_date: 
- text: {"maxCharacters": 1500}
- summary: {"query": "What are the key announcements, developments, and news?"}
- max_age_hours: 24
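A minimal sketch of that 30-day boundary computation, assuming the date is passed as an ISO 8601 timestamp (the variable name mirrors the start_published_date parameter above):

```python
from datetime import datetime, timedelta, timezone

# Compute a 30-day boundary for start_published_date and format it as
# an ISO 8601 UTC timestamp, to be injected into the system prompt.
start_published_date = (
    datetime.now(timezone.utc) - timedelta(days=30)
).strftime("%Y-%m-%dT%H:%M:%SZ")

print(start_published_date)  # e.g. 2025-01-15T09:30:00Z
```

Computing the boundary in Python rather than asking the model to do date arithmetic avoids a common source of subtle errors in agent prompts.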

Step 3: Research papers – For academic depth, the search targets the research paper category with a guided query to extract key findings, methodology, and conclusions as concise excerpts.

- category: "research paper"
- num_results: 5
- text: {"maxCharacters": 2000}
- summary: {
    "query": "Extract the research findings, methodology, and conclusions",
    "schema": {
      "type": "object",
      "properties": {
        "title": {"type": "string", "description": "Paper title"},
        "main_findings": {"type": "string", "description": "Key findings and results"},
        "methodology": {"type": "string", "description": "Research methodology used"},
        "conclusions": {"type": "string", "description": "Main conclusions"}
      },
      "required": ["main_findings", "conclusions"]
    }
  }
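As a quick illustration of what the schema's `required` list buys you, a consumer of the structured summary can verify those fields before trusting the output. This check is purely illustrative; it is not how Exa enforces the schema internally.

```python
# Minimal check that a structured summary contains the schema's
# required fields (illustrative only; not Exa's own validation).
REQUIRED = ["main_findings", "conclusions"]

def missing_fields(summary: dict) -> list:
    """Return the required fields that are absent or empty."""
    return [f for f in REQUIRED if not summary.get(f)]

good = {"main_findings": "X improves Y", "conclusions": "Use X"}
bad = {"title": "Paper with no extracted findings"}
print(missing_fields(good))  # []
print(missing_fields(bad))   # ['main_findings', 'conclusions']
```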

Step 4: GitHub projects – Open source implementations surface through the github category.

- category: "github"
- num_results: 5
- highlights: {"maxCharacters": 4000}

Step 5: Deep dive – The agent switches from discovery to extraction. The two or three most promising URLs from earlier steps get their full content pulled with exa_get_contents. This step uses forced live crawling ("always" instead of "fallback") for fresh content, a higher character limit (4000) for comprehensive extraction, and subpage crawling to follow links to references, citations, and methodology pages.

- urls: <2-3 most valuable URLs from earlier searches>
- text: {"maxCharacters": 4000}
- highlights: {"maxCharacters": 4000}
- summary: {"query": "Extract all important details, insights, and actionable information"}
- subpages: 3
- subpage_target: ["references", "citations", "bibliography", "methodology"]
- max_age_hours: 0
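The cache-versus-livecrawl behavior this step relies on can be sketched with stub functions. CACHE, fetch_live, and get_contents below are hypothetical stand-ins, not the Exa implementation; the 168-hour default is borrowed from the Step 1 parameters above.

```python
# Sketch of cache-first retrieval with livecrawl fallback/forcing.
# All names here are stand-ins, not Exa's actual internals.
CACHE = {"https://example.com/a": ("cached text", 48)}  # url -> (text, age_hours)

def fetch_live(url: str) -> str:
    """Stand-in for a live crawl of the page."""
    return f"live text for {url}"

def get_contents(url: str, livecrawl: str = "fallback", max_age_hours: int = 168) -> str:
    cached = CACHE.get(url)
    if livecrawl != "always" and cached and cached[1] <= max_age_hours:
        return cached[0]   # cached copy is fresh enough: serve it
    return fetch_live(url) # forced crawl, cache miss, or stale: go live

print(get_contents("https://example.com/a"))                      # cached text
print(get_contents("https://example.com/a", livecrawl="always"))  # forced live
print(get_contents("https://example.com/a", max_age_hours=0))     # stale -> live
```

Setting max_age_hours to 0, as the deep-dive step does, makes every cached copy count as stale, which is why this step always sees the current version of the page.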

Step 6: Synthesis – No tools are called in this final step. Everything gathered from the earlier steps feeds into a structured research brief with sections for an executive summary, topic overview, recent developments, key research and papers, tools and implementations, deep dive insights, and a complete list of sources with URLs.

The multi-step workflow provides several advantages over a single search call or a basic search API wrapper:

  • Grounded answers – Every claim in the final brief traces back to a source URL, reducing hallucination.
  • Efficient token usage – Summaries at search and extraction time keep the content concise, so the LLM works with distilled knowledge rather than raw page dumps.
  • Autonomous depth – The agent iterates across source types (news, papers, code repositories, full pages) without human guidance, covering ground that a single search couldn’t.

Tracing with Amazon Bedrock AgentCore Observability

A 6-step pipeline with multiple tool calls is hard to debug without structured tracing. Amazon Bedrock AgentCore Observability, built on OpenTelemetry, instruments the entire agent run with minimal code changes. Each tool call and LLM invocation becomes a span with parent-child relationships. In the CloudWatch GenAI Observability Dashboard, each research run appears as a full trace. You can see the average span latency across the different spans in the agent.

Agent Metrics — Errors and Latency Dashboard

You can drill into individual spans to inspect:

  • Tool call parameters per exa_search or exa_get_contents invocation, verifying the agent used the correct category, date range, and content limits at each step

Trace Detail — Tool Call JSON Payload

  • Latency per step, identifying whether the news search or the deep dive extraction is the bottleneck
  • Token consumption by LLM invocation, showing token distribution across search steps versus synthesis

Trace Detail — System Prompt and Agent Initialization

Agentic workflows are non-deterministic. The same query can produce different search results, different URL selections for the deep dive, and different synthesis outputs. Trace data turns debugging from guesswork into inspection. An example of the final response and the research brief is shown in the screenshot below.

Trace Detail — Final Synthesis Output

Best practices for using Exa tools

As you integrate Exa tools into your agents, a few patterns can help you optimize for quality, latency, and cost. The following recommendations will help you get the most out of the Exa tools in your agent workflows. For more on search types, content modes, and advanced filtering, see the Exa best practices documentation.

  • Start with auto and adjust from there: The auto search type handles most queries well. Switch to deep for research tasks where missing a relevant source is costly, and to fast or instant when the agent makes many sequential searches and cumulative latency matters more than per-query completeness.
  • Control content size to manage token budgets: Set maxCharacters on the “highlights” field (where the default maxCharacters is 4,000).
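Character caps translate roughly into token budgets. A rough sketch of that budgeting, assuming the common four-characters-per-token heuristic (an approximation, not an exact count):

```python
# Rough token budgeting: cap extracted text by characters, assuming
# ~4 characters per token (a common heuristic, not an exact count).
def truncate_for_budget(text: str, max_tokens: int, chars_per_token: int = 4) -> str:
    limit = max_tokens * chars_per_token
    return text if len(text) <= limit else text[:limit]

page = "x" * 10_000
print(len(truncate_for_budget(page, max_tokens=1000)))  # 4000
```

Viewed this way, the maxCharacters settings in the workflow above (1500-4000 per result) keep each result to a few hundred to a thousand tokens.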

Clean up resources

This walkthrough doesn’t create any persistent AWS resources. If you no longer need your Exa API key, revoke it from the Exa dashboard.

Conclusion

The Strands Agents SDK and Exa provide a path to building AI agents that are grounded in current, accurate web information. Exa’s search delivers semantic understanding, category filtering narrows results to the right content type, AI summaries with JSON schemas return exactly the structure your agent needs, and live crawling provides freshness. The Strands Agents integration exposes these capabilities through two tools and a few lines of setup code.

As the deep research assistant demonstrates, you can build a multi-step research agent that searches across news, academic papers, and code repositories, extracts full page content from the best results, and synthesizes everything into a grounded brief, all driven by a single system prompt. The agent targets source types with category filters, controls recency with date ranges, shapes output with JSON schemas, and manages freshness with live crawling. You can test search, contents, and answer endpoints directly from the Exa dashboard before wiring them into your agent. The entire workflow is traceable through Amazon Bedrock AgentCore Observability, turning non-deterministic agent behavior into inspectable, debuggable spans. The pattern applies beyond research to competitive intelligence, technical support, market analysis, and other domains where agents need real-time web information.

Try the deep research assistant sample with your own research questions. Get your Exa API key to start building, explore the Amazon Bedrock documentation to learn more about the underlying platform, and share your feedback on the Strands Agents GitHub repository.


About the authors

Madhu Samhitha Vangara is a Worldwide GenAI Specialist Solutions Architect at AWS, focusing on Agentic AI GTM for Amazon Bedrock AgentCore and Strands Agents. She brings a deep understanding of enterprise business value, with previous industry experience at Juniper Networks, VMware, Barclays, and IGCAR. She translates emerging AI capabilities and research into measurable outcomes for customers. She is a speaker at AI conferences like AWS re:Invent, NVIDIA GTC, AI Summit, and others, where she specializes in multi-agent systems, agent observability, LLMs, partner ecosystems, and production-grade Agentic AI. She holds a master’s in Computer Science from UMass Amherst. Outside work, she’s a trained Indian classical dancer and an art enthusiast.

Manoj Selvakumar is a GenAI Specialist Solutions Architect at AWS specializing in agentic AI systems. He helps startups and enterprises architect production AI agents using the Strands Agents SDK and Amazon Bedrock AgentCore, with expertise in multi-agent orchestration, context engineering, and inference optimization. His work with customers spans long-running task patterns, memory management, and production scaling across distributed deployments. He drives technical adoption and ecosystem growth for Strands Agents through open-source samples, partner integrations, and community enablement.

Asheesh Goja is CTO for superintelligence customers at Lambda. Previously, he was a Principal Gen AI Solutions Architect at AWS. Earlier, he worked at Cisco and UPS, leading initiatives to accelerate adoption of emerging technologies. His expertise spans ideation, co-design, incubation, and enterprise product development. He holds a broad portfolio of hardware and software patents, including a real-time C++ DSL, IoT devices, and Computer Vision and Edge AI prototypes. An active contributor to Generative AI and Edge AI, he shares insights through tech blogs and as a speaker at industry conferences and forums.

Mani Khanuja is a Technical AI Leader and Principal Generative AI Solutions Architect at AWS with 20+ years of experience building AI platforms from scratch and driving enterprise AI strategy. She works directly with customers to build their Generative AI strategy, from architecture to production deployment at scale. Her current focus is scaling autonomous AI agents safely and efficiently: developing stateful, memory-driven agents with personalization, advancing AI governance frameworks, and translating cutting-edge research into real-world enterprise systems. She is the author of Applied Machine Learning and High-Performance Computing on AWS. She is also a recognized technical speaker at re:Invent, Grace Hopper Celebration, AI Engineer Summit, and AWS Summits worldwide. She resides in Seal Beach, California, where she stays active with long runs along the coast.

Ishan Goswami is the Founding DevRel Engineer at Exa, where he leads developer relations. He builds and ships integrations, open-source demo apps, MCP servers, and plugins that make Exa easy to use within any AI app or workflow. Before Exa, Ishan co-founded a text-to-video startup. He has built apps that have been used by millions of people and open source projects with thousands of GitHub stars.

Nitya Sridhar is the Head of Marketing at Exa, where she leads product launches, technical blogs, growth campaigns, and much more. She works closely with engineering and GTM to bring Exa to developers and enterprises around the world, with a focus on clear technical storytelling and turning new product features into stories the community can use.
