Iterate quicker with Amazon Bedrock AgentCore Runtime direct code deployment

by admin
November 5, 2025
in Artificial Intelligence


Amazon Bedrock AgentCore is an agentic platform for building, deploying, and operating effective agents securely at scale. Amazon Bedrock AgentCore Runtime is a fully managed service of Bedrock AgentCore that provides low-latency serverless environments for deploying agents and tools. It provides session isolation, supports multiple agent frameworks including popular open-source frameworks, and handles multimodal workloads and long-running agents.

Until now, deploying to AgentCore Runtime required that a container definition be supplied: the agent is packaged and uploaded as a container image. Many developers prefer not to worry about Docker expertise and container infrastructure when deploying agents.

In this post, we'll demonstrate how to use direct code deployment (for Python).

Introducing AgentCore Runtime direct code deployment

With the container deployment method, developers create a Dockerfile, build ARM-compatible containers, manage ECR repositories, and upload new containers for code changes. This works well where container DevOps pipelines have already been established to automate deployments.

Direct code deployment removes these container steps, which can significantly improve developer time and productivity.

We'll discuss the strengths of each deployment option to help you choose the right approach for your use case.

With direct code deployment, developers create a zip archive of code and dependencies, upload it to Amazon S3, and configure the bucket in the agent configuration. When using the AgentCore starter toolkit, the toolkit handles dependency detection, packaging, and upload, which provides a much-simplified developer experience. Direct code deployment is also supported through the API.
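The packaging half of that flow can be sketched with standard tooling. The directory, bucket, and key names below are hypothetical, and the final step of pointing the agent configuration at the uploaded object is done through the starter toolkit or the AgentCore API:

```python
import zipfile
from pathlib import Path

def package_code(src_dir: str, zip_path: str) -> str:
    """Zip the agent code and dependencies for direct code deployment."""
    src = Path(src_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for file in src.rglob("*"):
            if file.is_file():
                # Store paths relative to the project root inside the archive
                zf.write(file, file.relative_to(src))
    return zip_path

def upload_package(zip_path: str, bucket: str, key: str) -> None:
    """Upload the archive to S3 (bucket and key are placeholders)."""
    import boto3  # imported lazily so packaging works without AWS credentials
    boto3.client("s3").upload_file(zip_path, bucket, key)
```

This is what the starter toolkit automates for you, along with dependency detection.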

Let's compare the deployment steps at a high level between the two methods:

Container-based deployment

The container-based deployment method involves the following steps:

  • Create a Dockerfile
  • Build an ARM-compatible container
  • Create an ECR repository
  • Upload to ECR
  • Deploy to AgentCore Runtime
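The container steps above center on a Dockerfile. A minimal one for a Python agent might look like the following sketch; the base image, port, and start command are assumptions for illustration, not the toolkit's exact generated output:

```dockerfile
# Assumed ARM64-compatible base image (AgentCore Runtime requires ARM)
FROM --platform=linux/arm64 python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Port the agent server is assumed to listen on
EXPOSE 8080
CMD ["python", "agent.py"]
```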

Direct code deployment

The direct code deployment method involves the following steps:

  • Package your code and dependencies into a zip archive
  • Upload it to S3
  • Configure the bucket in the agent configuration
  • Deploy to AgentCore Runtime

How to use direct code deployment

Let's illustrate how direct code deployment works with an agent created with the Strands Agents SDK, using the AgentCore starter toolkit to deploy the agent.

Prerequisites

Before you begin, make sure you have the following:

  • Any version of Python from 3.10 to 3.13
  • Your preferred package manager installed. In this example, we use the uv package manager.
  • An AWS account for creating and deploying agents
  • Amazon Bedrock model access to Anthropic Claude Sonnet 4.0

Step 1: Initialize your project

Set up a new Python project using the uv package manager, then navigate into the project directory:
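The steps above map to two commands (the project name is just an example):

```shell
# Create a new uv-managed Python project (name is illustrative)
uv init my_agent
cd my_agent
```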

Step 2: Add the dependencies for the project

Install the required Bedrock AgentCore libraries and development tools for your project. In this example, dependencies are added using the pyproject.toml file; alternatively, they can be specified in a requirements.txt file:

uv add bedrock-agentcore strands-agents strands-agents-tools
uv add --dev bedrock-agentcore-starter-toolkit
source .venv/bin/activate
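After these commands, the generated pyproject.toml will contain entries roughly like the following (project name and layout are illustrative; uv's exact output may differ by version):

```toml
[project]
name = "my-agent"
requires-python = ">=3.10"
dependencies = [
    "bedrock-agentcore",
    "strands-agents",
    "strands-agents-tools",
]

[dependency-groups]
dev = [
    "bedrock-agentcore-starter-toolkit",
]
```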

Step 3: Create an agent.py file

Create the main agent implementation file that defines your AI agent's behavior:

from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent, tool
from strands_tools import calculator
from strands.models import BedrockModel
import logging

app = BedrockAgentCoreApp(debug=True)

# Logging setup
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Create a custom tool
@tool
def weather():
    """Get the weather"""
    return "sunny"

model_id = "us.anthropic.claude-sonnet-4-20250514-v1:0"
model = BedrockModel(
    model_id=model_id,
)

agent = Agent(
    model=model,
    tools=[calculator, weather],
    system_prompt="You're a helpful assistant. You can do simple math calculations, and tell the weather."
)

@app.entrypoint
def invoke(payload):
    """Your AI agent function"""
    user_input = payload.get("prompt", "Hello! How can I help you today?")
    logger.info("\n User input: %s", user_input)
    response = agent(user_input)
    logger.info("\n Agent result: %s", response.message)
    return response.message['content'][0]['text']

if __name__ == "__main__":
    app.run()

Step 4: Deploy to AgentCore Runtime

Configure and deploy your agent to the AgentCore Runtime environment:

agentcore configure --entrypoint agent.py --name 

This launches an interactive session where you configure the S3 bucket to upload the zip deployment package to and choose a deployment configuration type (as shown in the following configuration). To opt for direct code deployment, choose option 1 – Code Zip.

Deployment Configuration

Select deployment type:

  1. Code Zip (recommended) – Simple, serverless, no Docker required
  2. Container – For custom runtimes or complex dependencies
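Once configuration is complete, the deployment itself is triggered with the starter toolkit's launch command (shown here without flags; your setup may require additional options):

```shell
agentcore launch
```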

This command creates a zip deployment package, uploads it to the specified S3 bucket, and launches the agent in the AgentCore Runtime environment, making it ready to receive and process requests.

To test the solution, let's ask the agent how the weather is:

agentcore invoke '{"prompt":"How is the weather today?"}'

The first deployment takes roughly 30 seconds to complete, but subsequent updates to the agent benefit from the streamlined direct code deployment process and can take less than half that time, supporting faster iteration cycles during development.

When to choose direct code instead of container-based deployment

Let's look at some of the dimensions along which the direct code and container-based deployment options differ. This will help you choose the option that's right for you:

  • Deployment process: Direct code deploys agents as zip files with no Docker, ECR, or CodeBuild required. Container-based deployment uses Docker and ECR with full Dockerfile control.
  • Deployment time: Although there's not much difference during the first deployment of an agent, subsequent updates to the agent are significantly faster with direct code deployment (from an average of 30 seconds for containers to about 10 seconds for direct code deployment).
  • Artifact storage: Direct code deployment stores the zip package in Amazon S3, whereas container-based deployment stores images in Amazon ECR in your account.
  • Customization: Direct code deployment supports custom dependencies through ZIP-based packaging, whereas the container-based method depends on a Dockerfile.
  • Package size: Direct code deployment limits the package size to 250 MB, whereas container-based packages can be up to 2 GB in size.
  • Language support: Direct code currently supports Python 3.10, 3.11, 3.12, and 3.13. Container-based deployment supports many languages and runtimes.

Our general guidance is:

Container-based deployment is the right choice when your package exceeds 250 MB, you have existing container CI/CD pipelines, or you need highly specialized dependencies and custom packaging requirements. Choose containers if you require multi-language support, custom system dependencies, or direct control over artifact storage and versioning in your account.

Direct code deployment is the right choice when your package is under 250 MB, you use Python 3.10–3.13 with common frameworks like LangGraph, Strands, or CrewAI, and you need rapid prototyping with fast iteration cycles. Choose direct code if your build process is simple without complex dependencies, and you want to remove the Docker/ECR/CodeBuild setup.

A hybrid approach works well for many teams: use direct code for rapid prototyping and experimentation, where fast iteration and simple setup accelerate development, then graduate to containers for production when package size, multi-language requirements, or specialized build processes demand it.
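As a rough summary, the guidance above can be condensed into a toy decision helper; the thresholds come from the limits stated in this post, and real decisions involve more factors:

```python
def recommend_deployment(package_size_mb: float, language: str,
                         needs_custom_system_deps: bool = False) -> str:
    """Toy rule of thumb based on the comparison in this post."""
    # Direct code deployment currently supports Python only
    if language.lower() != "python":
        return "container"
    # Direct code packages are limited to 250 MB; containers allow up to 2 GB
    if package_size_mb > 250:
        return "container"
    # Custom system dependencies need a Dockerfile
    if needs_custom_system_deps:
        return "container"
    return "code-zip"

print(recommend_deployment(40, "python"))   # typical prototype
print(recommend_deployment(400, "python"))  # exceeds the 250 MB zip limit
```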

Conclusion

Amazon Bedrock AgentCore direct code deployment makes iterative agent development cycles even faster, while still benefiting from the enterprise security and scale of deployments. Developers can now rapidly prototype and iterate by deploying their code directly, without having to create a container. To get started with Amazon Bedrock AgentCore direct code deployment, visit the AWS documentation.


About the authors

Chaitra Mathur is a GenAI Specialist Solutions Architect at AWS. She works with customers across industries on building scalable generative AI platforms and operationalizing them. Throughout her career, she has shared her expertise at numerous conferences and has authored several blogs in the machine learning and generative AI domains.

Qingwei Li is a Machine Learning Specialist at Amazon Web Services. He received his Ph.D. in Operations Research after he broke his advisor's research grant account and failed to deliver the Nobel Prize he promised. Currently he helps customers in the financial services and insurance industry build machine learning solutions on AWS. In his spare time, he likes reading and teaching.

Kosti Vasilakakis is a Principal PM at AWS on the Agentic AI team, where he has led the design and development of several Bedrock AgentCore services from the ground up, including Runtime, Browser, Code Interpreter, and Identity. He previously worked on Amazon SageMaker since its early days, launching AI/ML capabilities now used by thousands of companies worldwide. Earlier in his career, Kosti was a data scientist. Outside of work, he builds personal productivity automations, plays tennis, and enjoys life with his wife and kids.

© 2024 automationscribe.com. All rights reserved.

No Result
View All Result
  • Home
  • AI Scribe
  • AI Tools
  • Artificial Intelligence
  • Contact Us

© 2024 automationscribe.com. All rights reserved.