Persist session state with filesystem configuration and execute shell commands

by admin
April 6, 2026
in Artificial Intelligence
AI agents have evolved significantly beyond chat. They can write code, persist filesystem state, execute shell commands, and manage state throughout the filesystem. As agentic coding assistants and development workflows have matured, the filesystem has become agents' primary working memory, extending their capabilities beyond the context window. This shift creates two challenges that every team building production agents runs into:

  • The filesystem is ephemeral. When your agent's session stops, everything it created (installed dependencies, generated code, local git history) disappears.
  • When your workflow needs a deterministic operation like npm test or git push, you're forced to route it through the large language model (LLM) or build custom tooling outside the runtime. Neither option is good.

Amazon Bedrock AgentCore Runtime now addresses both challenges with two capabilities: managed session storage for persistent agent filesystem state (public preview) and execute command (InvokeAgentRuntimeCommand) for running shell commands directly inside the microVM associated with each active agent session. Each is useful on its own. Together, they unlock workflows that weren't possible before.

In this post, we walk through how to use managed session storage to persist your agent's filesystem state and how to execute shell commands directly in your agent's environment.

Inside an AgentCore Runtime session

AgentCore Runtime runs each session in a dedicated microVM with isolated resources, including its own kernel, memory, and filesystem. This architecture provides strong security boundaries, but it also means that every session boots into a clean filesystem. When the microVM terminates, whether through an explicit stop or an idle timeout, everything the agent created disappears.

Consider what that means in practice. Your coding agent spends twenty minutes scaffolding a project: setting up directory structures, installing dependencies, generating boilerplate code, configuring build tooling. You step away for lunch, and when you come back and invoke the same session, the agent starts from scratch. Every package re-installed, every file re-generated. Twenty minutes of compute burned before the agent can do useful work again. You could address this by writing checkpoint logic that uploads files to Amazon Simple Storage Service (Amazon S3) before stopping a session and downloads them on resume, or by keeping sessions alive to avoid losing state. These workarounds can function, but they don't address the constraint at the filesystem level, and they add complexity to your agent code.
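To make the cost of that workaround concrete, here is a minimal sketch (our own illustration, not AgentCore code) of the checkpoint logic a team would otherwise have to maintain: archive the workspace before stopping and unpack it on resume. The S3 upload and download are elided, and the helper names are hypothetical.

```python
import os
import tarfile

def checkpoint_workspace(workspace_dir: str, archive_path: str) -> int:
    """Archive the workspace so it can be uploaded (e.g., to S3) before
    the session stops. Returns the number of files archived."""
    count = 0
    with tarfile.open(archive_path, 'w:gz') as tar:
        for root, _dirs, files in os.walk(workspace_dir):
            for name in files:
                full = os.path.join(root, name)
                tar.add(full, arcname=os.path.relpath(full, workspace_dir))
                count += 1
    return count

def restore_workspace(archive_path: str, workspace_dir: str) -> None:
    """Unpack a checkpoint archive into the workspace on session resume."""
    with tarfile.open(archive_path, 'r:gz') as tar:
        tar.extractall(workspace_dir)
```

Multiply this by retry handling, partial-failure cleanup, and credential plumbing, and it becomes clear why pushing persistence down to the filesystem level is preferable.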

The same friction exists for deterministic operations. When the agent finishes a fix and you need to run tests, routing the command through the LLM as a tool call adds token cost, latency, and non-determinism to a predictable operation. The alternative is to build separate orchestration logic outside the runtime, which requires a way to reach the agent's filesystem, adding complexity.

Managed session storage (public preview): state that survives

The first challenge, ephemeral filesystems, is addressed by managed session storage. It gives your agent a persistent directory that survives stop/resume cycles. Persistence is built into the runtime and configured at agent creation; everything written to that directory survives even when the compute environment is replaced.

Configuring persistent storage

To configure persistent storage, add sessionStorage to your agent runtime's filesystemConfigurations:

aws bedrock-agentcore-control create-agent-runtime \
  --agent-runtime-name "coding-agent" \
  --role-arn "arn:aws:iam::111122223333:role/AgentExecutionRole" \
  --agent-runtime-artifact '{"containerConfiguration": {
    "containerUri": "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-agent:latest"
  }}' \
  --filesystem-configurations '[{
    "sessionStorage": {
      "mountPath": "/mnt/workspace"
    }
  }]'

Or using the AWS SDK for Python (Boto3):

import boto3

# Use the control plane client for creating and managing runtimes
control_client = boto3.client('bedrock-agentcore-control', region_name='us-west-2')

response = control_client.create_agent_runtime(
    agentRuntimeName='coding-agent',
    agentRuntimeArtifact={
        'containerConfiguration': {
            'containerUri': '123456789012.dkr.ecr.us-west-2.amazonaws.com/my-agent:latest'
        }
    },
    roleArn='arn:aws:iam::111122223333:role/AgentExecutionRole',
    protocolConfiguration={
        'serverProtocol': 'HTTP'
    },
    networkConfiguration={
        'networkMode': 'PUBLIC'
    },
    filesystemConfigurations=[{
        'sessionStorage': {
            'mountPath': '/mnt/workspace'
        }
    }]
)

Note: AgentCore uses two Boto3 service clients. The control plane client (bedrock-agentcore-control) handles runtime lifecycle operations like CreateAgentRuntime, GetAgentRuntime, and DeleteAgentRuntime. The data plane client (bedrock-agentcore) handles session operations like InvokeAgentRuntime and InvokeAgentRuntimeCommand. The mount path must start with /mnt followed by a folder name (for example, /mnt/workspace or /mnt/data). Once configured, any file your agent writes to this path is automatically persisted to managed storage.
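As a quick illustration of the mount path rule, a client-side sanity check might look like the following. The exact character set the service accepts isn't specified here, so the pattern below only encodes the "/mnt plus a single folder name" convention shown in the examples; treat it as an assumption.

```python
import re

def is_valid_mount_path(path: str) -> bool:
    """Client-side sanity check for the sessionStorage mount path rule:
    /mnt followed by a single folder name (assumed character set)."""
    return re.fullmatch(r'/mnt/[A-Za-z0-9_-]+', path) is not None
```

Validating before calling CreateAgentRuntime gives a faster failure than waiting on the API round trip.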

The stop/resume experience

You invoke your agent and ask it to set up a project:

aws bedrock-agentcore invoke-agent-runtime \
  --agent-runtime-arn "arn:...:agent-runtime/coding-agent" \
  --runtime-session-id "session-001" \
  --payload '{"prompt": "Set up the project and install dependencies in /mnt/workspace"}'

The agent downloads the code, installs the packages, and generates the configuration in the microVM dedicated to that session. Then you stop the session, or the idle timeout kicks in, and the microVM terminates.

You come back and invoke with the same runtime-session-id:

aws bedrock-agentcore invoke-agent-runtime \
  --agent-runtime-arn "arn:...:agent-runtime/coding-agent" \
  --runtime-session-id "session-001" \
  --payload '{"prompt": "Run the tests and fix any failures"}'

A new compute environment (microVM) spins up and mounts the same storage. The agent sees /mnt/workspace exactly as it left it, including source files, node_modules, build artifacts, and .git history. The agent picks up mid-thought, with no need to re-install and re-generate.

From the agent's perspective, nothing special is happening. It reads and writes files to a directory like it normally would. Your agent code doesn't need to change: no special APIs, no save/restore logic, no serialization. Write a file to /mnt/workspace, stop the session, resume it, and the file is there.

The session's compute environment (microVM) from yesterday is gone, but the filesystem survived.
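To illustrate how little the agent code needs to know about persistence, here is a minimal sketch of agent-side scratch state written as an ordinary file. The progress.json filename and the default /mnt/workspace path are our choices for the example, not anything the service requires.

```python
import json
import os

def save_progress(state: dict, workspace: str = '/mnt/workspace') -> str:
    """Persist agent scratch state as a plain file; with managed session
    storage, an ordinary write is all the persistence logic needed."""
    path = os.path.join(workspace, 'progress.json')
    with open(path, 'w') as f:
        json.dump(state, f)
    return path

def load_progress(workspace: str = '/mnt/workspace') -> dict:
    """Read the state back after a stop/resume cycle; empty on first run."""
    path = os.path.join(workspace, 'progress.json')
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)
```

There is no AgentCore-specific API in that sketch, which is the point: the runtime makes ordinary filesystem writes durable.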

Controlling how long the data lives

By default, session storage data is retained for 14 days of idle time. If the session isn't resumed within this window, the data is cleaned up. When the agent endpoint is updated to a different version and the same runtime-session-id is invoked, the session data is refreshed, giving the mounted directory a clean context for the new version.

A multi-day development workflow

Let's walk through what this looks like in practice. Day 1: you invoke your coding agent and ask it to download a code base, inspect the files, and set up the development environment:

aws bedrock-agentcore invoke-agent-runtime \
  --agent-runtime-arn "arn:...:agent-runtime/coding-agent" \
  --runtime-session-id "fefc1779-e5e7-49cf-a2c4-abaf478680c4" \
  --payload '{"prompt": "Download the code from s3://amzn-s3-demo-bucket/fastapi-demo-main.zip and list all files"}'

The agent downloads the repository to /mnt/workspace, extracts it, and reports back:

Files in the fastapi-demo-main project:
- Dockerfile
- README.md
- main.py
- requirements.txt

You close your laptop and go home. Day 2: you invoke with the same session ID:

aws bedrock-agentcore invoke-agent-runtime \
  --agent-runtime-arn "arn:...:agent-runtime/coding-agent" \
  --runtime-session-id "fefc1779-e5e7-49cf-a2c4-abaf478680c4" \
  --payload '{"prompt": "Add a new function called hello_world to main.py"}'

The agent sees the project exactly as it left it. It modifies main.py directly. No re-downloading, no re-extracting. When you ask the agent to list the files, everything is there, including the modified main.py with the new hello_world function. The compute environment (microVM) from yesterday has already been terminated, but the work persists.

That addresses the first challenge, but now your agent has written new code and you need to verify that it works. This is where the second capability comes in.

Execute shell command: deterministic operations, directly in the agent's environment

The second challenge, running deterministic operations without routing them through the LLM, is addressed by InvokeAgentRuntimeCommand. You can execute shell commands directly inside a running AgentCore Runtime session and stream the output back over HTTP/2.

The key insight is that agents and shell commands are good at different things:

Use execute command                                          | Use the agent
The operation has a known command (npm test, git push)       | The operation requires reasoning ("analyze this code and fix the bug")
You want deterministic execution: same command, same result  | You want the LLM to decide what to do
You need streaming output from a long-running process        | You need the agent to use tools in a loop
The operation is a validation gate in your workflow          | The operation is the creative or analytical work
You're bootstrapping the environment before the agent starts | You're asking the agent to work on a task

When your agent finishes writing code and you need to run tests, you shouldn't need the LLM for that. npm test is npm test. The command is known, the behavior should be deterministic, and you want the raw output, not the LLM's interpretation of it.

Running a command

Execute a command using the AWS SDK for Python (Boto3):

import boto3
import sys

client = boto3.client('bedrock-agentcore', region_name='us-west-2')

response = client.invoke_agent_runtime_command(
    agentRuntimeArn='arn:aws:bedrock-agentcore:us-west-2:111122223333:runtime/my-agent',
    runtimeSessionId='session-id-at-least-33-characters-long',
    body={
        'command': '/bin/bash -c "npm test"',
        'timeout': 60
    }
)

for event in response['stream']:
    if 'chunk' in event:
        chunk = event['chunk']

        if 'contentStart' in chunk:
            print("Command execution started")

        if 'contentDelta' in chunk:
            delta = chunk['contentDelta']
            if delta.get('stdout'):
                print(delta['stdout'], end='')
            if delta.get('stderr'):
                print(delta['stderr'], end='', file=sys.stderr)

        if 'contentStop' in chunk:
            stop = chunk['contentStop']
            print(f"\nExit code: {stop.get('exitCode')}, Status: {stop.get('status')}")

The response streams three event types in real time:

Event        | When             | Contains
contentStart | First chunk      | Confirms the command started
contentDelta | During execution | stdout and/or stderr output
contentStop  | Last chunk       | exitCode and status (COMPLETED or TIMED_OUT)

Because the output is streamed as it's produced, you can detect a failure within the first few seconds and react immediately, rather than waiting for the full run.
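As a sketch of consuming that stream, the helper below folds the three event types into a single result. The event shapes mirror the example above; in real use, the events would come from an invoke_agent_runtime_command response rather than the synthetic list shown in the usage note.

```python
def collect_command_output(stream):
    """Fold contentStart/contentDelta/contentStop chunks into
    (stdout, stderr, exit_code, status)."""
    out, err = [], []
    exit_code, status = None, None
    for event in stream:
        chunk = event.get('chunk', {})
        delta = chunk.get('contentDelta', {})
        if delta.get('stdout'):
            out.append(delta['stdout'])
        if delta.get('stderr'):
            err.append(delta['stderr'])
        if 'contentStop' in chunk:
            exit_code = chunk['contentStop'].get('exitCode')
            status = chunk['contentStop'].get('status')
    return ''.join(out), ''.join(err), exit_code, status
```

A wrapper like this is handy when you only care about the final result; for early failure detection, keep the streaming loop from the example above instead.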

Same container, same filesystem

This is the crucial detail: commands run in the same container, filesystem, and environment as your agent, not in a sidecar or a separate process talking over a socket. A file the agent wrote at /mnt/workspace/fix.py is immediately visible to a command running cat /mnt/workspace/fix.py. There's no synchronization step, no file transfer, and no shared volume to configure.

The AgentCore Runtime microVM doesn't include developer tools by default. This means any tools your commands depend on, such as git, npm, or language runtimes, must be added to your container image or installed dynamically at runtime.

Design choices that shape how you use it

  • One-shot execution. Each command spawns a new bash process, runs to completion (or times out), and returns. There is no persistent shell session between commands. This matches how agent frameworks use command execution: craft a command, run it, read the output, and decide what to do next.
  • Non-blocking. Command execution doesn't block agent invocations. You can invoke the agent and run commands concurrently on the same session.
  • Stateless between commands. Each command starts fresh; there's no shell history, and environment variables from earlier commands don't carry over. If you need state, encode it in the command: cd /workspace && export NODE_ENV=test && npm test.
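Since each command starts fresh, it can be convenient to have a small helper that rebuilds the needed state inside a single /bin/bash -c invocation. This is an illustrative sketch (the helper name is ours); shlex.quote guards against quoting mistakes when commands are composed programmatically.

```python
import shlex

def compose_command(workdir: str, env: dict, cmd: str) -> str:
    """Build one /bin/bash -c invocation that re-creates the state
    (working directory, environment variables) a stateless command needs."""
    parts = [f'cd {shlex.quote(workdir)}']
    for key, value in env.items():
        parts.append(f'export {key}={shlex.quote(value)}')
    parts.append(cmd)
    script = ' && '.join(parts)
    return f'/bin/bash -c {shlex.quote(script)}'
```

For example, compose_command('/workspace', {'NODE_ENV': 'test'}, 'npm test') yields the cd-export-run one-liner from the bullet above, ready to pass as the command string.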

What people are building with it

  • Test automation: after the agent writes code, run npm test or pytest as a command. Stream the output and feed specific failures back to the agent for iteration.
  • Git workflows: branching, committing, and pushing are deterministic. Run them as commands, keeping version control logic out of the LLM.
  • Environment bootstrapping: clone repos, install packages, and set up build tooling before the agent starts. This can be faster and more reliable as direct commands.
  • Build pipelines: anything with a known command that should run exactly as specified, for example cargo build --release, mvn package, or go build.
  • Validation gates: run linters, type checkers, and security scanners as a gate after the agent writes code, but before committing.
  • Debugging: inspect the runtime environment by checking installed packages, disk usage, and running processes. These can be useful for understanding agent failures.

Better together: the filesystem is the shared context

Managed session storage (in public preview) addresses the ephemeral filesystem challenge. Execute command addresses the deterministic operations challenge. Each is valuable on its own, but they're more powerful combined, because they share the same filesystem, which ties the entire workflow together.

When your agent runtime has managed session storage configured at /mnt/workspace, everything operates on the same persistent directory:

  • InvokeAgentRuntime writes code, generates artifacts, and manages files in /mnt/workspace.
  • InvokeAgentRuntimeCommand runs tests, git operations, and builds, reading from and writing to the same /mnt/workspace.
  • Stop the session. Compute (microVM) spins down. /mnt/workspace is persisted.
  • Resume the next day. New compute mounts the same storage. Both the agent and execute command see the same files.

The filesystem becomes the shared context that connects agent reasoning, deterministic operations, and time. Here's what that looks like in code:

import boto3
import json

client = boto3.client('bedrock-agentcore', region_name='us-west-2')

AGENT_ARN = 'arn:aws:bedrock-agentcore:us-west-2:111122223333:runtime/my-coding-agent'
SESSION_ID = 'fefc1779-e5e7-49cf-a2c4-abaf478680c4'

def run_command(command, timeout=60):
    """Execute a shell command and return the exit code."""
    response = client.invoke_agent_runtime_command(
        agentRuntimeArn=AGENT_ARN,
        runtimeSessionId=SESSION_ID,
        contentType='application/json',
        accept='application/vnd.amazon.eventstream',
        body={'command': command, 'timeout': timeout}
    )
    for event in response.get('stream', []):
        if 'chunk' in event and 'contentStop' in event['chunk']:
            return event['chunk']['contentStop'].get('exitCode')
    return None

# Step 1: The agent analyzes the issue and writes a fix
# (Reasoning task -> use InvokeAgentRuntime)
response = client.invoke_agent_runtime(
    agentRuntimeArn=AGENT_ARN,
    runtimeSessionId=SESSION_ID,
    payload=json.dumps({
        "prompt": "Read JIRA-1234 and implement the fix in /mnt/workspace"
    }).encode()
)
# Process agent response...

# Step 2: Run the test suite
# (Deterministic operation -> use InvokeAgentRuntimeCommand)
exit_code = run_command('/bin/bash -c "cd /mnt/workspace && npm test"', timeout=300)

# Step 3: If tests pass, commit and push
# (Deterministic operation -> use InvokeAgentRuntimeCommand)
if exit_code == 0:
    run_command('/bin/bash -c "cd /mnt/workspace && git checkout -b fix/JIRA-1234"')
    run_command("/bin/bash -c \"cd /mnt/workspace && git add -A && git commit -m 'Fix JIRA-1234'\"")
    run_command('/bin/bash -c "cd /mnt/workspace && git push origin fix/JIRA-1234"')

The agent writes the code while the platform runs the commands. Each does what it's designed for. Because /mnt/workspace is backed by managed session storage, you can stop this session, come back the next day, and the entire workspace is still there, ready for the agent to continue iterating.

This is the pattern: the agent reasons, execute command acts, and the persistent filesystem remembers. The three capabilities form a loop that doesn't break when you close your laptop.

Getting started

Both capabilities are available now. Here's how to start using them:

Managed session storage (public preview): add filesystemConfigurations with sessionStorage when calling CreateAgentRuntime. Specify a mount path starting with /mnt. Everything your agent writes to that path persists across stop/resume cycles. The maximum allowed data is 1 GB per session.
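Because the limit is per session, a pre-check before stopping can help you stay under it. The sketch below is our own helper, not an AgentCore API; it walks the mount path and totals file sizes. How the service behaves when the limit is exceeded isn't specified here.

```python
import os

ONE_GB = 1024 ** 3

def workspace_bytes(path: str) -> int:
    """Total bytes under the mount path, for checking against the 1 GB
    session storage limit before stopping a session."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished mid-walk; skip it
    return total

def over_limit(path: str, limit: int = ONE_GB) -> bool:
    return workspace_bytes(path) > limit
```

If the workspace is close to the limit, pruning node_modules or build artifacts before stopping is usually the cheapest fix, since they can be regenerated with a command on resume.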

Execute command: call InvokeAgentRuntimeCommand with a command string and timeout on any active session. The command runs in the same container as your agent, with access to the same filesystem.

To get started with tutorials and sample code, see the Amazon Bedrock AgentCore documentation.

It started with two challenges: agents that lose their work when sessions stop, and deterministic operations that had to be routed through the LLM or built outside the runtime. Managed session storage and execute command address both. The shared filesystem between them creates a development loop where the agent reasons, commands execute, and the work persists across sessions. Try out the new Amazon Bedrock AgentCore capabilities and let us know what you build.


About the Authors

Evandro Franco

Evandro Franco is a Sr. Data Scientist working at Amazon Web Services. He is part of the Global GTM team that helps AWS customers overcome business challenges related to AI/ML on top of AWS, primarily on Amazon Bedrock AgentCore and Strands Agents. He has more than 18 years of experience working with technology, from software development, infrastructure, and serverless to machine learning. In his free time, Evandro enjoys playing with his son, mainly building funny Lego creations.

Rui Cardoso

Rui Cardoso is a Sr. Partner Solutions Architect at Amazon Web Services (AWS). He focuses on AI/ML and IoT. He works with AWS Partners and helps them develop solutions on AWS. When not working, he enjoys cycling, hiking, and learning new things.

Kosti Vasilakakis

Kosti Vasilakakis is a Principal PM at AWS on the Agentic AI team, where he has led the design and development of several Bedrock AgentCore services from the ground up, including Runtime, Browser, Code Interpreter, and Identity. He previously worked on Amazon SageMaker since its early days, launching AI/ML capabilities now used by thousands of companies worldwide. Earlier in his career, Kosti was a data scientist. Outside of work, he builds personal productivity automations, plays tennis, and enjoys life with his wife and kids.

Vignesh Somasundaram

Vignesh Somasundaram is a founding Software Development Engineer at AWS on the Amazon Bedrock AgentCore team, where he builds AI infrastructure for deploying agents at scale. With a Master's in Computer Science from Purdue University and a Bachelor's from Anna University, he is passionate about building large-scale distributed systems and tackling architectural challenges. When he's not at work, you'll find him outdoors playing cricket or badminton, or exploring nature.

Adarsh Srikanth

Adarsh Srikanth is a founding Software Development Engineer at Amazon Bedrock AgentCore, where he architects and develops platforms that power AI agent services. He earned his master's degree in computer science from the University of Southern California and has three years of professional software engineering experience. Outside of work, Adarsh enjoys exploring national parks, hiking, and playing racquet sports.

Abhimanyu Siwach

Abhimanyu Siwach is a founding Software Development Engineer at Amazon Bedrock AgentCore, where he drives the architecture and technical direction of the platform that enables customers to deploy and manage AI agents at scale. He holds a degree in Computer Science from BITS Pilani. With over eight years at Amazon spanning teams including Last Mile, Advertising, and AWS, he brings deep experience in building large-scale distributed systems. Outside of work, Abhimanyu enjoys traveling, building AI-powered apps, and exploring new technologies.
