AI agents are rapidly evolving from simple chat interfaces into sophisticated autonomous workers that handle complex, time-intensive tasks. As organizations deploy agents to train machine learning (ML) models, process large datasets, and run extended simulations, the Model Context Protocol (MCP) has emerged as a standard for agent-server integrations. But a critical challenge remains: these operations can take minutes or hours to complete, far exceeding typical session timeframes. By using Amazon Bedrock AgentCore and Strands Agents to implement persistent state management, you can enable seamless, cross-session task execution in production environments. Imagine your AI agent initiating a multi-hour data processing job, your user closing their laptop, and the system seamlessly retrieving completed results when the user returns days later, with full visibility into task progress, outcomes, and errors. This capability transforms AI agents from conversational assistants into reliable autonomous workers that can handle enterprise-scale operations. Without these architectural patterns, you'll encounter timeout errors, inefficient resource utilization, and potential data loss when connections terminate unexpectedly.
In this post, we present a comprehensive approach to achieving this. First, we introduce a context messaging strategy that maintains continuous communication between servers and clients during extended operations. Next, we develop an asynchronous task management framework that allows your AI agents to initiate long-running processes without blocking other operations. Finally, we demonstrate how to bring these techniques together with Amazon Bedrock AgentCore and Strands Agents to build production-ready AI agents that can handle complex, time-intensive operations reliably.
Common approaches to handling long-running tasks
When designing MCP servers for long-running tasks, you face a fundamental architectural decision: should the server maintain an active connection and provide real-time updates, or should it decouple task execution from the initial request? This choice leads to two distinct approaches: context messaging and async task management.
Using context messaging
The context messaging approach maintains continuous communication between the MCP server and client throughout task execution. This is achieved by using MCP's built-in context object to send periodic notifications to the client. This approach is optimal for scenarios where tasks typically complete within 10–15 minutes and network connectivity remains stable. The context messaging approach offers these advantages:
- Straightforward implementation
- No additional polling logic required
- Simple client implementation
- Minimal overhead
Using async task management
The async task management approach separates task initiation from execution and result retrieval. When the MCP tool is invoked, it immediately returns a task initiation message while executing the task in the background. This approach excels in demanding enterprise scenarios where tasks might run for hours, users need the flexibility to disconnect and reconnect, and system reliability is paramount. The async task management approach provides these benefits:
- True fire-and-forget operation
- Safe client disconnection while tasks continue processing
- Data loss prevention through persistent storage
- Support for long-running operations (hours)
- Resilience against network interruptions
- Asynchronous workflows
Context messaging
Let's begin by exploring the context messaging approach, which provides a straightforward solution for handling moderately long operations while maintaining active connections. This approach builds directly on existing MCP capabilities and requires minimal additional infrastructure, making it an excellent starting point for extending your agent's processing time limits.

Imagine you've built an MCP server for an AI agent that helps data scientists train ML models. When a user asks the agent to train a complex model, the underlying process might take 10–15 minutes, far beyond the typical 30-second to 2-minute HTTP timeout limit in most environments. Without a proper strategy, the connection would drop, the operation would fail, and the user would be left frustrated. In an MCP client implementation using the Streamable HTTP transport, these timeout constraints are particularly limiting. When task execution exceeds the timeout limit, the connection aborts and the agent's workflow is interrupted.

This is where context messaging comes in. The following diagram illustrates the workflow when implementing the context messaging approach. Context messaging uses MCP's built-in context object to send periodic signals from the server to the MCP client, effectively keeping the connection alive throughout longer operations. Think of it as sending "heartbeat" messages that help prevent the connection from timing out.
Here is a code example implementing context messaging:
The key element here is the Context parameter in the tool definition. When you include a parameter with the Context type annotation, FastMCP automatically injects this object, giving you access to methods such as ctx.info() and ctx.report_progress(). These methods send messages to the connected client without terminating tool execution.
The report_progress() calls inside the training loop serve as those critical heartbeat messages, making sure the MCP connection stays active throughout the extended processing period.
For many real-world scenarios, actual progress can't be easily quantified, such as when processing unpredictable datasets or making external API calls. In these cases, you can implement a time-based heartbeat system:
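One way to sketch a time-based heartbeat with plain asyncio is shown below. The `send_status` callback stands in for `ctx.info()` in a real MCP tool, and the interval and main-work durations are illustrative:

```python
import asyncio

async def heartbeat(send_status, interval: float, stop_event: asyncio.Event) -> int:
    """Send a status update every `interval` seconds until stop_event is set."""
    beats = 0
    while not stop_event.is_set():
        try:
            # Wait for the stop signal, but give up after `interval` seconds
            await asyncio.wait_for(stop_event.wait(), timeout=interval)
        except asyncio.TimeoutError:
            beats += 1
            await send_status(f"Still working... heartbeat #{beats}")
    return beats

async def run_with_heartbeat() -> int:
    stop_event = asyncio.Event()

    async def send_status(message: str):
        # In a real MCP tool this would be `await ctx.info(message)`
        print(message)

    hb = asyncio.create_task(heartbeat(send_status, interval=0.05, stop_event=stop_event))
    await asyncio.sleep(0.2)  # placeholder for unpredictable main work
    stop_event.set()          # clean shutdown of the heartbeat timer
    return await hb
```

The `asyncio.wait_for` on the event means the timer wakes up either when work finishes or when the next heartbeat is due, whichever comes first.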
This pattern creates an asynchronous timer that runs alongside your main task, sending regular status updates every few seconds. Using asyncio.Event() for coordination facilitates a clean shutdown of the timer when the main work is complete.
When to use context messaging
Context messaging works best when:
- Tasks take 1–15 minutes to complete*
- Network connections are generally stable
- The client session can remain active throughout the operation
- You need real-time progress updates during processing
- Tasks have predictable, finite execution times with clear termination conditions
*Note: "15 minutes" is based on the maximum time Amazon Bedrock AgentCore allows for synchronous requests. More details about AgentCore service quotas can be found at Quotas for Amazon Bedrock AgentCore. If the infrastructure hosting the agent doesn't enforce hard time limits, be extremely cautious when using this approach for tasks that might hang or run indefinitely. Without proper safeguards, a stuck task could keep a connection open indefinitely, leading to resource depletion, unresponsive processes, and potentially system-wide stability issues.
Here are some important limitations to consider:
- Continuous connection required – The client session must remain active throughout the entire operation. If the user closes their browser or the network drops, the work is lost.
- Resource consumption – Keeping connections open consumes server and client resources, potentially increasing costs for long-running operations.
- Network dependency – Network instability can still interrupt the process, requiring a full restart.
- Hard timeout limits – Most infrastructures have hard timeout limits that can't be circumvented with heartbeat messages.
Therefore, for truly long-running operations that might take hours, or for scenarios where users need to disconnect and reconnect later, you'll need the more robust asynchronous task management approach.
Async task management
Unlike the context messaging approach, where clients must maintain continuous connections, the async task management pattern follows a "fire and forget" model:
- Task initiation – The client makes a request to start a task and immediately receives a task ID
- Background processing – The server executes the work asynchronously, with no client connection required
- Status checking – The client can reconnect at any time to check progress using the task ID
- Result retrieval – Once a task completes, its results remain available for retrieval whenever the client reconnects
The following figure illustrates the workflow of the asynchronous task management approach.
This pattern mirrors how you interact with batch processing systems in enterprise environments: submit a job, disconnect, and check back later when convenient. Here's a practical implementation that demonstrates these concepts:
This implementation creates a task management system with three distinct MCP tools:
- model_training() – The entry point that initiates a new task. Rather than performing the work directly, it:
  - Generates a unique task identifier using a Universally Unique Identifier (UUID)
  - Creates an initial task record in the storage dictionary
  - Launches the actual processing as a background task using asyncio.create_task()
  - Returns immediately with the task ID, allowing the client to disconnect
- check_task_status() – Allows clients to monitor progress at their convenience by:
  - Looking up the task by ID in the storage dictionary
  - Returning current status and progress information
  - Providing appropriate error handling for missing tasks
- get_task_results() – Retrieves completed results when ready by:
  - Verifying that the task exists and is completed
  - Returning the results saved during background processing
  - Providing clear error messages when results aren't ready
The actual work happens in the private _execute_model_training() function, which runs independently in the background after the initial client request has completed. It updates the task's status and progress in the shared storage as it runs, making this information available for subsequent status checks.
Limitations to consider
Although the async task management approach helps solve connectivity issues, it introduces its own set of limitations:
- User experience friction – The approach requires users to manually check task status, remember task IDs across sessions, and explicitly request results, increasing interaction complexity.
- Volatile in-memory storage – Using in-memory storage (as in our example) means the tasks and results are lost if the server restarts, making the solution unsuitable for production without persistent storage.
- Serverless environment constraints – In ephemeral serverless environments, instances are automatically terminated after periods of inactivity, causing the in-memory task state to be permanently lost. This creates a paradoxical situation where the solution designed to handle long-running operations becomes vulnerable to the very durations it aims to support. Unless users maintain regular check-ins to help prevent session time limits, both tasks and results could vanish.
Moving toward a robust solution
To address these critical limitations, you need to adopt external persistence that survives both server restarts and instance terminations. This is where integration with dedicated storage services becomes essential. By using external agent memory storage systems, you can fundamentally change where and how task information is maintained. Instead of relying on the MCP server's volatile memory, this approach uses persistent external agent memory storage services that remain available regardless of server state.
The key innovation in this enhanced approach is that when the MCP server runs a long-running task, it writes the interim or final results directly into external memory storage that the agent can access, such as Amazon Bedrock AgentCore Memory, as illustrated in the following figure. This helps create resilience against two kinds of runtime failures:
- The instance running the MCP server can be terminated due to inactivity after task completion
- The instance hosting the agent itself can be recycled in ephemeral serverless environments
With external memory storage, when users return to interact with the agent, whether minutes, hours, or days later, the agent can retrieve the completed task results from persistent storage. This approach minimizes runtime dependencies: even if both the MCP server and agent instances are terminated, the task results remain safely preserved and accessible when needed.
The next section explores how to implement this robust solution using Amazon Bedrock AgentCore Runtime as a serverless hosting environment, AgentCore Memory for persistent agent memory storage, and the Strands Agents framework to orchestrate these components into a cohesive system that maintains task state across session boundaries.
Amazon Bedrock AgentCore and Strands Agents implementation
Before diving into the implementation details, it's important to understand the deployment options available for MCP servers on Amazon Bedrock AgentCore. There are two primary approaches: Amazon Bedrock AgentCore Gateway and AgentCore Runtime. AgentCore Gateway has a 5-minute timeout for invocations, making it unsuitable for hosting MCP servers that provide tools requiring extended response times or long-running operations. AgentCore Runtime offers significantly more flexibility, with a 15-minute request timeout (for synchronous requests), an adjustable maximum session duration (for asynchronous processes; the default is 8 hours), and an adjustable idle session timeout.

Although you could host an MCP server in a traditional serverful environment for unlimited execution time, AgentCore Runtime provides an optimal balance for most production scenarios. You gain serverless benefits such as automatic scaling, pay-per-use pricing, and no infrastructure management, while the adjustable maximum session duration covers most real-world long-running tasks, from data processing and model training to report generation and complex simulations. You can use this approach to build sophisticated AI agents without the operational overhead of managing servers, reserving serverful deployments for the rare cases that genuinely require multiday executions. For more information about AgentCore Runtime and AgentCore Gateway service quotas, refer to Quotas for Amazon Bedrock AgentCore.
Next, we walk through the implementation, which is illustrated in the following diagram. This implementation consists of two interconnected components: the MCP server that executes long-running tasks and writes results to AgentCore Memory, and the agent that manages the conversation flow and retrieves those results when needed. This architecture creates a seamless experience where users can disconnect during extended processes and return later to find their results waiting for them.
MCP server implementation
Let's examine how our MCP server implementation uses AgentCore Memory to achieve persistence:
The implementation relies on two key components that enable persistence and session management.
- The agentcore_memory_client.create_event() method serves as the bridge between tool execution and persistent memory storage. When a background task completes, this method saves the results directly to the agent's memory in AgentCore Memory using the specified memory ID, actor ID, and session ID. Unlike traditional approaches where results might be stored temporarily or require manual retrieval, this integration lets task results become permanent parts of the agent's conversational memory. The agent can then reference these results in future interactions, creating a continuous knowledge-building experience across multiple sessions.
- The second critical component involves extracting session context through ctx.request_context.request.headers.get("mcp-session-id", ""). The "Mcp-Session-Id" header is part of the standard MCP protocol. You can use this header to pass a composite identifier containing three essential pieces of information in a delimited format: session_id@@@memory_id@@@actor_id. This approach lets our implementation retrieve the necessary context identifiers from a single header value. Headers are used instead of environment variables by necessity: these identifiers change dynamically with each conversation, whereas environment variables remain static from container startup. This design choice is particularly important in multi-tenant scenarios where a single MCP server concurrently handles requests from multiple users, each with their own distinct session context.
Another important aspect of this example involves proper message formatting when storing events. Each message saved to AgentCore Memory requires two components: the content and a role identifier. These two components must be formatted in a way that the agent framework can recognize. Here is an example for the Strands Agents framework:
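A sketch of the two-part format follows. The field names inside the inner JSON object and the sample text are illustrative; what matters is the inner serialized message plus the outer role label:

```python
import json
import uuid

# Inner payload: the message as the Strands Agents framework expects to read it back
message_payload = json.dumps({
    "role": "assistant",
    "content": [{"text": "Model training finished with final loss 0.042"}],  # sample text
    "message_id": str(uuid.uuid4()),
})

# Outer tuple: (content, role); the role tells AgentCore Memory who produced the event
event_message = (message_payload, "USER")
```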
The content is an inner JSON object (serialized with json.dumps()) that contains the message details, including role, text content, and message ID. The outer role identifier (USER in this example) helps AgentCore Memory categorize the message source.
Strands Agents implementation
Integrating Amazon Bedrock AgentCore Memory with Strands Agents is remarkably straightforward using the AgentCoreMemorySessionManager class from the Bedrock AgentCore SDK. As shown in the following code example, implementation requires minimal configuration: create an AgentCoreMemoryConfig with your session identifiers, initialize the session manager with this config, and pass it directly to your agent constructor. The session manager transparently handles the memory operations behind the scenes, maintaining conversation history and context across interactions while organizing memories using the combination of session_id, memory_id, and actor_id. For more information, refer to AgentCore Memory Session Manager.
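A sketch of that wiring is below. The import paths and constructor arguments follow the bedrock-agentcore SDK's Strands integration as documented, but treat them as assumptions that may differ across SDK versions; the imports are local so the sketch is readable without the SDKs installed:

```python
def build_agent(memory_id: str, actor_id: str, session_id: str, region: str = "us-west-2"):
    """Assemble a Strands Agent whose memory lives in AgentCore Memory (sketch)."""
    # Local imports: exact module paths may vary across bedrock-agentcore SDK versions
    from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig
    from bedrock_agentcore.memory.integrations.strands.session_manager import (
        AgentCoreMemorySessionManager,
    )
    from strands import Agent

    config = AgentCoreMemoryConfig(
        memory_id=memory_id,
        actor_id=actor_id,
        session_id=session_id,
    )
    session_manager = AgentCoreMemorySessionManager(
        agentcore_memory_config=config,
        region_name=region,
    )
    # The session manager transparently persists and restores conversation history
    return Agent(session_manager=session_manager)
```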
The session context management is particularly elegant here. The agent receives session identifiers through the payload and context parameters supplied by AgentCore Runtime. These identifiers form a crucial contextual bridge that connects user interactions across multiple sessions. The session_id can be extracted from the context object (generating a new one if needed), and the memory_id and actor_id can be retrieved from the payload. These identifiers are then packaged into a custom HTTP header (Mcp-Session-Id) that's passed to the MCP server during connection establishment.
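One way to sketch this packaging step is shown below. The payload keys and the `session_id` attribute on the context object are assumptions about the invocation shape; the resulting dict would be passed as the headers of the MCP client connection:

```python
import uuid

def build_mcp_headers(payload: dict, context) -> dict:
    """Build the Mcp-Session-Id header the agent sends when connecting to the MCP server."""
    # session_id comes from the runtime context; generate a new one if it is missing
    session_id = getattr(context, "session_id", None) or str(uuid.uuid4())
    # memory_id and actor_id arrive in the invocation payload
    memory_id = payload["memory_id"]
    actor_id = payload["actor_id"]
    return {"Mcp-Session-Id": f"{session_id}@@@{memory_id}@@@{actor_id}"}
```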
To maintain this persistent experience across multiple interactions, clients must consistently provide the same identifiers when invoking the agent:
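A hedged sketch of such an invocation using boto3's bedrock-agentcore client follows. The ARN, session ID, and payload keys are placeholders, and invoke_agent is defined but not called here since it requires AWS access:

```python
import json

def build_invocation(memory_id: str, actor_id: str, prompt: str) -> dict:
    """Group the identifiers that must stay constant across visits with the new prompt."""
    return {"prompt": prompt, "memory_id": memory_id, "actor_id": actor_id}

def invoke_agent(agent_runtime_arn: str, runtime_session_id: str, payload: dict,
                 region: str = "us-west-2"):
    """Invoke the agent hosted on AgentCore Runtime (sketch; requires AWS credentials)."""
    import boto3  # local import so the sketch can be read without AWS access

    client = boto3.client("bedrock-agentcore", region_name=region)
    # Reusing the same runtimeSessionId ties this invocation to earlier ones
    return client.invoke_agent_runtime(
        agentRuntimeArn=agent_runtime_arn,
        runtimeSessionId=runtime_session_id,
        payload=json.dumps(payload),
    )
```

A caller would pass the same memory_id, actor_id, and runtime_session_id on every visit, for example invoke_agent(arn, "session-0001-keep-this-stable-across-visits", build_invocation("mem-1234", "user-alice", "Is my training job done?")).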
By consistently providing the same memory_id, actor_id, and runtimeSessionId across invocations, users get a continuous conversational experience where task results persist independently of session boundaries. When a user returns days later, the agent can automatically retrieve both conversation history and the task results that were completed during their absence.
This architecture represents a significant advancement in AI agent capabilities, transforming long-running operations from fragile, connection-dependent processes into robust, persistent tasks that continue working regardless of connection state. The result is a system that can deliver truly asynchronous AI assistance, where complex work continues in the background and results are seamlessly integrated whenever the user returns to the conversation.
Conclusion
In this post, we've explored practical techniques to help AI agents handle tasks that take minutes or even hours to complete. Whether using the more straightforward approach of keeping connections alive or the more advanced method of injecting task results into the agent's memory, these techniques enable your AI agent to tackle valuable complex work without frustrating time limits or lost results.
We invite you to try these approaches in your own AI agent projects. Start with context messaging for moderate tasks, then move to async management as your needs grow. The solutions we've shared can be quickly adapted to your specific needs, helping you build AI that delivers results reliably, even when users disconnect and return days later. What long-running tasks could your AI assistants handle better with these techniques?
To learn more, see the Amazon Bedrock AgentCore documentation and explore our sample notebook.
About the Authors
Haochen Xie is a Senior Data Scientist at the AWS Generative AI Innovation Center. He is an ordinary person.
Flora Wang is an Applied Scientist at the AWS Generative AI Innovation Center, where she works with customers to architect and implement scalable generative AI solutions that address their unique business challenges. She specializes in model customization techniques and agent-based AI systems, helping organizations harness the full potential of generative AI technology.
Yuan Tian is an Applied Scientist at the AWS Generative AI Innovation Center, where he works with customers across diverse industries, including healthcare, life sciences, finance, and energy, to architect and implement generative AI solutions such as agentic systems. He brings a unique interdisciplinary perspective, combining expertise in machine learning with computational biology.
Hari Prasanna Das is an Applied Scientist at the AWS Generative AI Innovation Center, where he works with AWS customers across different verticals to accelerate their use of generative AI. Hari holds a PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley. His research interests include generative AI, deep learning, computer vision, and data-efficient machine learning.