
Choosing Between LLM Agent Frameworks | by Aparna Dhinakaran | Sep, 2024



The tradeoffs between building bespoke code-based agents and the major agent frameworks.

Aparna Dhinakaran

Towards Data Science

Image by author

Thanks to John Gilhuly for his contributions to this piece.

Agents are having a moment. With multiple new frameworks and fresh investment in the space, modern AI agents are overcoming shaky origins to rapidly supplant RAG as an implementation priority. So will 2024 finally be the year that autonomous AI systems take over writing our emails, booking flights, talking to our data, or seemingly any other task?

Maybe, but much work remains to get to that point. Any developer building an agent must not only choose foundations — which model, use case, and architecture to use — but also which framework to leverage. Do you go with the long-standing LangGraph, or the newer entrant LlamaIndex Workflows? Or do you go the traditional route and code the whole thing yourself?

This post aims to make that choice a bit easier. Over the past few weeks, I built the same agent in each major framework to examine some of the strengths and weaknesses of each at a technical level. All of the code for each agent is available in this repo.

Background on the Agent Used for Testing

The agent used for testing involves function calling, multiple tools or skills, connections to outside resources, and shared state or memory.

The agent has the following capabilities:

  1. Answering questions from a knowledge base
  2. Talking to data: answering questions about telemetry data of an LLM application
  3. Analyzing data: analyzing higher-level trends and patterns in retrieved telemetry data

To accomplish these, the agent has three starting skills: RAG with product documentation, SQL generation on a trace database, and data analysis. A simple gradio-powered interface is used for the agent UI, with the agent itself structured as a chatbot.
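
As a point of reference, wiring an agent like this into a gradio chatbot can be as simple as the following sketch, where run_agent is a hypothetical entry point (not code from the repo) that takes the new message plus the chat history and returns the agent's reply:

import gradio as gr

def chat(message, history):
    # Hypothetical entry point: hand the new message (plus history) to the
    # agent and return its final text reply.
    return run_agent(message, history)

gr.ChatInterface(chat).launch()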

The first option you have when developing an agent is to skip the frameworks entirely and build the agent fully yourself. When embarking on this project, this was the approach I started with.

Image created by author

Pure Code Architecture

The code-based agent below is made up of an OpenAI-powered router that uses function calling to select the right skill to use. After that skill completes, it returns back to the router to either call another skill or respond to the user.

The agent keeps an ongoing list of messages and responses that is passed fully into the router on each call to preserve context through cycles.

def router(messages):
    if not any(
        isinstance(message, dict) and message.get("role") == "system" for message in messages
    ):
        system_prompt = {"role": "system", "content": SYSTEM_PROMPT}
        messages.append(system_prompt)

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=skill_map.get_combined_function_description_for_openai(),
    )

    messages.append(response.choices[0].message)
    tool_calls = response.choices[0].message.tool_calls
    if tool_calls:
        handle_tool_calls(tool_calls, messages)
        return router(messages)
    else:
        return response.choices[0].message.content
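
One piece not shown above is handle_tool_calls. A minimal sketch of what it could look like, assuming the OpenAI SDK's tool-call objects and the SkillMap interface defined below:

import json

def handle_tool_calls(tool_calls, messages):
    # Look up each requested skill in the SkillMap, run it with the
    # model-generated arguments, and append the result as a "tool" message.
    for tool_call in tool_calls:
        function = skill_map.get_function_callable_by_name(tool_call.function.name)
        arguments = json.loads(tool_call.function.arguments)
        result = function(**arguments)
        messages.append(
            {
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": str(result),
            }
        )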

The skills themselves are defined in their own classes (e.g. GenerateSQLQuery) that are collectively held in a SkillMap. The router itself only interacts with the SkillMap, which it uses to load skill names, descriptions, and callable functions. This means that adding a new skill to the agent is as simple as writing that skill as its own class, then adding it to the list of skills in the SkillMap. The idea here is to make it easy to add new skills without disturbing the router code.

class SkillMap:
    def __init__(self):
        skills = [AnalyzeData(), GenerateSQLQuery()]

        self.skill_map = {}
        for skill in skills:
            self.skill_map[skill.get_function_name()] = (
                skill.get_function_dict(),
                skill.get_function_callable(),
            )

    def get_function_callable_by_name(self, skill_name) -> Callable:
        return self.skill_map[skill_name][1]

    def get_combined_function_description_for_openai(self):
        combined_dict = []
        for _, (function_dict, _) in self.skill_map.items():
            combined_dict.append(function_dict)
        return combined_dict

    def get_function_list(self):
        return list(self.skill_map.keys())

    def get_list_of_function_callables(self):
        return [skill[1] for skill in self.skill_map.values()]

    def get_function_description_by_name(self, skill_name):
        return str(self.skill_map[skill_name][0]["function"])
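
To make that interface concrete, here is a rough sketch of what a skill class could look like; this is illustrative, not the repo's exact implementation:

class GenerateSQLQuery:
    def get_function_name(self):
        return "generate_and_run_sql_query"

    def get_function_dict(self):
        # OpenAI-style function description consumed by the router.
        return {
            "type": "function",
            "function": {
                "name": "generate_and_run_sql_query",
                "description": "Generates and runs an SQL query based on the prompt.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "The original user prompt."}
                    },
                    "required": ["query"],
                },
            },
        }

    def get_function_callable(self):
        return self.generate_and_run_sql_query

    def generate_and_run_sql_query(self, query: str) -> str:
        ...  # generate SQL with the LLM, run it against the trace database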

Overall, this approach is fairly straightforward to implement but comes with a few challenges.

Challenges with Pure Code Agents

The first difficulty lies in structuring the router system prompt. Often, the router in the example above insisted on generating SQL itself instead of delegating that to the right skill. If you’ve ever tried to get an LLM not to do something, you know how frustrating that experience can be; finding a working prompt took many rounds of debugging. Accounting for the different output formats from each step was also tricky. Since I opted not to use structured outputs, I had to be ready for multiple different formats from each of the LLM calls in my router and skills.
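
Structured outputs are one way to pin those formats down. A sketch of what that could look like with the OpenAI SDK, where RouterDecision is a hypothetical schema rather than part of the original agent:

from pydantic import BaseModel

class RouterDecision(BaseModel):
    skill_name: str
    rationale: str

# Constrain the router's reply to a known schema instead of free-form text.
completion = client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=messages,
    response_format=RouterDecision,
)
decision = completion.choices[0].message.parsed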

Benefits of a Pure Code Agent

A code-based approach provides a good baseline and starting point, offering a great way to learn how agents work without relying on canned agent tutorials from prevailing frameworks. Although convincing the LLM to behave can be difficult, the code structure itself is simple enough to use and might make sense for certain use cases (more in the analysis section below).

LangGraph

LangGraph is one of the longest-standing agent frameworks, first releasing in January 2024. The framework is built to address the acyclic nature of existing pipelines and chains by adopting a Pregel graph structure instead. LangGraph makes it easier to define loops in your agent by adding the concepts of nodes, edges, and conditional edges to traverse a graph. LangGraph is built on top of LangChain, and uses the objects and types from that framework.

Image created by author

LangGraph Architecture

The LangGraph agent looks similar to the code-based agent on paper, but the code behind it is drastically different. LangGraph still uses a “router” technically, in that it calls OpenAI with functions and uses the response to proceed to a new step. However, the way the program moves between skills is managed completely differently.

tools = [generate_and_run_sql_query, data_analyzer]
model = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)

def create_agent_graph():
    workflow = StateGraph(MessagesState)

    tool_node = ToolNode(tools)
    workflow.add_node("agent", call_model)
    workflow.add_node("tools", tool_node)

    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges(
        "agent",
        should_continue,
    )
    workflow.add_edge("tools", "agent")

    checkpointer = MemorySaver()
    app = workflow.compile(checkpointer=checkpointer)
    return app

The graph defined here has a node for the initial OpenAI call, called “agent” above, and one for the tool handling step, called “tools.” LangGraph has a built-in object called ToolNode that takes a list of callable tools and triggers them based on a ChatMessage response, before returning to the “agent” node again.

def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

After each call of the “agent” node (put another way: the router in the code-based agent), the should_continue edge decides whether to return the response to the user or pass on to the ToolNode to handle tool calls.

Throughout each node, the “state” stores the list of messages and responses from OpenAI, similar to the code-based agent’s approach.
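
For completeness, invoking the compiled graph looks something like the sketch below; the thread_id is required by the MemorySaver checkpointer, which keys saved state by conversation:

app = create_agent_graph()

# Reusing the same thread_id continues the same conversation's state.
config = {"configurable": {"thread_id": "1"}}
final_state = app.invoke(
    {"messages": [("user", "How many traces are in the database?")]},
    config=config,
)
print(final_state["messages"][-1].content)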

Challenges with LangGraph

Most of the difficulties with LangGraph in the example stem from the need to use LangChain objects for things to flow nicely.

Challenge #1: Function Call Validation

In order to use the ToolNode object, I had to refactor most of my existing Skill code. The ToolNode takes a list of callable functions, which originally made me think I could use my existing functions; however, things broke down because of my function parameters.

The skills were defined as classes with a callable member function, meaning they had “self” as their first parameter. GPT-4o was smart enough not to include the “self” parameter in the generated function call, however LangGraph read this as a validation error due to a missing parameter.

This took hours to figure out, because the error message instead marked the third parameter in the function (“args” on the data analysis skill) as the missing parameter:

pydantic.v1.error_wrappers.ValidationError: 1 validation error for data_analysis_toolSchema
args field required (type=value_error.missing)

It’s worth mentioning that the error message originated from Pydantic, not from LangGraph.

I eventually bit the bullet and redefined my skills as basic methods with LangChain’s @tool decorator, and was able to get things working.

@tool
def generate_and_run_sql_query(query: str):
    """Generates and runs an SQL query based on the prompt.

    Args:
        query (str): A string containing the original user prompt.

    Returns:
        str: The result of the SQL query.
    """
    ...  # body elided in the original post

Challenge #2: Debugging

As mentioned, debugging in a framework is difficult. This primarily comes down to confusing error messages and abstracted concepts that make it harder to view variables.

The abstracted concepts primarily show up when trying to debug the messages being sent around the agent. LangGraph stores these messages in state["messages"]. Some nodes within the graph pull from these messages automatically, which can make it difficult to understand the value of messages when they are accessed by the node.
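
One workaround is a small logging shim inside a node so you can see exactly what it receives. A sketch of call_model instrumented this way:

def call_model(state: MessagesState):
    messages = state["messages"]
    # Debugging aid: dump what this node actually sees before calling the model.
    for message in messages:
        print(type(message).__name__, getattr(message, "content", message))
    response = model.invoke(messages)
    return {"messages": [response]}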

A sequential view of the agent’s actions (image by author)

LangGraph Benefits

One of the main benefits of LangGraph is that it’s easy to work with. The graph structure code is clean and accessible. Especially if you have complex node logic, having a single view of the graph makes it easier to understand how the agent is connected together. LangGraph also makes it straightforward to convert an existing application built in LangChain.

Takeaway

If you use everything in the framework, LangGraph works cleanly; if you step outside of it, prepare for some debugging headaches.

LlamaIndex Workflows

Workflows is a newer entrant into the agent framework space, premiering earlier this summer. Like LangGraph, it aims to make looping agents easier to build. Workflows also has a particular focus on running asynchronously.

Some elements of Workflows seem to be in direct response to LangGraph, particularly its use of events instead of edges and conditional edges. Workflows use steps (analogous to nodes in LangGraph) to house logic, and emitted and received events to move between steps.

Image created by author

The structure above looks similar to the LangGraph structure, save for one addition: I added a setup step to the Workflow to prepare the agent context (more on this below). Despite the similar structure, there is very different code powering it.

Workflows Architecture

The code below defines the Workflow structure. Similar to LangGraph, this is where I prepared the state and attached the skills to the LLM object.

class AgentFlow(Workflow):
    def __init__(self, llm, timeout=300):
        super().__init__(timeout=timeout)
        self.llm = llm
        self.memory = ChatMemoryBuffer.from_defaults(llm=llm, token_limit=1000)
        self.tools = []
        for func in skill_map.get_function_list():
            self.tools.append(
                FunctionTool(
                    skill_map.get_function_callable_by_name(func),
                    metadata=ToolMetadata(
                        name=func, description=skill_map.get_function_description_by_name(func)
                    ),
                )
            )

    @step
    async def prepare_agent(self, ev: StartEvent) -> RouterInputEvent:
        user_input = ev.input
        user_msg = ChatMessage(role="user", content=user_input)
        self.memory.put(user_msg)

        chat_history = self.memory.get()
        return RouterInputEvent(input=chat_history)

This is also where I define an extra step, “prepare_agent”. This step creates a ChatMessage from the user input and adds it to the workflow memory. Splitting this out as a separate step means we don’t return to it as the agent loops through steps, which avoids repeatedly adding the user message to the memory.

In the LangGraph case, I accomplished the same thing with a run_agent method that lived outside the graph. This difference is mostly stylistic, however it’s cleaner in my opinion to house this logic with the Workflow and graph as we’ve done here.

With the Workflow set up, I then defined the routing code:

@step
async def router(self, ev: RouterInputEvent) -> ToolCallEvent | StopEvent:
    messages = ev.input

    if not any(
        isinstance(message, dict) and message.get("role") == "system" for message in messages
    ):
        system_prompt = ChatMessage(role="system", content=SYSTEM_PROMPT)
        messages.insert(0, system_prompt)

    with using_prompt_template(template=SYSTEM_PROMPT, version="v0.1"):
        response = await self.llm.achat_with_tools(
            model="gpt-4o",
            messages=messages,
            tools=self.tools,
        )

    self.memory.put(response.message)

    tool_calls = self.llm.get_tool_calls_from_response(response, error_on_no_tool_call=False)
    if tool_calls:
        return ToolCallEvent(tool_calls=tool_calls)
    else:
        return StopEvent(result=response.message.content)

And the tool call handling code:

@step
async def tool_call_handler(self, ev: ToolCallEvent) -> RouterInputEvent:
    tool_calls = ev.tool_calls

    for tool_call in tool_calls:
        function_name = tool_call.tool_name
        arguments = tool_call.tool_kwargs
        if "input" in arguments:
            arguments["prompt"] = arguments.pop("input")

        try:
            function_callable = skill_map.get_function_callable_by_name(function_name)
        except KeyError:
            function_result = "Error: Unknown function call"
        else:
            function_result = function_callable(arguments)

        message = ChatMessage(
            role="tool",
            content=function_result,
            additional_kwargs={"tool_call_id": tool_call.tool_id},
        )
        self.memory.put(message)

    return RouterInputEvent(input=self.memory.get())

Both of these look more like the code-based agent than the LangGraph agent. This is mainly because Workflows keeps the conditional routing logic in the steps as opposed to in conditional edges — the tool-call check at the end of the routing step would have been a conditional edge in LangGraph, whereas here it is just part of the step — and because LangGraph has a ToolNode object that does just about everything in the tool_call_handler method automatically.

Moving past the routing step, one thing I was very happy to see is that I could use my SkillMap and existing skills from my code-based agent with Workflows. These required no changes to work with Workflows, which made my life much easier.

Challenges with Workflows

Challenge #1: Sync vs. Async

While asynchronous execution is preferable for a live agent, debugging a synchronous agent is much easier. Workflows is designed to work asynchronously, and attempting to force synchronous execution was very difficult.

I initially thought I would just be able to remove the “async” method designations and switch from “achat_with_tools” to “chat_with_tools”. However, since the underlying methods within the Workflow class were also marked as asynchronous, it was necessary to redefine those in order to run synchronously. I ended up sticking with an asynchronous approach, though this did make debugging more difficult.
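
For reference, kicking off the Workflow asynchronously ends up looking something like this sketch (assuming an llm object has already been constructed):

import asyncio

async def main():
    agent = AgentFlow(llm=llm, timeout=300)
    # Keyword arguments to run() populate the StartEvent (read as ev.input above).
    result = await agent.run(input="How many traces are in the database?")
    print(result)

asyncio.run(main())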

A sequential view of the agent’s actions (image by author)

Challenge #2: Pydantic Validation Errors

In a repeat of the woes with LangGraph, similar problems emerged around confusing Pydantic validation errors on skills. Fortunately, these were easier to address this time around, since Workflows was able to handle the member functions just fine. I ultimately just ended up having to be more prescriptive in creating LlamaIndex FunctionTool objects for my skills:

for func in skill_map.get_function_list():
    self.tools.append(FunctionTool(
        skill_map.get_function_callable_by_name(func),
        metadata=ToolMetadata(name=func, description=skill_map.get_function_description_by_name(func))))

Excerpt from AgentFlow.__init__ that builds FunctionTools

Benefits of Workflows

I had a much easier time building the Workflows agent than I did the LangGraph agent, mainly because Workflows still required me to write routing logic and tool handling code myself instead of providing built-in functions. This also meant that my Workflows agent looked extremely similar to my code-based agent.

The biggest difference came in the use of events. I used two custom events to move between steps in my agent:

class ToolCallEvent(Event):
    tool_calls: list[ToolSelection]

class RouterInputEvent(Event):
    input: list[ChatMessage]

The emitter-receiver, event-based architecture took the place of directly calling some of the methods in my agent, like the tool call handler.

If you have more complex systems with multiple steps that trigger asynchronously and might emit multiple events, this architecture becomes very helpful for managing that cleanly.
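
As a sketch of the kind of fan-out this enables: a step can emit several events through the Workflows Context, and another step can collect them. The QueryEvent and FanOutFlow below are hypothetical, not part of my agent:

from llama_index.core.workflow import (
    Context, Event, StartEvent, StopEvent, Workflow, step,
)

class QueryEvent(Event):
    query: str

class FanOutFlow(Workflow):
    @step
    async def fan_out(self, ctx: Context, ev: StartEvent) -> QueryEvent:
        queries = ["q1", "q2", "q3"]
        await ctx.set("num_queries", len(queries))
        for q in queries:
            # Each emitted event triggers the collect step.
            ctx.send_event(QueryEvent(query=q))

    @step
    async def collect(self, ctx: Context, ev: QueryEvent) -> StopEvent:
        # Buffer incoming events until all expected ones have arrived.
        n = await ctx.get("num_queries")
        results = ctx.collect_events(ev, [QueryEvent] * n)
        if results is None:
            return None
        return StopEvent(result=[e.query for e in results])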

Other benefits of Workflows include the fact that it is very lightweight and doesn’t force much structure on you (aside from the use of certain LlamaIndex objects), and that its event-based architecture provides a helpful alternative to direct function calling — especially for complex, asynchronous applications.

Looking across the three approaches, each has its benefits.

The no-framework approach is the simplest to implement. Because any abstractions are defined by the developer (i.e. the SkillMap object in the above example), keeping the various types and objects straight is easy. The readability and accessibility of the code comes down entirely to the individual developer, however, and it’s easy to see how increasingly complex agents could get messy without some enforced structure.

LangGraph provides quite a bit of structure, which makes the agent very clearly defined. If a broader team is collaborating on an agent, this structure would provide a helpful way of enforcing an architecture. LangGraph also might provide a good starting point with agents for those less familiar with the structure. There is a tradeoff, however — since LangGraph does quite a bit for you, it can lead to headaches if you don’t fully buy into the framework; the code may be very clean, but you may pay for it with more debugging.

Workflows falls somewhere in the middle. The event-based architecture can be extremely helpful for some projects, and the fact that less is required in terms of using LlamaIndex types provides greater flexibility for those who aren’t fully using the framework across their application.

Image created by author

Ultimately, the core question may come down to “are you already using LlamaIndex or LangChain to orchestrate your application?” LangGraph and Workflows are both so entwined with their respective underlying frameworks that the additional benefits of each agent-specific framework might not cause you to switch on merit alone.

The pure code approach will likely always be an attractive option. If you have the rigor to document and enforce any abstractions created, then ensuring nothing in an external framework slows you down is easy.

Of course, “it depends” is never a satisfying answer. These three questions should help you decide which framework to use in your next agent project.

Are you already using LlamaIndex or LangChain for significant pieces of your project?

If yes, explore that option first.

Are you familiar with common agent structures, or do you want something telling you how you should structure your agent?

If you fall into the latter group, try Workflows. If you really fall into the latter group, try LangGraph.

Has your agent been built before?

One of the framework benefits is that there are many tutorials and examples built with each. There are far fewer examples of pure code agents to build from.

Image created by author

Picking an agent framework is just one choice among many that will impact outcomes in production for generative AI systems. As always, it pays to have robust guardrails and LLM tracing in place — and to be agile as new agent frameworks, research, and models upend established techniques.
