of automating a wide variety of tasks. Since the launch of ChatGPT in 2022, we've seen more and more AI products on the market using LLMs. However, there is still a lot of room for improvement in the way we utilize LLMs. Improving your prompt with an LLM prompt improver and using cached tokens are, for example, two simple techniques you can use to vastly improve the performance of your LLM application.
In this article, I'll discuss several specific techniques you can apply to the way you create and structure your prompts, which will reduce latency and cost, and also improve the quality of your responses. The goal is to present you with these specific techniques so you can immediately implement them in your own LLM application.

Why you should optimize your prompt
In many cases, you might have a prompt that works with a given LLM and yields sufficient results. However, in many cases, you haven't spent much time optimizing the prompt, which leaves a lot of potential on the table.
I argue that by using the specific techniques I'll present in this article, you can easily both improve the quality of your responses and reduce costs, without much effort. Just because a prompt and LLM work doesn't mean they're performing optimally, and in many cases, you can see great improvements with very little effort.
Specific techniques to optimize
In this section, I'll cover the specific techniques you can utilize to optimize your prompts.
Always keep static content early
The first technique I'll cover is to always keep static content early in your prompt. By static content, I refer to content that remains the same across multiple API calls.
The reason you should keep the static content early is that all the big LLM providers, such as Anthropic, Google, and OpenAI, utilize cached tokens. Cached tokens are tokens that have already been processed in a previous API request and can therefore be processed cheaply and quickly. Pricing varies from provider to provider, but cached input tokens are usually priced at around 10% of normal input tokens.
Cached tokens are tokens that have already been processed in a previous API request, and that can be processed more cheaply and faster than normal tokens
This means that if you send the same prompt twice in a row, the input tokens of the second prompt will only cost one-tenth as much as the input tokens of the first prompt. This works because the LLM providers cache the processing of these input tokens, which makes processing your new request cheaper and faster.
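With OpenAI, this caching happens automatically for sufficiently long prompts, and you can verify it by inspecting the usage field of the response. Below is a minimal sketch using the openai Python SDK (the model name is just an example, and the prompt placeholder stands in for your real prompt):
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

prompt = "<your long prompt of at least 1024 tokens>"

# send the same prompt twice; the second request should hit the cache
for i in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Request {i + 1}: {response.usage.prompt_tokens_details.cached_tokens} cached tokens")
Note that with Anthropic, caching is not fully automatic: you mark the static blocks of your request with cache_control instead.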
In practice, you take advantage of cached input tokens by keeping the variable parts of your prompt at the end.
For example, if you have a long system prompt with a question that varies from request to request, you should do something like this:
prompt = f"""
{long_static_system_prompt}
{user_prompt}
"""
For example:
prompt = f"""
You are a document expert ...
You should always answer in this format ...
If a user asks about ... you should answer ...
{user_question}
"""
Here we have the static content of the prompt first, before we put the variable content (the user question) last.
In some scenarios, you want to feed in document contents. If you're processing a lot of different documents, you should keep the document content towards the end of the prompt:
# if processing different documents
prompt = f"""
{static_system_prompt}
{variable_prompt_instruction_1}
{document_content}
{variable_prompt_instruction_2}
{user_question}
"""
However, suppose you're processing the same documents multiple times. In that case, you can make sure the tokens of the document are also cached by ensuring no variables are placed in the prompt before it:
# if processing the same documents multiple times
# (keep the document content before any variable instructions)
prompt = f"""
{static_system_prompt}
{document_content}
{variable_prompt_instruction_1}
{variable_prompt_instruction_2}
{user_question}
"""
Note that caching usually only activates if the first 1024 tokens of two requests are identical. If the static system prompt in the above example is shorter than 1024 tokens, you will not utilize any cached tokens.
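To check whether your static prefix is long enough, you can count its tokens locally. Here is a minimal sketch using the tiktoken library (o200k_base is the encoding used by recent OpenAI models; other providers tokenize differently, so treat the count as an approximation):
import tiktoken

# o200k_base is the encoding used by recent OpenAI models (e.g., GPT-4o)
encoding = tiktoken.get_encoding("o200k_base")

static_system_prompt = "You are a document expert ..."  # your real static prompt
document_content = "<document text>"  # a document you process repeatedly

# only the static prefix (everything before the first variable) can be cached
static_prefix = static_system_prompt + document_content
num_tokens = len(encoding.encode(static_prefix))
print(f"Static prefix is {num_tokens} tokens; caching usually requires at least 1024.")
Finally, avoid putting variable content at the start of the prompt, as in the example below: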
# do NOT do this
prompt = f"""
{variable_content} <--- this removes all usage of cached tokens
{static_system_prompt}
{document_content}
{variable_prompt_instruction_1}
{variable_prompt_instruction_2}
{user_question}
"""
Your prompts should always be built up with the most static content first (the content varying the least from request to request), followed by the most dynamic content (the content varying the most from request to request):
- If you have a long system and user prompt without any variables, you should keep that first, and add the variables at the end of the prompt
- If you're fetching text from documents, for example, and processing the same document multiple times, you should keep the document content before any variable instructions, so the document tokens are cached as well
The sketch below shows one way to enforce this ordering.
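Here is a minimal version of such a helper (the function and parameter names are my own):
def build_prompt(
    static_system_prompt: str,
    document_content: str,
    variable_instructions: list[str],
    user_question: str,
) -> str:
    """Assemble a prompt ordered from most static to most dynamic content."""
    parts = [
        static_system_prompt,    # never changes -> always cacheable
        document_content,        # cacheable if you reuse the same document
        *variable_instructions,  # changes from request to request
        user_question,           # changes every request -> always last
    ]
    return "\n".join(parts)

prompt = build_prompt(
    static_system_prompt="You are a document expert ...",
    document_content="<document text>",
    variable_instructions=["Answer using only the document above."],
    user_question="What is the notice period in this contract?",
)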
Question at the end
Another technique you should utilize to improve LLM performance is to always put the user question at the end of your prompt. Ideally, you set it up so your system prompt contains all the general instructions, and the user prompt consists only of the user question, such as below:
system_prompt = "..."  # all the general instructions
user_prompt = f"{user_question}"
In Anthropic's prompt engineering docs, they state that putting the user question at the end can improve performance by up to 30%, especially when you're using long contexts. Putting the question at the end makes it clearer to the model which task it's trying to achieve, and will, in many cases, lead to better results.
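As a minimal sketch with the anthropic Python SDK (the model name is just an example), the general instructions go in the system parameter, and the user message ends with the question:
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in your environment

long_document = "<document text>"
user_question = "What is the notice period in this contract?"

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model name
    max_tokens=1024,
    system="You are a document expert ...",  # general, static instructions
    messages=[{
        "role": "user",
        # long context first, question last
        "content": f"{long_document}\n\n{user_question}",
    }],
)
print(response.content[0].text)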
Using a prompt optimizer
A lot of the time, when people write prompts, they become messy, inconsistent, include redundant content, and lack structure. Thus, you should always feed your prompt through a prompt optimizer.
The simplest prompt optimizer is to ask an LLM to "improve this prompt: {prompt}", and it will provide you with a more structured prompt, with less redundant content, and so on.
An even better approach, however, is to use a dedicated prompt optimizer, such as the ones you can find in OpenAI's or Anthropic's consoles. These optimizers are LLMs specifically prompted and built to optimize your prompts, and will usually yield better results. Additionally, you should make sure to include:
- Details about the task you're trying to achieve
- Examples of tasks the prompt succeeded at, with the input and output
- Examples of tasks the prompt failed at, with the input and output
Providing this additional information will usually yield far better results, and you'll end up with a much better prompt. In many cases, you'll only spend around 10-15 minutes and end up with a much more performant prompt. This makes using a prompt optimizer one of the lowest-effort approaches to improving LLM performance.
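If you go the simple route of asking an LLM directly, a minimal sketch of such an optimizer prompt (the structure is my own; adapt it to your task) could look like this:
task_description = "Extract the notice period from employment contracts."
prompt = "You are a document expert ..."  # the prompt you want to improve
successful_examples = "<input/output pairs the prompt handled well>"
failed_examples = "<input/output pairs the prompt got wrong>"

optimizer_prompt = f"""
Improve the prompt below. Make it well structured and consistent, and remove
redundant content, while keeping its original intent.

Task description:
{task_description}

Prompt to improve:
{prompt}

Examples where the prompt succeeded (input and output):
{successful_examples}

Examples where the prompt failed (input and output):
{failed_examples}

Return only the improved prompt.
"""
You can then send optimizer_prompt to any capable LLM and compare the result against your original prompt.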
Benchmark LLMs
The LLM you use can also significantly impact the performance of your LLM application. Different LLMs are good at different tasks, so you need to try out the different LLMs in your specific application area. I recommend at least setting up access to the biggest LLM providers, such as Google Gemini, OpenAI, and Anthropic. Setting this up is quite simple, and switching your LLM provider takes a matter of minutes if you already have credentials set up. Additionally, you can consider testing open-source LLMs as well, though they usually require more effort.
You then need to set up a specific benchmark for the task you're trying to achieve and see which LLM works best. Additionally, you should regularly check model performance, since the big LLM providers frequently upgrade their models without necessarily releasing a new version. You should, of course, also be ready to try out any new models coming out from the big LLM providers.
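A minimal sketch of such a benchmark, assuming a call_llm(model, prompt) wrapper (a hypothetical helper you would implement once per provider) and a handful of test cases with expected answers (the model names are examples):
# (prompt, expected answer) pairs for your specific task
test_cases = [
    ("What is the notice period in contract A?", "3 months"),
    ("What is the notice period in contract B?", "30 days"),
]

models = ["gpt-4o-mini", "claude-3-5-sonnet-latest", "gemini-1.5-pro"]

def call_llm(model: str, prompt: str) -> str:
    """Hypothetical wrapper that routes the call to the right provider SDK."""
    raise NotImplementedError  # implement per provider

for model in models:
    correct = 0
    for prompt, expected in test_cases:
        answer = call_llm(model, prompt)
        # naive substring scoring; use a task-appropriate metric in practice
        if expected.lower() in answer.lower():
            correct += 1
    print(f"{model}: {correct}/{len(test_cases)} correct")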
Conclusion
In this article, I've covered four different techniques you can utilize to improve the performance of your LLM application. I discussed utilizing cached tokens, keeping the question at the end of the prompt, using prompt optimizers, and creating specific LLM benchmarks. These are all relatively simple to set up and can lead to a significant performance boost. I believe many similar and simple techniques exist, and you should always be on the lookout for them. These topics are often described in various blog posts, and Anthropic's blog is one of those that has helped me improve LLM performance the most.