Automationscribe.com

Use an LLM-Powered Boilerplate for Building Your Own Node.js API

by admin
February 24, 2025
in Artificial Intelligence

For a long time, one of the most common ways to start a new Node.js project has been to use a boilerplate template. These templates help developers reuse familiar code structures and implement standard features, such as access to cloud file storage. With the latest developments in LLMs, project boilerplates appear to be more useful than ever.

Building on this progress, I've extended my existing Node.js API boilerplate with a new tool, LLM Codegen. This standalone feature enables the boilerplate to automatically generate module code for any purpose based on a text description. The generated module comes complete with E2E tests, database migrations, seed data, and the necessary business logic.

History

I originally created a GitHub repository for a Node.js API boilerplate to consolidate the best practices I've developed over the years. Much of the implementation is based on code from a real Node.js API running in production on AWS.

I am passionate about vertical slicing architecture and Clean Code principles as a way to keep the codebase maintainable and clean. With recent advancements in LLMs, particularly their support for large contexts and their ability to generate high-quality code, I decided to experiment with generating clean TypeScript code based on my boilerplate. This boilerplate follows specific structures and patterns that I believe are of high quality. The key question was whether the generated code would follow the same patterns and structure. Based on my findings, it does.

To recap, here's a quick highlight of the Node.js API boilerplate's key features:

  • Vertical slicing architecture based on DDD & MVC principles
  • Service input validation using Zod
  • Decoupling of application components with dependency injection (InversifyJS)
  • Integration and E2E testing with Supertest
  • Multi-service setup using Docker Compose
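The dependency-injection point deserves a quick illustration. The boilerplate uses InversifyJS for this; below is a dependency-free sketch of the same constructor-injection idea (the `Logger` and `OrderService` names are illustrative, not taken from the repository):

```typescript
// Minimal constructor-injection sketch: the service depends on an
// abstraction, not a concrete class, so tests can inject a fake.
interface Logger {
  log(message: string): void;
}

class ConsoleLogger implements Logger {
  public messages: string[] = [];
  log(message: string): void {
    this.messages.push(message);
  }
}

class OrderService {
  constructor(private readonly logger: Logger) {}

  createOrder(title: string): { title: string; status: string } {
    this.logger.log(`order created: ${title}`);
    return { title, status: "placed" };
  }
}

const logger = new ConsoleLogger();
const service = new OrderService(logger);
const order = service.createOrder("Desk");
```

In the real boilerplate, an InversifyJS container wires these bindings declaratively, which keeps each vertical slice testable in isolation.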

Over the past month, I've spent my weekends formalizing the solution and implementing the necessary code-generation logic. Below, I'll share the details.

Implementation Overview

Let's explore the specifics of the implementation. All code-generation logic is organized at the project root level, inside the llm-codegen folder, ensuring easy navigation. The Node.js boilerplate code has no dependency on llm-codegen, so it can be used as a regular template without modification.

LLM-Codegen folder structure

It covers the following use cases:

  • Generating clean, well-structured code for a new module based on an input description. The generated module becomes part of the Node.js REST API application.
  • Creating database migrations and extending seed scripts with basic data for the new module.
  • Generating and fixing E2E tests for the new code and ensuring all tests pass.
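To make the first use case concrete, here is a rough sketch of the shape a generated CRUD module's service layer might take. This is an illustrative, in-memory stand-in rather than actual tool output; the real generated code follows the boilerplate's vertical-slice layout and persists data via migrations:

```typescript
// Illustrative CRUD service for a hypothetical "orders" module.
interface Order {
  id: number;
  title: string;
  status: "placed" | "completed" | "canceled";
}

class InMemoryOrderService {
  private orders = new Map<number, Order>();
  private nextId = 1;

  create(title: string): Order {
    const order: Order = { id: this.nextId++, title, status: "placed" };
    this.orders.set(order.id, order);
    return order;
  }

  read(id: number): Order | undefined {
    return this.orders.get(id);
  }

  updateStatus(id: number, status: Order["status"]): Order | undefined {
    const order = this.orders.get(id);
    if (order) order.status = status;
    return order;
  }

  delete(id: number): boolean {
    return this.orders.delete(id);
  }
}

const svc = new InMemoryOrderService();
const created = svc.create("Desk");
svc.updateStatus(created.id, "completed");
```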

The code generated in the first stage is clean and adheres to vertical slicing architecture principles. It includes only the business logic necessary for CRUD operations. Compared to other code-generation approaches, it produces clean, maintainable, and compilable code with valid E2E tests.

The second use case involves generating a DB migration with the appropriate schema and updating the seed script with the necessary data. This task is particularly well-suited for an LLM, which handles it exceptionally well.

The final use case is generating E2E tests, which help confirm that the generated code works correctly. During E2E test runs, an SQLite3 database is used for migrations and seeds.

The primary supported LLM clients are OpenAI and Claude.

How to Use It

To get started, navigate to the root folder llm-codegen and install all dependencies by running:

npm i

llm-codegen does not rely on Docker or any other heavy third-party dependencies, making setup and execution straightforward. Before running the tool, ensure that you set at least one *_API_KEY environment variable in the .env file with the appropriate API key for your chosen LLM provider. All supported environment variables are listed in the .env.sample file (OPENAI_API_KEY, CLAUDE_API_KEY, etc.). You can use OpenAI, Anthropic Claude, or OpenRouter LLaMA. As of mid-December, OpenRouter LLaMA was surprisingly free to use: it is possible to register and obtain a token for free usage. However, the output quality of this free LLaMA model could be improved, as most of the generated code fails to pass the compilation stage.

To start llm-codegen, run the following command:

npm run start

Next, you'll be asked to enter the module description and name. In the module description, you can specify all necessary requirements, such as entity attributes and required operations. The core remaining work is carried out by micro-agents: Developer, Troubleshooter, and TestsFixer.

Here is an example of a successful code generation:

Successful code generation

Below is another example demonstrating how a compilation error was fixed:

The following is an example of generated orders module code:

A key detail is that you can generate code step by step, starting with one module and adding others until all required APIs are complete. This approach allows you to generate code for all required modules in just a few command runs.

How It Works

As mentioned earlier, all work is performed by the micro-agents Developer, Troubleshooter, and TestsFixer, managed by the Orchestrator. They run in the listed order, with the Developer generating most of the codebase. After each code-generation step, a check is performed for missing files based on their roles (e.g., routes, controllers, services). If any files are missing, a new code-generation attempt is made, with instructions about the missing files and examples for each role included in the prompt. Once the Developer completes its work, TypeScript compilation begins. If any errors are found, the Troubleshooter takes over, passing the errors to the prompt and waiting for corrected code. Finally, when compilation succeeds, the E2E tests are run. Whenever a test fails, the TestsFixer steps in with specific prompt instructions, ensuring all tests pass and the code stays clean.
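The orchestration loop just described can be sketched in condensed form. The agents below are stubs rather than real LLM calls, and all names are illustrative; the sketch only shows the control flow: generate, compile, fix compilation errors, then run tests:

```typescript
// Stubbed pipeline stages; in llm-codegen each stage is an LLM-backed agent.
type StepResult = { ok: boolean; errors: string[] };

const pipeline = {
  generate(): StepResult {
    // Developer: produces routes, controllers, services, migrations, tests
    return { ok: true, errors: [] };
  },
  compile(): StepResult {
    // tsc: here we pretend the first build fails
    return { ok: false, errors: ["TS2304: Cannot find name 'db'"] };
  },
  fixCompilation(errors: string[]): StepResult {
    // Troubleshooter: receives the compiler errors in its prompt
    return { ok: errors.length > 0, errors: [] };
  },
  runE2eTests(): StepResult {
    // E2E suite runs against the SQLite3 database
    return { ok: true, errors: [] };
  },
};

const log: string[] = [];
pipeline.generate();
log.push("developer: code generated");

let build = pipeline.compile();
while (!build.ok) {
  log.push(`troubleshooter: fixing ${build.errors.length} error(s)`);
  pipeline.fixCompilation(build.errors);
  build = { ok: true, errors: [] }; // assume the corrected code compiles
}

const tests = pipeline.runE2eTests();
log.push(tests.ok ? "tests: all passing" : "testsFixer: repairing failures");
```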

All micro-agents are derived from the BaseAgent class and actively reuse its base method implementations. Here is the Developer implementation for reference:
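The embedded snippet is not reproduced here, but the pattern can be approximated as follows. This is a hypothetical reconstruction, not the repository's actual code: an abstract BaseAgent owns the shared prompt-building and run logic, and Developer supplies only its role-specific system prompt:

```typescript
// Hypothetical sketch of the BaseAgent inheritance pattern.
abstract class BaseAgent {
  constructor(protected readonly name: string) {}

  // Shared helper reused by every agent: wrap the task in the agent's prompt.
  protected buildPrompt(task: string): string {
    return `[${this.name}] ${this.systemPrompt()}\nTask: ${task}`;
  }

  protected abstract systemPrompt(): string;

  run(task: string): string {
    // In the real tool this would call the configured LLM client.
    return this.buildPrompt(task);
  }
}

class Developer extends BaseAgent {
  constructor() {
    super("Developer");
  }

  protected systemPrompt(): string {
    return "Generate a vertically sliced TypeScript module with tests.";
  }
}

const developer = new Developer();
const prompt = developer.run("orders module with CRUD endpoints");
```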

Each agent uses its own specific prompt. Check out this GitHub link for the prompt used by the Developer.

After dedicating significant effort to research and testing, I refined the prompts for all micro-agents, resulting in clean, well-structured code with very few issues.

During development and testing, the tool was used with various module descriptions, ranging from simple to highly detailed. Here are a few examples:

- The module responsible for library book management must handle endpoints for CRUD operations on books.
- The module responsible for orders management. It must provide CRUD operations for handling customer orders. Users can create new orders, read order details, update order statuses or information, and delete orders that are canceled or completed. Orders must have the following attributes: title, status, placed source, description, image url
- Asset Management System with an "Assets" module offering CRUD operations for company assets. Users can add new assets to the inventory, read asset details, update information such as maintenance schedules or asset locations, and delete records of disposed or sold assets.

Testing with gpt-4o-mini and claude-3-5-sonnet-20241022 showed comparable output code quality, although Sonnet is more expensive. Claude Haiku (claude-3-5-haiku-20241022), while cheaper and comparable in price to gpt-4o-mini, often produces non-compilable code. Overall, with gpt-4o-mini, a single code-generation session consumes an average of around 11k input tokens and 15k output tokens. This amounts to a cost of less than 2 cents per session, based on token pricing of 15 cents per 1M input tokens and 60 cents per 1M output tokens (as of December 2024).
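That per-session figure is easy to verify. The arithmetic below uses the average token counts and gpt-4o-mini prices quoted above (December 2024):

```typescript
// Back-of-the-envelope check of the per-session cost for gpt-4o-mini.
const inputTokens = 11_000;
const outputTokens = 15_000;
const inputUsdPerMillion = 0.15;
const outputUsdPerMillion = 0.6;

const costUsd =
  (inputTokens / 1_000_000) * inputUsdPerMillion +
  (outputTokens / 1_000_000) * outputUsdPerMillion;

console.log(costUsd); // just over one cent, comfortably under two cents
```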

Below are Anthropic usage logs showing token consumption:

Based on my experimentation over the past few weeks, I conclude that while there may still be occasional issues with getting generated tests to pass, 95% of the time the generated code is compilable and runnable.

I hope you found some inspiration here and that this serves as a starting point for your next Node.js API or an upgrade to your current project. Should you have suggestions for improvements, feel free to contribute by submitting a PR for code or prompt updates.

If you enjoyed this article, feel free to clap or share your thoughts in the comments, whether ideas or questions. Thanks for reading, and happy experimenting!

UPDATE [February 9, 2025]: The LLM-Codegen GitHub repository was updated with DeepSeek API support. It is cheaper than gpt-4o-mini and offers nearly the same output quality, but it has a longer response time and sometimes struggles with API request errors.

Unless otherwise noted, all images are by the author.


Tags: API, Boilerplate, Building, LLM-Powered, Node.js

© 2024 automationscribe.com. All rights reserved.
