Government agencies and non-profit organizations evaluating grant proposals face a significant challenge: sifting through hundreds of detailed submissions, each with unique merits, to identify the most promising projects. This labor-intensive, time-consuming work is typically the first step in the grant management process, which is critical to driving meaningful social impact.
The AWS Social Responsibility & Impact (SRI) team recognized an opportunity to augment this function using generative AI. The team developed an innovative solution to streamline grant proposal review and evaluation by using the natural language processing (NLP) capabilities of Amazon Bedrock. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
Historically, AWS Health Equity Initiative applications were reviewed manually by a review committee, and it took 14 or more days each cycle for all applications to be fully reviewed. On average, the program received 90 applications per cycle. The June 2024 AWS Health Equity Initiative application cycle received 139 applications, the program's largest influx to date. It would have taken an estimated 21 days for the review committee to process that many applications manually. The Amazon Bedrock centered approach reduced the review time to 2 days (a 90% reduction).
The goal was to enhance the efficiency and consistency of the review process, empowering customers to build impactful solutions faster. By combining the advanced NLP capabilities of Amazon Bedrock with thoughtful prompt engineering, the team created a dynamic, data-driven, and equitable solution demonstrating the transformative potential of large language models (LLMs) in the social impact domain.
In this post, we explore the technical implementation details and key learnings from the team's Amazon Bedrock powered grant proposal review solution, providing a blueprint for organizations seeking to optimize their grants management processes.
Building an effective prompt for reviewing grant proposals using generative AI
Prompt engineering is the art of crafting effective prompts to instruct and guide generative AI models, such as LLMs, toward the desired outputs. By thoughtfully designing prompts, practitioners can unlock the full potential of generative AI systems and apply them to a wide range of real-world scenarios.
When constructing a prompt for our Amazon Bedrock model to review grant proposals, we used multiple prompt engineering techniques to make sure the model's responses were tailored, structured, and actionable. This included assigning the model a specific persona, providing step-by-step instructions, and specifying the desired output format.
First, we assigned the model the persona of an expert in public health, with a focus on improving healthcare outcomes for underserved populations. This context primes the model to evaluate the proposal from the perspective of a subject matter expert (SME) who thinks holistically about global challenges and community-level impact. By clearly defining the persona, we make sure the model's responses are tailored to the desired evaluation lens.
Multiple personas can be assigned against the same rubric to account for diverse perspectives. For example, when the persona "Public Health Subject Matter Expert" was assigned, the model provided keen insights on the project's impact potential and evidence basis. When the persona "Venture Capitalist" was assigned, the model provided more robust feedback on the organization's articulated milestones and post-funding sustainability plan. Similarly, when the persona "Software Development Engineer" was assigned, the model relayed subject matter expertise on the proposed use of AWS technology.
Next, we broke down the review process into a structured set of instructions for the model to follow. This includes reading the proposal, assessing it across specific dimensions (impact potential, innovation, feasibility, and sustainability), and then providing an overall summary and score. Outlining these step-by-step directives gives the model clear guidance on the required task elements and helps produce a comprehensive and consistent assessment.
Finally, we specified the desired output format as JSON, with distinct sections for the dimensional assessments, overall summary, and overall score. Prescribing this structured response format makes sure the model's output can be ingested, stored, and analyzed by our grant review team, rather than being delivered as free-form text. This level of control over the output helps streamline the downstream use of the model's evaluations.
By combining these prompt engineering techniques (role assignment, step-by-step instructions, and output formatting), we were able to craft a prompt that elicits thorough, objective, and actionable grant proposal assessments from our generative AI model. This structured approach enables us to use the model's capabilities effectively to support our grant review process in a scalable and efficient manner.
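To make these techniques concrete, the following is a minimal sketch of how such a prompt might be assembled. This is not the team's production prompt; the persona wording, the rubric placeholder, the `build_review_prompt` function name, and the JSON field names are all illustrative assumptions.

```python
# Minimal sketch of a grant-review prompt combining a persona, step-by-step
# instructions, and a prescribed JSON output format. All names and wording
# here are illustrative assumptions, not the team's actual prompt.
def build_review_prompt(persona: str, rubric: str, proposal_text: str) -> str:
    return f"""You are a {persona}.

Review the following grant proposal against this rubric:
{rubric}

Follow these steps:
1. Read the full proposal carefully.
2. Assess it on each dimension: impact potential, innovation,
   feasibility, and sustainability.
3. Provide an overall summary and an overall score from 1 to 10.

Respond only with JSON in this format:
{{
  "impact_potential": {{"score": <1-10>, "rationale": "..."}},
  "innovation": {{"score": <1-10>, "rationale": "..."}},
  "feasibility": {{"score": <1-10>, "rationale": "..."}},
  "sustainability": {{"score": <1-10>, "rationale": "..."}},
  "overall_summary": "...",
  "overall_score": <1-10>
}}

Proposal:
{proposal_text}"""
```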
Building a dynamic proposal review application with Streamlit and generative AI
To demonstrate and test the capabilities of a dynamic proposal review solution, we built a rapid prototype implementation using Streamlit, Amazon Bedrock, and Amazon DynamoDB. It's important to note that this implementation isn't intended for production use, but rather serves as a proof of concept and a starting point for further development. The application allows users to define and save various personas and evaluation rubrics, which can then be dynamically applied when reviewing proposal submissions. This approach enables a tailored and relevant assessment of each proposal, based on the specified criteria.
The application's architecture consists of several key components, which we discuss in this section.
The team used DynamoDB, a NoSQL database, to store the personas, rubrics, and submitted proposals. The stored data was retrieved by Streamlit, the web application interface. In Streamlit, the team added the selected persona and rubric to the prompt and sent the prompt to Amazon Bedrock.
Amazon Bedrock used Anthropic's Claude 3 Sonnet FM to evaluate the submitted proposals against the prompt. The model's prompts are dynamically generated based on the selected persona and rubric. Amazon Bedrock then returns the evaluation results to Streamlit for team review.
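A minimal sketch of this invocation using the AWS SDK for Python (Boto3) follows. The model ID targets Claude 3 Sonnet on Amazon Bedrock; `build_review_prompt` is the hypothetical helper sketched earlier, and the region, token limit, and error handling are assumptions made for brevity.

```python
import json
import boto3

# Bedrock runtime client; assumes AWS credentials and region are configured.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def evaluate_proposal(persona: str, rubric: str, proposal_text: str) -> dict:
    """Send a dynamically built prompt to Claude 3 Sonnet and parse the JSON reply."""
    prompt = build_review_prompt(persona, rubric, proposal_text)  # hypothetical helper
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 2048,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    body = json.loads(response["body"].read())
    # Claude returns a list of content blocks; the first holds the JSON text.
    return json.loads(body["content"][0]["text"])
```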
The following diagram illustrates this flow.
The workflow consists of the following steps:
- Users can create and manage personas and rubrics through the Streamlit application. These are stored in the DynamoDB database.
- When a user submits a proposal for review, they choose the desired persona and rubric from the available options.
- The Streamlit application generates a dynamic prompt for the Amazon Bedrock model, incorporating the selected persona and rubric details.
- The Amazon Bedrock model evaluates the proposal based on the dynamic prompt and returns the assessment results.
- The evaluation results are stored in the DynamoDB database and presented to the user through the Streamlit application (see the sketch following this list).
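The following sketch shows how the Streamlit front end and DynamoDB persistence might be wired together under the assumptions above. The table name, key schema, item attributes, and widget labels are hypothetical, and `evaluate_proposal` is the Bedrock helper sketched earlier.

```python
import uuid
import boto3
import streamlit as st
from boto3.dynamodb.conditions import Key

# Hypothetical single table keyed by item_type (partition key) and id (sort key).
table = boto3.resource("dynamodb").Table("GrantReviewPrototype")

def load_items(item_type: str) -> list:
    """Fetch all saved items of one type, such as personas or rubrics."""
    return table.query(KeyConditionExpression=Key("item_type").eq(item_type))["Items"]

personas = {p["name"]: p for p in load_items("persona")}
rubrics = {r["name"]: r for r in load_items("rubric")}

persona_name = st.selectbox("Persona", list(personas))
rubric_name = st.selectbox("Rubric", list(rubrics))
proposal_text = st.text_area("Proposal text")

if st.button("Evaluate") and proposal_text:
    # evaluate_proposal is the Bedrock invocation helper sketched earlier.
    result = evaluate_proposal(personas[persona_name]["text"],
                               rubrics[rubric_name]["text"], proposal_text)
    table.put_item(Item={
        "item_type": "evaluation",
        "id": str(uuid.uuid4()),
        "persona": persona_name,
        "rubric": rubric_name,
        "result": result,
    })
    st.json(result)  # present the structured assessment to the reviewer
```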
Impact
This rapid prototype demonstrates the potential for a scalable and flexible proposal review process, allowing organizations to:
- Reduce application processing time by up to 90%
- Streamline the review process by automating evaluation tasks
- Capture structured data on proposals and assessments for further analysis
- Incorporate diverse perspectives by enabling the use of multiple personas and rubrics
Throughout the implementation, the AWS SRI team focused on creating an interactive and user-friendly experience. By working hands-on with the Streamlit application and observing the impact of dynamic persona and rubric selection, users can gain practical experience in building AI-powered applications that address real-world challenges.
Considerations for a production-grade implementation
Although the rapid prototype demonstrates the potential of this solution, a production-grade implementation requires additional considerations and additional AWS services. Some key considerations include:
- Scalability and performance – For handling large volumes of proposals and concurrent users, a serverless architecture using AWS Lambda, Amazon API Gateway, DynamoDB, and Amazon Simple Storage Service (Amazon S3) would provide improved scalability, availability, and reliability.
- Security and compliance – Depending on the sensitivity of the data involved, additional security measures such as encryption, authentication and access control, and auditing are necessary. Services like AWS Key Management Service (AWS KMS), Amazon Cognito, AWS Identity and Access Management (IAM), and AWS CloudTrail can help meet these requirements.
- Monitoring and logging – Implementing robust monitoring and logging mechanisms using services like Amazon CloudWatch and AWS X-Ray enables tracking performance, identifying issues, and maintaining compliance.
- Automated testing and deployment – Automated testing and deployment pipelines using services like AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy help provide consistent and reliable deployments, reducing the risk of errors and downtime.
- Cost optimization – Cost optimization strategies, such as using AWS Cost Explorer and AWS Budgets, can help manage costs and maintain efficient resource utilization.
- Responsible AI considerations – Implementing safeguards, such as Amazon Bedrock Guardrails, and monitoring mechanisms can help enforce the responsible and ethical use of the generative AI model, including bias detection, content moderation, and human oversight. Although the AWS Health Equity Initiative application form collected customer information such as name, email address, and country of operation, this information was systematically omitted when sent to the Amazon Bedrock enabled tool to avoid bias in the model and protect customer data (see the sketch following this list).
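As one illustration of that last point, the following is a minimal sketch of how identifying fields might be stripped from a submission before it reaches the model. The field names and the helper are hypothetical; the actual application form and redaction logic are not shown in this post.

```python
# Hypothetical set of identifying fields collected by the application form.
PII_FIELDS = {"name", "email_address", "country_of_operation"}

def redact_submission(submission: dict) -> dict:
    """Return a copy of the submission with identifying fields removed,
    so the model evaluates the proposal content alone."""
    return {k: v for k, v in submission.items() if k not in PII_FIELDS}
```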
By using the full suite of AWS services and following best practices for security, scalability, and responsible AI, organizations can build a production-ready solution that meets their specific requirements while achieving compliance, reliability, and cost-effectiveness.
Conclusion
Amazon Bedrock, coupled with effective prompt engineering, enabled AWS SRI to review grant proposals and deliver awards to customers in days instead of weeks. The skills developed in this project, such as building web applications with Streamlit, integrating with NoSQL databases like DynamoDB, and customizing generative AI prompts, are highly transferable and applicable to a wide range of industries and use cases.
About the authors
Carolyn Vigil is a Global Lead for AWS Social Responsibility & Impact Customer Engagement. She drives strategic initiatives that leverage cloud computing for social impact worldwide. A passionate advocate for underserved communities, she has co-founded two non-profit organizations serving individuals with developmental disabilities and their families. Carolyn enjoys mountain adventures with her family and friends in her free time.
Lauren Hollis is a Program Manager for AWS Social Responsibility and Impact. She leverages her background in economics, healthcare research, and technology to help mission-driven organizations deliver social impact using AWS cloud technology. In her free time, Lauren enjoys reading and playing the piano and cello.
Ben West is a hands-on builder with experience in machine learning, big data analytics, and full-stack software development. As a technical program manager on the AWS Social Responsibility & Impact team, Ben leverages a wide variety of cloud, edge, and Internet of Things (IoT) technologies to develop innovative prototypes and help public sector organizations make a positive impact in the world. Ben is an Army veteran who enjoys cooking and being outdoors.
Mike Haggerty is a Senior Systems Development Engineer (Sr. SysDE) at Amazon Web Services (AWS), working within the PACE-EDGE team. In this role, he contributes to AWS's edge computing initiatives as part of the Worldwide Public Sector (WWPS) organization's PACE (Prototyping and Customer Engineering) team. Beyond his professional duties, Mike is a pet therapy volunteer who, together with his dog Gnocchi, provides support services at local community facilities.