As a data scientist working on time-series forecasting, I've run into more anomalies and outliers than I can count. Across demand forecasting, finance, traffic, and sales data, I keep running into spikes and dips that are hard to interpret.
Anomaly handling is usually a gray area, rarely black or white, and often a sign of deeper issues. Some anomalies are real signals like holidays, weather events, promotions, or viral moments; others are just data glitches, but both look the same at first glance. The sooner we detect anomalies in data, the sooner action can be taken to prevent poor performance and damage.
We're dealing with critical time-series data, and detecting anomalies is crucial. If you remove a true event, a valuable signal data point is lost, and if you keep a false alarm, the training data contains noise.
Most ML-based detectors flag spikes based on Z-scores, IQR thresholds, or other static methods without any context. With recent advancements in AI, we have a better option: design an anomaly-handling agent that reasons about each case. An agent that detects unusual behavior, checks context, and decides whether to fix the data, keep it as a real signal, or flag it for review.
In this article, we build such an agent step by step, combining simple statistical detection with an AI agent that acts as a first line of defense for time-series data, reducing manual intervention while preserving the signals that matter most. We will detect and handle anomalies in COVID-19 data through autonomous decision-making based on the severity of the anomaly, using:
- Live epidemiological data from the disease.sh API.
- Statistical anomaly detection.
- Severity classification.
- A GroqCloud-powered AI agent that autonomously decides whether to:
- Fix the anomaly
- Keep the anomaly
- Flag the anomaly for human review
This is agentic decision intelligence, not merely anomaly detection.

Image by author.
Why is traditional anomaly detection alone not enough?
There are traditional ML methods like isolation forests designed for anomaly detection, but they lack end-to-end decision orchestration and cannot act on anomalies quickly enough in production environments. We are implementing an AI agent to fill this gap by turning raw anomaly scores into autonomous, end-to-end decisions on live data.
Traditional Anomaly Detection
Traditional anomaly detection follows the pipeline approach drawn below:

Limitations of Traditional Anomaly Detection
- Works on static rules and manually set thresholds.
- Is single-dimensional and handles simple data.
- No contextual reasoning.
- Human-driven decision making.
- Manually driven action.
Anomaly Detection and Handling with an AI Agent
Anomaly detection with an AI agent follows the pipeline approach drawn below:

Why does this work better in practice?
- Works on real-time data.
- Is multidimensional and can handle complex data.
- Uses contextual reasoning.
- Adaptive and self-learning decision making.
- Takes autonomous action.
Choosing a practical dataset for our example
We're using real-world COVID-19 data to detect anomalies, as it is noisy, shows spikes, and the results can support public health efforts.
What do we want the AI agent to decide?
The goal is to continuously monitor COVID-19 data, find anomalies, define their severity, and autonomously decide the action to take:
- Flag the anomaly for human review
- Fix the anomaly
- Keep the anomaly
Data Source
For the data, we're using the free, live disease.sh API. It provides data on daily confirmed cases, deaths, and recoveries. For the AI agent implementation, we focus on daily case counts, which are ideal for anomaly detection.
Data license: This tutorial uses COVID-19 historical case counts retrieved via the disease.sh API. The underlying dataset (JHU CSSE COVID-19 Data Repository) is licensed under CC BY 4.0, which permits commercial use with attribution. (Accessed on January 22, 2026)
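Before building the loader in Step 3, it helps to peek at the shape of the raw response. The snippet below is a minimal exploratory sketch (the country and lastdays values are illustrative); the timeline object nests per-date counts under "cases", "deaths", and "recovered", which is exactly what the Step 3 code parses.
# Peek at the raw disease.sh historical response (illustrative country/window)
import requests

resp = requests.get("https://disease.sh/v3/covid-19/historical/india?lastdays=30")
timeline = resp.json()["timeline"]          # nested keys: "cases", "deaths", "recovered"
print(list(timeline["cases"].items())[:3])  # date strings ("m/d/yy") mapped to case counts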
How do the pieces fit together?
The high-level system architecture of the anomaly detection on COVID-19 data using an AI agent is as follows:

Image by author
Building the AI Agent Step-by-Step
Let's go step by step to see how to load data from disease.sh, detect anomalies, classify them, and implement an AI agent that reasons and takes appropriate action based on the severity of the anomalies.
Step 1: Install Required Libraries
The first step is to install the required libraries: phidata, groq, python-dotenv, tabulate, and streamlit.
pip install phidata
pip install groq
pip install python-dotenv  # library to load the .env file
pip install tabulate
pip install streamlit
Step 2: Environment File Setup
Open your IDE, create a project folder, and under that folder create an environment file ".env" to store GROQ_API_KEY.
GROQ_API_KEY="your_groq_api_key_here"
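With python-dotenv installed in Step 1, the key can be loaded at runtime before the agent is created. A minimal sketch (the variable name mirrors the .env entry above):
# Load GROQ_API_KEY from the .env file so the Groq-backed agent can authenticate
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project folder
groq_api_key = os.getenv("GROQ_API_KEY")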
Step 3: Data Ingestion
Before building any agent, we need a data source that is noisy enough to surface real anomalies, yet structured enough to reason about. COVID-19 daily case counts fit well, as they contain reporting delays, sudden spikes, and regime changes. For simplicity, we deliberately restrict ourselves to a single univariate time series.
Load data from disease.sh using the request URL and extract the date and daily case count for the chosen country and the number of days you want to pull. The data is converted into a structured dataframe by parsing the JSON, formatting the date, and sorting chronologically.
# ---------------------------------------
# DATA INGESTION (disease.sh)
# ---------------------------------------
import requests
import pandas as pd

def load_live_covid_data(country: str, days: int):
    url = f"https://disease.sh/v3/covid-19/historical/{country}?lastdays={days}"
    response = requests.get(url)
    data = response.json()["timeline"]["cases"]
    df = (
        pd.DataFrame(list(data.items()), columns=["Date", "Cases"])
        .assign(Date=lambda d: pd.to_datetime(d["Date"], format="%m/%d/%y"))
        .sort_values("Date")
        .reset_index(drop=True)
    )
    return df
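As a quick sanity check, the loader can be called directly; the country code and window below are illustrative values:
# Load the last 90 days of reported cases for India (illustrative usage)
df = load_live_covid_data("india", 90)
print(df.head())
print(f"Rows loaded: {len(df)}")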
Step 4: Anomaly Detection
We will now detect abnormal behavior in the COVID-19 time series by catching sudden spikes and rapid growth trends. Case counts are generally stable, so large deviations or sharp increases indicate meaningful anomalies. We detect anomalies using statistical methods and binary labeling so the detection step stays deterministic and reproducible. Two signals are computed:
- Spike Detection
- A sudden spike is detected using the Z-score; if a data point lies more than three standard deviations from the mean, it is treated as an anomaly.
- Growth Rate Detection
- The day-over-day growth rate is calculated; if it exceeds 40%, the point is flagged.
# ---------------------------------------
# ANOMALY DETECTION
# ---------------------------------------
import numpy as np

def detect_anomalies(df):
    values = df["Cases"].values
    mean, std = values.mean(), values.std()
    spike_idx = [
        i for i, v in enumerate(values)
        if abs(v - mean) > 3 * std
    ]
    growth = np.diff(values) / np.maximum(values[:-1], 1)
    growth_idx = [i + 1 for i, g in enumerate(growth) if g > 0.4]
    anomalies = set(spike_idx + growth_idx)
    df["Anomaly"] = ["YES" if i in anomalies else "NO" for i in range(len(df))]
    return df
If a point is anomalous according to the spike check, the growth check, or both, "Anomaly" is set to "YES"; otherwise it is set to "NO".
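A minimal usage sketch, continuing from the dataframe loaded in Step 3:
# Label each day, then inspect the rows that were flagged
df = detect_anomalies(df)
print(df[df["Anomaly"] == "YES"][["Date", "Cases"]])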
Step 5: Severity Classification
All anomalies are not equal; we classify them as 'CRITICAL', 'WARNING', or 'MINOR' to guide the AI agent's decisions. Fixed rolling windows and rule-based thresholds are used to classify severity. Severity is assigned only when an anomaly exists; otherwise, the Severity, Agent Decision, and Action columns in the dataframe are left blank.
# ---------------------------------------
# CONFIG
# ---------------------------------------
ROLLING_WINDOW = 7
MIN_ABS_INCREASE = 500

# ---------------------------------------
# SEVERITY CLASSIFICATION
# ---------------------------------------
def compute_severity(df):
    df = df.sort_values("Date").reset_index(drop=True)
    df["Severity"] = ""
    df["Agent Decision"] = ""
    df["Action"] = ""
    for i in range(len(df)):
        if df.loc[i, "Anomaly"] == "YES":
            if i < ROLLING_WINDOW:
                df.loc[i, "Severity"] = ""   # not enough history to judge
                continue
            curr = df.loc[i, "Cases"]
            baseline = df.loc[i - ROLLING_WINDOW:i - 1, "Cases"].mean()
            abs_inc = curr - baseline
            growth = abs_inc / max(baseline, 1)
            if abs_inc < MIN_ABS_INCREASE:
                df.loc[i, "Severity"] = ""   # negligible absolute change
            elif growth >= 1.0:
                df.loc[i, "Severity"] = "CRITICAL"
            elif growth >= 0.4:
                df.loc[i, "Severity"] = "WARNING"
            else:
                df.loc[i, "Severity"] = "MINOR"
    return df
In the above code, to classify anomaly severity, each anomaly is compared with the previous 7 days of data (ROLLING_WINDOW = 7), and absolute and relative growth are calculated.
- Absolute Growth
MIN_ABS_INCREASE = 500 is defined as a config parameter; changes below this value are considered negligible. If the absolute growth is less than MIN_ABS_INCREASE, the anomaly is ignored and the severity is left blank. This check captures meaningful real-world impact, does not react to noise or minor fluctuations, and prevents false alarms when the growth percentage is high but the absolute change is small.
- Relative Growth:
Relative growth helps in detecting explosive trends. If growth is greater than or equal to a 100% increase over the baseline, it signals a sudden outbreak and is assigned 'CRITICAL'; if growth is greater than or equal to 40%, it signals sustained acceleration that needs monitoring and is assigned 'WARNING'; otherwise it is assigned 'MINOR'.
After severity classification, the data is ready for the AI agent to make an autonomous decision and take action. A small worked example of the thresholds is shown below.
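To make the thresholds concrete, here is a worked example with illustrative numbers (not taken from the article's results):
# Illustrative numbers: 7-day baseline of 1,000 cases, today's count of 2,500
curr, baseline = 2500, 1000
abs_inc = curr - baseline            # 1500 >= MIN_ABS_INCREASE (500), so it is not ignored
growth = abs_inc / max(baseline, 1)  # 1.5  >= 1.0, so the anomaly is classified as CRITICAL
print(abs_inc, growth)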
Step 6: Build the Prompt for the AI Agent
Below is the prompt that defines how the AI agent reasons and makes decisions based on structured context and the predefined severity when an anomaly is detected. The agent is restricted to three explicit actions and must return a single, deterministic response for safe automation.
def build_agent_prompt(obs):
    return f"""
You are an AI monitoring agent for COVID-19 data.
Observed anomaly:
Date: {obs['date']}
Cases: {obs['cases']}
Severity: {obs['severity']}
Decision rules:
- FIX_ANOMALY: noise, reporting fluctuation
- KEEP_ANOMALY: real outbreak signal
- FLAG_FOR_REVIEW: severe or ambiguous anomaly
Answer with ONLY one of:
FIX_ANOMALY
KEEP_ANOMALY
FLAG_FOR_REVIEW
"""
Three data points, i.e., the date, the number of cases reported, and the severity, are provided to the prompt explicitly, which helps the AI agent make its decision autonomously.
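The agent loop in the next step calls a build_observation helper whose implementation is not shown in this excerpt (it lives in the linked repository). A minimal sketch consistent with the keys the prompt expects might look like this:
# Minimal sketch of the observation builder assumed by the agent loop in Step 7;
# it packages one row of the dataframe into the keys used by build_agent_prompt
def build_observation(df, i):
    return {
        "date": df.loc[i, "Date"].strftime("%Y-%m-%d"),
        "cases": int(df.loc[i, "Cases"]),
        "severity": df.loc[i, "Severity"],
    }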
Step 7: Create your Agent with GroqCloud
We now create an autonomous AI agent using GroqCloud that makes intelligent, contextual decisions on detected anomalies and their severities and takes appropriate actions. Three predefined actions for the AI agent enforce validated outputs only.
# ---------------------------------------
# BUILDING AI AGENT
# ---------------------------------------
from phi.agent import Agent
from phi.model.groq import Groq

agent = Agent(
    name="CovidAnomalyAgent",
    model=Groq(id="openai/gpt-oss-120b"),
    instructions="""
    You are an AI agent monitoring live COVID-19 time-series data.
    Detect anomalies, decide according to the anomaly:
    "FIX_ANOMALY", "KEEP_ANOMALY", "FLAG_FOR_REVIEW"."""
)

for i in range(len(df)):
    if df.loc[i, "Anomaly"] == "YES":
        obs = build_observation(df, i)
        prompt = build_agent_prompt(obs)
        response = agent.run(prompt)
        decision = response.messages[-1].content.strip()
        decision = decision if decision in VALID_ACTIONS else "FLAG_FOR_REVIEW"
        df = agent_action(df, i, decision)
An AI agent named "CovidAnomalyAgent" is created, which uses an LLM hosted by GroqCloud for fast, low-latency reasoning. The agent runs a well-defined prompt, observes the data, applies contextual reasoning, makes an autonomous decision, and takes action within safe constraints.
The AI agent is not just handling anomalies; it is making an intelligent decision for each detected anomaly. The agent's decision reflects the anomaly's severity and the required action.
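The loop above also checks the response against a VALID_ACTIONS collection that is defined in the full code but not shown here; a definition consistent with the three allowed decisions would be:
# Whitelist of decisions the agent may return; anything else falls back to FLAG_FOR_REVIEW
VALID_ACTIONS = {"FIX_ANOMALY", "KEEP_ANOMALY", "FLAG_FOR_REVIEW"}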
# ---------------------------------------
# AGENT ACTION DECIDER
# ---------------------------------------
def agent_action(df, idx, action):
    df.loc[idx, "Agent Decision"] = action
    if action == "FIX_ANOMALY":
        fix_anomaly(df, idx)
    elif action == "KEEP_ANOMALY":
        df.loc[idx, "Action"] = "Accepted as a real outbreak signal"
    elif action == "FLAG_FOR_REVIEW":
        df.loc[idx, "Action"] = "Flagged for human review"
    return df
The AI agent ignores normal data points with no anomaly and considers only data points with "Anomaly" = "YES". It is constrained to return only three valid decisions: "FIX_ANOMALY", "KEEP_ANOMALY", and "FLAG_FOR_REVIEW", and the corresponding action is taken as defined in the table below:
| Agent Decision | Action |
| --- | --- |
| FIX_ANOMALY | Auto-corrected by an AI agent |
| KEEP_ANOMALY | Accepted as a real outbreak signal |
| FLAG_FOR_REVIEW | Flagged for human review |
For minor anomalies, the AI agent automatically fixes the data; it preserves valid anomalies as-is and flags critical cases for human review.
Step 8: Fix Anomaly
Minor anomalies are often caused by reporting noise and are corrected using local rolling-mean smoothing over recent historical values.
# ---------------------------------------
# FIX ANOMALY
# ---------------------------------------
def fix_anomaly(df, idx):
    window = df.loc[max(0, idx - 3):idx - 1, "Cases"]
    if len(window) > 0:
        df.loc[idx, "Cases"] = int(window.mean())
        df.loc[idx, "Severity"] = ""
        df.loc[idx, "Action"] = "Auto-corrected by an AI agent"
It takes the immediately preceding 3 days of data, calculates their mean, and smooths the anomaly by replacing its value with that average. This local rolling-mean smoothing handles temporary spikes and data glitches.
Once an anomaly is fixed, the data point is no longer considered bad, and the severity is intentionally cleared to avoid confusion. "Action" is updated to "Auto-corrected by an AI agent".
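Putting the pieces together, an end-to-end run of the steps above could look like the following sketch (assuming the functions defined in Steps 3–8; the country and window are illustrative):
# End-to-end sketch: load live data, detect anomalies, grade severity,
# then run the agent loop from Step 7 over the flagged rows
df = load_live_covid_data("india", 90)
df = detect_anomalies(df)
df = compute_severity(df)
# ... agent loop from Step 7 runs here ...
print(df[df["Anomaly"] == "YES"][["Date", "Cases", "Severity", "Agent Decision", "Action"]])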
Full Code
Please go through the complete code for the statistical anomaly detection and the AI agent implementation for anomaly handling:
https://github.com/rautmadhura4/anomaly_detection_agent/tree/main
Results
Let's review the results for the country "India", with different types of severity detected and how the AI agent handles them.
Scenario 1: A Naive Implementation
The first attempt is a naive implementation where we detect minor anomalies and the AI agent fixes them automatically. Below is a snapshot of the COVID data table for India with severity.

We have also implemented a Streamlit dashboard to review the AI agent's decisions and actions. In the result snapshot below, you can see that various minor anomalies are fixed by the AI agent.

This works best when anomalies are localized noise rather than regime changes.
Scenario 2: A Boundary Condition
Here, critical anomalies are detected and the AI agent raises a flag for review, as shown in the snapshot of the COVID data table for India with severity.

On the Streamlit dashboard, the AI agent's decisions and actions are shown in the result snapshot. You can see that all the critical anomalies have been flagged for human review by the AI agent.

Severity gating prevents dangerous auto-corrections on high-impact anomalies.
Scenario 3: A Limitation
For the limitation scenario, warning and critical anomalies are detected, as shown in the snapshot of the COVID data table for India with severity.

On the Streamlit dashboard, the AI agent's decisions and actions are shown below in the result snapshot. You can see that the critical anomaly is flagged for human review by the AI agent, but the WARNING anomaly is automatically fixed. In many real settings, a WARNING-level anomaly should be preserved and monitored rather than corrected.

This failure highlights why WARNING thresholds should be tuned and why human review remains essential.
Use the complete code to try anomaly detection on the COVID-19 dataset with different parameters.
Future Scope and Improvements
We have used a very limited dataset and implemented rule-based anomaly detection, but some future improvements can be made to the AI agent implementation:
- In our implementation, an anomaly is detected and a decision is made based on case count alone. In the future, the data can be enriched with features like hospitalization records, vaccination data, and others.
- Anomaly detection is done here using statistical methods; it could also be ML-driven in the future to identify more complex patterns.
- We have implemented a single-agent architecture; a multi-agent architecture could be implemented in the future to improve scalability, clarity, and resilience.
- A human feedback loop should also be added in the future to improve decisions over time.
Final Takeaways
Smarter AI agents enable operational AI that makes decisions using contextual reasoning, takes action to fix anomalies, and escalates to humans when needed. There are some practical takeaways to keep in mind while building an AI agent for anomaly detection:
- To detect anomalies, use statistical methods, and implement AI agents for contextual decision-making.
- Minor anomalies are safe to auto-correct, as they are often just reporting noise. Critical anomalies should never be auto-corrected; they should be flagged for review by domain experts so that real-world signals do not get suppressed.
- This AI agent should not be used in situations where anomalies directly trigger irreversible actions.
When statistical methods and an AI agent approach are combined properly, they transform anomaly detection from a simple alerting system into a controlled, decision-driven system without compromising safety.

