
Think Your Python Code Is Slow? Stop Guessing and Start Measuring

December 27, 2025 | Artificial Intelligence


I was working on a script the other day, and it was driving me nuts. It worked, sure, but it was just… slow. Really slow. I had that feeling that it could be much faster if I could only work out where the hold-up was.

My first thought was to start tweaking things. I could optimise the data loading. Or rewrite that for loop? But I stopped myself. I’ve fallen into that trap before, spending hours “optimising” a piece of code only to find it made barely any difference to the overall runtime. Donald Knuth had a point when he said, “Premature optimisation is the root of all evil.”

I decided to take a more methodical approach. Instead of guessing, I was going to find out for sure. I needed to profile the code to obtain hard data on exactly which functions were consuming the majority of the clock cycles.

In this article, I’ll walk you through the exact process I used. We’ll take a deliberately slow Python script and use two fantastic tools to pinpoint its bottlenecks with surgical precision.

The first of these tools is cProfile, a powerful profiler built into Python. The other is snakeviz, a handy tool that transforms the profiler’s output into an interactive visual map.

Setting up a development environment

Before we start coding, let’s set up our development environment. Best practice is to create a separate Python environment where you can install any necessary software and experiment, knowing that anything you do won’t impact the rest of your system. I’ll be using conda for this, but you can use whatever method you’re familiar with.

# Create our test environment
conda create -n profiling_lab python=3.11 -y

# Now activate it
conda activate profiling_lab

Now that we have our environment set up, we need to install snakeviz for our visualisations and numpy for the example script. cProfile is already included with Python, so there’s nothing more to do there. As we’ll be running our scripts in a Jupyter Notebook, we’ll install that too.

# Install our visualisation tool, numpy and Jupyter
pip install snakeviz numpy jupyter

Now type jupyter notebook into your command prompt. You should see a Jupyter Notebook open in your browser. If that doesn’t happen automatically, you’ll likely see a screenful of information after the jupyter notebook command. Near the bottom of that, there will be a URL that you should copy and paste into your browser to launch the Jupyter Notebook.

Your URL will be different to mine, but it should look something like this:

http://127.0.0.1:8888/tree?token=3b9f7bd07b6966b41b68e2350721b2d0b6f388d248cc69da

With our tools ready, it’s time to look at the code we’re going to fix.

Our “Problem” Script

To properly test our profiling tools, we need a script that exhibits clear performance issues. I’ve written a simple program that simulates workloads that stress memory, iteration and CPU cycles, making it an ideal candidate for our investigation.

# run_all_systems.py
import time
import math

# ===================================================================
CPU_ITERATIONS = 34552942
STRING_ITERATIONS = 46658100
LOOP_ITERATIONS = 171796964
# ===================================================================

# --- Task 1: A Calibrated CPU-Bound Bottleneck ---
def cpu_heavy_task(iterations):
    print("  -> Running CPU-bound task...")
    result = 0
    for i in range(iterations):
        result += math.sin(i) * math.cos(i) + math.sqrt(i)
    return result

# --- Task 2: A Calibrated Memory/String Bottleneck ---
def memory_heavy_string_task(iterations):
    print("  -> Running Memory/String-bound task...")
    report = ""
    chunk = "report_item_abcdefg_123456789_"
    for i in range(iterations):
        report += f"|{chunk}{i}"
    return report

# --- Task 3: A Calibrated "Thousand Cuts" Iteration Bottleneck ---
def simulate_tiny_op(n):
    pass

def iteration_heavy_task(iterations):
    print("  -> Running Iteration-bound task...")
    for i in range(iterations):
        simulate_tiny_op(i)
    return "OK"

# --- Main Orchestrator ---
def run_all_systems():
    print("--- Starting FINAL SLOW Balanced Showcase ---")

    cpu_result = cpu_heavy_task(iterations=CPU_ITERATIONS)
    string_result = memory_heavy_string_task(iterations=STRING_ITERATIONS)
    iteration_result = iteration_heavy_task(iterations=LOOP_ITERATIONS)

    print("--- FINAL SLOW Balanced Showcase Finished ---")

Step 1: Collecting the Data with cProfile

Our first tool, cProfile, is a deterministic profiler built into Python. We can run it from code to execute our script and record detailed statistics about every function call.

import cProfile, pstats, io

pr = cProfile.Profile()
pr.enable()

# Run the function you want to profile
run_all_systems()

pr.disable()

# Dump stats to a string and print the top 10 by cumulative time
s = io.StringIO()
ps = pstats.Stats(pr, stream=s).sort_stats("cumtime")
ps.print_stats(10)
print(s.getvalue())
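
If you want to keep the raw statistics around (for example, to open them in snakeviz later without re-running a 30-second script), you can also dump them to a file. The filename here is just an example.

# Optionally save the raw profiling data to a file for later analysis
pr.dump_stats("slow_run.prof")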

Here is the output.

--- Starting FINAL SLOW Balanced Showcase ---
  -> Running CPU-bound task...
  -> Running Memory/String-bound task...
  -> Running Iteration-bound task...
--- FINAL SLOW Balanced Showcase Finished ---
         275455984 function calls in 30.497 seconds

   Ordered by: cumulative time
   List reduced from 47 to 10 due to restriction <10>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        2    0.000    0.000   30.520   15.260 /home/tom/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py:3541(run_code)
        2    0.000    0.000   30.520   15.260 {built-in method builtins.exec}
        1    0.000    0.000   30.497   30.497 /tmp/ipykernel_173802/1743829582.py:41(run_all_systems)
        1    9.652    9.652   14.394   14.394 /tmp/ipykernel_173802/1743829582.py:34(iteration_heavy_task)
        1    7.232    7.232   12.211   12.211 /tmp/ipykernel_173802/1743829582.py:14(cpu_heavy_task)
171796964    4.742    0.000    4.742    0.000 /tmp/ipykernel_173802/1743829582.py:31(simulate_tiny_op)
        1    3.891    3.891    3.892    3.892 /tmp/ipykernel_173802/1743829582.py:22(memory_heavy_string_task)
 34552942    1.888    0.000    1.888    0.000 {built-in method math.sin}
 34552942    1.820    0.000    1.820    0.000 {built-in method math.cos}
 34552942    1.271    0.000    1.271    0.000 {built-in method math.sqrt}

We now have a bunch of numbers that can be tricky to interpret. This is where snakeviz comes into its own.

Step 2: Visualising the Bottlenecks with snakeviz

This is where the magic happens. Snakeviz takes the profiler’s output and converts it into an interactive, browser-based chart, making it far easier to find bottlenecks.

So let’s use that tool to visualise what we have. As I’m using a Jupyter Notebook, we need to load the extension first.

%load_ext snakeviz

And we run it like this.

%%snakeviz
run_all_systems()
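
If you saved the profiling data to a file (as in the optional dump_stats step earlier), you don’t need the notebook magic at all; snakeviz can be launched from the terminal and will serve the same interactive chart in your browser. The filename is the one I chose earlier.

# Open a saved profile in the browser
snakeviz slow_run.prof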

The output comes in two parts. The first is a visualisation like this.

Image by Author

What you see is a top-down “icicle” chart. From top to bottom, it represents the call hierarchy.

At the very top: Python is executing our script.

Next: the script’s __main__ execution (<string>:1(<module>)). Then the function run_all_systems. Inside that, it calls two key functions: iteration_heavy_task and cpu_heavy_task.

The memory-intensive processing part isn’t labelled on the chart. That’s because the proportion of time associated with this task is much smaller than the times apportioned to the other two intensive functions. As a result, we see a much smaller, unlabelled block to the right of the cpu_heavy_task block.

Note that snakeviz also offers an alternative chart style called a Sunburst chart. It looks a bit like a pie chart, except it consists of a set of increasingly large concentric circles and arcs, the idea being that the time a function takes to run is represented by the angular extent of its arc. The root function is a circle in the middle of the viz, and the sub-functions it calls are arranged around it, and so on. We won’t be looking at that display type in this article.

Visual confirmation like this can be much more impactful than staring at a table of numbers. I didn’t have to guess where to look anymore; the data was staring me right in the face.

The visualisation is quickly followed by a block of text detailing the timings for various parts of your code, much like the output of the cProfile tool. I’m only showing the first dozen or so lines of this, as there were 30+ in total.

ncalls tottime percall cumtime percall filename:lineno(function)
----------------------------------------------------------------
1 9.581 9.581 14.3 14.3 1062495604.py:34(iteration_heavy_task)
1 7.868 7.868 12.92 12.92 1062495604.py:14(cpu_heavy_task)
171796964 4.717 2.745e-08 4.717 2.745e-08 1062495604.py:31(simulate_tiny_op)
1 3.848 3.848 3.848 3.848 1062495604.py:22(memory_heavy_string_task)
34552942 1.91 5.527e-08 1.91 5.527e-08 ~:0(<built-in method math.sin>)
34552942 1.836 5.313e-08 1.836 5.313e-08 ~:0(<built-in method math.cos>)
34552942 1.305 3.778e-08 1.305 3.778e-08 ~:0(<built-in method math.sqrt>)
1 0.02127 0.02127 31.09 31.09 <string>:1(<module>)
4 0.0001764 4.409e-05 0.0001764 4.409e-05 socket.py:626(send)
10 0.000123 1.23e-05 0.0004568 4.568e-05 iostream.py:655(write)
4 4.594e-05 1.148e-05 0.0002735 6.838e-05 iostream.py:259(schedule)
...
...
...

Step 3: The Fix

Of course, tools like cProfile and snakeviz don’t tell you how to sort out your performance issues, but now that I knew exactly where the problems were, I could apply targeted fixes.

# final_showcase_fixed_v2.py
import time
import math
import numpy as np

# ===================================================================
CPU_ITERATIONS = 34552942
STRING_ITERATIONS = 46658100
LOOP_ITERATIONS = 171796964
# ===================================================================

# --- Fix 1: Vectorisation for the CPU-Bound Task ---
def cpu_heavy_task_fixed(iterations):
    """
    Fixed by using NumPy to perform the complex maths on an entire array
    at once, in highly optimised C code instead of a Python loop.
    """
    print("  -> Running CPU-bound task...")
    # Create an array of numbers from 0 to iterations-1
    i = np.arange(iterations, dtype=np.float64)
    # The same calculation, but vectorised, is orders of magnitude faster
    result_array = np.sin(i) * np.cos(i) + np.sqrt(i)
    return np.sum(result_array)

# --- Fix 2: Efficient String Joining ---
def memory_heavy_string_task_fixed(iterations):
    """
    Fixed by using a list comprehension and a single, efficient ''.join() call.
    This avoids creating millions of intermediate string objects.
    """
    print("  -> Running Memory/String-bound task...")
    chunk = "report_item_abcdefg_123456789_"
    # A list comprehension is fast and memory-efficient
    parts = [f"|{chunk}{i}" for i in range(iterations)]
    return "".join(parts)

# --- Fix 3: Eliminating the "Thousand Cuts" Loop ---
def iteration_heavy_task_fixed(iterations):
    """
    Fixed by recognising the task could be a no-op or a bulk operation.
    In a real-world scenario, you would find a way to avoid the loop entirely.
    Here, we demonstrate the fix by simply removing the unnecessary loop.
    The point is to show that the cost of the loop itself was the problem.
    """
    print("  -> Running Iteration-bound task...")
    # The fix is to find a bulk operation or eliminate the need for the loop.
    # Since the original function did nothing, the fix is to do nothing, but faster.
    return "OK"

# --- Main Orchestrator ---
def run_all_systems():
    """
    The main orchestrator now calls the FAST versions of the tasks.
    """
    print("--- Starting FINAL FAST Balanced Showcase ---")

    cpu_result = cpu_heavy_task_fixed(iterations=CPU_ITERATIONS)
    string_result = memory_heavy_string_task_fixed(iterations=STRING_ITERATIONS)
    iteration_result = iteration_heavy_task_fixed(iterations=LOOP_ITERATIONS)

    print("--- FINAL FAST Balanced Showcase Finished ---")

Now we can rerun cProfile on our updated code.

import cProfile, pstats, io

pr = cProfile.Profile()
pr.enable()

# Run the function you want to profile
run_all_systems()

pr.disable()

# Dump stats to a string and print the top 10 by cumulative time
s = io.StringIO()
ps = pstats.Stats(pr, stream=s).sort_stats("cumtime")
ps.print_stats(10)
print(s.getvalue())

#
# start of output
#

--- Starting FINAL FAST Balanced Showcase ---
  -> Running CPU-bound task...
  -> Running Memory/String-bound task...
  -> Running Iteration-bound task...
--- FINAL FAST Balanced Showcase Finished ---
         197 function calls in 6.063 seconds

   Ordered by: cumulative time
   List reduced from 52 to 10 due to restriction <10>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        2    0.000    0.000    6.063    3.031 /home/tom/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py:3541(run_code)
        2    0.000    0.000    6.063    3.031 {built-in method builtins.exec}
        1    0.002    0.002    6.063    6.063 /tmp/ipykernel_173802/1803406806.py:1(<module>)
        1    0.402    0.402    6.061    6.061 /tmp/ipykernel_173802/3782967348.py:52(run_all_systems)
        1    0.000    0.000    5.152    5.152 /tmp/ipykernel_173802/3782967348.py:27(memory_heavy_string_task_fixed)
        1    4.135    4.135    4.135    4.135 /tmp/ipykernel_173802/3782967348.py:35(<listcomp>)
        1    1.017    1.017    1.017    1.017 {method 'join' of 'str' objects}
        1    0.446    0.446    0.505    0.505 /tmp/ipykernel_173802/3782967348.py:14(cpu_heavy_task_fixed)
        1    0.045    0.045    0.045    0.045 {built-in method numpy.arange}
        1    0.000    0.000    0.014    0.014 <__array_function__ internals>:177(sum)

That’s a fantastic result that demonstrates the power of profiling. We spent our effort on the parts of the code that mattered. To be thorough, I also ran snakeviz on the fixed script.

%%snakeviz
run_all_systems()
Image by Author

The most notable change is the reduction in total runtime, from roughly 30 seconds to roughly 6 seconds. That’s a 5x speedup, achieved by addressing the three main bottlenecks that were visible in the “before” profile.

Let’s look at each one individually.

1. The iteration_heavy_task

Before (The Problem)
In the first image, the large bar on the left, iteration_heavy_task, is the single largest bottleneck, consuming 14.3 seconds.

  • Why was it slow? This task was a classic “death by a thousand cuts.” The function simulate_tiny_op did almost nothing, but it was called millions of times from inside a pure Python for loop. The immense overhead of the Python interpreter starting and stopping a function call over and over was the entire source of the slowness.

The Fix
The fixed version, iteration_heavy_task_fixed, recognised that the goal could be achieved without the loop at all. In our showcase, this meant removing the unnecessary loop entirely. In a real-world application, this would involve finding a single “bulk” operation to replace the iterative one, along the lines of the sketch below.
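
To make the idea of a bulk operation concrete, here is a generic, made-up example of pushing a per-item Python loop down into a single call whose looping happens in C:

# Slow: millions of trips around the Python interpreter loop
total = 0
for i in range(10_000_000):
    total += i

# Faster: one bulk call, with the looping done in C
total = sum(range(10_000_000))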

After (The Result)
In the second image, the iteration_heavy_task bar is completely gone. It’s now so fast that its runtime is a tiny fraction of a second and is invisible on the chart. We successfully eliminated a 14.3-second problem.

2. The cpu_heavy_task

Before (The Problem)
The second major bottleneck, clearly visible as the large orange bar on the right, is cpu_heavy_task, which took 12.9 seconds.

  • Why was it slow? Like the iteration task, this function was also limited by the speed of the Python for loop. While the maths operations inside were fast, the interpreter had to process each of the millions of calculations individually, which is highly inefficient for numerical tasks.

The Fix
The fix was vectorisation using the NumPy library. Instead of using a Python loop, cpu_heavy_task_fixed created a NumPy array and performed all the mathematical operations (np.sqrt, np.sin, and so on) on the entire array at once. These operations are executed in highly optimised, pre-compiled C code, completely bypassing the slow Python interpreter loop.
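
If you want to see the scale of that difference for yourself, a rough timeit comparison at a much smaller size (one million elements, chosen purely for illustration) makes the point:

import math
import timeit
import numpy as np

N = 1_000_000

def loop_version():
    # Pure Python: the interpreter handles every element individually
    return sum(math.sin(i) * math.cos(i) + math.sqrt(i) for i in range(N))

def numpy_version():
    # Vectorised: the whole array is processed in compiled C code
    i = np.arange(N, dtype=np.float64)
    return float(np.sum(np.sin(i) * np.cos(i) + np.sqrt(i)))

print("Python loop:", timeit.timeit(loop_version, number=3))
print("NumPy      :", timeit.timeit(numpy_version, number=3))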

After (The Result)
Just like the first bottleneck, the cpu_heavy_task bar has vanished from the “after” diagram. Its runtime was reduced from 12.9 seconds to about half a second.

3. The memory_heavy_string_task

Before (The Problem)
In the first diagram, the memory_heavy_string_task was running, but its runtime was small compared to the other two larger issues, so it was relegated to the small, unlabelled sliver of space on the far right. It was a relatively minor issue.

The Fix
The fix for this task was to replace the inefficient report += "..." string concatenation with a much more efficient method: building a list of all the string parts and then calling "".join() a single time at the end.
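
The effect is easy to try out at a smaller scale. The illustrative snippet below compares the two approaches for 100,000 pieces; the exact gap you see will vary, since CPython can sometimes optimise in-place += on strings, but join is the reliably efficient option.

import timeit

chunk = "report_item_abcdefg_123456789_"
N = 100_000

def concat_version():
    # Repeated += creates many intermediate string objects
    report = ""
    for i in range(N):
        report += f"|{chunk}{i}"
    return report

def join_version():
    # Build all the parts first, then join them once
    return "".join([f"|{chunk}{i}" for i in range(N)])

print("+= concat:", timeit.timeit(concat_version, number=5))
print("join     :", timeit.timeit(join_version, number=5))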

After (The Result)
In the second diagram, we see the result of our success. Having eliminated the two 10+ second bottlenecks, memory_heavy_string_task_fixed is now the new dominant bottleneck, accounting for 4.34 seconds of the total 5.22-second runtime.

Snakeviz even lets us look inside this fixed function. The new main contributor is the orange bar labelled <listcomp> (the list comprehension), which takes 3.52 seconds. This means that even in the fixed code, the most time-consuming part is now the process of building the large list of strings in memory before they can be joined.

Summary

This article provides a hands-on guide to identifying and resolving performance issues in Python code, arguing that developers should use profiling tools to measure performance instead of relying on intuition or guesswork to pinpoint the source of slowdowns.

I demonstrated a methodical workflow using two key tools:

  • cProfile: Python’s built-in profiler, used to gather detailed data on function calls and execution times.
  • snakeviz: A visualisation tool that turns cProfile’s data into an interactive “icicle” chart, making it easy to visually identify which parts of the code are consuming the most time.

The article uses a case study of a deliberately slow script engineered with three distinct and significant bottlenecks:

  1. An iteration-bound task: A function called millions of times in a loop, showcasing the performance cost of Python’s function call overhead (“death by a thousand cuts”).
  2. A CPU-bound task: A for loop performing millions of maths calculations, highlighting the inefficiency of pure Python for heavy numerical work.
  3. A memory-bound task: A large string built inefficiently using repeated += concatenation.

By analysing the snakeviz output, I pinpointed these three problems and applied targeted fixes.

  • The iteration bottleneck was fixed by eliminating the unnecessary loop.
  • The CPU bottleneck was resolved with vectorisation using NumPy, which executes mathematical operations in fast, compiled C code.
  • The memory bottleneck was fixed by building a list of string parts and using a single, efficient "".join() call.

These fixes resulted in a dramatic speedup, reducing the script’s runtime from over 30 seconds to just over 6 seconds. I concluded by demonstrating that, even after major issues are resolved, the profiler can be used again to identify new, smaller bottlenecks, illustrating that performance tuning is an iterative process guided by measurement.
