Modern GUI Applications for Computer Vision in Python

I’m a big fan of interactive visualizations. As a computer vision engineer, I deal with image processing tasks almost every day, and more often than not I’m iterating on a problem where I need visual feedback to make decisions. Consider a very simple image processing pipeline with a single step that has some parameters to transform an image:

Sample processing pipeline with missing visualization of output

How do you know which parameters to adjust? Does the pipeline even work as expected? Without visualizing your output, you might miss key insights and make suboptimal choices.

Sometimes simply showing the output image and/or some calculated metrics is enough to iterate on the parameters. But I have found myself in many situations where a tool would be immensely helpful for iterating quickly and interactively on my pipeline. So in this article I will show you how to work with the simple built-in interactive elements OpenCV offers, as well as how to build more modern user interfaces for computer vision projects using customtkinter.

Prerequisites

If you want to follow along, I recommend setting up your local environment with uv and installing the following packages:

uv add numpy opencv-python pillow customtkinter
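
If you don’t use uv, a plain virtual environment with pip should work just as well (same package set, just a different installer):

pip install numpy opencv-python pillow customtkinter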

Goal

Before we dive into the code of the project, let’s quickly outline what we want to build. The application should use the webcam feed and allow the user to select different types of filters that will be applied to the stream. The processed image should be shown in real time in the window. A rough sketch of a possible UI would look as follows:

OpenCV GUI

Let’s start with a simple loop that fetches frames from your webcam and displays them in an OpenCV window.

import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imshow("Video Feed", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Keyboard Input

The simplest way to add interactivity here is by adding keyboard input. For example, we can cycle through different filters with the number keys.

...

filter_type = "normal"

while True:
    ...

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "normal":
        pass

    ...

    if key == ord('1'):
        filter_type = "normal"
    if key == ord('2'):
        filter_type = "grayscale"
        
    ...

Now you can switch between the normal image and the grayscale version by pressing the number keys 1 and 2. Let’s also quickly add a caption to the image so we can actually see the name of the filter we are applying.

We need to be careful here: if you check the shape of the frame after the filter, you will notice that the dimensionality of the frame array has changed. Remember that OpenCV image arrays are ordered HWC (height, width, channels) with the channels in BGR order (blue, green, red), so the 640×480 image from my webcam has shape (480, 640, 3).

print(filter_type, frame.shape)
# normal (480, 640, 3)
# grayscale (480, 640)

Because the grayscale operation outputs a single-channel image, the color dimension is dropped. If we now want to draw on top of this image, we either need to specify a single-channel color for the grayscale image or convert the image back to the original BGR format. The second option is a bit cleaner because it lets us unify the annotation of the image.

if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "normal":
    pass

if len(frame.shape) == 2:  # Convert grayscale to BGR
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

Caption

I want to add a black border at the bottom of the image, on top of which the name of the filter will be shown. We can use the copyMakeBorder function to pad the image with a border color at the bottom. Then we can draw the text on top of this border.

# Add a black border at the bottom of the frame
border_height = 50
border_color = (0, 0, 0)
frame = cv2.copyMakeBorder(frame, 0, border_height, 0, 0, cv2.BORDER_CONSTANT, value=border_color)

# Show the filter name
cv2.putText(
    frame,
    filter_type,
    (frame.shape[1] // 2 - 50, frame.shape[0] - border_height // 2 + 10),
    cv2.FONT_HERSHEY_SIMPLEX,
    1,
    (255, 255, 255),
    2,
    cv2.LINE_AA,
)

This is how the output should look: you can switch between the normal and grayscale mode, and the frames will be captioned accordingly.
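
For reference, here is a minimal consolidated sketch of the keyboard-controlled version up to this point, assuming webcam index 0 and the same filter names and key bindings as above:

import cv2

cap = cv2.VideoCapture(0)
filter_type = "normal"

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Apply the selected filter
    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Unify to BGR so we can draw white text on any output
    if len(frame.shape) == 2:
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

    # Caption area at the bottom
    border_height = 50
    frame = cv2.copyMakeBorder(frame, 0, border_height, 0, 0, cv2.BORDER_CONSTANT, value=(0, 0, 0))
    cv2.putText(
        frame,
        filter_type,
        (frame.shape[1] // 2 - 50, frame.shape[0] - border_height // 2 + 10),
        cv2.FONT_HERSHEY_SIMPLEX,
        1,
        (255, 255, 255),
        2,
        cv2.LINE_AA,
    )

    cv2.imshow("Video Feed", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break
    if key == ord('1'):
        filter_type = "normal"
    if key == ord('2'):
        filter_type = "grayscale"

cap.release()
cv2.destroyAllWindows()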

Sliders

Instead of using the keyboard as the input method, OpenCV also offers a basic trackbar slider UI element. The trackbar needs to be initialized at the beginning of the script. We need to reference the same window that we will later show our images in, so I’ll create a variable for the window name. Using this name, we can create the trackbar and let it act as a selector for the index into the list of filters.

filter_types = ["normal", "grayscale"]

win_name = "Webcam Stream"
cv2.namedWindow(win_name)

tb_filter = "Filter"
# def createTrackbar(trackbarName: str, windowName: str, value: int, count: int, onChange: _typing.Callable[[int], None]) -> None: ...
cv2.createTrackbar(
    tb_filter,
    win_name,
    0,
    len(filter_types) - 1,
    lambda _: None,
)

Notice how we use an empty lambda for the onChange callback; we will fetch the value manually in the loop. Everything else stays the same.

while True:
    ...

    # Get the selected filter type
    filter_id = cv2.getTrackbarPos(tb_filter, win_name)
    filter_type = filter_types[filter_id]

    ...

And voilà, we have a trackbar to select our filter.
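
As a side note, if you prefer an event-driven style over polling, the same trackbar could update the selection through its onChange callback instead. A small sketch, assuming a module-level filter_type variable is acceptable for a script this size:

def on_filter_change(filter_id: int) -> None:
    # OpenCV calls this with the new trackbar position whenever it changes
    global filter_type
    filter_type = filter_types[filter_id]

cv2.createTrackbar(tb_filter, win_name, 0, len(filter_types) - 1, on_filter_change)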

Now we can also easily add more filters, simply by extending our list and implementing each processing step.

filter_types = [
    "normal",
    "grayscale",
    "blur",
    "threshold",
    "canny",
    "sobel",
    "laplacian",
]

...

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "blur":
        frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
    elif filter_type == "threshold":
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
    elif filter_type == "canny":
        frame = cv2.Canny(frame, threshold1=100, threshold2=200)
    elif filter_type == "sobel":
        frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
    elif filter_type == "laplacian":
        frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
    elif filter_type == "normal":
        pass

    if frame.dtype != np.uint8:
        # Scale the frame to uint8 if necessary (requires numpy imported as np)
        cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
        frame = frame.astype(np.uint8)

Modern GUI with CustomTkinter

Now, I don’t know about you, but the current user interface doesn’t look very modern to me. Don’t get me wrong, there is some beauty in the simplicity of the interface, but I prefer cleaner, more modern designs. Plus, we are already at the limit of what OpenCV offers out of the box in terms of UI elements: no buttons, text fields, dropdowns, checkboxes or radio buttons, and no custom layouts. So let’s see how we can transform the look and user experience of this basic application into a fresh and clean one.

To get started, we first need to create a class for our app. We create two frames: the first one contains our filter selection on the left side and the second wraps the image display. For now, let’s start with a simple placeholder text. Unfortunately, there’s no out-of-the-box OpenCV component in customtkinter, so we will need to quickly build our own in the next few steps. But let’s first finish the basic UI layout.

import customtkinter


class App(customtkinter.CTk):
    def __init__(self) -> None:
        super().__init__()

        self.title("Webcam Stream")
        self.geometry("800x600")

        self.filter_var = customtkinter.IntVar(value=0)

        # Frame for filters
        self.filters_frame = customtkinter.CTkFrame(self)
        self.filters_frame.pack(side="left", fill="both", expand=False, padx=10, pady=10)

        # Frame for image display
        self.image_frame = customtkinter.CTkFrame(self)
        self.image_frame.pack(side="right", fill="both", expand=True, padx=10, pady=10)

        self.image_display = customtkinter.CTkLabel(self.image_frame, text="Loading...")
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)

app = App()
app.mainloop()
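
customtkinter also lets you control the overall look globally. As an optional extra that is not part of the snippet above, you could set the appearance mode and color theme once, before instantiating App:

# Optional global styling; call these before creating the App instance
customtkinter.set_appearance_mode("dark")        # "light", "dark" or "system"
customtkinter.set_default_color_theme("blue")    # built-in themes: "blue", "green", "dark-blue"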

Filter Radio Buttons

Now that the skeleton is built, we can start filling in our components. For the left side, I will use the same list of filter_types to populate a group of radio buttons for selecting the filter.

        # Create radio buttons for each filter type
        self.filter_var = customtkinter.IntVar(value=0)
        for filter_id, filter_type in enumerate(filter_types):
            rb_filter = customtkinter.CTkRadioButton(
                self.filters_frame,
                text=filter_type.capitalize(),
                variable=self.filter_var,
                value=filter_id,
            )
            rb_filter.pack(padx=10, pady=10)

            if filter_id == 0:
                rb_filter.select()
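
If you also want to react the moment a selection changes, instead of only reading the variable in the frame loop, CTkRadioButton accepts a command callback. A minimal sketch; the exact timing of the variable update versus the callback is up to customtkinter, so treat this as illustrative:

            rb_filter = customtkinter.CTkRadioButton(
                self.filters_frame,
                text=filter_type.capitalize(),
                variable=self.filter_var,
                value=filter_id,
                command=lambda: print(f"Selected: {filter_types[self.filter_var.get()]}"),
            )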

Image Display Component

Now we can get started on the interesting part: how to get our OpenCV frames to show up in the image component. Because there’s no built-in component for this, let’s create our own based on CTkLabel. This allows us to display a loading text while the webcam stream is starting up.

...

from typing import Any


class CTkImageDisplay(customtkinter.CTkLabel):
    """
    A reusable ctk widget to display opencv images.
    """

    def __init__(
        self,
        master: Any,
    ) -> None:
        self._textvariable = customtkinter.StringVar(master, "Loading...")
        super().__init__(
            master,
            textvariable=self._textvariable,
            image=None,
        )

...

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.image_display = CTkImageDisplay(self.image_frame)
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)

So far nothing has changed; we simply swapped out the existing label with our custom class implementation. In our CTkImageDisplay class we can define a function to show an image in the component, let’s call it set_frame.

import cv2
import numpy.typing as npt
from PIL import Image

class CTkImageDisplay(customtkinter.CTkLabel):
    ...

    def set_frame(self, frame: npt.NDArray) -> None:
        """
        Set the frame to be displayed in the widget.

        Args:
            frame: The new frame to display, in opencv format (BGR).
        """
        target_width, target_height = frame.shape[1], frame.shape[0]

        # Convert the frame to PIL Image format
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame_pil = Image.fromarray(frame_rgb, "RGB")

        ctk_image = customtkinter.CTkImage(
            light_image=frame_pil,
            dark_image=frame_pil,
            size=(target_width, target_height),
        )
        self.configure(image=ctk_image, text="")
        self._textvariable.set("")

Let’s digest this. First we need to know how big our image component will be; we can extract that information from the shape property of our image array. To display the image in tkinter, we need a Pillow Image type; we cannot use the OpenCV array directly. To convert an OpenCV array to Pillow, we first convert the color space from BGR to RGB and then use the Image.fromarray function to create the Pillow Image object. Next we can create a CTkImage, where we use the same image regardless of the theme and set the size according to our frame. Finally we use the configure method to set the image on our component. At the end, we also reset the text variable to remove the “Loading…” text, even though it would theoretically be hidden behind the image.

To quickly test this, we can set the first image of our webcam in the constructor. (We will see in a moment why this is not such a good idea.)

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        
        cap = cv2.VideoCapture(0)
        _, frame0 = cap.read()
        self.image_display.set_frame(frame0)

If you run this, you will notice that the window takes a bit longer to pop up, but after a short delay you should see a static image from your webcam.

NOTE: If you don’t have a webcam ready, you can also just use a local video file by passing the file path to the cv2.VideoCapture constructor.
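
For example (the path below is just a placeholder):

cap = cv2.VideoCapture("path/to/video.mp4")  # any local video file works as a drop-in replacement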

This isn’t very exciting yet, since the frame doesn’t update. So let’s see what happens if we try to do that naively.

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        cap = cv2.VideoCapture(0)
        while True:
            ret, frame = cap.read()
            if not ret:
                break

            self.image_display.set_frame(frame)

Almost the same as before, except now we run the frame loop as we did in the previous chapter with the OpenCV GUI. If you run this, you will see… exactly nothing. The window never shows up, since we are creating an infinite loop in the constructor of the app! This is also the reason why the window only showed up after a delay in the previous example: opening the webcam stream is a blocking operation, and the event loop for the window cannot run, so it doesn’t show up yet.

So let’s fix this with a slightly better implementation that allows the GUI event loop to run while we also update the frame every now and then. We can use the after method of tkinter to schedule a function call while yielding the process during the wait time.


        ...

        self.cap = cv2.VideoCapture(0)
        self.after(10, self.update_frame)

    def update_frame(self) -> None:
        """
        Update the displayed frame.
        """

        ret, frame = self.cap.read()
        if not ret:
            return

        self.image_display.set_frame(frame)

        self.after(10, self.update_frame)

We still set up the webcam stream in the constructor, so we haven’t solved that problem yet. But at least we now see a continuous stream of frames in our image component.

Applying Filters

Now that the frame loop is working, we can re-implement our filters from the beginning and apply them to our webcam stream. In the update_frame function, we check the current filter variable and apply the corresponding filter function.

    def update_frame(self) -> None:
        ...
        
        # Get the selected filter type
        filter_id = self.filter_var.get()
        filter_type = filter_types[filter_id]

        if filter_type == "grayscale":
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        elif filter_type == "blur":
            frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
        elif filter_type == "threshold":
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
        elif filter_type == "canny":
            frame = cv2.Canny(frame, threshold1=100, threshold2=200)
        elif filter_type == "sobel":
            frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
        elif filter_type == "laplacian":
            frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
        elif filter_type == "normal":
            pass

        if frame.dtype != np.uint8:
            # Scale the frame to uint8 if necessary
            cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
            frame = frame.astype(np.uint8)
        if len(frame.shape) == 2:  # Convert grayscale to BGR
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

        self.image_display.set_frame(frame)

        self.after(10, self.update_frame)

And now we are back to the full functionality of the application: you can select any filter on the left side and it will be applied in real time to the webcam feed!

Multithreading and Synchronization

Although the application runs as is, there are some problems with the current way we run our frame loop. Currently everything runs in a single thread, the main GUI thread. This is why, at first, we don’t immediately see the window pop up: our webcam initialization blocks the main thread. Now imagine we did some heavier image processing, maybe running the images through a neural network. You wouldn’t want the user interface to be blocked the whole time the network is running inference; that leads to a very unresponsive user experience when clicking the UI elements!

A better way to handle this in our application is to separate the image processing from the user interface. In general, it is almost always a good idea to separate your GUI logic from any kind of non-trivial processing. So in our case, we will run a separate thread that is responsible for the image loop. It will read the frames from the webcam stream and apply the filters.

NOTE: Python threads are not “real” threads in the sense that they cannot run on different logical CPU cores and hence will not truly run in parallel. In Python multithreading the context switches between the threads, but due to the GIL, the global interpreter lock, a single Python process effectively runs only one thread at a time. If you want “real” parallel processing, you have to use multiprocessing. Since our task here is not CPU bound but rather I/O bound, multithreading suffices.

import threading


class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

    def run_webcam_loop(self) -> None:
        """
        Run the webcam loop in a separate thread.
        """
        self.cap = cv2.VideoCapture(0)
        if not self.cap.isOpened():
            return

        while True:
            ret, frame = self.cap.read()
            if not ret:
                break

            # Filters
            ...

            self.image_display.set_frame(frame)

If you run this, you will now see that our window opens up immediately and we even see our loading text while the webcam stream is opening. However, as soon as the stream starts, the frames begin to flicker. Depending on various factors, you might experience different visual artifacts or errors at this stage.

Warning: flashing image

Why is this happening? The problem is that we are trying to update the new frame while the internal refresh loop of the user interface might be using the data of that same array to draw it on the screen. Both are competing for the same frame array.

It is generally not a good idea to update UI elements directly from a different thread; in some frameworks this is even prevented and will raise exceptions. In Tkinter we can do it, but we will get weird results. We need some kind of synchronization between our threads. That’s where the queue comes into play.

You are probably familiar with queues from the grocery store or theme parks. The concept of the queue here is very similar: the first element that goes into the queue also leaves first (first in, first out).

In this case, we actually just want a queue with a single element, a single-slot queue. The queue implementation in Python is thread-safe, meaning we can put and get objects from different threads. Perfect for our use case: the processing thread will put the image arrays into the queue, and the GUI thread will try to get an element, but not block if the queue is empty.

import queue

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.queue = queue.Queue(maxsize=1)

        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

        self.frame_loop_dt_ms = 16  # ~60 FPS
        self.after(self.frame_loop_dt_ms, self._update_frame)

    def _update_frame(self) -> None:
        """
        Update the frame in the image display widget.
        """
        try:
            frame = self.queue.get_nowait()
            self.image_display.set_frame(frame)
        except queue.Empty:
            pass

        self.after(self.frame_loop_dt_ms, self._update_frame)

    def run_webcam_loop(self) -> None:
        ...

        while True:
            ...

            self.queue.put(frame)

Notice how we move the direct call to the set_frame function from the webcam loop, which runs in its own thread, to the _update_frame function that runs on the main thread, repeatedly scheduled at 16 ms intervals.

It is important to use the get_nowait function in the main thread; if we used the get function instead, we would block there. This call does not block, but raises a queue.Empty exception if there is no element to fetch, so we have to catch and ignore it. In the webcam loop we can use the blocking put function, because it doesn’t matter if run_webcam_loop blocks; nothing else needs to run there.

And now everything works as expected, no more flashing frames!
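
One loose end this demo leaves open is shutdown: the capture is never released, and the worker thread only goes away because it is a daemon. A possible cleanup sketch, using a threading.Event and tkinter’s WM_DELETE_WINDOW protocol; the names _stop_event and _on_close are my own, not from the original code:

        # In __init__, before starting the thread
        self._stop_event = threading.Event()
        self.protocol("WM_DELETE_WINDOW", self._on_close)

    def _on_close(self) -> None:
        # Signal the webcam loop to stop, give it a moment, then close the window
        self._stop_event.set()
        self.webcam_thread.join(timeout=1.0)
        self.destroy()

    def run_webcam_loop(self) -> None:
        self.cap = cv2.VideoCapture(0)
        while not self._stop_event.is_set():
            ret, frame = self.cap.read()
            if not ret:
                break
            # Filters
            ...
            try:
                self.queue.put(frame, timeout=0.1)
            except queue.Full:
                pass  # GUI is not consuming (e.g. shutting down); drop the frame
        self.cap.release()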

Conclusion

Combining a UI framework like Tkinter with OpenCV allows us to build modern-looking applications with an interactive graphical user interface. Because the UI runs in the main thread, we run the image processing in a separate thread and synchronize the data between the threads using a single-slot queue. You can find a cleaned-up version of this demo, with a more modular structure, in the repository below. Let me know if you build something interesting with this approach. Take care!



Check out the full source code in the GitHub repo:

https://github.com/trflorian/ctk-opencv

