The Machine Learning “Advent Calendar” Day 12: Logistic Regression in Excel

By admin · December 13, 2025 · Artificial Intelligence


Today’s model is Logistic Regression.

If you already know this model, here is a question for you:

Is Logistic Regression a regressor or a classifier?

Well, this question is exactly like: is a tomato a fruit or a vegetable?

From a botanist’s point of view, a tomato is a fruit, because botanists look at structure: seeds, flowers, plant biology.

From a cook’s point of view, a tomato is a vegetable, because cooks look at taste, how it is used in a recipe, whether it goes in a salad or a dessert.

The same object, two valid answers, because the point of view is different.

Logistic Regression is exactly like that.

  • In the statistical / GLM perspective, it is a regression. The notion of “classification” does not really exist in this framework anyway: there are gamma regression, logistic regression, Poisson regression…
  • In the machine learning perspective, it is used for classification. So it is a classifier.

We will come back to this later.

For now, one thing is certain:

Logistic Regression is very well suited when the target variable is binary, and usually y is coded as 0 or 1.

But…

What is a classifier for a weight-based model?

So, y can be 0 or 1.

0 and 1 are numbers, right?

So we can simply treat y as continuous!

Yes: y = a x + b, with y = 0 or 1.

Why not?

Now, you may ask: why this question, and why now? Why was it not asked before?

Well, for distance-based and tree-based models, a categorical y is truly categorical.

When y is categorical, like red, blue, green, or simply 0 and 1:

  • In K-NN, you classify by looking at the neighbors of each class.
  • In centroid models, you compare with the centroid of each class.
  • In a decision tree, you compute class proportions at each node.

In all these models:

Class labels are not numbers.
They are categories.
The algorithms never treat them as values.

So classification is natural and immediate.

But for weight-based models, things work differently.

In a weight-based model, we always compute something like:

y = a x + b

or, later, a more complex function with coefficients.

This means:

The model works with numbers everywhere.

So here is the key idea:

If the model does regression, then this same model can be used for binary classification.

Yes, we can use linear regression for binary classification!

Since binary labels are 0 and 1, they are already numeric.

And in this specific case, we can apply Ordinary Least Squares (OLS) directly on y = 0 and y = 1.

The model will fit a line, and we can use the same closed-form formula, as shown below.

Logistic Regression in Excel – all images by author
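
The original screenshot is not reproduced here; for reference, the standard closed-form OLS solution for y = a x + b is:

a = Σᵢ (xᵢ − x̄)(yᵢ − ȳ) / Σᵢ (xᵢ − x̄)²
b = ȳ − a x̄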

We can run the same gradient descent, and it will work perfectly.

Then, to obtain the final class prediction, we simply choose a threshold.
It is usually 0.5 (or 50 percent), but depending on how strict you want to be, you can pick another value.

  • If the predicted y ≥ 0.5, predict class 1
  • Otherwise, predict class 0

This is a classifier.
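
To make this concrete, here is a minimal Python sketch of the idea. The dataset is invented for illustration; it is not the one from the original screenshots:

import numpy as np

# Invented toy data: a single feature x and binary labels y
x = np.array([1, 2, 3, 4, 6, 7, 8, 10, 11, 12], dtype=float)
y = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

# Closed-form OLS fit of y = a*x + b
a = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - a * x.mean()

# Threshold the continuous prediction at 0.5 to get a class
y_hat = a * x + b
pred = (y_hat >= 0.5).astype(int)
print(pred)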

And since the model produces a numeric output, we can even determine the point where y = 0.5.

This value of x defines the decision boundary.
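Concretely, the boundary is just where the fitted line crosses the threshold: solving a x + b = 0.5 gives x = (0.5 − b) / a.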

In the previous example, this happens at x = 9.
At this threshold, we already saw one misclassification.

But a problem appears as soon as we introduce a point with a large value of x.

For example, suppose we add a point with x = 50 and y = 1.

Because linear regression tries to fit a straight line through all the data, this single large value of x pulls the line.
The decision boundary shifts from x = 9 to roughly x = 12.

And now, with this new boundary, we end up with two misclassifications.

This illustrates the main issue:

A linear regression used as a classifier is extremely sensitive to extreme values of x. The decision boundary moves dramatically, and the classification becomes unstable.

This is one of the reasons we need a model that does not keep behaving linearly forever. A model that stays between 0 and 1, even when x becomes very large.

And this is exactly what the logistic function will give us.

How Logistic Regression works

We start with a x + b, just like in linear regression.

Then we apply a function called the sigmoid, or logistic function.

As we can see below, the value of p is then always between 0 and 1, which is exactly what we need.
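
The original screenshot is not reproduced here; the sigmoid applied to the linear score is the standard logistic function:

p(x) = 1 / (1 + e^(−(a x + b)))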

  • p(x) is the predicted probability that y = 1
  • 1 − p(x) is the predicted probability that y = 0

For classification, we can simply say:

  • If p(x) ≥ 0.5, predict class 1
  • Otherwise, predict class 0

From likelihood to log-loss

Now, OLS Linear Regression tries to minimize the MSE (Mean Squared Error).

Logistic regression for a binary target uses the Bernoulli likelihood. For each observation i:

  • If yᵢ = 1, the probability of the data point is pᵢ
  • If yᵢ = 0, the probability of the data point is 1 − pᵢ

For the whole dataset, the likelihood is the product over all i. In practice, we take the logarithm, which turns the product into a sum.

In the GLM perspective, we try to maximize this log likelihood.

In the machine learning perspective, we define the loss as the negative log likelihood and we minimize it. This gives the usual log-loss.
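
Written out in full (the standard form, with pᵢ = p(xᵢ) and n observations):

log-loss = −(1/n) Σᵢ [ yᵢ ln(pᵢ) + (1 − yᵢ) ln(1 − pᵢ) ]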

And the two are equivalent. We will not go through the proof here.

Gradient Descent for Logistic Regression

Principle

Just as we did for Linear Regression, we can also use Gradient Descent here. The idea is always the same:

  1. Start from some initial values of a and b.
  2. Compute the loss and its gradient (derivatives) with respect to a and b.
  3. Move a and b a little bit in the direction that reduces the loss.
  4. Repeat.

Nothing mysterious.
Just the same mechanical process as before.

Step 1. Gradient Calculation

For logistic regression, the gradients of the average log-loss follow a very simple structure: each one is essentially an average residual.

We will just give the results below, as the formulas we can implement in Excel. As you can see, they are quite simple in the end, even if the log-loss formula can look complicated at first glance.

Excel can compute these two quantities with simple SUMPRODUCT formulas.
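
The formula screenshots are not reproduced here; the standard results are:

∂L/∂a = (1/n) Σᵢ (pᵢ − yᵢ) xᵢ
∂L/∂b = (1/n) Σᵢ (pᵢ − yᵢ)

Assuming, purely for illustration, that the xᵢ, yᵢ, and pᵢ live in X2:X101, Y2:Y101, and P2:P101 (a hypothetical 100-row layout), the two gradients could be written as:

=SUMPRODUCT(P2:P101 - Y2:Y101, X2:X101) / 100
=SUMPRODUCT(P2:P101 - Y2:Y101) / 100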

Step 2. Parameter Update

Once the gradients are known, we update the parameters.

This update step is repeated at each iteration.
And iteration after iteration, the loss goes down, and the parameters converge to their optimal values.
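
With a learning rate η (a hyperparameter; the value used in the original sheet is not shown, something like 0.1 is typical), the update is the usual gradient-descent step:

a ← a − η · ∂L/∂a
b ← b − η · ∂L/∂b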

We now have the complete picture.
You have seen the model, the loss, the gradients, and the parameter updates.
And with the detailed view of each iteration in Excel, you can actually play with the model: change a value, watch the curve move, and see the loss decrease step by step.

It is surprisingly satisfying to watch how everything fits together so clearly.
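
For readers who prefer code to spreadsheets, here is the whole training loop as a minimal Python sketch. The dataset, learning rate, and iteration count are invented for illustration, not taken from the original workbook:

import numpy as np

# Invented toy data: a single feature x and binary labels y
x = np.array([1, 2, 3, 4, 6, 7, 8, 10, 11, 12], dtype=float)
y = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

a, b = 0.0, 0.0   # initial parameters
eta = 0.1         # learning rate (illustrative value)

for step in range(5000):
    p = 1.0 / (1.0 + np.exp(-(a * x + b)))   # sigmoid of the linear score
    grad_a = np.mean((p - y) * x)            # dL/da: average residual times x
    grad_b = np.mean(p - y)                  # dL/db: average residual
    a -= eta * grad_a
    b -= eta * grad_b

# Final probabilities and log-loss with the trained parameters
p = 1.0 / (1.0 + np.exp(-(a * x + b)))
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(f"a = {a:.3f}, b = {b:.3f}, log-loss = {loss:.4f}")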

What about multiclass classification?

For distance-based and tree-based models:

No issue at all.
They naturally handle multiple classes because they never interpret the labels as numbers.

But for weight-based models?

Here we hit a problem.

If we write numbers for the classes: 1, 2, 3, etc.

Then the model will interpret these numbers as real numeric values.
Which leads to problems:

  • the model thinks class 3 is “bigger” than class 1
  • the midpoint between class 1 and class 3 is class 2
  • distances between classes become meaningful

But none of this is true in classification.

So:

For weight-based models, we cannot simply use y = 1, 2, 3 for multiclass classification.

This encoding is incorrect.

We will see later how to fix this.

Conclusion

Starting from a simple binary dataset, we saw how a weight-based model can act as a classifier, why linear regression quickly reaches its limits, and how the logistic function solves these problems by keeping predictions between 0 and 1.

Then, by expressing the model through likelihood and log-loss, we obtained a formulation that is both mathematically sound and easy to implement.
And once everything is laid out in Excel, the entire learning process becomes visible: the probabilities, the loss, the gradients, the updates, and finally the convergence of the parameters.

With the detailed iteration table, you can actually see how the model improves step by step.
You can change a value, adjust the learning rate, or add a point, and immediately observe how the curve and the loss react.
This is the real value of doing machine learning in a spreadsheet: nothing is hidden, and every calculation is transparent.

By building logistic regression this way, you not only understand the model, you understand why it is trained the way it is.
And this intuition will stay with you as we move on to more advanced models later in the Advent Calendar.
