The Geometry Behind the Dot Product: Unit Vectors, Projections, and Instinct

by admin
April 7, 2026
in Artificial Intelligence



This article is the first of three parts. Each part stands on its own, so you don't have to read the others to understand it.

The dot product is one of the most important operations in machine learning, but it's hard to understand without the right geometric foundations. In this first part, we build those foundations:

  • Unit vectors
  • Scalar projection
  • Vector projection

Whether or not you’re a scholar studying Linear Algebra for the primary time, or need to refresh these ideas, I like to recommend you learn this text.

Actually, we’ll introduce and clarify the dot product on this article, and within the subsequent article, we’ll discover it in larger depth.

The vector projection section is included as an optional bonus: helpful, but not essential for understanding the dot product.

The next part explores the dot product in greater depth: its geometric meaning, its relationship to cosine similarity, and why the difference matters.

The final part connects these ideas to two major applications: recommendation systems and NLP.


Unit Vectors

A vector $\vec{\mathbf{v}}$ is called a unit vector if its magnitude is 1:

$$|\vec{\mathbf{v}}| = 1$$

To remove the magnitude of a non-zero vector while keeping its direction, we can normalize it. Normalization scales the vector by the factor:

$$\frac{1}{|\vec{\mathbf{v}}|}$$

The normalized vector $\hat{\mathbf{v}}$ is the unit vector in the direction of $\vec{\mathbf{v}}$:

$$\hat{\mathbf{v}} = \frac{\vec{\mathbf{v}}}{|\vec{\mathbf{v}}|}$$

Notation 1. From now on, whenever we normalize a vector $\vec{\mathbf{v}}$ or write $\hat{\mathbf{v}}$, we assume that $\vec{\mathbf{v}} \neq 0$. This notation, along with those that follow, will also be relevant to the following articles.

This operation naturally separates a vector into its magnitude and its direction:

$$\vec{\mathbf{v}} = \underbrace{|\vec{\mathbf{v}}|}_{\text{magnitude}} \cdot \underbrace{\hat{\mathbf{v}}}_{\text{direction}}$$

Figure 1 illustrates this idea: $\vec{\mathbf{v}}$ and $\hat{\mathbf{v}}$ point in the same direction, but have different magnitudes.

Figure 1 – Separating "How Much" from "Which Way". Any vector can be written as the product of its magnitude and its unit vector, which preserves direction but has length 1. Image by Author (created using Claude).
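The normalization formula above can be sketched in a few lines of plain Python (no libraries needed; the helper names `magnitude` and `normalize` are just illustrative):

```python
import math

def magnitude(v):
    # |v| = sqrt(v1^2 + ... + vn^2)
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    # v_hat = v / |v|; assumes v is non-zero (Notation 1)
    m = magnitude(v)
    return [c / m for c in v]

v = [3.0, 4.0]
v_hat = normalize(v)
print(magnitude(v))   # 5.0
print(v_hat)          # [0.6, 0.8] -- same direction, length 1
```

Multiplying `v_hat` back by `magnitude(v)` recovers the original vector, mirroring the magnitude-times-direction decomposition above.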

Similarity of unit vectors

In two dimensions, all unit vectors lie on the unit circle (radius 1, centered at the origin). A unit vector that forms an angle θ with the x-axis has coordinates (cos θ, sin θ).

This means the angle between two unit vectors encodes a natural similarity score. As we'll show shortly, this score is exactly cos θ: equal to 1 when they point the same way, 0 when perpendicular, and −1 when opposite.

Notation 2. Throughout this article, θ denotes the smallest angle between the two vectors, so $0° \leq \theta \leq 180°$.

In observe, we don’t know θ instantly – we all know the vectors’ coordinates.

We can show why the dot product of two unit vectors $\hat{a}$ and $\hat{b}$ equals cos θ using a geometric argument in three steps:

1. Rotate the coordinate system until $\hat{b}$ lies along the x-axis. Rotation doesn't change angles or magnitudes.

2. Read off the new coordinates. After rotation, $\hat{b}$ has coordinates (1, 0). Since $\hat{a}$ is a unit vector at angle θ from the x-axis, the unit circle definition gives its coordinates as (cos θ, sin θ).

3. Multiply corresponding components and sum:

$$\hat{a} \cdot \hat{b} = a_x b_x + a_y b_y = \cos\theta \cdot 1 + \sin\theta \cdot 0 = \cos\theta$$

This sum of component-wise products is called the dot product:

$$\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n$$

See the illustration of these three steps in Figure 2 below:

Figure 2 – By rotating our perspective to align with the x-axis, the coordinate math simplifies beautifully to reveal why the two unit vectors' dot product equals cos(θ). Image by Author (created using Claude).
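A quick numeric check of the three-step argument, sketched in plain Python (the names here are illustrative, not from the article):

```python
import math

def dot(a, b):
    # Sum of component-wise products
    return sum(x * y for x, y in zip(a, b))

theta = math.radians(60)
a_hat = (math.cos(theta), math.sin(theta))  # unit vector at 60 degrees from the x-axis
b_hat = (1.0, 0.0)                          # unit vector along the x-axis
print(dot(a_hat, b_hat))  # ~0.5, i.e. cos(60 degrees)
```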

Everything above was shown in 2D, but the same result holds in any number of dimensions. Any two vectors, no matter how many dimensions they live in, always lie in a single flat plane. We can rotate that plane to align with the xy-plane, and from there, the 2D proof applies exactly.

Notation 3. In the diagrams that follow, we often draw one of the vectors (typically $\vec{b}$) along the horizontal axis. When $\vec{b}$ is not already aligned with the x-axis, we can always rotate our coordinate system as we did above (the "rotation trick"). Since rotation preserves all lengths, angles, and dot products, every formula derived in this orientation holds for any direction of $\vec{b}$.
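The same check works beyond 2D: normalize two 3D vectors and their dot product gives cos θ directly. A minimal sketch (helper names are illustrative):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    # Scale a non-zero vector to length 1
    m = math.sqrt(dot(v, v))
    return [c / m for c in v]

a_hat = normalize([1.0, 2.0, 2.0])
b_hat = normalize([2.0, 2.0, 1.0])
print(dot(a_hat, b_hat))  # 8/9 ~ 0.889: the cosine of the angle between them
```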


Scalar Projection

A vector can contribute in many directions at once, but often we care about only one direction.

Scalar projection answers the question: How much of $\vec{a}$ lies along the direction of $\vec{b}$?

This value is negative if the projection points in the opposite direction of $\vec{b}$.

The Shadow Analogy

The most intuitive way to think about scalar projection is as the length of a shadow. Imagine you hold a stick (vector $\vec{a}$) at an angle above the ground (the direction of $\vec{b}$), and a light source shines straight down from above.

The shadow that the stick casts on the ground is the scalar projection.

The animated figure below illustrates this idea:

Figure 3 – Scalar projection as a shadow. The scalar projection measures how much of vector a lies in the direction of b. It equals the length of the shadow that a casts onto b (Woo, 2023). The GIF was created by Claude.

Calculation

Imagine a light source shining straight down onto the line PS (the direction of $\vec{b}$). The "shadow" that $\vec{a}$ (the arrow from P to Q) casts onto that line is exactly the segment PR. You can see this in Figure 4.

Figure 4: Measuring Directional Alignment. The scalar projection (segment PR) visually answers the core question: "How much of vector a lies in the exact direction of vector b?" Image by Author (created using Claude).

Deriving the formula

Now look at the triangle $PQR$: the perpendicular drop from $Q$ creates a right triangle, and its sides are:

  • $PQ = |\vec{a}|$ (the hypotenuse).
  • $PR$ (the adjacent side, the shadow).
  • $QR$ (the opposite side, the perpendicular component).

From this triangle:

  1. The angle between $\vec{a}$ and $\vec{b}$ is θ.
  2. $\cos(\theta) = \frac{PR}{|\vec{a}|}$ (the most basic definition of cosine).
  3. Multiply both sides by $|\vec{a}|$:

$$PR = |\vec{a}| \cos(\theta)$$

The segment $PR$ is the shadow length: the scalar projection of $\vec{a}$ on $\vec{b}$.

When θ > 90°, the scalar projection becomes negative. Think of the shadow as flipping to the opposite side.

How is the unit vector related?

The shadow’s size (PR) doesn’t depend upon how lengthy b→largevec{b} is. It is dependent upon |a→|massive|vec{a}| and on θ.

Whenever you compute a→⋅b^largevec{a} cdot hat{b}, you’re asking: how a lot of a→largevec{a} lies alongside b→largevec{b} route?  That is the shadow size.

The unit vector acts like a route filter: multiplying a→largevec{a} by it extracts the element of a→largevec{a} alongside that route.

Let’s see it utilizing the rotation trick. We place b̂ alongside the x-axis:

a→=(|a→|cos⁡θ, |a→|sin⁡(θ))Massive vec{a} = (|vec{a}|costheta, |vec{a}|sin(theta))

and:

b^=(1,0)Massive hat{b} = (1, 0)

Then:

a→⋅b^=|a→|cos⁡θ⋅1+|a→|sin⁡(θ)⋅0=|a→|cos⁡θMassive start{aligned} vec{a} cdot hat{b} = |vec{a}|costheta cdot 1 + |vec{a}|sin(theta) cdot 0 = |vec{a}|costheta finish{aligned}

The scalar projection of $\vec{a}$ in the direction of $\vec{b}$ is:

$$|\vec{a}|\cos\theta = \vec{a} \cdot \hat{b} = \frac{\vec{a} \cdot \vec{b}}{|\vec{b}|}$$
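This formula translates directly into code. A minimal plain-Python sketch (the function name `scalar_projection` is just illustrative):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scalar_projection(a, b):
    # (a . b) / |b|: how far a reaches along b's direction
    return dot(a, b) / math.sqrt(dot(b, b))

a = [3.0, 4.0]
print(scalar_projection(a, [10.0, 0.0]))  # 3.0
print(scalar_projection(a, [1.0, 0.0]))   # 3.0 -- the length of b doesn't matter
```

The second call confirms the point made above: the shadow's length depends only on $\vec{a}$ and θ, not on how long $\vec{b}$ is.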


The Dot Product of Two General Vectors

We apply the same rotation trick one more time, now with two general vectors: $\vec{a}$ and $\vec{b}$.

After rotation:

$$\vec{a} = (|\vec{a}|\cos\theta,\ |\vec{a}|\sin\theta),$$

$$\vec{b} = (|\vec{b}|,\ 0)$$

so:

$$\vec{a} \cdot \vec{b} = |\vec{a}|\cos\theta \cdot |\vec{b}| + |\vec{a}|\sin\theta \cdot 0 = |\vec{a}||\vec{b}|\cos\theta$$

The dot product of $\vec{a}$ and $\vec{b}$ is:

$$\vec{a} \cdot \vec{b} = a_1 b_1 + \dots + a_n b_n = \sum_{i=1}^{n} a_i b_i = |\vec{a}||\vec{b}|\cos\theta$$
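We can check that the coordinate form and the geometric form agree, measuring θ independently from the coordinates with `atan2` (a plain-Python sketch; the variable names are illustrative):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def magnitude(v):
    return math.sqrt(dot(v, v))

a = (1.0, 2.0)
b = (3.0, 1.0)

# Angle between the vectors, read directly off their coordinates
theta = abs(math.atan2(a[1], a[0]) - math.atan2(b[1], b[0]))

coord_form = dot(a, b)                                    # a1*b1 + a2*b2
geo_form = magnitude(a) * magnitude(b) * math.cos(theta)  # |a||b|cos(theta)
print(coord_form, geo_form)  # both 5.0 (up to rounding)
```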


Vector Projection

Vector projection extracts the portion of vector $\vec{a}$ that points along the direction of vector $\vec{b}$.

The Trail Analogy

Imagine two trails starting from the same point (the origin):

  • Trail A leads to a whale-watching spot.
  • Trail B leads along the coast in a different direction.

Right here’s the query projection solutions:

You’re solely allowed to stroll alongside Path B. How far do you have to stroll in order that you find yourself as shut as doable to the endpoint of Path A?

You walk along B, and at some point, you stop. From where you stopped, you look toward the end of Trail A, and the line connecting you to it forms a perfect 90° angle with Trail B. That's the key geometric fact: the closest point is always where you'd make a right-angle turn.

The spot where you stop on Trail B is the projection of A onto B. It represents "the part of A that goes in B's direction."

The remaining gap, from your stopping point to the actual end of Trail A, is everything about A that has nothing to do with B's direction. This example is illustrated in Figure 5 below: the vector that starts at the origin, points along Trail B, and ends at the closest point is the vector projection of $\vec{a}$ onto $\vec{b}$.

Figure 5 – Vector projection as the closest point along a direction. Walking along trail B, the closest point to the endpoint of A occurs where the connecting segment forms a right angle with B. This point is the projection of A onto B. Image by Author (created using Claude).

Scalar projection answers: "How far did you walk?"

That's just a distance, a single number.

Vector projection answers: "Where exactly are you?"

More precisely: "What is the exact movement along Trail B that gets you to that closest point?"

Now "1.5 kilometers" isn't enough; you need to say "1.5 kilometers east along the coast." That's a distance plus a direction: an arrow, not just a number. The arrow starts at the origin, points along Trail B, and ends at the closest point.

The distance you walked is the scalar projection value. The magnitude of the vector projection equals the absolute value of the scalar projection.

The unit vector answers: "Which direction does Trail B go?"

That's exactly what $\hat{b}$ represents. It's Trail B stripped of any length information: just the pure direction of the coast.

$$\text{vector projection} = \underbrace{(\text{how far you walk})}_{\text{scalar projection}} \times \underbrace{(\text{B direction})}_{\hat{b}}$$

I know the whale analogy is very specific; it was inspired by this nice explanation (Michael P., 2014).

Figure 6 below shows the same shadow diagram as in Figure 4, with PR drawn as an arrow, because the vector projection is a vector (with both length and direction), not just a number.

Figure 6 – Vector projection as a directional shadow. Unlike scalar projection (a length), the vector projection is an arrow along vector b. Image by Author (created using Claude).

Since the projection must lie along $\vec{b}$, we need two things for $\vec{PR}$:

  1. Its magnitude is the scalar projection: $|\vec{a}|\cos\theta$
  2. Its direction is $\hat{b}$ (the direction of $\vec{b}$)

Any vector equals its magnitude times its direction (as we saw in the Unit Vector section), so:

$$\vec{PR} = \underbrace{|\vec{a}| \cos\theta}_{\text{scalar projection}} \cdot \underbrace{\hat{b}}_{\text{direction of } \vec{b}}$$

This is already the vector projection formula. We can rewrite it by substituting $\hat{b} = \frac{\vec{b}}{|\vec{b}|}$, and recognizing that $|\vec{a}||\vec{b}|\cos\theta = \vec{a} \cdot \vec{b}$.

The vector projection of $\vec{a}$ in the direction of $\vec{b}$ is:

$$\text{proj}_{\vec{b}}(\vec{a}) = (|\vec{a}|\cos\theta)\,\hat{b} = \left(\frac{\vec{a} \cdot \vec{b}}{|\vec{b}|^2}\right)\vec{b} = (\vec{a} \cdot \hat{b})\,\hat{b}$$
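The middle form of the formula avoids a square root, so it's the one usually coded. A minimal plain-Python sketch (the name `vector_projection` is illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def vector_projection(a, b):
    # proj_b(a) = ((a . b) / |b|^2) * b; assumes b is non-zero
    scale = dot(a, b) / dot(b, b)
    return [scale * c for c in b]

a = [3.0, 4.0]
b = [2.0, 0.0]
p = vector_projection(a, b)
print(p)  # [3.0, 0.0]

# The leftover part of a is perpendicular to b (the right-angle turn in Figure 5)
residual = [ai - pi for ai, pi in zip(a, p)]
print(dot(residual, b))  # 0.0
```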


Summary

  • A unit vector isolates a vector's direction by stripping away its magnitude.

$$\hat{\mathbf{v}} = \frac{\vec{\mathbf{v}}}{|\vec{\mathbf{v}}|}$$

  • The dot product multiplies corresponding components and sums them. It is also equal to the product of the magnitudes of the two vectors multiplied by the cosine of the angle between them.

$$\vec{a} \cdot \vec{b} = a_1 b_1 + \dots + a_n b_n = \sum_{i=1}^{n} a_i b_i = |\vec{a}||\vec{b}|\cos\theta$$

  • Scalar projection uses the dot product to measure how far one vector reaches along another's direction: a single number, like the length of a shadow.

$$|\vec{a}|\cos\theta = \vec{a} \cdot \hat{b} = \frac{\vec{a} \cdot \vec{b}}{|\vec{b}|}$$

  • Vector projection goes one step further, returning an actual arrow along that direction: the scalar projection times the unit vector.

$$(|\vec{a}|\cos\theta)\,\hat{b} = (\vec{a} \cdot \hat{b})\,\hat{b}$$

In the next part, we'll use the tools we learned in this article to truly understand the dot product.

© 2024 automationscribe.com. All rights reserved.
