Cosine similarity is a commonly used metric for operationalizing tasks such as semantic search and document comparison in the field of natural language processing (NLP). Introductory NLP courses often provide only a high-level justification for using cosine similarity in such tasks (as opposed to, say, Euclidean distance) without explaining the underlying mathematics, leaving many data scientists with a rather vague understanding of the subject matter. To address this gap, the following article lays out the mathematical intuition behind the cosine similarity metric and shows how this can help us interpret results in practice, with hands-on examples in Python.
Note: All figures and formulas in the following sections have been created by the author of this article.
Mathematical Intuition
The cosine similarity metric is based on the cosine function that readers may recall from high school math. The cosine function exhibits a repeating wavelike pattern, a full cycle of which is depicted in Figure 1 below for the range 0 <= x <= 2*pi. The Python code used to produce the figure is also included for reference.
import numpy as np
import matplotlib.pyplot as plt

# Define the x range from 0 to 2*pi
x = np.linspace(0, 2 * np.pi, 500)
y = np.cos(x)

# Create the plot
plt.figure(figsize=(8, 4))
plt.plot(x, y, label='cos(x)', color='blue')

# Add x-axis ticks at multiples of pi/2
notch_positions = [0, np.pi/2, np.pi, 3*np.pi/2, 2*np.pi]
notch_labels = ['0', 'pi/2', 'pi', '3*pi/2', '2*pi']
plt.xticks(ticks=notch_positions, labels=notch_labels)

# Add custom horizontal gridlines only at y = -1, 0, 1
for y_val in [-1, 0, 1]:
    plt.axhline(y=y_val, color='grey', linestyle='--', linewidth=0.5)

# Add vertical gridlines at the specified x-values
for x_val in notch_positions:
    plt.axvline(x=x_val, color='grey', linestyle='--', linewidth=0.5)

# Customize the plot
plt.xlabel("x")
plt.ylabel("cos(x)")

# Final layout and display
plt.tight_layout()
plt.show()

The function parameter x denotes an angle in radians (e.g., the angle between two vectors in an embedding space), where pi/2, pi, 3*pi/2, and 2*pi correspond to 90, 180, 270, and 360 degrees, respectively.
To understand why the cosine function can serve as a useful basis for designing a vector similarity metric, notice that the basic cosine function, without any functional transformations as shown in Figure 1, has maxima at x = 2*a*pi, minima at x = (2*b + 1)*pi, and roots at x = (c + 1/2)*pi for some integers a, b, and c. In other words, if x denotes the angle between two vectors, cos(x) returns the largest value (1) when the vectors point in the same direction, the smallest value (-1) when the vectors point in opposite directions, and zero when the vectors are orthogonal to each other.
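As a quick sanity check of these special values (a minimal sketch, separate from the figure code above), we can evaluate the cosine function at the corresponding angles directly; note that cos(pi/2) comes out as a tiny floating-point number rather than exactly zero:

import math

# cos(x) at angles where two vectors would be aligned, orthogonal, and opposed
print(math.cos(0))            # 1.0 -> vectors point in the same direction
print(math.cos(math.pi / 2))  # ~6.1e-17, i.e., effectively 0 -> vectors are orthogonal
print(math.cos(math.pi))      # -1.0 -> vectors point in opposite directions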
This behavior of the cosine function neatly captures the interplay between two key concepts in NLP: semantic overlap (conveying how much meaning is shared between two texts) and semantic polarity (capturing the oppositeness of meaning in texts). For example, the texts “I liked this movie” and “I enjoyed this film” would have high semantic overlap (they express essentially the same meaning despite using different words) and low semantic polarity (they do not express opposite meanings). Now, if the embedding vectors for two words happen to encode both semantic overlap and polarity, then we would expect synonyms to have cosine similarity approaching 1, antonyms to have cosine similarity approaching -1, and unrelated words to have cosine similarity approaching 0.
In practice, we will typically not know the angle x directly. Instead, we must derive the cosine value from the vectors themselves. Given two vectors U and V, each with n components, the cosine of the angle between these vectors, which is precisely the cosine similarity metric, is computed as the dot product of the vectors divided by the product of the vector magnitudes:
cos(x) = (U · V) / (|U| * |V|) = (u_1*v_1 + u_2*v_2 + ... + u_n*v_n) / (sqrt(u_1^2 + ... + u_n^2) * sqrt(v_1^2 + ... + v_n^2))
The above formula for the cosine of the angle between two vectors can be derived from the so-called Cosine Rule, as demonstrated in the segment between minutes 12 and 18 of this video:
A neat proof of the Cosine Rule itself is provided in this video:
The following Python implementation of cosine similarity explicitly operationalizes the formula presented above, without relying on any black-box, third-party packages:
import math

def cosine_similarity(U, V):
    if len(U) != len(V):
        raise ValueError("Vectors must be of the same length.")
    # Compute dot product and magnitudes
    dot_product = sum(u * v for u, v in zip(U, V))
    magnitude_U = math.sqrt(sum(u ** 2 for u in U))
    magnitude_V = math.sqrt(sum(v ** 2 for v in V))
    # Zero-vector handling to avoid division by zero
    if magnitude_U == 0 or magnitude_V == 0:
        raise ValueError("Cannot compute cosine similarity for zero-magnitude vectors.")
    return dot_product / (magnitude_U * magnitude_V)
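For instance, applying this function to a pair of small example vectors (values chosen here purely for illustration) reproduces the formula step by step:

U = [1, 2]
V = [2, 3]
# Dot product = 8, magnitudes = sqrt(5) and sqrt(13), so the result is 8 / sqrt(65) ≈ 0.9923
print(cosine_similarity(U, V))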
Readers can refer to this article for a more efficient Python implementation of the cosine distance metric (defined as 1 minus cosine similarity) using the NumPy and SciPy packages.
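As a rough indication of what such an implementation might look like (a minimal sketch based on NumPy and SciPy, not necessarily the exact code from the referenced article), the cosine similarity of two small example vectors can be computed as follows:

import numpy as np
from scipy.spatial.distance import cosine as cosine_distance

U = np.array([1.0, 2.0])
V = np.array([2.0, 3.0])

# scipy.spatial.distance.cosine returns the cosine distance, i.e., 1 - cosine similarity
print(1 - cosine_distance(U, V))

# Equivalent NumPy-only computation using the dot product and vector norms
print(np.dot(U, V) / (np.linalg.norm(U) * np.linalg.norm(V)))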
Finally, it is worth comparing the mathematical intuition of cosine similarity (or distance) with that of Euclidean distance, which measures the straight-line distance between two vectors and can also serve as a vector similarity metric. In particular, the lower the Euclidean distance between two vectors, the higher their semantic similarity is likely to be. The Euclidean distance between two vectors U and V (each of length n) can be computed using the following formula:
euclidean_distance(U, V) = sqrt((u_1 - v_1)^2 + (u_2 - v_2)^2 + ... + (u_n - v_n)^2)
Below is the corresponding Python implementation:
import math

def euclidean_distance(U, V):
    if len(U) != len(V):
        raise ValueError("Vectors must be of the same length.")
    # Compute the sum of squared differences
    sum_squared_diff = sum((u - v) ** 2 for u, v in zip(U, V))
    # Take the square root of the sum
    return math.sqrt(sum_squared_diff)
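For example, assuming the euclidean_distance function defined above is in scope (SciPy's ready-made equivalent is shown for comparison):

from scipy.spatial.distance import euclidean

# sqrt((1 - 2)**2 + (2 - 3)**2) = sqrt(2) ≈ 1.4142
print(euclidean_distance([1, 2], [2, 3]))
print(euclidean([1, 2], [2, 3]))  # SciPy implementation of the same metric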
Notice that, because the elementwise differences in the Euclidean distance formula are squared, the resulting metric will always be a non-negative number: zero if the vectors are identical, positive otherwise. In the NLP context, this implies that Euclidean distance will not reflect semantic polarity in quite the same way as cosine distance does. Moreover, as long as two vectors point in the same direction, the cosine of the angle between them will remain the same regardless of the vector magnitudes. By contrast, the Euclidean distance metric is affected by differences in vector magnitude, which may lead to misleading interpretations in practice (e.g., two texts of different lengths may yield a high Euclidean distance despite being semantically similar). As such, cosine similarity is the preferred metric in many NLP scenarios, where determining vector (i.e., semantic) directionality is the primary concern.
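A small numerical example (with vectors chosen purely for illustration) makes this difference concrete: scaling one of the vectors leaves the cosine similarity unchanged but increases the Euclidean distance.

import numpy as np
from scipy.spatial.distance import cosine as cosine_distance, euclidean

U = np.array([1.0, 1.0])
V = np.array([3.0, 3.0])  # same direction as U, three times the magnitude

print(1 - cosine_distance(U, V))  # ≈ 1.0: identical direction, hence maximal cosine similarity
print(euclidean(U, V))            # ≈ 2.83: clearly nonzero despite the aligned directions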
Theory versus Practice
In a practical NLP scenario, the interpretation of cosine similarity hinges on the extent to which the vector embedding encodes polarity in addition to semantic overlap. In the following hands-on example, we will examine the similarity between two given words using a pretrained embedding model that does not encode polarity (all-MiniLM-L6-v2) and one that does (distilbert-base-uncased-finetuned-sst-2-english). We will also use more efficient implementations of cosine similarity and Euclidean distance by leveraging functions provided by the SciPy package.
from scipy.spatial.distance import cosine as cosine_distance
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer, AutoModel
import torch

# Words to embed
words = ["movie", "film", "good", "bad", "spoon", "car"]

# Load pre-trained embedding models from Hugging Face
model_1 = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
model_2_name = "distilbert-base-uncased-finetuned-sst-2-english"
model_2_tokenizer = AutoTokenizer.from_pretrained(model_2_name)
model_2 = AutoModel.from_pretrained(model_2_name)

# Generate embeddings for model 1
embeddings_1 = dict(zip(words, model_1.encode(words)))

# Generate embeddings for model 2
inputs = model_2_tokenizer(words, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model_2(**inputs)
embedding_vectors_model_2 = outputs.last_hidden_state.mean(dim=1)
embeddings_2 = {word: vector for word, vector in zip(words, embedding_vectors_model_2)}

# Compute and print cosine similarity (1 - cosine distance) for both embedding models
print("Cosine similarity for embedding model 1:")
print("movie", "\t", "film", "\t", 1 - cosine_distance(embeddings_1["movie"], embeddings_1["film"]))
print("good", "\t", "bad", "\t", 1 - cosine_distance(embeddings_1["good"], embeddings_1["bad"]))
print("spoon", "\t", "car", "\t", 1 - cosine_distance(embeddings_1["spoon"], embeddings_1["car"]))
print()

print("Cosine similarity for embedding model 2:")
print("movie", "\t", "film", "\t", 1 - cosine_distance(embeddings_2["movie"], embeddings_2["film"]))
print("good", "\t", "bad", "\t", 1 - cosine_distance(embeddings_2["good"], embeddings_2["bad"]))
print("spoon", "\t", "car", "\t", 1 - cosine_distance(embeddings_2["spoon"], embeddings_2["car"]))
print()
Output:
Cosine similarity for embedding model 1:
movie 	 film 	 0.8426464702276286
good 	 bad 	 0.5871497042685934
spoon 	 car 	 0.22919675707817078

Cosine similarity for embedding model 2:
movie 	 film 	 0.9638281550070811
good 	 bad 	 -0.3416433451550165
spoon 	 car 	 0.5418748837234599
The words “movie” and “film”, which are typically used as synonyms, have cosine similarity close to 1, suggesting high semantic overlap as expected. The words “good” and “bad” are antonyms, and we see this reflected in the negative cosine similarity result when using the second embedding model, which is known to encode semantic polarity. Finally, the words “spoon” and “car” are semantically unrelated, and the corresponding orthogonality of their vector embeddings is indicated by their cosine similarity results being closer to zero than for “movie” and “film”.
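To round off the comparison with Euclidean distance, the following minimal sketch (assuming the embeddings_1 dictionary from the snippet above is still in scope) computes the corresponding distances with SciPy; unlike the cosine results, these values are sensitive to the magnitudes of the embedding vectors and are therefore best compared only in relative terms.

from scipy.spatial.distance import euclidean

print("Euclidean distance for embedding model 1:")
for word_1, word_2 in [("movie", "film"), ("good", "bad"), ("spoon", "car")]:
    print(word_1, "\t", word_2, "\t", euclidean(embeddings_1[word_1], embeddings_1[word_2]))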
The Wrap
The cosine similarity between two vectors is based on the cosine of the angle they form and, unlike metrics such as Euclidean distance, is not sensitive to differences in vector magnitudes. In theory, cosine similarity should be close to 1 if the vectors point in the same direction (indicating high similarity), close to -1 if the vectors point in opposite directions (indicating high dissimilarity), and close to 0 if the vectors are orthogonal (indicating unrelatedness). However, the precise interpretation of cosine similarity in a given NLP scenario depends on the nature of the embedding model used to vectorize the textual data (e.g., whether the embedding model encodes polarity in addition to semantic overlap).