
What PyTorch Actually Means by a Leaf Tensor

by admin
June 19, 2025
in Artificial Intelligence


This isn't yet another explanation of the chain rule. It's a tour through the weird side of autograd, where gradients serve physics, not just weights.

I originally wrote this tutorial for myself during the first year of my PhD, while navigating the intricacies of gradient calculations in PyTorch. Most of it is clearly designed with standard backpropagation in mind, and that's fine, since that's what most people need.

But a Physics-Informed Neural Network (PINN) is a moody beast and it needs a different kind of gradient logic. I spent some time feeding it, and I figured the findings might be worth sharing with the community, especially with fellow PINN practitioners; maybe it will save someone a few headaches. But if you have never heard of PINNs, don't worry! This post is still for you, especially if you are into things like gradients of gradients and all that fun stuff.

Basic terms

In the computer world, a tensor simply means a multidimensional array, i.e. a bunch of numbers indexed by one or more integers. To be precise, there also exist zero-dimensional tensors, which are just single numbers. Some people say that tensors are a generalization of matrices to more than two dimensions.
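For illustration, here is a quick sketch of mine (not from the original snippets) showing what those dimensions look like in PyTorch:

import torch

scalar = torch.tensor(3.0)       # zero-dimensional tensor: a single number
vector = torch.tensor([1., 2.])  # indexed by one integer
matrix = torch.ones(2, 3)        # indexed by two integers

assert scalar.ndim == 0 and scalar.shape == torch.Size([])
assert vector.ndim == 1 and matrix.ndim == 2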

If you have studied general relativity before, you may have heard that mathematical tensors have things like covariant and contravariant indices. But forget about that: in PyTorch, tensors are just multidimensional arrays. No finesse here.

A leaf tensor is a tensor that is a leaf (in the graph-theory sense) of the computation graph. We will look at these below, so this definition will make a bit more sense.

The requires_grad property of a tensor tells PyTorch whether it should remember how this tensor is used in further computations. For now, think of tensors with requires_grad=True as variables, and tensors with requires_grad=False as constants.

Leaf tensors

Let's start by creating a few tensors and checking their requires_grad and is_leaf properties.

import torch

a = torch.tensor([3.], requires_grad=True)
b = a * a

c = torch.tensor([5.])
d = c * c

assert a.requires_grad is True and a.is_leaf is True
assert b.requires_grad is True and b.is_leaf is False
assert c.requires_grad is False and c.is_leaf is True
assert d.requires_grad is False and d.is_leaf is True  # sic!
del a, b, c, d

a is a leaf as expected, and b is not, because it is the result of a multiplication. a is set to require grad, so naturally b inherits this property.

c is clearly a leaf, but why is d a leaf? The reason d.is_leaf is True stems from a particular convention: all tensors with requires_grad set to False are considered leaf tensors, as per PyTorch's documentation:

All Tensors that have requires_grad which is False will be leaf Tensors by convention.

While mathematically d is not a leaf (since it results from another operation, c * c), gradient computation will never extend beyond it. In other words, there will not be any derivative with respect to c. This allows d to be treated as a leaf.
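One quick way to see this convention in action (my own check, not part of the original snippets): autograd records a grad_fn only for tensors produced by gradient-requiring computations.

import torch

a = torch.tensor([3.], requires_grad=True)
b = a * a
c = torch.tensor([5.])
d = c * c

# b remembers how it was created; d does not, so autograd treats it as a leaf.
assert b.grad_fn is not None
assert d.grad_fn is None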

In a nutshell, in PyTorch, leaf tensors are either of the following (both cases are illustrated in the sketch after this list):

  • Directly created (i.e. not calculated from other tensors) and requiring grad. Example: neural network weights that are randomly initialized.
  • Tensors that do not require gradients at all, regardless of whether they are directly created or computed. In the eyes of autograd, these are just constants. Examples:
    • any neural network input data,
    • an input image after mean removal or other operations, which involve only non-gradient-requiring tensors.
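A minimal sketch of both cases; the linear layer and the fake image are my own illustrative choices, not from the original article:

import torch

# Case 1: directly created and requiring grad, e.g. randomly initialized weights.
weights = torch.nn.Linear(4, 2).weight
assert weights.requires_grad is True and weights.is_leaf is True

# Case 2: no grad required anywhere, even though the tensor is computed.
image = torch.rand(3, 8, 8)      # fake input image
centered = image - image.mean()  # mean removal involves only constants
assert centered.requires_grad is False and centered.is_leaf is True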

A small remark for those who want to know more. The requires_grad property is inherited as illustrated here:

a = torch.tensor([5.], requires_grad=True)
b = torch.tensor([5.], requires_grad=True)
c = torch.tensor([5.], requires_grad=False)

d = torch.sin(a * b * c)

assert d.requires_grad == any((x.requires_grad for x in (a, b, c)))

Code remark: all code snippets should be self-contained, apart from imports, which I include only when they appear for the first time. I drop them in order to minimize boilerplate code. I trust that the reader will be able to take care of these easily.

Grad retention

A separate issue is gradient retention. All nodes in the computation graph, meaning all tensors used, have gradients computed if they require grad. However, only leaf tensors retain these gradients. This makes sense because gradients are typically used to update tensors, and only leaf tensors are subject to updates during training. Non-leaf tensors, like b in the first example, are not directly updated; they change as a result of changes in a, so their gradients can be discarded. However, there are scenarios, especially in Physics-Informed Neural Networks (PINNs), where you might want to retain the gradients of these intermediate tensors. In such cases, you will need to explicitly mark non-leaf tensors to retain their gradients. Let's see:

a = torch.tensor([3.], requires_grad=True)
b = a * a
b.backward()

assert a.grad is not None
assert b.grad is None  # generates a warning

You have probably just seen a warning:

UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being
accessed. Its .grad attribute won't be populated during autograd.backward().
If you indeed want the .grad field to be populated for a non-leaf Tensor, use
.retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by
mistake, make sure you access the leaf Tensor instead.
See github.com/pytorch/pytorch/pull/30531 for more informations.
(Triggered internally at aten/src/ATen/core/TensorBody.h:491.)

So let's fix it by forcing b to retain its gradient:

a = torch.tensor([3.], requires_grad=True)
b = a * a
b.retain_grad()  # <- the difference
b.backward()

assert a.grad is not None
assert b.grad is not None

Mysteries of grad

Now let's take a look at the famous grad itself. What is it? Is it a tensor? If so, is it a leaf tensor? Does it require or retain grad?

a = torch.tensor([3.], requires_grad=True)
b = a * a
b.retain_grad()
b.backward()

assert isinstance(a.grad, torch.Tensor)
assert a.grad.requires_grad is False and a.grad.retains_grad is False and a.grad.is_leaf is True
assert b.grad.requires_grad is False and b.grad.retains_grad is False and b.grad.is_leaf is True

Apparently:

– grad itself is a tensor,
– grad is a leaf tensor,
– grad doesn’t require grad.

Does it retain grad? This question does not make sense, because it does not require grad in the first place. We will come back to the question of grad being a leaf tensor in a moment, but first we will test a few things.

Multiple backward calls and retain_graph

What happens when we calculate the same grad twice?

a = torch.tensor([3.], requires_grad=True)
b = a * a
b.retain_grad()
b.backward()
try:
    b.backward()
except RuntimeError:
    """
    RuntimeError: Trying to backward through the graph a second time (or
    directly access saved tensors after they have already been freed). Saved
    intermediate values of the graph are freed when you call .backward() or
    autograd.grad(). Specify retain_graph=True if you need to backward through
    the graph a second time or if you need to access saved tensors after
    calling backward.
    """

The error message explains it all. This should work:

a = torch.tensor([3.], requires_grad=True)
b = a * a
b.retain_grad()

b.backward(retain_graph=True)
print(a.grad)  # prints tensor([6.])

b.backward(retain_graph=True)
print(a.grad)  # prints tensor([12.])

b.backward(retain_graph=False)
print(a.grad)  # prints tensor([18.])

# b.backward(retain_graph=False)  # <- here we would get an error, because in
# the previous call we did not retain the graph.

Side (but important) note: you can also observe how the gradient accumulates in a: with every backward call it is added to the previous value.
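Because of this accumulation, in a training-style loop you usually reset the gradient between backward passes. A minimal sketch of mine (the reset options shown are the standard ones, not from the original article):

import torch

a = torch.tensor([3.], requires_grad=True)
b = a * a
b.backward(retain_graph=True)
print(a.grad)  # prints tensor([6.])

a.grad = None  # or a.grad.zero_(); optimizers do this via optimizer.zero_grad()
b.backward()
print(a.grad)  # prints tensor([6.]) again, instead of the accumulated tensor([12.])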

The powerful create_graph argument

How to make grad require grad?

a = torch.tensor([5.], requires_grad=True)
b = a * a
b.retain_grad()
b.backward(create_graph=True)

# Here an interesting thing happens: now a.grad requires grad!
assert a.grad.requires_grad is True
assert a.grad.is_leaf is False

# On the other hand, the grad of b does not require grad, as before.
assert b.grad.requires_grad is False
assert b.grad.is_leaf is True

The above is very useful: a.grad, which mathematically is ∂b/∂a, is no longer a constant (leaf), but a regular member of the computation graph that can be used further. We will use that fact in Part 2.

Why does b.grad not require grad? Because the derivative of b with respect to b is simply 1.
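To see why a differentiable a.grad is useful, here is a small sketch of mine (anticipating Part 2, not from the original article) that backpropagates through the gradient itself to obtain a second derivative:

import torch

a = torch.tensor([5.], requires_grad=True)
b = a * a * a  # b = a^3
b.backward(create_graph=True)

first_derivative = a.grad  # 3 * a^2 = 75, still attached to the graph
a.grad = None              # reset before the second backward pass
first_derivative.backward()
print(a.grad)              # prints tensor([30.]), i.e. 6 * a, the second derivative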

If backward feels counterintuitive to you now, don't worry. We will soon switch to another method, called (nomen omen) grad, that allows us to precisely select the ingredients of the derivatives. Before that, two side notes:

Side note 1: If you set create_graph to True, it also sets retain_graph to True (if not explicitly set). In the PyTorch code it looks exactly like this:

    if retain_graph is None:
        retain_graph = create_graph

Side note 2: You probably saw a warning like this:

    UserWarning: Using backward() with create_graph=True will create a reference
    cycle between the parameter and its gradient which can cause a memory leak.
    We recommend using autograd.grad when creating the graph to avoid this. If
    you have to use this function, make sure to reset the .grad fields of your
    parameters to None after use to break the cycle and avoid the leak.
    (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\autograd\engine.cpp:1156.)
      Variable._execution_engine.run_backward(  # Calls into the C++ engine to
    run the backward pass

And we will follow this advice and use autograd.grad now.

Taking derivatives with the autograd.grad function

Now let's move from the somewhat high-level .backward() method to the lower-level grad function, which explicitly calculates the derivative of one tensor with respect to another.

from torch.autograd import grad

a = torch.tensor([3.], requires_grad=True)
b = a * a * a
db_da = grad(b, a, create_graph=True)[0]
assert db_da.requires_grad is True

Similarly as with backward, the derivative of b with respect to a can be treated as a function and differentiated further. In other words, the create_graph flag can be understood as: when calculating gradients, keep the history of how they were calculated, so that we can treat them as non-leaf tensors that require grad, and use them further.

In particular, we can calculate the second-order derivative:

d2b_da2 = grad(db_da, a, create_graph=True)[0]
# Side note: the grad function returns a tuple, and its first element is what we need.
assert d2b_da2.item() == 18
assert d2b_da2.requires_grad is True

As mentioned before: this is really the key property that allows us to do PINNs with PyTorch.
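To make the PINN connection concrete, here is a minimal sketch of mine (a toy setup, not from the original article): we differentiate the output of a tiny network with respect to its input, which is exactly the kind of derivative a PINN plugs into a physics residual. The network architecture, the grid of points, and the u'' + u = 0 residual are all illustrative assumptions.

import torch
from torch.autograd import grad

# Toy "solution" network u(x); in a real PINN it would be trained so that
# the residual below satisfies the differential equation.
net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

x = torch.linspace(0., 1., 10).reshape(-1, 1).requires_grad_(True)
u = net(x)

# du/dx, kept in the graph so that it can enter a loss and be differentiated again.
du_dx = grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
d2u_dx2 = grad(du_dx, x, grad_outputs=torch.ones_like(du_dx), create_graph=True)[0]

residual = d2u_dx2 + u         # residual of the toy equation u'' + u = 0
loss = (residual ** 2).mean()  # differentiable w.r.t. the network weights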

Wrapping up

Most tutorials about PyTorch gradients focus on backpropagation in classical supervised learning. This one explored a different perspective, one shaped by the needs of PINNs and other gradient-hungry beasts.

We learned what leaves are in the PyTorch jungle, why gradients are retained by default only for leaf nodes, and how to retain them for other tensors when needed. We saw how create_graph turns gradients into differentiable citizens of the autograd world.

But there are still many things to uncover, in particular why gradients of non-scalar functions require extra care, how to compute second-order derivatives without eating all of your RAM, and why slicing your input tensor is a bad idea when you need an elementwise gradient.

So let's meet in Part 2, where we'll take a closer look at grad 👋
