Derivatives, Gradients, Jacobians and Hessians

143 points by ibobev · 8/17/2025, 2:08:18 PM · blog.demofox.org ↗

Comments (37)

GistNoesis · 9m ago
The way that really made me understand gradients and derivatives was visualizing them as arrow maps. I even made a small tool: https://github.com/GistNoesis/VisualizeGradient . This visualization helps in understanding optimization algorithms.

Jacobians can be understood as a collection of gradients when considering each coordinate of the output independently.

My mental picture for the Hessian is to associate each point with the shape of the parabola (or saddle) that best matches the function locally. It's easy to visualize once you realize it's the shape of what you see when you zoom in on the point. (Technically this mental picture is more of a simultaneous multivariate Taylor expansion, Hessian plus gradient tangent plane, but I find it hard to mentally separate the slope from the curvature.)
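
A minimal sketch of both pictures in JAX (the functions and the library choice are illustrative, not from the comment):

```python
import jax
import jax.numpy as jnp

# A made-up function f: R^2 -> R^2, just for illustration.
def f(x):
    return jnp.array([x[0] ** 2 + x[1], jnp.sin(x[0] * x[1])])

p = jnp.array([1.0, 2.0])

# The Jacobian of f at p: each row is the gradient of one output coordinate.
J = jax.jacobian(f)(p)                    # shape (2, 2)
row0 = jax.grad(lambda x: f(x)[0])(p)     # gradient of the first output alone
print(jnp.allclose(J[0], row0))           # True

# For a scalar function, the Hessian's eigenvalues give the shape of the
# parabola/saddle you see when zooming in on a point.
g = lambda x: x[0] ** 2 - x[1] ** 2       # saddle at the origin
H = jax.hessian(g)(jnp.zeros(2))
print(jnp.linalg.eigvalsh(H))             # [-2.  2.]: one direction down, one up
```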

sestep · 6h ago
A bit more advanced than this post, but for calculating Jacobians and Hessians, the Julia folks have done some cool work recently building on classical automatic differentiation research: https://iclr-blogposts.github.io/2025/blog/sparse-autodiff/


ziofill · 6h ago
Mmh, this is a bit sloppy. The derivative of a function f :: a -> b is a function Df :: a -> (a -o b), where the second, funny arrow indicates a linear function. I.e., the derivative Df takes a point in the domain and returns a linear approximation of f (the Jacobian) at that point. And it's always the Jacobian; it's just that when f is R -> R we conflate the Jacobian (a 1x1 matrix in this case) with the number inside it.
matheist · 4h ago
Sorry to actually your actually, but the derivative of a function f from a space A to a space B at the point a is a linear function Df_a from the tangent space of A at a to the tangent space of B at b = f(a).

When the spaces are Euclidean spaces then we conflate the tangent space with the space itself because they're identical.

By the way, this makes it easy to remember the chain rule formula in 1 dimension. There's only one logical thing it could be between spaces of arbitrary dimensions m, n, p: composition of linear transformations from T_a A to T_f(a) B to T_g(f(a)) C. Now let m = n = p = 1, and composition of linear transformations just becomes multiplication.

(Only half kidding)

beng-nl · 2h ago
Why, I’m sure you could come up with a succinct explanation of a monad :-)
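
A quick numerical check of the chain-rule-as-composition point above, sketched in JAX (the maps f: R^2 -> R^3 and g: R^3 -> R^2 are made-up examples):

```python
import jax
import jax.numpy as jnp

f = lambda x: jnp.array([x[0] * x[1], jnp.sin(x[0]), x[1] ** 2])  # R^2 -> R^3
g = lambda y: jnp.array([y[0] + y[2], y[1] * y[2]])               # R^3 -> R^2

a = jnp.array([0.5, 2.0])

# D(g∘f)(a) is the composition of the linear maps, i.e. the matrix product
# J_g(f(a)) @ J_f(a). For m = n = p = 1 these are 1x1 matrices and the
# product is ordinary multiplication.
J_composed = jax.jacobian(lambda x: g(f(x)))(a)
J_chained = jax.jacobian(g)(f(a)) @ jax.jacobian(f)(a)
print(jnp.allclose(J_composed, J_chained))   # True
```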
ndriscoll · 6h ago
A perhaps nicer way to look at things [0] is to hold onto your base points explicitly and say Df :: a -> (b, a -o b), with Df(p) = (f(p), A(p)) where f(p+v) ≈ f(p) + A(p)v. Then you retain the information you need to define composition: Dg∘Df = D(g∘f) = (Dg._1∘Df._1, Dg(Df._1)._2 ∘ Df._2), i.e. the chain rule.

[0] which I learned from this talk https://youtube.com/watch?v=17gfCTnw6uE

esafak · 1h ago
It's deplorable that we can't write in LaTeX or something similar here in 2025, and have to resort to the gobbledygook above.
ziofill · 5h ago
Yes! I love Conal Elliott's work. The one you wrote is the compositional derivative, which augments the regular derivative by also returning the value of the function at the base point (otherwise composition won't work well). For anyone interested, look up "The Simple Essence of Automatic Differentiation".
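
For anyone who wants to play with this, JAX's `jax.linearize` has essentially this compositional shape: it returns the value at the base point together with the best linear map there. A small sketch (the functions are made-up examples):

```python
import jax
import jax.numpy as jnp

f = lambda x: jnp.sin(x) * x      # made-up R -> R examples
g = lambda y: y ** 2

p, v = 1.5, 1.0

# linearize f at p: get f(p) together with the best linear map there
fp, df = jax.linearize(f, p)
# linearize g at the pushed-forward base point f(p)
gp, dg = jax.linearize(g, fp)

# Composition pairs the composed values with the composed linear maps,
# exactly the chain rule written above.
y, dy = jax.linearize(lambda x: g(f(x)), p)
print(jnp.allclose(y, gp), jnp.allclose(dy(v), dg(df(v))))   # True True
```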
dbacar · 4h ago
I respect the time you spent writing such a post with all those limited input alternatives (bows).
ndriscoll · 4h ago
You can do ≈ by long holding = on Android/Gboard. The only way I know to get ∘ is to copy/paste it from a Unicode reference. Likewise with ⊸, which I was too lazy to look up and didn't know the name of, but now I know is MULTIMAP (U+22B8).
tomsmeding · 3h ago
It's also \multimap in TeX. The name never made sense to me because while I've seen it used for a variety of linear functions in math, I've never seen it used for a multimap, and indeed the math name in common use for it seems to be "lollipop".
divbzero · 3h ago
Would love to see div and curl added to this post.
flufluflufluffy · 6h ago
Fantastic post! As short as it needs to be while still communicating its points effectively. I love walking up the generalization levels in math.
whatever1 · 7h ago
I can look around me and find the minimum of anything without tracing its surface and following the gradient. I can also immediately identify global minima rather than local ones.

We all can do it in 2-3D. But our algorithms don’t do it. Even in 2D.

Sure, if I were blindfolded, feeling the surface and looking for a minimizing direction would be the way to go. But when I can see, I don't have to.

What are we missing?

shoo · 22m ago
Many practical optimisation problems are less like "let's go hiking and climb a literal hill which we can see in front of us" and more like "find the best design in this space of possible designs that maximises some objective"

Here are some alternative example problems that are a lot more high-dimensional, and where the dimensions are not spatial dimensions, so your eyes give you absolutely no benefit.

(a) Your objective is to find a recipe that produces a maximally tasty meal, using the ingredients you have in your kitchen cupboard. To sample one point in recipe-space, you need to (1) devise a recipe, (2) prep and cook a candidate meal following the recipe, and (3) evaluate the candidate recipe, say by serving it to a bunch of your friends and family. That gets you one sample point. Maybe there are 1 trillion possible "recipes" you could make. Are you going to brute-force cook and serve them all to find a meal that maximises tastiness, or is there a more efficient way that requires fewer plan recipe->prep&cook->serve->evaluate cycles?

(b) Your objective is to find the most efficient design of a bridge that can support the required load and stresses, while minimising the construction cost.

ks2048 · 7h ago
When you look at a 2D surface, you directly observe all the values on that surface.

For a loss-function, the value at each point must be computed.

You can compute them all and "look at" the surface and just directly choose the lowest - that is called a grid search.

For high dimensions, there are just way too many "points" to compute.

samsartor · 5h ago
And remember, optimization problems can be _incredibly_ high-dimensional. A 7B-parameter LLM is a 7-billion-dimensional optimization landscape. A grid search with a resolution of 10 (i.e. 10 samples for each dimension) would require evaluating the loss function 10^(7*10^9) times. That is, the number of evaluations is a number with 7 billion digits.
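
A back-of-the-envelope check of that arithmetic in Python (the dimension counts are illustrative):

```python
import math

samples_per_dim = 10
for n_dims in (2, 10, 7_000_000_000):
    # grid search cost = samples_per_dim ** n_dims loss evaluations;
    # count the digits of that number rather than computing it
    digits = int(n_dims * math.log10(samples_per_dim)) + 1
    print(f"{n_dims:,} dims -> 10^{n_dims} evaluations ({digits:,} digits)")
```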
Chinjut · 7h ago
You're thinking of situations where you are able to see a whole object at once. If you were dealing with an object too large to see all of, you'd have to start making decisions about how to explore it.
3eb7988a1663 · 4h ago
The mental image I like: imagine you are lost in a hilly region with incredibly dense fog such that you can only see one foot directly in front of you. How do you find the base of the valley?

Gradient descent: take a step in the steepest downward direction. Look around and repeat. When you reach a level area, how do you know you are at the lowest point?
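
A minimal sketch of that fog-bound walk, in Python with JAX for the gradient (the landscape is made up):

```python
import jax
import jax.numpy as jnp

# A made-up hilly 1D landscape with several valleys.
loss = lambda x: jnp.sin(3.0 * x) + 0.1 * x ** 2

x = 2.0                              # where the fog drops you
lr = 0.05
for _ in range(200):
    x = x - lr * jax.grad(loss)(x)   # step in the steepest downhill direction

# The walk settles in *a* valley; the procedure alone can't tell you
# whether it is the lowest one.
print(x, loss(x))
```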

jpeloquin · 6h ago
Evaluating a function using a densely spaced grid and plotting it does work. This is brute-force search. You will see the global minima immediately in the way you describe, provided your grid is dense enough to capture all local variation.

It's just that when the function is implemented on the computer, evaluating so many points takes a long time, and using a more sophisticated optimization algorithm that exploits information like the gradient is almost always faster. In physical reality all the points already exist, so if they can be observed cheaply the brute force approach works well.

Edit: Your question was good. Asking superficially-naive questions like that is often a fruitful starting point for coming up with new tricks to solve seemingly-intractable problems.

whatever1 · 4h ago
Thanks!

It does feel to me that we do some sort of sampling; it's definitely not a naive grid search.

Also, I find it easier to find minima along specific directions (up, down, left, right) rather than, say, a 42-degree one. So some sort of priors are probably used to improve sample efficiency.

nwallin · 4h ago
When you look at, for instance, a bowl, or even one of those egg carton mattress things, and you want to find the global minimum, you are looking at a surface which is 2 dimensions in and 1 dimension out. It's easy enough for your brain to process several thousand points and say ok the bottom of the bowl is right here.

When a computer has a surface which is 2 dimensions in and 1 dimension out, you can actually just do the same thing. Check like 100 values in the x/y directions and you only have to check like 10000 values. A computer can do that easy peasy.

When a computer does ML with a deep neural network, you don't have 2 dimensions in and 1 dimension out. You have thousands to millions of dimensions in and thousands to millions of dimensions out. If you have 100000 inputs, and you check 1000 values for each input, the total number of combinations is 1000^100000. Then remember that you also have 100000 outputs. You ain't doin' that much math. You ain't.

So we need fancy stuff like Jacobians and backtracking.

whatever1 · 3h ago
I don’t think it’s that simple. For the egg carton, your eye will spend almost no time looking at its top; you will spend most of the time sampling the bottom. I don’t know what we do, but it does not feel like a naive grid search.
cvoss · 3h ago
I really don't think you have the ability to use self-reflection to discern an algorithm that occurs in your unconscious visual cortex in a split second. You wouldn't feel like you were doing a naive grid search even if a naive grid search is exactly what you were doing.

You have suggested that the process in your mind to find a global minimum is immediate, apparently to contrast this with a standard computational algorithm. But such a comparison fails. I don't know whether you mean "with few computational steps" or "in very little time"; the former is not knowable to you; the latter is not relevant since the hardware is not the same.

zoogeny · 3h ago
People here are giving you mathematical answers, which is what you asked for, but I want to challenge your intuition here.

In construction, grading a site for building is a whole process involving surveying. If you dropped a person on a random patch of earth that hasn't previously been levelled and gave them no tools, it would be a significant challenge for that person to level the ground correctly.

What I'm saying is, your intuition that "I can look around me and find the minimum of anything" is almost certainly wrong, unless you have a superpower that no other person has.

whatever1 · 3h ago
That is true; we are only good at doing it for specific directions of the objective function, the ones we perceive as minimizing directions. If you tell me to find the minimum along a 53-degree direction, I will likely fail, because I can't easily visualize where that direction points.
GuB-42 · 6h ago
Your eyes compute gradients as part of the shitton of visual processing your brain does to estimate where the local and global minima are.

It is not perfect, though; see the many optical illusions.

But we follow gradients all the time, consciously or not. You know you are at the bottom of a hole when all the paths go up, for instance.

raffael_de · 3h ago
well, first of all ... you can't. and it is very easy to come up with all sorts of (not even special) cases where you simply couldn't, for literally obvious reasons. what you are imagining is some sort of stereoscopic ray tracing. that is anyway much more compute-intensive than calculating a derivative.
i_am_proteus · 6h ago
Without looking up the answer (because someone has already computed this for you), how would you find the highest geographic point (highest elevation) in your country?
cinntaile · 6h ago
What if you're trying to find the minimum of something that you can't see? Or what if the differences are so small that you can't perceive them with your eyes even though you can see?
adrianN · 7h ago
The inputs you can process visually are of trivial size even for naive algorithms, and are probably also easy instances. I certainly can't find global minima in 2D for even a slightly adversarial function.
hackinthebochs · 7h ago
You're ignoring all the calculation that goes on unconsciously to realize your conscious experience of "immediately" apprehending the global minimum.
fancyfredbot · 7h ago
Your visual cortex is a massively parallel processor.
pestatije · 7h ago
Touch and sight sense essentially the same thing... the difference is in the magnitudes involved.
nickpsecurity · 2h ago
"What I just described is an iterative optimization method that is similar to gradient descent. Gradient descent simulates a ball rolling down hill to find the lowest point that we can, adjusting step size, and even adding momentum to try and not get stuck in places that are not the true minimum."

That is so much easier to understand than most descriptions. The whole opening was.
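
A minimal sketch of the description being quoted, assuming plain gradient descent plus a velocity term for the momentum (the loss and hyperparameters are made up):

```python
import jax
import jax.numpy as jnp

# Made-up bumpy loss; the velocity term lets the "ball" coast through
# shallow dips instead of stopping at the first one.
loss = lambda x: jnp.sin(5.0 * x) + 0.5 * x ** 2
grad = jax.grad(loss)

x, velocity = 3.0, 0.0
lr, beta = 0.01, 0.9              # step size, momentum coefficient
for _ in range(500):
    velocity = beta * velocity - lr * grad(x)
    x = x + velocity

print(x, loss(x))                 # ends near a low point, not necessarily the lowest
```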

amelius · 6h ago
> (...) The derivative of w with respect to x. Another way of saying that is “If you added 1 to x before plugging it into the function, this is how much w would change”

Incorrect!

dang · 1h ago
Ok, but a good HN post should explain what is correct, so those who don't know can learn.
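
For readers who don't know: the quoted statement is only correct to first order. The derivative is an instantaneous rate of change, so it predicts the effect of a step of size h only approximately, with the error vanishing as h -> 0. A concrete Python example (my own, not from the article):

```python
f = lambda x: x ** 2

x = 3.0
print(2 * x)             # f'(3) = 6: the instantaneous rate of change
print(f(x + 1) - f(x))   # 7.0: adding 1 to x actually changes f by 7, not 6

# The derivative predicts the change only to first order:
# f(x + h) - f(x) is approximately f'(x) * h, with error vanishing as h -> 0.
for h in (1.0, 0.1, 0.001):
    print(h, (f(x + h) - f(x)) / h)   # 7.0, 6.1, 6.001 -> approaches 6
```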