Stochastic gradient Langevin dynamics: can someone explain?

While studying classifier-free guidance, I came across this blog post, which was cited: Guidance: a cheat code for diffusion models – Sander Dieleman.
It mentions something called stochastic gradient Langevin dynamics (SGLD) in the context of diffusion models. Can someone explain this in simple terms?



Stochastic gradient Langevin dynamics (SGLD) is just a fancy term for basic diffusion sampling.

It’s basically what Jeremy talks about in the first lesson: if we have the gradient of the log data density (the “score”), we can perform gradient ascent, with a little noise added at each step, to reach a sample from the data distribution.

SGLD is just a very specific formulation of that idea, with the math worked out so that it comes with theoretical guarantees, but that’s not so important here. The basic concept is exactly the same.
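To make that concrete, here’s a tiny toy sketch (my own example, not from the blog post): sampling from a standard normal N(0, 1) with SGLD. For N(0, 1) the score, i.e. the gradient of the log-density, is just -x, so we can write the update out directly:

```python
import numpy as np

def score(x):
    # Score of N(0, 1): d/dx log p(x) = -x
    return -x

rng = np.random.default_rng(0)
x = 5.0          # start far from the target distribution
eps = 0.01       # step size
samples = []

for step in range(20000):
    # SGLD update: a small gradient-ascent step on log p(x),
    # plus injected Gaussian noise scaled as sqrt(eps)
    x = x + (eps / 2) * score(x) + np.sqrt(eps) * rng.normal()
    if step >= 10000:        # discard burn-in
        samples.append(x)

samples = np.array(samples)
# after burn-in, the chain wanders around like draws from N(0, 1)
```

In a diffusion model you obviously don’t know the score in closed form; the network is what estimates it, and the sampler plugs that estimate into an update like the one above.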