thingamarobert · 14m ago
This is very well made, and so nostalgic to me! My whole PhD between 2012-16 was based on RBMs and I learned so much about generative ML through these models. Research has come so far and one doesn't hear much about them these days but they were really at the heart of the "AI Spring" back then.
The key takeaway is that many people were involved in making these breakthroughs.
The value of grad students is often overlooked; they contribute so much and then later advance the research even further.
Why does America look on research as a waste, when it has moved everything so far?
itissid · 2h ago
IIUC, we need Gibbs sampling (to compute the weight updates) instead of the gradient-based forward and backward passes we're used to with today's neural networks. Anyone understand why that is?
nonrandomstring · 3h ago
This takes me back. 1990, building Boltzmann machines and Perceptrons
from arrays of void pointers to "neurons" in plain C. What did we use
"AI" for back then? To guess the next note in a MIDI melody, and to
recognise the shape of a scored note, minim, crotchet, quaver on a 5 x
9 dot grid. 85% accuracy was "good enough" then.
bwestergard · 3h ago
Did the output sound musical?
nonrandomstring · 2h ago
For small values of "music"? Really, no. But tbh, neither have more
advanced "AI" composition experiments I've encountered over the years,
Markov models, linear predictive coding, genetic/evolutionary algs,
rule based systems, and now modern diffusion and transformers... they
all lack the "spirit of jazz" [0]
Oh, this is a neat demo. I took Geoff Hinton's neural networks course in university 15 years ago and he did spend a couple of lectures explaining Boltzmann machines.
> A Restricted Boltzmann Machine is a special case where the visible and hidden neurons are not connected to each other.
This wording is wrong; it implies that visible neurons are not connected to hidden neurons.
The correct wording is: visible neurons are not connected to each other and hidden neurons are not connected to each other.
Alternatively: visible and hidden neurons do not have internal connections within their own type.
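To make the restriction concrete, here is a minimal numpy sketch (variable names and sizes are my own, not from the article): the only weights are a single visible-to-hidden matrix, so the energy has no term coupling two visible units or two hidden units.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))  # only inter-layer weights
b = np.zeros(n_visible)  # visible biases
c = np.zeros(n_hidden)   # hidden biases

def energy(v, h):
    # RBM energy: there is no v-v or h-h term,
    # which is exactly the "restricted" part.
    return -(v @ W @ h) - (b @ v) - (c @ h)

v = rng.integers(0, 2, n_visible)
h = rng.integers(0, 2, n_hidden)
print(energy(v, h))
```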
CamperBob2 · 2h ago
> Alternatively: visible and hidden neurons do not have internal connections within their own type.
I'm a bit unclear on how that isn't just an MLP. What's different about a Boltzmann machine?
Edit: never mind, I didn't realize I needed to scroll up to get to the introductory overview.
What 0xTJ's [flagged][dead] comment says about it being undesirable to hijack or otherwise attempt to reinvent scrolling is spot on.
nayuki · 1h ago
> I'm a bit unclear on how that isn't just a multi-layer perceptron. What's different about a Boltzmann machine?
In a Boltzmann machine, you alternate back and forth between using visible units to activate hidden units, and then use hidden units to activate visible units.
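A rough sketch of that alternation, and of the contrastive-divergence-style weight update it feeds into (sigmoid units, my own variable names, CD-1 flavour; an illustration, not the article's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))

def gibbs_step(v):
    # visible -> hidden: sample each hidden unit given the visible layer
    p_h = sigmoid(v @ W)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    # hidden -> visible: sample the visible layer back from the hidden one
    p_v = sigmoid(h @ W.T)
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return v_new, h

v0 = rng.integers(0, 2, n_visible).astype(float)
v1, h0 = gibbs_step(v0)

# CD-1 style weight update: positive phase uses the data,
# negative phase uses the one-step reconstruction
lr = 0.1
p_h0 = sigmoid(v0 @ W)
p_h1 = sigmoid(v1 @ W)
W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
```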
> What 0xTJ's [flagged][dead] comment says about it being undesirable to hijack or otherwise attempt to reinvent scrolling is spot on.
The page should be considered a slideshow that is paged discretely and not scrollable continuously. And there should definitely be no scrolling inertia.
nickvec · 1h ago
Great site! Would be cool to be able to adjust the speed at which the simulation runs as well.
vanderZwan · 4h ago
Lovely explanation!
Just FYI: mouse-scrolling is much too sensitive for some reason (I'm assuming it swipes just fine in mobile contexts, have not checked that). The result is that it jumped from first to last "page" and back whenever I tried scrolling. Luckily keyboard input worked so I could still read the whole thing.
rollulus · 1h ago
Now the real question: is it you enjoying that nice page or is it a Boltzmann Brain?
It's Descartes' demon all over again, a problem addressed centuries ago. You can skin it however you want; it's the same problem.
bbstats · 3h ago
anyone got an archived link?
antidumbass · 1h ago
The section after the interactive diagrams has no left padding and thus runs off the screen on iOS.
BigParm · 1h ago
That font with a bit of margin looks fantastic on my phone specifically. Really nailing the minimalist look. What font is that?
mac9 · 1h ago
"font-family: ui-sans-serif, system-ui, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";"
from the CSS, so odds are it's whatever your browser or OS's default sans-serif font is. In my case it's SF Pro, which is an Apple font, though it may vary if you use a non-Apple device.
nickvec · 1h ago
> Here we introduce introduction to Boltzmann machines and present a Tiny Restricted Boltzmann Machine that runs in the browser.
nit: should "introduction" be omitted?
tambourine_man · 3h ago
Typo
“They can be used for generating new data that…”
munchler · 3h ago
Another typo (or thinko) in the very first sentence:
"Here we introduce introduction to Boltzmann machines"
croemer · 2h ago
More typos (LLMs are really good at finding these):
"Press the "Run Simulation" button to start traininng the RBM." ("traininng" -> "training")
"...we want to derivce the contrastive divergence algorithm..." ("derivce" -> "derive")
"A visisble layer..." ("visisble" -> "visible")
djulo · 4h ago
that's soooo coool
pawanjswal · 2h ago
Love how this breaks down Boltzmann Machines—finally makes this 'energy-based model' stuff click!
One nit, a misspelling in the Appendix: derivce -> derive
Do check out his T2 Tile Project.
[0] https://i.pinimg.com/originals/e4/84/79/e484792971cc77ddff8f...
https://en.m.wikipedia.org/wiki/Boltzmann_brain