I dunno -- generally speaking, the Wayback Machine is a much better time travel experience than trying to recover a website from an old git commit.
Especially since it's not limited to only sites I've created...
And in this particular case, all the creator was looking for was old badge images, and they'd generally be in an images directory somewhere no matter whether the site was static or dynamic.
beeandapenguin · 2h ago
Was wondering the same thing. Couldn't they just load https://web.archive.org/save/{site_url} once a month in their GitHub Action instead of managing the storage of these images?
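Something like this minimal workflow sketch would do it (the job name, schedule, and site URL are placeholders, not from the article):

    name: archive-snapshot
    on:
      schedule:
        - cron: "0 0 1 * *"  # 00:00 UTC on the first of each month
    jobs:
      snapshot:
        runs-on: ubuntu-latest
        steps:
          - name: Ask the Wayback Machine for a capture
            run: curl -fsS "https://web.archive.org/save/https://example.com" -o /dev/null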
jiehong · 2h ago
Could be useful for company-internal sites?
3036e4 · 6h ago
Plain text files and version control win again.
DeepYogurt · 6h ago
KISS
marcosdumay · 5h ago
Version control isn't really "simple". That said, neither is plain text nowadays.
It may make sense to swap an "S" there for "standard".
jszymborski · 4h ago
Well, it's "relatively simple", as the alternatives either demand a superset of the requirements of static sites or are replacements that are more complex.
naniwaduni · 27m ago
Keep it simple, standard.
Amorymeltzer · 5h ago
Not strictly the topic, but don't miss or sleep on the blog (self-)gamification links[1][2], excellent whimsy.
1: https://hamatti.org/posts/i-gamified-my-own-blog/
2: https://varunbarad.com/blog/blogging-achievements
I have a bit of an internal struggle here. I use a site generator too, but I struggle with the question: should I? I recently wrote about why I'm writing pure HTML and CSS in 2025.
https://joeldare.com/why-im-writing-pure-html-and-css-in-202...
I'm not sure how using a static site generator would run counter to any of those points. You can simply generate the same website that you've written by hand.
EDIT: Well perhaps the "build steps" one, but building my Hugo site just involves me running "hugo" in my directory.
My initial thought was that the title was referring to web archive services like the Wayback Machine or archive.is, but the actual topic was equally relevant. I think time travel should work as long as all content is archived / checked in: no reliance on external services (is this the definition of "static site"?)
01HNNWZ0MV43FF · 5h ago
Static site also means no backend. Each request just serves a file unmodified from disk.
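A sketch of what that means in practice: the entire "backend" of a static site is interchangeable with something like this (Go used only as an illustration; the port and directory name are made up):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Every request maps straight to a file on disk: no templates,
        // no database, no per-request logic.
        log.Fatal(http.ListenAndServe(":8080", http.FileServer(http.Dir("./public"))))
    }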
cosmicgadget · 6h ago
From the title I thought this was about taking trips down memory lane or seeing historical posts by others. But it seems to be more about seeing design (rather than content) from one's own site in years past. I hope I'm not the only one who would prefer not to see my embarrassing old designs and rather see my archive content rendered in the current (least cringe) template.
Is there a decentralized org to ensure that all of the JS/CSS we use today remains backward compatible decades from now? Or are we just at the whim of these browser vendors?
mananaysiempre · 5h ago
For some part, W3C is supposed to serve this role, so to the extent that WHATWG controls the web platform, yes, yes we are. Part of the problem is, it’s not clear who exactly is supposed to participate in that hypothetical “decentralized” organization—browser vendors do consult website operators, but on occasion[1] it becomes clear that they only care about the largest ones, whose interests are naturally quite different from the long tail (to the extent that it still exists these days). And this situation is in part natural, economically speaking, because of course the largest operators are the ones that are going to have the most resources to participate in that kind of thing, so money will inevitably end up being the largest determinant of influence.
[1] https://github.com/w3c/webauthn/issues/1255
That's an unfair characterization. WHATWG doesn't version the spec like W3C did, but it's no less backwards compatible. See their FAQ [1], or just load the 1996 Space Jam site [2] in your modern browser.
[1]: https://whatwg.org/faq#change-at-any-time
[2]: https://www.spacejam.com/1996/
Thus far, WHATWG has mostly behaved benevolently, true. But the fact that they have stayed benevolent so far doesn’t mean we’re going to be any less at their mercy the moment they decide not to. As the recent XSLT discussion aptly demonstrates, both browser vendors and unaffiliated people are quite willing to do the “pay up or shut up” thing for old features, which is of course completely antithetical to backwards compatibility.
The browsers and standards groups do prioritize backwards compatibility and have done a very good job at it. The only real web compatibility breakages I know of have to do with pre-standardized features or third-party plugins like Flash.
hypeatei · 5h ago
The engines are open source, no? I don't think we should break websites on purpose but keeping everything backwards compatible does seem untenable for decades to come.
curtisblaine · 1h ago
If it builds. Author mentions he/she uses Eleventy, so there's always a possibility that current node / npm versions won't work with some ancient dependencies or with old style peer dependencies. Then it's a long bisection with nvm until you get the right version combo.
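One hedge against that is to commit the toolchain version alongside the site, so a future checkout can reproduce the build. A rough sketch, assuming Eleventy with nvm (the versions and file names are illustrative):

    # at build time: record the toolchain
    node --version > .nvmrc            # e.g. v20.11.1
    git add .nvmrc package-lock.json

    # years later, on an old commit
    nvm install && nvm use             # reads .nvmrc
    npm ci                             # installs exactly what the lockfile says
    npx @11ty/eleventy                 # rebuild the site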
sedatk · 4h ago
Why do you need such a granular capability, especially when the Internet Archive exists? What purpose does it serve?
plorkyeran · 4h ago
> I mentioned this to Varun who asked if I had any screenshots of what it looked like on my website. My initial answer was “no”, then I looked at Wayback Machine but there were not pictures of the badges.
zoul · 4h ago
A safe rollback on a Friday afternoon is a nice thing for sure :)
luxuryballs · 5h ago
Interesting idea: a browser plugin that caches and uploads the final HTML/CSS of a page, with some logic to avoid duplicates and “extras”. It could be a client-side distributed archival system that captures the historical web as always-static content.
algo_lover · 5h ago
I don't get this. I can check out an old commit of my dynamic server-rendered blog written in Go and do the same thing.
Sure I won't have the actual content, but I can see the pages and designs with dummy data. But then I can also load up one of several backups of the sqlite file and most likely everything will still work.
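For what it's worth, that round trip is only a few commands; a sketch (the tag, paths, and file names are made up):

    git checkout v2022.01                           # or any old commit hash
    cp ~/backups/blog-2022-01.sqlite data/blog.sqlite
    go run .                                        # old code, old data, old design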
laurentlb · 4h ago
Building old code and getting the same result is not always trivial to do.
Potential issues:
- If you have content in a database, are you able to restore the database to any point in time?
- If your code has dependencies, were all of them checked into the repository? If not, can you still find the same versions you were using?
- What about your tools, compilers, etc.? Sure, some of them, like Go, are pretty good with backward compatibility, but not all of them. Maybe you used a beta version of a tool? You might need to find the same version of the tools you were using. By the way, did you keep track of the versions of your tools, or do you need to guess? (See the sketch after this list.)
Even with static websites, you can get into trouble if you referenced e.g. a JS file stored somewhere else. But the point is: going back in time is often much easier with static websites.
(Related topic: reproducible builds.)
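For the Go case mentioned upthread, one mitigation for the tool-version question is that the module file can record the compiler version itself. A sketch (the module path and versions are examples):

    // go.mod
    module example.com/blog

    go 1.21               // minimum language version the code targets
    toolchain go1.21.5    // exact toolchain recorded when it last built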
inetknght · 5h ago
> Sure I won't have the actual content, but I can see the pages and designs with dummy data. But then I can also load up one of several backups of the sqlite file and...
... so it's useless to anyone except you, then?
See “Static Sites” section. And realize that DNS caching your pages is essentially making your site “static”.
https://news.ycombinator.com/front?day=2025-08-31
(available via 'past' in the topbar)
Very rarely used, so two better examples:
* http:// (now mostly unusable)
* Quirks mode (no <!doctype html>, for Netscape Navigator or Internet Explorer compatibility). Still supported by browsers for rendering old pages. Must be annoying to maintain. https://developer.mozilla.org/en-US/docs/Web/HTML/Guides/Qui...