Show HN: Vaev – A browser engine built from scratch (It renders google.com)
85 points by monax | 33 comments | 5/18/2025, 5:54:12 PM | github.com
We’ve been working on Vaev, a minimal web browser engine built from scratch. It supports HTML/XHTML, the CSS cascade, @page rules for pagination, and print-to-PDF rendering. It even handles calc(), var(), and percentage units—and yes, it renders Google.com (mostly).
This is an experimental project focused on learning and exploration. Networking is basic (http:// and file:// only), and grid layouts aren’t supported yet, but we’re making progress fast.
We’d love your thoughts and feedback.
It would be great to standardize alternative browsers on a consistent subset of web standards and document it, so that "smolweb" enthusiasts can target that subset when building their websites, and alternative browser makers can aim for something useful without boiling the ocean.
I personally prefer this approach to brand-new protocols like Gemini, because it retains backward compatibility with popular browsers while offering an off-ramp.
(My opinion, as someone else who has been slowly working on my own browser engine.)
Care to tell us more?
I just want one of these browsers to give me a proper ComboBox (the combined text, search, and drop-down thing).
Something like a reference implementation (Ladybird, Servo, or even Vaev, maybe?) getting picked up as the small-web living standard feels like the best bet to me, since that still lets browser projects get the big-time funding for making the big web work in their browser too. "It's got to look good in Ladybird/Vaev/etc."
An idea: a web authoring tool built around LibWeb from Ladybird! (Or any other new web implementation that's easily embeddable.) The implied standard-ness of whatever goes in that slot would just come for free. (Given that enough people are using it!)
A "standard" should mean there is a clear goal for authors and browser vendors to work towards. For example, if a browser implements CSS 2.1 (the last sanely defined CSS version), its vendor can say "we support CSS 2.1", authors who care enough can check their CSS using a validator, and users can report if a CSS 2.1 feature is implemented incorrectly.
With a living standard (e.g. HTML5), all you get is a closed circle of implementations which must add a feature before it is specified. Restricting the number of implementations to one and omitting the descriptive prose sounds even worse than the status quo.
The phrase "living standard" is an oxymoron, invented by the incumbents who want to pretend they're supporting standards while weaponising constant change to keep themselves incumbent.
[1] https://github.com/odoo/paper-muncher
I've made posts about it on HN before but they've never gained traction. I hope that this takes off.
You guys make neat software.
At a previous company we moved off of wkhtmltopdf to a Node.js service that received static HTML and rendered it to PDF using PhantomJS. These days you'd probably use Puppeteer.
The trick was keeping the page context open, to avoid browser startup costs and recreating `page` on every request. The Node service would initialize a page object once with a script inside that would communicate with the server via a named Linux pipe. Then, for each request:
1. The Node service sends the static HTML to the page over the pipe.
2. The page script receives the HTML from the pipe, inserts it into the DOM, and sends an “ack” back over the pipe.
3. The Node service receives the “ack” and calls the PDF rendering method on the page.
I don’t remember why we chose the pipe method: I’m sure there’s a better way to pass data to headless contexts these days.
The whole thing was super fast (~20 ms) compared to wkhtmltopdf, which took at least 30 seconds for us and would more often than not just time out.
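For what it's worth, here is a rough sketch of the Puppeteer-era equivalent (illustrative only, not the code we actually ran): launch the browser once, keep a single page open, and hand it the HTML with setContent instead of a named pipe.

    import puppeteer from 'puppeteer';
    import { promises as fs } from 'fs';

    async function main() {
      // Launch once and reuse the same page, so each render only pays for
      // setContent + pdf instead of a full browser startup.
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      // Hypothetical per-request handler: swap the page contents and print.
      async function renderPdf(html: string) {
        await page.setContent(html, { waitUntil: 'networkidle0' });
        return page.pdf({ format: 'A4', printBackground: true });
      }

      const pdf = await renderPdf('<html><body><h1>Invoice #123</h1></body></html>');
      await fs.writeFile('out.pdf', pdf);
      await browser.close();
    }

    main().catch(console.error);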
I know there is Lynx, but having a non-terminal-based browser that could do it would be cool.
> I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I usually fetch web pages from other sites by sending mail to a program (see https://git.savannah.gnu.org/git/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly. I usually try lynx first, then a graphical browser if the page needs it. [0]
I know you wanted something other than Lynx, but you could do this with EWW (the Emacs web browser) or any graphical browser, provided that your proxy's wget dropped the images.
[0] https://www.stallman.org/stallman-computing.html