Sniffly – Claude Code Analytics Dashboard
32 points by rand_num_gen 8/31/2025, 9:13:13 AM | 16 comments | github.com ↗
Who cares who actually "typed" it? Shit code will be shit code regardless of author; there is just more of it now than before, just as there was more 10 years ago than 20 years ago, because the barriers to getting started are lowered time and time again. Hopefully it'll be a net positive, just like previous times: it's never been easier to write code to solve your own specific personal problems.
Developers who have strict requirements on the code they "produce" will make the LLM fit their requirements when needed, and "sloppy" developers will continue to publish spaghetti code regardless of LLMs' existence.
I don't get the whole "vibe-coding" thing, because clearly most of the code LLMs produce is really horrible, but good prompting, strict reviews, and not accepting bad changes just to move forward let you mold the code into something acceptable.
(I have not looked at this specific project's code, so I'm not sure this applies here; it's more of a general view, obviously.)
But now the signals are much harder to read. People post polished-looking libraries and tools with boastful, convincing claims about what they can do for you, yet they didn't even bother to check whether they work at all, just wasting everyone's time.
It's a weird new world.
If anything, the smelly READMEs that all look like Claude wrote them are a good tell to check deeper.
However, not all code requires the same quality standards (think perfectionism). The tools in this project are like blog posts written by an individual and not reviewed by others, while an ASF open-source project is more like a peer-reviewed article. I believe both types of projects are valid.
Moreover, this kind of project is like a cache. If no one else writes it, I might want to quickly vibe-code it myself. In fact, without vibe coding, I might not even do it at all due to time constraints. It's totally reasonable to treat this project as a rough draft of an idea. Why should we apply the same standards to every project?
In fact, their approach to using vibe coding in production comes with many restrictions and requirements. For example:
1. Acting as Claude's product manager (e.g., asking the right questions)
2. Using Claude to implement low-dependency leaf nodes, rather than core infrastructure systems that are widely relied upon
3. Verifiability (e.g., testing)
BTW, their argument for the necessity of vibe coding does make some sense:
As AI capabilities grow exponentially, the traditional method of reviewing code line by line won’t scale. We need to find new ways to validate and manage code safely in order to harness this exponential advantage.
On the whole meta-discussion thing: I have been reading HN for at least 15 years. Posts with lots of comments are meta discussions. HN is not really a place to discuss the technical details of a project.
Nobody should publish slop code, AI-assisted or not, tbh.
For example, error type distribution or intervention rates. This can tell me how efficient I am when using Claude.
But currently the error types are a bit too broad, and I haven't discovered much yet.
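To make the metrics idea concrete, here is a minimal sketch of computing an error-type distribution and an intervention rate from session logs. Everything here is hypothetical: the JSONL schema (the `error_type` and `intervened` fields) and the `summarize` helper are invented for illustration and are not Sniffly's actual format or API.

```python
import json
from collections import Counter

def summarize(lines):
    """Count error types and the fraction of events needing human intervention.

    Assumes one JSON object per line with optional "error_type" (string or
    null) and "intervened" (bool) fields -- a made-up schema for this sketch.
    """
    errors = Counter()
    interventions = 0
    total = 0
    for line in lines:
        event = json.loads(line)
        total += 1
        if event.get("error_type"):
            errors[event["error_type"]] += 1
        if event.get("intervened"):
            interventions += 1
    rate = interventions / total if total else 0.0
    return errors, rate

# Toy log with two api_error events and one timeout
log = [
    '{"error_type": "api_error", "intervened": true}',
    '{"error_type": null, "intervened": false}',
    '{"error_type": "api_error", "intervened": false}',
    '{"error_type": "timeout", "intervened": true}',
]
errors, rate = summarize(log)
print(errors.most_common())  # [('api_error', 2), ('timeout', 1)]
print(rate)                  # 0.5
```

Narrower error categories would just mean finer-grained values in the `error_type` field; the aggregation itself stays the same.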