Survey: a third of senior developers say over half their code is AI-generated
82 points by Brajeshwar on 8/31/2025, 2:55:56 PM | 120 comments | fastly.com
Also, I tend to get more done at a time; it makes it easier to get started on "gruntwork" tasks that I would have procrastinated on. Which in turn can lead to burnout quite quickly.
I think in the end it's just as much "work", just a different kind of work and with more quantity as a result.
For me, this is the biggest benefit of AI coding. And it's energy saved that I can use to focus on higher-level problems, e.g. architecture, thereby increasing my productivity.
And at this point it's not just a productivity booster, it's as essential as using a good IDE. I feel extremely uncomfortable and slow writing any code without auto-completion.
When the AI tab completion fills in full functions based on the function definition you have half typed, or completes a full test case the moment you start typing - mock data values and all - that just feels mind-reading magical.
It breaks flow. It has no idea what my intention is, but very eagerly provides suggestions I have to stop and swat away.
it's so [great to have auto-complete] annoying to constantly [have to type] have tons of text dumped into your text area. Sometimes it looks plausibly right, but with subtle little issues. And you have to carefully analyze whatever it outputs for correctness (like constant code review).
There's literally no way I can see that resulting in better quality, so either that is not what is happening or we're in for a rude awakening at some point.
But your approach sounds familiar to me. I find sometimes it may be slower and lower quality to use AI, but it requires less mental bandwidth from me, which is sometimes a worthwhile trade off.
I teach at an internship program and the main problem with interns since 2023 has been their over reliance on AI tools. I feel like I have to teach them to stop using AI for everything and think through the problem so that they don't get stuck.
Meanwhile many of the seniors around me are stuck in their ways, refusing to adopt interactive debuggers to replace their printf() debug habits, let alone AI tooling...
When I was new to the business, I used interactive debugging a lot. The more experienced I got, the less I used it. printf() is surprisingly useful, especially if you upgrade it a little bit to a log-level aware framework. Then you can leave your debugging lines in the code and switch it on or off with loglevel = TRACE or INFO, something like that.
And obviously when you can't hook the debugger, logs are mandatory. Doesn't have to be one or the other.
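To make the "upgrade printf a little" idea above concrete, here's a minimal sketch using Python's standard logging module (Python's stdlib uses DEBUG rather than TRACE; the module and message names are just placeholders):

    import logging

    # Configure once at startup; flipping the level turns the "printf debugging" on or off.
    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("replication")

    def apply_update(node_id, payload):
        # This line stays in the code permanently; it only prints when the level is DEBUG.
        log.debug("apply_update: node=%s payload_bytes=%d", node_id, len(payload))
        # ... real work here ...

Switch the level to logging.DEBUG and the whole trace comes back without touching a single call site.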
As for tooling, I really love AI coding. My workflow is pasting interfaces in ChatGPT and then just copy pasting stuff back. I usually write the glue code by hand. I also define the test cases and have AI take over those laborious bits. I love solving problems and I genuinely hate typing :)
Printf gives you an entire trace or log you can glance at, giving you a bird's eye view of entire processes.
Found myself having 3-4 different sites open for documentation, context switching between 3 different libraries. It was a lot to take in.
So I said, why not give AI a whirl. It helped me a lot! And since then I have published at least 6 different projects with the help of AI.
It refactors stuff for me, it writes boilerplate for me, and most importantly it's great at context switching between different topics. My work is pretty broadly around DevOps, automation, and system integration, so the topics cover a very wide range.
So no I don't mind it at all, but I'm not old. The most important lesson I learned is that you never trust the AI. I can't tell you how often it has hallucinated things for me. It makes up entire libraries or modules that don't even exist.
It's a very good tool if you already know the topic you have it work on.
But it also hit me that I might be training my replacement. Every time I correct its mistakes I "teach" the database how to become a better AI and eventually it won't even need me. Thankfully I'm very old and will have retired by then.
Last line: "Thankfully I'm very old"
Hmm.....
That's called refresh token rotation and is a valid security practice.
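For anyone unfamiliar with the term, a minimal sketch of the idea, using a purely hypothetical in-memory store (no particular framework's API is implied):

    import secrets

    # Hypothetical store: refresh token -> user id. Real systems persist and hash these.
    refresh_tokens: dict[str, str] = {}

    def rotate(old_token: str) -> tuple[str, str]:
        # Each refresh token is single-use: redeeming it invalidates it immediately.
        user_id = refresh_tokens.pop(old_token, None)
        if user_id is None:
            raise PermissionError("refresh token invalid or already used")
        new_refresh = secrets.token_urlsafe(32)
        refresh_tokens[new_refresh] = user_id
        access_token = secrets.token_urlsafe(32)  # stand-in for issuing a real access token/JWT
        return access_token, new_refresh

Having reuse of a rotated token fail loudly is the point: it signals that the token may have been stolen.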
While I understand that <Enter model here> might produce the meaty bits as well, I believe that having a truck factor of basically 0 (since no-one REALLY understands the code) is a recipe for disaster and, I dare say, a threat to the long-term maintainability of a code base.
I feel that any team needs someone with that level of understanding to fix non-trivial issues.
However, by all means, I use the LLM to create all the scaffolding, test fixtures, ... because that is mental energy that I can use elsewhere.
… then you are not a senior software engineer
Java: https://docs.parasoft.com/display/JTEST20232/Creating+a+Para...
C# (nunit, but xunit has this too): https://docs.nunit.org/articles/nunit/technical-notes/usage/...
Python: https://docs.pytest.org/en/stable/example/parametrize.html
cpp: https://google.github.io/googletest/advanced.html
A belief that the ability of LLMs to generate parameterizations is intrinsically helpful to a degree which cannot be trivially achieved in most mainstream programming languages/test frameworks may be an indicator that an individual has not achieved a substantial depth of experience.
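For concreteness, the pytest flavor linked above is roughly this (the function and values are made-up examples):

    import pytest

    @pytest.mark.parametrize(
        "raw, expected",
        [
            ("42", 42),
            ("  7 ", 7),
            ("-3", -3),
        ],
    )
    def test_parse_int(raw, expected):
        # One test body, many cases; adding a case is just adding a row.
        assert int(raw.strip()) == expected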
There is a dramatic difference between unreliable in the sense of S3 or other services and unreliable as in "we get different sets of logical outputs when we provide the same input to an LLM". In the first, you can prepare for what are logical outcomes -- network failures, durability loss, etc. In the latter, unless you know the total space of outputs for an LLM you cannot prepare. In the operational sense, LLMs are not a system component, they are a system builder. And a rather poor one, at that.
> And the integration tests should 100% include times when the network flakes out and drops 1/2 of replies and corrupts msgs and the like.
Yeah, it's not that hard to include that in modern testing.
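As a sketch of what "not that hard" can look like: a test-only transport wrapper that drops and corrupts messages (the class name and rates here are illustrative, not from any particular framework):

    import random

    class FlakyTransport:
        """Test-only wrapper that drops or corrupts a fraction of outgoing messages."""

        def __init__(self, inner, drop_rate=0.5, corrupt_rate=0.1, seed=1234):
            self.inner = inner
            self.rng = random.Random(seed)   # seeded, so a failing run is reproducible
            self.drop_rate = drop_rate
            self.corrupt_rate = corrupt_rate

        def send(self, msg: bytes) -> None:
            if self.rng.random() < self.drop_rate:
                return                                    # silently drop the message
            if msg and self.rng.random() < self.corrupt_rate:
                msg = msg[:-1] + bytes([msg[-1] ^ 0xFF])  # flip the bits of the last byte
            self.inner.send(msg)

Point the system under test at FlakyTransport(real_transport) in the integration suite and assert that it still converges.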
They surveyed 791 developers (:D) and "a third of senior developers" do that. That's... generously, what... 20 people?
It's amazing how everyone can massage numbers when they're trying to sell something.
And of course, it's an article based on a source article based on a survey (of a single company), with the source article written by a "content marketing manager", and the raw data of the survey isn't released/published, only some marketing summary of what the results (supposedly) were. Very trustworthy.
Meanwhile, try as I might I couldn't prevent it from being useless.
I know of no better metaphor than that of what it's like being a developer in 2025.
One imagines Leadership won't be so pleased after the inevitable price hike (which, given the margins software uses, is going to be in the 1-3 thousands a day) and the hype wears off enough for them to realize they're spending a full salary automating a partial FTE.
But trying to use it like “please write this entire feature for me” (what vibe coding is supposed to mean) is the wrong way to handle the tool IMO. It turns into a specification problem.
Feels like a similar situation to self driving where companies want to insist that you should be fully aware and ready to take over in an instant when things go wrong. That's just not how your brain works. You either want to fully disengage, or be actively doing the work.
This is exactly my experience, but I guess generating code with deprecated methods is useful for some people.
I've also been close to astonished at the capability LLMs have to draw conclusions from very large complex codebases. For example I wanted to understand the details of a distributed replication mechanism in a project that is enormous. Pre-LLM I'd have spent a couple of days crawling through the code using grep and perhaps IDE tools, making notes on paper. I'd probably have had to run the code or instrument it with logging, then look at the results in a test deployment. But I've found I can ask the LLM to take a look at the p2p code and tell me how it works. Then ask it how the peer set is managed. I can ask it if all reachable peers are known at all nodes. It's almost better than me at this, and it's what I've done for a living for 30 years. Certainly it's very good for very low cost and effort. While it's chugging I can think about higher order things.
I say all this as a massive AI skeptic dating back to the 1980s.
That makes sense, as you're breaking the task into smaller achievable tasks. But it takes an already experienced developer to think like this.
Instead, a lot of people in the hype train are pretending an AI can work an idea to production from a "CEO level" of detail – that probably ain't happening.
this is the part that I would describe as engineering in the first place. This is the part that separates a script kiddie or someone who "knows" one language and can be somewhat dangerous with it, from someone who commands a $200k/year salary, and it is the important part
and so far there is no indication that language models can do this part at. all.
for someone who CAN do the part of breaking down a problem into smaller abstractions, though, some of these models can save you a little time, sometimes, in cases where it's less effort to type an explanation to the problem than it is to type the code directly..
which is to say.. sometimes.
Also, green coding? That's new to me. I guess we'll see optional carbon offset purchasing in our subs soon.
But I’ve come full circle and have gone back to hand coding after a couple years of fighting LLMs. I’m tired of coaxing their style and fixing their bugs - some of which are just really dumb and some are devious.
Artisanal hand craft for me!
Usually it isn't, though - I just want to pump out code changes ASAP (but not sooner).
It’s just not worth it anymore for anything that is part of an actual product.
Occasionally I will still churn out little scripts or methods from scratch that are low risk - but anything that gets to prod is pretty much hand coded again.
https://github.com/BeehiveInnovations/zen-mcp-server/blob/ma...
It basically uses multiple different LLMs from different providers to debate a change or code review. Opus 4.1, Gemini 2.5 Pro, and GPT-5 all have a go at it before it writes out plans or makes changes.
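This is not the linked project's actual implementation, but the consensus idea reduces to something like this toy sketch, where ask(model, prompt) is a hypothetical helper wrapping each provider's own SDK:

    MODELS = ["opus-4.1", "gemini-2.5-pro", "gpt-5"]  # the models named above

    def consensus_review(diff: str, ask) -> str:
        # Round 1: each model reviews the diff independently.
        reviews = {m: ask(m, "Review this diff and list concrete problems:\n" + diff) for m in MODELS}
        combined = "\n\n".join("[" + m + "]\n" + r for m, r in reviews.items())
        # Round 2: each model sees the others' reviews and must agree, rebut, or refine.
        verdicts = [ask(m, "Here are independent reviews of one diff:\n" + combined +
                        "\nWhich points stand? Produce a final plan.") for m in MODELS]
        return "\n\n---\n\n".join(verdicts)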
I just can't fathom shipping a big percentage of work using LLMs.
I'm not a coder but a sysadmin. 35 years or so. I'm conversant with Perl, Python, (nods to C), BASIC, shell, PowerShell, AutoIT (et al.)
I muck about with CAD - OpenSCAD, FreeCAD, and 3D printing.
I'm not a senior developer - I pay them.
LLMs are handy in the same way I still have my slide rules and calculators (OK kids I use a calc app) but I do still have my slide rules.
ChatGPT does quite well with the basics for a simple OpenSCAD effort but invents functions within libraries. That is to be expected - it's a next-token decider function and not a real AI.
I find it handy for basics, very basic.
While I'll say it got me started, it wasn't a snap of the fingers and a quick debug to get something done. It took me quite a while to figure out why something appeared to work but really didn't (the LLM used command-line commands where Bash doesn't interpret the results the same way).
If it's something I know, I probably won't use an LLM (as it doesn't do my style). If it's something I don't know, I might use it to get me started, but I expect that's all I'll use it for.
Older devs are not letting the AI do everything for them. Assuming they're like me, the planning is mostly done by a human, while the coding is largely done by the AI, but in small sections with the human giving specific instructions.
Then there's debugging, which I don't really trust the AI to do very well. Too many times I've seen it miss the real problem, then try to rewrite large sections of the code unnecessarily. I do most of the debugging myself, with some assistance from the AI.
I've largely settled on the opposite. AI has become very good at planning what to do and explaining it in plain English, but its command of programming languages still leaves a lot to be desired.
And that remains markedly better than when the AI makes bad choices while writing code. Those are much harder to catch and require poring over the code with a fine-tooth comb, to the point that you may as well have just written it yourself, negating all the potential benefits of using it to generate code in the first place.
I feel no shame in doing the latter. I've also learned enough about LLMs that I know how to write that CLAUDE.md so it sticks to best practices. YMMV.
Could you share some examples / tips about this?
If I don't know how to structure functions around a problem, I will also use the LLM, but I am asking it to write zero code in this case. I am just having a conversation about what would be good paths to consider.
I thought vibe coding meant very little direct interaction with the code, mostly telling the LLM what you want and iterating using the LLM. Which is fun and worth trying, but probably not a valid professional tool.
And then more people saw these critics using "vibe coding" to refer to all LLM code creation, and naturally understood it to mean exactly that. Hence the recent articles we've seen about how good vibe coding starts with a requirements file, then tests that fail, then tests that pass, etc.
Like so many terms that started out being used pejoratively, vibe coding got reclaimed. And it just sounds cool.
Also because we don't really have any other good memorable term for describing code built entirely with LLMs from the ground up, separate from mere autocomplete AI or using LLMs to work on established codebases.
I’m willing to vibe code a spike project. That is to say, I want to see how well some new tool or library works, so I’ll tell the LLM to build a proof of concept, and then I’ll study that and see how I feel about it. Then I throw it away and build the real version with more care and attention.
E.g one tool packages a debug build of an iOS simulator app with various metadata and uploads it to a specified location.
Another tool spits out my team's github velocity metrics.
These were relatively small scripting apps, that yes, I code reviewed and checked for security issues.
I don't see why this wouldn't be a valid professional tool? It's working well, saves me time, is fun, and safe (assuming proper code review, and LLM tool usage).
With these little scripts it creates, it's actually pretty quick to validate their safety and efficacy. It's like verifying solutions to NP problems: producing them is hard, but checking them is quick.
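For flavor, the velocity-metrics script mentioned above can be roughly this small; this sketch uses GitHub's public search API with a placeholder repo and date:

    import requests

    # Placeholder repo and cutoff date; real use would add an auth token from the environment.
    QUERY = "repo:example-org/example-repo is:pr is:merged merged:>=2025-08-01"

    def merged_pr_count(query: str = QUERY) -> int:
        # The search endpoint returns a total_count we can use as a crude velocity number.
        resp = requests.get(
            "https://api.github.com/search/issues",
            params={"q": query, "per_page": 1},
            headers={"Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["total_count"]

    if __name__ == "__main__":
        print("PRs merged since 2025-08-01:", merged_pr_count())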
This is complicated by the fact that some people use “vibe coding” to mean any kind of LLM-assisted coding.
From Karpathy's original post I understood it to be what you're describing. It is getting confusing.
We have got to stop. In a universe of well over 25 million programmers a sample of 791 is not significant enough to justify such headlines.
We’ve got to do better than this, whatever this is.
But statistically speaking, at a 95% confidence level you'd be within a +/- 3.5% margin of error given the 791 sample size, irrespective of whether the population is 30k or 30M.
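The arithmetic behind that figure, for anyone who wants to check it:

    import math

    n = 791
    z = 1.96   # 95% confidence
    p = 0.5    # worst-case proportion, which maximizes the margin
    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"{margin:.1%}")  # ~3.5%, independent of whether the population is 30k or 30M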
From another perspective: we've deduced a lot of things about how atoms work without any given experiment inspecting more than an insignificant fraction of all atoms.
TL;DR: The population size (25e6 total devs, 1e80 atoms in observable universe) is almost entirely irrelevant to hypothesis testing.
If you find it's quicker not to use it, then you might hate it, but I think it's probably better in some cases and worse in others.
("Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize." - https://news.ycombinator.com/newsguidelines.html)
I strongly disagree. Struggling with a problem creates expertise. Struggle is slow, and it's hard. Good developers welcome it.
I think we'll find a middle ground though. I just think it hasn't happened yet. I'm cautiously optimistic.
https://www.fastly.com/products/ai
https://www.fastly.com/products/fastly-ai-bot-management
https://www.fastly.com/documentation/guides/compute/about-th...
For me, success with LLM-assisted coding comes when I have a clear idea of what I want to accomplish and can express it clearly in a prompt. The relevant key business and technical concerns come into play, including complexities like balancing somewhat conflicting shorter and longer term concerns.
Juniors are probably all going to have to be learning this kind of stuff at an accelerated rate now (we don't need em cranking out REST endpoints or whatever anymore), but at this point this takes a senior perspective and senior skills.
Anyone can get an LLM and agentic tool to crank out code now. But you really need to have them crank out code to do something useful.
A third? I would expect at least a majority based on the headline and tone of the article... Isn't this saying 66% are down on vibe coding?