That is a very strange calculation to me, or I'm missing something. This is an open-source project, so all human contributors cost zero. He doesn't count himself as a cost either; fine, that's understandable if you don't want to earn from this project, and in that light it's an acceptable way to look at cost.
But seen that way, his "team" now costs $41.73 a month more than before, because of Copilot.
The real cost that would be interesting is time value: does he actually spend less time on the same feature?
otoolep · 3h ago
Post author here. A few things.
You are right that when someone (a human) submits a PR it doesn't cost me anything (short of my time to review it). But those folks are not a team, not someone I could rely on or direct. Open-source projects -- successful ones -- often turn into a company, and then hire a dev team. We all know this.
I have no plans to commercialize rqlite, and I certainly couldn't afford a team of human developers. But I've got Copilot (and Gemini when I use it) now. So, in a sense, I now do have a team. And it's allowed me to fix bugs and add small features I wouldn't have bothered to in the past. It's definitely faster (20 mins to fire up my computer, write the code, push the PR vs. 5 mins to create the GitHub issue, assign to Copilot, review, and merge).
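If anyone's curious, the "create the GitHub issue" step is scriptable against the standard REST API. A rough Python sketch follows; OWNER, REPO, and TOKEN are placeholders, and whether the coding agent accepts assignment via a plain "Copilot" assignee handle is an assumption on my part.

    # Minimal sketch: open an issue and assign it, via the GitHub REST API.
    import requests

    resp = requests.post(
        "https://api.github.com/repos/OWNER/REPO/issues",  # placeholders
        headers={
            "Authorization": "Bearer TOKEN",  # placeholder access token
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": "Fix flaky snapshot test",
            "body": "Steps to reproduce, expected behaviour, etc.",
            "assignees": ["Copilot"],  # assumed agent handle, may differ
        },
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json()["html_url"])  # link to the new issue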
Case in point: I'm currently adding change-data-capture to rqlite. Development is going faster, but it's also more erratic because I'm reviewing more, and coding less. It reminds me of when I've been a TL of a software team.
mjr00 · 3h ago
> So, in a sense, I now do have a team.
In another, more accurate sense: no, you have a tool, not a team. A very useful tool, but a tool nonetheless.
If you believe you have a team, try taking a two week vacation and see how much work your team does while you're gone.
Nevermark · 2h ago
There is a new continuum. "Team" is just a convenient word to emphasize that "Tools" are moving significantly in the "Teams" direction.
The post emphasizes the degree to which this is and isn't true.
Different people are going to emphasize changing attributes of new situations using different pre-existing words/concepts. That's sensible use of language.
mjr00 · 2h ago
No, it's clickbait and that's why this submission got flagged, sorry.
A team is composed of people. Being able to prompt an LLM to create a pull request from a specification is very useful, but it's not a team member -- the same way that VS Code isn't a team member even though autocomplete is a massive productivity increase, and the same way that PyPI isn't a team member even though a central third-party dependency repository makes development significantly faster than not having one.
If this article were "I get a massive productivity boost from $41.73/month in developer tools" it'd be honest. As it is, it's dishonest clickbait.
As the saying goes, there is no "AI" in "Team".
Nevermark · 2h ago
That is not a clickbait title. It is a normal use of language, and the article's contents are not surprising or misleading relative to the title.
Titles don't need to be pedantic.
otoolep · 2h ago
>There is a new continuum. "Team" is just a convenient word to emphasize that "Tools" are moving significantly in the "Teams" direction.
Exactly.
patchymcnoodles · 2h ago
Ok, it's cool that you can develop faster now, but as the other comment says: that's a tool, not the cost of a team. It still seems a very strange comparison to me.
But nonetheless, thanks for the explanation :).
AnotherGoodName · 4h ago
>It doesn’t remember that last week we made a small refactor to make future development easier, or that I abandoned a particular idea as a dead end
Sometimes you need to keep the context and sometimes you need to reset it.
An example of needing to reset: asking for X, later realizing you meant Y, and having the LLM oscillate between them; on an unrelated request it adds X back in, removing Y; and so on.
Clearing the context solves the above. I currently do this by restarting the IDE in IntelliJ, since there isn't a simple button for it. It's a 100% required feature, and knowing about LLM contexts and managing them is going to be a basic part of working with LLMs in the future. Yet the need for this hasn't quite sunk in yet. It's like the first cars not having brakes, with drivers and passengers getting out and putting their feet down. We're at that stage.
What we really need is a detailed context history for the AI and a way to manage it well. "Forget I ever asked this prompt" and "Keep this prompt in mind next time I restart the IDE" are both examples of extremely important and obvious functionality that just doesn't exist right now.
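To make that concrete, here's a rough sketch of the interface I'm imagining -- entirely hypothetical, nothing like this exists in IntelliJ today as far as I know:

    # Hypothetical context-history manager: pin, forget, reset.
    from dataclasses import dataclass, field

    @dataclass
    class PromptEntry:
        text: str
        pinned: bool = False     # "keep this in mind next time I restart the IDE"
        forgotten: bool = False  # "forget I ever asked this prompt"

    @dataclass
    class ContextHistory:
        entries: list[PromptEntry] = field(default_factory=list)

        def add(self, text: str) -> PromptEntry:
            entry = PromptEntry(text)
            self.entries.append(entry)
            return entry

        def forget(self, entry: PromptEntry) -> None:
            entry.forgotten = True  # excluded from every future window

        def pin(self, entry: PromptEntry) -> None:
            entry.pinned = True  # survives a reset

        def reset(self) -> None:
            # The "restart the IDE" button: drop everything except pinned prompts.
            self.entries = [e for e in self.entries if e.pinned]

        def window(self) -> list[str]:
            # What actually gets sent to the model on the next request.
            return [e.text for e in self.entries if not e.forgotten]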
jokethrowaway · 3h ago
Claude Code has Markdown files with tech notes, but it doesn't work too well.
They still need a babysitter.
indigodaddy · 3h ago
I also submitted this earlier today, prior to this submission, but it was flagged, which confused me, so I'm glad this one got through.
This was an interesting article, and it raised some good points about the fact that the AI never has a continuing backward/forward-looking context for one's project. Perhaps ideas like these are being considered as potential LLM features, in some way that keeps them feasible from a token/context perspective.
nirolo · 3h ago
This is exactly the idea behind the concept of a memory bank, which I think Cline introduced first. It serves as a go-to for the project overview and the project's current scope and goals.
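Mechanically it's simple: a few Markdown files checked into the repo and prepended to every fresh session. A rough Python sketch of the loading side (the file names here are illustrative, not Cline's exact layout):

    # Sketch of a memory-bank loader: concatenate the project's memory files
    # into one context preamble for a fresh session.
    from pathlib import Path

    def load_memory_bank(root: str = "memory-bank") -> str:
        parts = []
        for name in ("project-brief.md", "active-context.md", "progress.md"):
            path = Path(root) / name  # illustrative file names
            if path.exists():
                parts.append(f"## {name}\n{path.read_text()}")
        return "\n\n".join(parts)

    # The result gets prepended to the prompt, so each session starts with the
    # project's overview, current scope, and goals already in context.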
If the headline is true, this guy values the AI at $41.73, but his own time at $0/hr. If that's how we're measuring things, then my development team costs $0 a month.
otoolep · 4h ago
Blog post author here.
From my perspective I didn't have a development team before. I have one now, and I guess I am a member of that team too. I hadn't thought of it like that -- another strange dimension to working with Copilot (and its ilk).
the__alchemist · 3h ago
> I have [a development team] now.
This is disconnected enough from how these words are normally used that the statement, and its downstream conclusions, don't have a clear interpretation.
otoolep · 4h ago
Also, I don't value Copilot at $41.73. What actually happens is that GitHub charges me $41.73. I value it at way more. The consumer surplus here is substantial, IMHO.
bee_rider · 3h ago
I wonder what their profit margin is on an inference. Wonder if it's positive or negative.
otoolep · 3h ago
I wonder the same thing myself; it wouldn't surprise me if it's heavily subsidized. So much compute is being given away for free.
ohdeargodno · 2h ago
Your software is a wrapper around an already existing, widely used, extremely well-documented project, and it basically just extends SQLite with what could have been a regular extension.
No shit it's easy. So is a CRUD PHP service.
dmitrygr · 4h ago
I think we need a mandatory disclosure on software -- "was >1% vibecoded" -- the same as we have for allergens on food. This'll prevent its use in any safety-critical place.
_fzslm · 4h ago
If we do, we need to draw clear distinctions between different kinds of AI-driven development.
What % of human intervention was there? A module written for me by an AI that was tightly specced, with function signatures and behaviour cases, is going to be far more reliable (and arguably is basically human-developed) than something an AI just wrote, filling in all the blanks itself.
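For instance, a tight spec might look something like this (a hypothetical example: the human writes the signature and the behaviour cases, the AI only fills in the body):

    # The human-authored contract; the body below is what the AI would supply.
    def parse_duration(s: str) -> int:
        """Parse a duration string into seconds.

        Behaviour cases the implementation must satisfy:
          parse_duration("90s") -> 90
          parse_duration("2m")  -> 120
          parse_duration("1h")  -> 3600
          parse_duration("")    -> raises ValueError
          parse_duration("5x")  -> raises ValueError
        """
        units = {"s": 1, "m": 60, "h": 3600}
        if len(s) < 2 or s[-1] not in units or not s[:-1].isdigit():
            raise ValueError(f"invalid duration: {s!r}")
        return int(s[:-1]) * units[s[-1]]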
bee_rider · 3h ago
This shouldn't really matter; software can also be written by very bad coders.
If you care about safety, you care about the whole process: coding, sure, but also code review, testing, the design specs, and the failure path for when a bug (inevitably) makes it through.
Big companies produce lots of safety-critical code, and it is inevitable that some incompetent people will sneak in through the gaps. So it is necessary to design a process that accounts for commits written by incompetent people.
bobsomers · 3h ago
Everything you said is 100% correct.
However, part of designing and upholding a safety-critical software development process is looking for places to reduce or eliminate the introduction of bugs in the first place.
Strong type systems, for example, eliminate entire classes of errors, so mandating that code be written in language X is a proactive process decision to reduce the introduction of certain types of bugs.
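A toy illustration, assuming Python type hints with a checker such as mypy run before merge:

    # With the annotation, passing the wrong type is rejected statically.
    def retry_delay_ms(attempt: int) -> int:
        return 100 * 2 ** attempt

    ok = retry_delay_ms(3)       # fine
    # bad = retry_delay_ms("3")  # mypy: incompatible type "str", expected "int"
    #                            # so this class of bug never reaches runtime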
Restricting the use of AI tools could very much be viewed the same way.
paulddraper · 3h ago
So you would suggest ">1% in dynamically typed language" disclaimers as well?
kangalioo · 3h ago
If someone made that happen I'd be ecstatic
tracker1 · 2h ago
GitHub shows the breakdown of languages in a project... you can, for the most part, already do this... at least for FLOSS on GitHub.
tracker1 · 2h ago
I think it's even more likely at a lot of big companies, especially where upper managers see developers as interchangeable cogs, with no variance in value beyond output.
uncircle · 2h ago
> This shouldn’t really matter, software can also be written by very bad coders.
The issue is that there is a non-zero likelihood that a vibe coder pushes code without even understanding how it actually works. At least a bad coder had to write the thing themselves in the first place.
bee_rider · 33m ago
For something safety-critical, individual programmers shouldn't be able to push code directly anyway. However, a vibe-coder spamming the process with bad code could jam it up and prevent forward progress (assuming the project has a well-designed, safe process).
I guess I did assume, though, that by “in any safety-critical place” they meant a place with a well-defined and rigorous process (surely there’s some safety-critical code out there written by seat-of-the-pants cowboys, but that is just a catastrophe waiting to happen).
wiseowise · 3h ago
Given the performance of an average SE, that would be an improvement. So I don't know what you're saying.
dmitrygr · 3h ago
Failure modes of human coders are well-understood. Failure modes of LLMs are not yet as well understood.
darth_avocado · 4h ago
What if the software was not vibe coded but upstream packages were?
sejje · 4h ago
Then the software was vibe coded
dmitrygr · 4h ago
Then they will have such stickers too, and for each piece of software we consider the sticker percentage of the transitive closure of its dependencies.
delfinom · 3h ago
Technically, for safety-critical work, code has to be audited and certified to the standard anyway. We don't just pull in random open-source packages.
Granted, vibe-coded junk will quickly get avoided if it's written so poorly that it makes auditing insufferable.