This is a very good point that has been proven over and over again in the industry. I recall being at Sun and having the argument over ONC and whether it should be "open" (which at the time meant everyone could get a copy of the code[1]) or "closed". Ed Zander was a big fan of keeping everything secret; after all, anyone could reproduce it if they had the code, right? I used the same argument as the author: if someone was a decent programmer and willing to invest the time, they could recreate it from scratch without the code, so keeping the code secret merely slowed them down fractionally. Letting our licensees read the code, on the other hand, allowed them to better understand what worked and why, and to release products that used it faster, which would contribute to its success in the marketplace.
I lost that battle and ONC+ was locked behind the wall until OpenSolaris 20 years later. So many people in tech cannot (or perhaps will not) distinguish between "value" and "cost". It's like people who confuse "wealth" and "money": closely related topics that are fundamentally talking about different things.
This is why you invest in people and expertise, not tools. Anyone can learn a new toolset, but only the people with expertise can create things of value.
[1] So still licensed, but you couldn't use the trademark if you didn't license it and of course there was no 'warranty' because of course the trademark required an interoperability test.
RajT88 · 4h ago
There is a feeling that releasing the code is "giving it away for free". But, being able to compile and deploy it is not the whole story. Enterprises need support from the people who built the thing, and so without that it is not a very attractive proposition.
It could be true in some scenarios though.
Microsoft doesn't open source Windows. A big enough company could fork it and offer enterprise support at a fraction of the cost. It would take them years to get there, and probably would be subpar to what large Windows customers get in support from Microsoft. Yes I know y'all hate dealing with Microsoft support - imagine that but worse. Still, the company with the forked distro would definitely take a bite out of Microsoft's Windows business, if only a small one.
ChuckMcM · 4h ago
> Still, the company with the forked distro would definitely take a bite out of Microsoft's Windows business, if only a small one.
That has not been shown to be the case. There is ample evidence that other companies would run this 'off market' or 'pirate' version, and zero evidence that if those choices had been unavailable that they would have legitimately licensed Windows.
You are making a variant of the 'piracy losses' argument, which has been shown to be simply a pricing issue. If you "ask" for more than your product is "valued" at, it won't be purchased, but it may be stolen. And if you make it "impossible" to steal, you will reduce its value to legitimate customers and gain zero revenue from those who had stolen it before (they still won't buy it).
The "value" in Windows is the number of things that run on it and the fact that compatibility issues are "bugs" which get fixed by the supplier. We are rapidly reaching the point where it will add value to have an operating system for AMD64 hardware that is overtly governed (not Linux or FOSS) which allows you to get a copy of the source when you license it, and has an application binary interface (ABI) that other software developers can count on to exist, not change out from under them, and last for 10+ years.
As Microsoft (and Apple) add more and more spurious features which enrich themselves and enrage their users the "value" becomes less and less. That calculus will flip and when it does enterprises will switch to the new operating system that is just an operating system and not a malware delivery platform.
antihipocrat · 3h ago
>>have an operating system for AMD64 hardware that is overtly governed (not Linux or FOSS)
Not understanding this part, aren't Linux distros achieving this already without licence restrictions and various levels of stability depending on the distro selected?
A huge amount of enterprise tooling is now being run on the cloud through the browser or via electron - for a large number of businesses, their staff would only need the equivalent of a Chromebook style GUI to perform their work.
Native software is still essential for a small % of users.. is this what you're suggesting needs to be solved? A single alternative open source system (OS or VM?) that the software dev company can target.
ChuckMcM · 3h ago
>Not understanding this part, aren't Linux distros achieving this already without licence restrictions and various levels of stability depending on the distro selected?
No. Ask yourself: if I install distro <pick one>, can I run a complex binary from 2015 on it? To pull off that kind of stunt you need to ensure you have control over changes not only in the kernel, but also in all of the associated user libraries and management tools. There are change paths for everything from how daemons get started to how graphics are rendered and sound is produced that are incompatible between versions, much less with versions from 10 years ago. That is not a support burden that someone selling a specialized piece of software can easily take on. It makes their cost of development higher, and so their price higher, which loses them business.
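One concrete mechanism behind this is glibc symbol versioning. As a minimal probe (an illustrative sketch only; it assumes Linux with glibc and degrades to None elsewhere), you can ask the running C library for its version, which is a hard ceiling on the GLIBC_x.y versioned symbols any binary on that system may demand:

```python
import ctypes
import ctypes.util
import platform

def system_glibc_version():
    """Return the running glibc version string on Linux, else None.

    The dynamic linker refuses to start a binary that requires
    GLIBC_x.y versioned symbols newer than the installed libc, which
    is one concrete reason a complex binary built on one distro
    generation may fail to start on another.
    """
    if platform.system() != "Linux":
        return None
    path = ctypes.util.find_library("c")
    if path is None:
        return None  # no C library located
    libc = ctypes.CDLL(path)
    try:
        libc.gnu_get_libc_version.restype = ctypes.c_char_p
        return libc.gnu_get_libc_version().decode()
    except AttributeError:
        return None  # musl or another non-glibc libc

print("glibc:", system_glibc_version() or "not available on this platform")
```

And glibc is only one of the change paths; the same versioning problem repeats for every user-space library the binary links against.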
rescbr · 2h ago
Yeah, if it uses the Win32 API!
Thanks to Wine, it’s the most stable API/ABI Linux has!
I’m kind of joking, but the main issue probably lies with the libc rather than with Linux itself.
ChuckMcM · 47m ago
I get that you're kind of joking, but you're right! Because nobody can "change" the Win32 ABI except Microsoft, you don't get contributors pushing various "feature improvements" on it, not that there aren't a bunch of things one might do differently than the way the Win32 API does them, right? It's that externally enforced control that isn't possible with Linux/FOSS ecosystems. The 'why' of that is that people like Canonical can't afford to pay enough engineers to 'own' the whole system, and their user base gets bent out of shape when they do. It breaks the social contract that Linux has established.
The only way to change that is to start with a new social contract which is "You pay us to license a copy of this OS and we'll keep it compatible for all your apps that run on it."
bruce511 · 14m ago
While I sympathize with your need, I don't think we'll see a new OS fill this space.
Firstly, there's the obvious "all the apps you run on it". Your new OS has no apps, and even if a few emerged no business really wants to commit to running on a new OS with only a couple apps.
I mean, if you want a stable OS there's always BSD, or BeOS or whatever. Which we ignore because, you know, Windows. (And I know it's fun to complain about ads on Windows and Microsoft in general, but there's a reason they own the market.) Oh, and business users don't see the things folk complain about anyway.
Personally I have utilities on Windows that were last compiled over 20 years ago that still run fine.
Secondly, no OS operates in a vacuum. You need to store data (database), browse the web, communicate, secure traffic and so on. Those are very dynamic. And again (by far) the most stable place to run those things is Windows. Like Postgres 9, from 15 years ago, is still used in production.
Of course it's also possible to freeze any OS and apps at any time and it will "run forever" - or at least until the version of TLS it supports dies.
So no, I don't believe there will be a new OS. Windows Phone died because there were no apps. Your new OS will have the same problem.
RajT88 · 3h ago
> You are making a variant on the 'piracy losses' argument which has been shown is simply a pricing issue
An astute reader would find I am not in fact making that argument, and I suspect if we got into the weeds with it, we would find we agree with each other.
travisgriggs · 3h ago
A couple of months back, someone posted how they lost a day's work due to a hard drive crash and had to redo it. It took them roughly 30 minutes.
Their point was the same as this article with a shorter time window. Knowing what to do, not how to do it, is 90% of the battle.
But that is counterintuitive to the lay observer of software. They think they know what to do, because they’ve got ideas, but feel inhibited because they don’t yet know how to achieve them. So they assume that their immediate hurdle must be the hard part of software development.
Sebb767 · 5h ago
> And I’d go further than that. I’d suggest that, contrary to what intuition might tell you, refactoring might be better achieved by throwing the code away and starting again.
I don't think this applies in most situations. If you have been part of the original core team and are rewriting the app in the same way, this might be true - basically a lost code situation, like the author was in.
However, if you are doing so because you lack understanding of the original code, or because you are switching the stack, you will inevitably find new obstacles and repeat mistakes that were fixed in the original prototype. Also, in a real-world situation, you probably have to handle fun things like data import/migration, upgrading production instances and serving customers (and possibly fixing bugs) while having your rewrite as a side project. I'm not saying that a rewrite is never the answer, but the author's situation was pretty unique.
OccamsMirror · 4h ago
Anyone truly considering this should weigh this post against the timeless wisdom in Joel Spolsky's seminal piece, 'Things You Should Never Do'[1]. Rewriting from scratch can often be a very costly mistake. Granted, it's not as simple as "never do this", but it's not a decision one should make lightly.
Fifteen years ago I agreed with his point. Today I do not.
zahlman · 2h ago
I'm trying to make a small, efficient alternative to Pip. I could never realistically get there by starting with Pip and trimming it down, dropping dependencies, reworking the caching strategy etc. etc. But because I've studied Pip (and the problems it solves), I have a roadmap to taking advantage of a new caching strategy (incidentally similar to uv's), etc. - and I'll (probably) never have to introduce (most of) the heavyweight dependencies in the first place.
Understanding doesn't have to come from "being part of the original core team". Although if you aim to be feature-complete and interface-compatible, I'm sure it helps an awful lot.
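As an illustrative sketch only (this is not zahlman's actual design, and every name here is made up), the core of a content-addressed cache of the sort uv is known for is small: each artifact is keyed by a hash of its bytes, so identical wheels are stored once and a lookup is a single file check:

```python
import hashlib
import tempfile
from pathlib import Path

class ContentAddressedCache:
    """Store artifacts under a key derived from their content,
    so identical bytes are deduplicated automatically."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        path = self.root / key
        if not path.exists():  # identical content is written only once
            path.write_bytes(data)
        return key

    def get(self, key: str):
        path = self.root / key
        return path.read_bytes() if path.exists() else None

# Usage: two identical payloads share one cache entry.
cache = ContentAddressedCache(tempfile.mkdtemp())
k1 = cache.put(b"wheel bytes")
k2 = cache.put(b"wheel bytes")
assert k1 == k2
assert cache.get(k1) == b"wheel bytes"
```

The point of the anecdote stands: the roadmap to a design like this comes from having studied the problem, not from possessing the original code.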
degamad · 4h ago
You've hit on an important point in the article:
> if you are doing so because you lack understanding of the original code
As I understood it, the key point of the article is that the understanding is the value. If you don't understand the code, then you've lost the value. That's why rebuilds by new folk who don't understand the solution don't work.
fragmede · 4h ago
The aspect of large, sweeping software initiatives going nowhere and being replaced by a product from a more agile team isn't that unique, though the author being on both teams is.
donatj · 1h ago
We got bought out a number of years ago. We'd been pretty liberal with our code up until that point, as I'm sure many tiny companies are, but our new owners were very insistent on locking down "Intellectual Property" to exclusively company-controlled hardware, locked behind SSO, ready to lock anyone out at a moment's notice...
You are putting a pretty basic CRUD app in Fort Knox. We're not building anything super proprietary or patentable, it's not rocket science. Anyone could rebuild something roughly analogous to our app in a matter of weeks.
The code isn't the value. Our connections, contracts and content are our value. Our people and our know-how are the value.
The code is almost worthless on its own. The time and thus money we've spent has been far more in finding and fine tuning the user experience than in "writing code". These are things exposed to anyone who uses our app.
You could genuinely email all our code to our direct competitor and it wouldn't help them at all.
Terr_ · 57m ago
One step crazier: Companies that advertise a product and then lock down their API docs so that nobody can see them without being a current customer with need-to-know.
marcus_holmes · 4h ago
I have a problem that I test new UI frameworks on; generating celtic knotwork. I've written three of these now (React, VueJS, and Go manipulating SVGs). The first was hard because I had to learn how to solve the problem. The others were hard because I had to learn how that solution changed because of the framework.
There's a joy in rewriting software, it is obviously better the second time around. As the author says, the mistakes become apparent in hindsight and only by throwing it all away can we really see how to do it better.
I also sketch (badly) and the same is true there; sketching the same scene multiple times is the easiest way of improving and getting better.
mncharity · 3h ago
A different perspective is that there is a vast body of result-of-thought-and-experience associated with developed software. That is then lossily encoded in many forms. Memory, judgment, skill, team, contacts, customers, docs, test suite, other assorted software, etc. It's much easier to reimplement a language when you have a great language test suite. Easier to create a product if you already have a close relationship with its customers. Easier to implement something for the 3rd, 4th, 10th time. Etc. And the assorted forms have results they encode well and not so much, and assorted strengths and weaknesses as mechanism. Memories decay, and aren't great for details. Judgment transfers well; leaves. Teams shuffle. Tests become friction. Software works; ossifies.
Insightful synthesis around even a single form isn't exactly common. The art of managing test suites for instance. An insightful synthesis of many forms... I've not yet seen.
Liftyee · 5h ago
From my little experience I find this article's point to be true. I've been in a few organisations where the flow of people (students) is constant, so experience constantly leaves and mistakes get repeated (we haven't figured out a proper knowledge base). All one can do is try their best to transfer the experience before graduating, but taught lessons don't stick as deeply as firsthand learnings (with the associated toil...)
mncharity · 3h ago
Hmm. We've often discussed AI as programmer codegen tool, and as vibe coder. But there have been other roles over the decades associated with programming. Perhaps AI could serve as a team Librarian? Historian? Backup Programmer (check-and-balance to a programmer)? A kibitzing greybeard institutional memory? Team Lead for humans? Coach/Mentor? Something else? Mob programming participant?
perrygeo · 4h ago
> The design was in my head, and with hindsight I could see all its flaws. I was therefore able to [re]create a much more efficient and effective design based on that learning. All the mistakes had been made, so I was able to get this version of the code right the first time.
Notice the critically important difference of recreating an existing design, vs using the rewrite as an opportunity to experiment on the design and the implementation (and the language, and the ...).
Vetting a new design takes time, consensus, and subjective judgement. Re-implementing an existing design is laser focused and objective.
ww520 · 4h ago
While people and organizational knowledge are important, I have to disagree with the article. Code has value, tremendous value in fact. It’s the only record of truth of a software product. The code of a working product records the decisions, the designs, solved problems and solved mistakes during the development. Software development is not just writing the code. The code is the end product of the development process which can be long and arduous. Yes. It can be reproduced with skill, time and money, but it can be prohibitively expensive. Thus lays the value of the code.
Edit: case in point, Sybase created SQL Server. During due diligence for a business partnership with Microsoft, Microsoft "borrowed" a copy of the source code (I'm not sure about the details). After much legal wrangling, Sybase was forced to license it to Microsoft due to the loss of leverage. Microsoft released it as MS SQL Server. It took Microsoft years and years of work to finally replace the code piece by piece.
GianFabien · 2h ago
>The code of a working product records the decisions, the designs, solved problems and solved mistakes during the development.
Our experiences apparently differ. I've worked on dozens of large-scale systems, and due to the lack of up-to-date documentation and comments in the code, the developers have had to re-engineer most of those details in order to make even minor changes as the requirements evolve over the years. The code might work, but the knowledge of how and why is generally lost to entropy.
ww520 · 2h ago
Yes, sure, the code might be unreadable, but it's the working copy that any changes are based on and run against. Throwing it away and recreating the changes in a vacuum would be very difficult.
jasonthorsness · 5h ago
“All the value is stored up in the team, the logic and the design, and very little of it is in the code itself.”
This is a key reason it’s so important to knowledge-share within teams. For all kinds of reasons, people move on, and any knowledge they hold without redundancy becomes unavailable.
Also a good reason why commenting can help: then maybe a bit more of the value IS in the code.
PlunderBunny · 5h ago
The software development company that I worked at for 20 years made a specialised practice management system that was (at the time) years ahead of the competition: a real Windows experience with a real database, where the competition was all DOS-based or using Access. At one point, they sold the rights to develop their software in a particular country to another company (they had no plans to enter that market, and were a bit strapped for cash at the time). So the other company got the source code, and, in the spirit of the time, insisted it was printed out on paper too!
The other company never managed to do anything with it in the end - having the source code for the entire product was not enough!
robocat · 2h ago
> web portal I was involved in developing as part of an all remote team back just before the turn of the millennium.
> I can conclude that of the 6 months of time spent by 7 people creating this solution, hardly any of it related to the code. It could be completely discarded and rebuilt by one person in under two weeks.
I bet he did it recently and that undermines his whole thesis. He would need to have redeveloped it before 2000, to support his argument. I would also suspect he only made a toy 80% working example and that it only needed the other 90% to be completed (e.g. administrative or developer focused features). I'm pattern matching with other developers I've heard say similar things.
Information that articles ignore is often critical; moreover we judge articles based on the meta-decision of "what critical information was ignored". The article severely misses some key points.
A better example that a developer is more valuable than the code: when a key member of a company goes off and greenfield-develops a competitor that wins (but still not an independent measure, due to confounding effects).
In some situations I would agree with the thesis, but unfortunately the article poorly argues the point.
xarope · 3h ago
I can say that in learning Go several years ago, rewriting a ~10k LOC app that I had done in Python, I definitely learnt some new paradigms in Go that gave me a new perspective on what I had done in Python.
I would have gone back to fix my Python code, but I'm happy with the rewrite in Go (runs faster, has far more test cases, which allowed me to add more functionality more easily).
And yes, the rewrite took me ~50% of the time, and most of that was due to it being an exercise in learning Go (including various Go footguns).
bikamonki · 3h ago
The value is in keeping the code running. Unless you are doing something very complex from scratch, you are mostly writing code for hire or for your startup. Nowadays, that code does not really take that long to develop. You will be wrapping, mixing, and orchestrating several paid, free, and OSS APIs and frameworks to work together with your solution to the problem. This will take six to eighteen months to complete. Then the value starts. That code is making you or someone else money. If you stop efficiently and correctly managing the code, someone loses money.
coolcase · 2h ago
I want to hear the perspective of someone who lost 100kloc and had to rewrite it all!
There is a lot of value in code. It works in prod. It is continuously regression tested by its load, so when there is a problem you figure out a tiny delta to fix it.
If you rewrite from memories you'll get a lot of bugs you need to fix again.
Code being worthless and "must keep PRs small" seem to be in tension.
freetime2 · 4h ago
Another big piece I would add to this is the processes that enable organizations to ship code. Not just processes that are directly related to the product and code, but other organizational processes like hiring, sales, support, etc.
Efficient processes require a lot of thought to develop and implement. When a badly-run organization acquires a good piece of code, it will eventually start to stagnate and bloat.
johngossman · 2h ago
"Programming as Theory Building" 1985 by Peter Naur makes the same case and works out some of the implications. One of my favorite computer engineering papers.
> finally, there’s the code. That also takes time, but that time is small in comparison to all the others ... The developer’s answer to all of this is “refactoring”. For those of you who don’t code, refactoring is ... This takes time.
Yeah, please never manage a software team. Thanks.
arscan · 4h ago
I very much agree with this. Find a job where you can meaningfully contribute to a product that is important to your company (generates a meaningful amount of revenue), and after a little while the natural thing that happens is you become irreplaceable because of that retained value in your head.
It doesn’t have to happen, and with some effort can be somewhat avoided, but it’s the default outcome. Depending on your goals and career aspirations, this can be a wonderful thing, or it can be a bit of a curse.
anonu · 2h ago
Corollary: The value is in vibe coding. AKA - knowing how to prompt well.
wagwang · 4h ago
This is the classic, very domain-specific wisdom that does not extrapolate to all software. Some codebases are narrow in focus and rely on solving a problem in a smart way. Other applications just need to hold thousands of data points that come from a variety of sources. Here the code holds tens of thousands of priceless details that you will inevitably forget, like "this integration sends local time but doesn't adjust for daylight savings", and the value of holding these details will persist, especially after all the tribal knowledge dissipates.
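The daylight-saving detail above is a good example of a pitfall that is cheap to demonstrate but expensive to rediscover. A minimal Python sketch (the `America/New_York` zone is just an illustrative choice): the same wall-clock time maps to different UTC offsets depending on the date, so assuming one fixed offset silently shifts half the year's data by an hour.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

ny = ZoneInfo("America/New_York")
winter = datetime(2024, 1, 15, 12, 0, tzinfo=ny)  # EST, UTC-5
summer = datetime(2024, 7, 15, 12, 0, tzinfo=ny)  # EDT, UTC-4

# An integration that sends "local time" without DST adjustment
# forces the receiver to guess which of these offsets applies.
assert winter.utcoffset() != summer.utcoffset()
print(winter.utcoffset(), summer.utcoffset())
```

Details like this live in the code (and in whoever fixed them the first time), which is exactly the kind of knowledge a from-scratch rewrite has to relearn.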
90s_dev · 4h ago
> No. I achieved this because the code contained very little of the real value. That was all stored in my head. The design was in my head, and with hindsight I could see all its flaws. I was therefore able to create a much more efficient and effective design based on that learning.
Exactly. Code is cheap to write. Even a lot of it. What's hard is understanding a problem thoroughly enough to model a correct solution. Once you have that, you've done 90% of the work.
neuroelectron · 37m ago
Absolute nonsense. There is little division between the ideas and the code. Saying they're completely different isn't just myopic, it's dangerous.
SoftTalker · 5h ago
This is what people mean when they say build the first version to throw away.
jasonthorsness · 4h ago
And often the second :)
cwoolfe · 4h ago
The code is valuable insofar as it maps to the real world.
AstroBen · 3h ago
what if you're coding a fantasy video game?
burnt-resistor · 4h ago
Whose time is money^n, where n varies by effectiveness. It's not simply a linear relationship. And building a team can make the n's increase together.
tayo42 · 42m ago
> The design was in my head, and with hindsight I could see all its flaws. I was therefore able to create a much more efficient and effective design based on that learning. All the mistakes had been made, so I was able to get this version of the code right the first time.
I feel like you need to be careful with this. I've seen a lot of times where the rewrite ends up being more complicated than the original, even stuck in some rewrite hell, and never takes off. I think it's called the "second-system effect".
stevage · 5h ago
Mostly I just keep seeing all the references to "ajax" in the title image and wondering how old that stock photo is.
mkoubaa · 4h ago
It's actually the inverse. Code is a liability; software is value. That's why we spend all our time splitting hairs to find more elegant ways to produce more software with less code.
rowanG077 · 4h ago
I'm not so sure this is that black and white. Software without code is also a liability; I would avoid basing an important system on software whose code you have no access to, for example.
mkoubaa · 2h ago
If you lose your credit card you still have to make payments
fragmede · 4h ago
The data model in the code is what's important, but more important than that is the discovery and uncovered details that's happened while writing the code. That whole refactoring thing is just a sideshow to the underlying data model changing to better match the real world conditions the software has to run under.
morkalork · 5h ago
It's a nice thought to have for when I'm mentally dooming about vibe coding killing my job and entire field.
nwlotz · 3h ago
I mean code ages quickly, so the value of software must include the skillset needed to support and maintain it. Which is why enterprise software contracts exist, and are expensive. You're not paying for the binary. You're paying for the team supporting it.
I lost that battle and ONC+ was locked behind the wall until Open Solaris 20 years later. So many people in tech cannot (or perhaps will not) distinguish between "value" and "cost". Its like people who confuse "wealth" and "money". Closely related topics that are fundamentally talking about different things.
This is why you invest in people and expertise, not tools. Anyone can learn a new toolset, but only the people with expertise can create things of value.
[1] So still licensed, but you couldn't use the trademark if you didn't license it and of course there was no 'warranty' because of course the trademark required an interoperability test.
Microsoft doesn't open source Windows. A big enough company could fork it and offer enterprise support at a fraction of the cost. It would take them years to get there, and probably would be subpar to what large Windows customers get in support from Microsoft. Yes I know y'all hate dealing with Microsoft support - imagine that but worse. Still, the company with the forked distro would definitely take a bite out of Microsoft's Windows business, if only a small one.
That has not been shown to be the case. There is ample evidence that other companies would run this 'off market' or 'pirate' version, and zero evidence that if those choices had been unavailable that they would have legitimately licensed Windows.
You are making a variant on the 'piracy losses' argument which has been shown is simply a pricing issue. If you "ask" for more than your product is "valued" then it won't be purchased but it may be stolen. And if you make it "impossible" to steal you will reduce its value to legitimate customers and have zero gain in revenue from those who had stolen it before (they still won't buy it).
The "value" in Windows is the number of things that run on it and the fact that compatibility issues are "bugs" which get fixed by the supplier. We are rapidly reaching the point where it will add value to have an operating system for AMD64 hardware that is overtly governed (not Linux or FOSS) which allows you to get a copy of the source when you license it, and has an application binary interface (ABI) that other software developers can count on to exist, not change out from under them, and last for 10+ years.
As Microsoft (and Apple) add more and more spurious features which enrich themselves and enrage their users the "value" becomes less and less. That calculus will flip and when it does enterprises will switch to the new operating system that is just an operating system and not a malware delivery platform.
Not understanding this part, aren't Linux distros achieving this already without licence restrictions and various levels of stability depending on the distro selected?
A huge amount of enterprise tooling is now being run on the cloud through the browser or via electron - for a large number of businesses, their staff would only need the equivalent of a Chromebook style GUI to perform their work.
Native software is still essential for a small % of users.. is this what you're suggesting needs to be solved? A single alternative open source system (OS or VM?) that the software dev company can target.
No. Ask yourself, if I install distro <pick one>, can I run a complex binary from 2015 on it? To pull of that kind of stunt you need to ensure you have control over changes not only in the kernel, but also in all of the associated user libraries and management tools. There are change paths for everything from how daemons get started to how graphics are rendered and sound is produced that are incompatible with themselves, much less other versions from 10 years ago. That is not a support burden that someone selling a specialized piece of software can easily take on. It makes their cost of development higher and so their price higher which loses them business.
Thanks to Wine, it’s the most stable API/ABI Linux has!
I’m kind of joking, but the main issue probably lies with the libc rather than with Linux itself.
The only way to change that is to start with a new social contract which is "You pay us to license a copy of this OS and we'll keep it compatible for all your apps that run on it."
Firstly, there's the obvious "all the apps you run on it". Your new OS has no apps, and even if a few emerged no business really wants to commit to running on a new OS with only a couple apps.
I mean, if you want a stable OS there's always BSD, or BeOS or whatever. Which we ignore because, you know, Windows. (And I know it's fun to complain about ads on windows and Microsoft in general, but there's a reason they own the market.) OH, and business users don't see the things folk complain about anyway.
Personally I have utilities on windows that were last compiled over 20 years ago that still run fine.
Secondly no OS operates in a vacuum. You need to store data, (database) browse the web, communicate, secure traffic and so on. Those are very dynamic. And again (by far) the most stable place to run those things is Windows. Like Postgres 9, from 15 years ago, is still used in production.
Of course it's also possible to freeze any OS and apps at any time and it will "run forever " - or at least until the version of TLS supports dies.
So no, I don't believe there will be a new OS. Windows Phone died because there were no apps. Your new OS will have the same problem.
An astute reader would find I am not in fact making that argument, and I suspect if we got into the weeds with it, we would find we agree with each other.
Their point was the same as this article with a shorter time window. Knowing what to do, not how to do it, is 90% of the battle.
But that is counterintuitive to the lay observer of software. They think they know what to do, because they’ve got ideas, but feel inhibited because they don’t yet know how to achieve them. So they assume that their immediate hurdle must be the hard part of software development.
I don't think this applies in most situations. If you have been part of the original core team and are rewriting the app in the same way, this might be true - basically a lost code situation, like the author was in.
However, if you are doing so because you lack understanding of the original code or you are switching the stack, you will inevitably find new obstacles and repeat mistakes that were fixed in the original prototype. Also, in a real-world situation, you probably have to handle fun things like data import/migration, upgrading production instances and serving customers (and possibly fixing bugs) while having your rewrite as a side project. I'm not saying that a rewrite is never the answer, but the author's situation was pretty unique.
1: https://www.joelonsoftware.com/2000/04/06/things-you-should-...
Understanding doesn't have to come from "being part of the original core team". Although if you aim to be feature-complete and interface-compatible, I'm sure it helps an awful lot.
> if you are doing so because you lack understanding of the original code
As I understood it, the key point of the article is that the understanding is the value. If you don't understand the code, then you've lost the value. That's why rebuilds by new folk who don't understand the solution don't work.
You are putting a pretty basic CRUD app in Fort Knox. We're not building anything super proprietary or patentable, it's not rocket science. Anyone could rebuild something roughly analogous to our app in a matter of weeks.
The code isn't the value. Our connections, contracts and content are our value. Our people and our know-how are the value.
The code is almost worthless on its own. The time and thus money we've spent has been far more in finding and fine tuning the user experience than in "writing code". These are things exposed to anyone who uses our app.
You could genuinely email all our code to our direct competitor and it wouldn't help them at all.
There's a joy in rewriting software, it is obviously better the second time around. As the author says, the mistakes become apparent in hindsight and only by throwing it all away can we really see how to do it better.
I also sketch (badly) and the same is true there; sketching the same scene multiple times is the easiest way of improving and getting better.
Insightful synthesis around even a single form isn't exactly common. The art of managing test suites for instance. An insightful synthesis of many forms... I've not yet seen.
Notice the critically important difference of recreating an existing design, vs using the rewrite as an opportunity to experiment on the design and the implementation (and the language, and the ...).
Vetting a new design takes time, consensus, and subjective judgement. Re-implementing an existing design is laser focused and objective.
Edit: case in point, Sybase created SQL Server. During due diligence for a business partnership, Microsoft "borrowed" a copy of the source code (not sure about the details). After much legal wrangling, Sybase was forced to license it to Microsoft due to the loss of leverage. Microsoft released it as MS SQL Server. It took years and years of work for Microsoft to finally replace the code piece by piece.
Our experiences apparently differ. I've worked on dozens of large-scale systems, and due to the lack of up-to-date documentation and comments in the code, the developers have had to re-engineer most of those details in order to make even minor changes as the requirements evolve over the years. The code might work, but the knowledge of how and why is generally lost to entropy.
This is a key reason it’s so important to knowledge-share within teams. For all kinds of reasons, people move on, and any knowledge they hold without redundancy becomes unavailable.
Also a good reason why commenting can help: then maybe a bit more of the value IS in the code.
I bet he did it recently, and that undermines his whole thesis. He would need to have redeveloped it before 2000 to support his argument. I would also suspect he only made a toy 80%-working example and that it only needed the other 90% to be completed (e.g. administrative or developer-focused features). I'm pattern matching with other developers I've heard say similar things.
Information that articles ignore is often critical; moreover we judge articles based on the meta-decision of "what critical information was ignored". The article severely misses some key points.
A better example showing that a developer is more valuable than the code: when a key member of a company goes off and greenfield-develops a competitor that wins (but still not an independent measure due to confounding effects).
In some situations I would agree with the thesis, but unfortunately the article poorly argues the point.
I would have gone back to fix my Python code, but I'm happy with the rewrite in Go (runs faster, has far more test cases, which allowed me to add more functionality more easily).
And yes, the rewrite took me ~50% of the time, and most of that was due to it being an exercise in learning Go (including various Go footguns).
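The "far more test cases" point is where Go's table-driven test idiom shines: adding a behavior means adding one row to a table, not writing a new test function. A minimal sketch — `Slugify` is a hypothetical stand-in, not anything from the commenter's actual project:

```go
package main

import (
	"fmt"
	"strings"
)

// Slugify is a hypothetical example function: lowercases, trims,
// and replaces spaces with hyphens.
func Slugify(s string) string {
	return strings.ReplaceAll(strings.ToLower(strings.TrimSpace(s)), " ", "-")
}

func main() {
	// Table-driven checks: each new piece of functionality is one more row.
	cases := []struct{ in, want string }{
		{"Hello World", "hello-world"},
		{"  Trimmed  ", "trimmed"},
		{"already-slug", "already-slug"},
	}
	for _, c := range cases {
		got := Slugify(c.in)
		fmt.Printf("Slugify(%q) = %q, ok=%v\n", c.in, got, got == c.want)
	}
}
```

The same table pattern drops straight into a `func TestSlugify(t *testing.T)` with `go test`, which is what makes piling on cases during a rewrite so cheap.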
There is a lot of value in code. It works in prod. It is continuously regression tested by its load, so when there is a problem you figure out a tiny delta to fix it.
If you rewrite from memories you'll get a lot of bugs you need to fix again.
Code being worthless and "must keep PRs small" seem to be in tension.
Efficient processes require a lot of thought to develop and implement. When a badly-run organization acquires a good piece of code, it will eventually start to stagnate and bloat.
https://pages.cs.wisc.edu/~remzi/Naur.pdf
Paper discussed multiple times on HN, notably:
https://news.ycombinator.com/item?id=10833278
Yeah, please never manage a software team. Thanks.
It doesn’t have to happen, and with some effort can be somewhat avoided, but it’s the default outcome. Depending on your goals and career aspirations, this can be a wonderful thing, or it can be a bit of a curse.
Exactly. Code is cheap to write. Even a lot of it. What's hard is understanding a problem thoroughly enough to model a correct solution. Once you have that, you've done 90% of the work.
I feel like you need to be careful with this. I've seen a lot of cases where the rewrite ends up being more complicated than the original, even stuck in some rewrite hell, and never takes off. I think it's called the "second-system effect".