This approach has been my central philosophy for years, and is why I dislike central app stores so much: they put significant downward pressure on hobby coders releasing their work, which leaves the store with a bunch of primarily commercial software, which aligns incentives around extracting data or money from users. F-Droid is a great counter-example that highlights this.
I think my decisions have panned out pretty well, in a /r/stallmanwasright sort of way. I caught a lot of side glances for using Linux back in the late 90s and early 00s (I wanted to load it onto some computers on LHD-4 during my tour aboard, but the command said it was a "hacker" operating system), but these days, looking at what Apple and Microsoft are doing, I'm thrilled to be using System76 machines for me and my kids.
Emacs has been a consistent friend over the years, and I still go back to it for anything text-centric. It's made the transition to the LLM-era quite gracefully. Tiddlywiki has also been a reliable source of value over the years.
I tend to not install apps for sites on my phones. They offer less control than a browser I can add uBlock to and just visit the site. Not always (I use the Amazon app, for example), but mostly.
In general, I've cultivated an attitude of reverse-entitlement: sometimes I really want things, but I have to stay real with myself that I don't need them. Some examples that folks will probably argue with, but are good illustrations of the idea:
I'm a huge fan of VR, and have had amazing times in Beat Saber and a few other games. I bought the Quest and Quest 2, but when Meta locked me out due to the Oculus/Facebook account-migration SNAFU, and I was unable to file a ticket to get the account unlocked (because I couldn't log in), I lost $1000 in hardware and a couple thousand in VR software. I just walked away. I realized the relationship was abusive, and that I didn't need Meta in my life. That was two years ago, and I still miss Beat Saber, but it was a good decision.
I had a LinkedIn account, and gave my name and email when I signed up. When my phone fried and I didn't have backup MFA, they demanded my state-issued ID to let me back in (rather than, say, verifying by email). I don't trust Microsoft with my ID - they said they would delete it, but I didn't believe them (prior data breaches at ID vendors motivated that). More importantly, it was an escalation: they didn't verify my identity when I signed up, so all they should be confirming is that I'm the person who signed up. Instead, they wanted me to prove I'm rpdillon, which is moving the goalposts - a transparent data grab. So I walked away. That was a few years ago; turns out I don't need LinkedIn!
There are probably dozens of examples like this, but I'll stop here, since this is already too long.
My core point here is: it turns out I don't need most of the stuff these companies offer, and they do seem to be getting increasingly abusive. I read about the WebRTC backdoor in Meta's apps last night, but I quit Facebook in 2009, because the writing was on the wall. I think the article offers a good perspective. This is quite at odds with opinions I read here all the time ("LibreOffice is a useless replacement for Excel", "It's literally impossible to program unless I have my liquid retina display", "Unless I'm rendering at 144Hz, it's like a slideshow", etc.), so it might be a _highly_ individual thing, but I thought it was worth mentioning, since it might be a fun discussion about how folks think of these tradeoffs.
tempodox · 12h ago
> they do seem to be getting increasingly abusive.
I can positively confirm. Cory Doctorow's lecture on enshittification describes it perfectly:
https://doctorow.medium.com/my-mcluhan-lecture-on-enshittifi...
Interestingly, for apps you mention "...they offer less control...".
Maybe for the user, but for the corporation, they offer more control.
GMoromisato · 21h ago
I'm sure this article resonates with many people; it doesn't resonate with me.
I get value out of (and even enjoy) lots of software, commercial and otherwise (except for Microsoft Teams--that's an abomination).
Ultimately, everything (not just software) is a trade-off. It has benefits and hazards. As long as the benefits outweigh the hazards, I use it. [The one frustration is, of course, when an employer forces a negative-value trade-off on you--that sucks.]
I'm suspicious of articles that talk about drawbacks in isolation, without weighing the benefits: "vaccines have side-effects", "police arrest the wrong people", "electric cars harm the environment".
Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs. The future everyone wants is, I think, one where users can ask the computer to do anything, and the computer immediately complies. Will that bring about a software paradise free from the buggy, one-size-fits-none, extractive software of today? I don't know. I guess we'll see.
We live in interesting times.
maegul · 19h ago
> Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs.
Not sure exactly what irony you mean here, but I’ll bite on the anti-LLM bait …
Surely it matters where the LLM sits against these values, no? Even if you’ve got your own program from the LLM that’s yours, so long as you may need alterations, maintenance, debugging or even understanding its nuances, the nature of the originating LLM, as a program, matters too … right?
And in that sense, are we at all likely to get to a place where LLMs aren’t simply the new mega-platforms (while we await the year of the local-only/open-weights AI)?
GMoromisato · 19h ago
> Surely it matters where the LLM sits against these values, no?
Yes, I agree, but it's all trade-offs. The core problem is this:
1. Software is very expensive to write
2. So, you need to sell to as many people as possible
3. So, you need to add lots of features to attract as many people as possible
4. And you need to monetize it with ads, data-selling, and SaaS subscriptions.
5. But that makes software complicated, brittle, and frustrating.
LLMs can break the cycle if they make it cheap to write software. Instead of buying a mass-market product with 10x more features than you need, you create custom software that does exactly what you need and no more.
But aren't we trading one master for another? Instead of bowing down to Microsoft/Meta/Google, we bow down to OpenAI/Anthropic/Meta/Google? Maybe, but when an LLM writes code for you, you own the code. The code runs outside of the LLM (usually) on an open platform.
But what if you have to modify the code? Then you ask an LLM (maybe not the original LLM) to modify the code. That's far easier than asking Google to modify Gmail.
If you believe in the suggestions of the author, then I don't think there is a better answer than LLMs. We don't live in a world where everyone can solve their software problems by forking some code, much less modifying it themselves.
And the reason I think it's ironic is because I suspect the author hates LLMs.
akkartik · 16h ago
> 1. Software is very expensive to write
I disagree with this, right at the start. I think software is cheap to write but expensive to maintain when you try to sell to as many people as possible. It's the OpEx that kills you, not the CapEx. I go into this more in the current state of https://akkartik.name/about
So I wrote OP to encourage more exploration of the alternative path. If you build something and don't keep adding features to it in a futile attempt at land-grabbing "users" -- users who will for the most part fail to pay you back for the over-investment that the current VC-based milieu causes you to think is the only way to feel a sense of meaning from creating software -- if you don't keep adding features to it and you build on a substrate that's similarly not adding features and putting you on a perpetual treadmill of autoupdates, then software can be much less expensive.
I plan to just put small durable things out into the world, and to take a small measure of satisfaction in their accumulation over the course of my life. The ideal is a rock: it just sits inert until you pick it up, and it remains true to its nature when you do pick it up.
> LLMs can break the cycle if they make it cheap to write software. Instead of buying a mass-market product with 10x more features than you need, you create custom software that does exactly what you need and no more.
That's the critical question, isn't it? Will LLMs yield custom software that does exactly what you need and stabilizes? Or will they addict people to needing to endlessly tweak their output so AI companies can juice their revenue streams?
What skills does it take to nudge an LLM to create something durable for you? How much do people need to know, what skills do they need to develop? I don't know, but I feel certain that we will need new skills most people don't currently have.
Another way to rephrase the critical question: do you trust the real AIs here, the tech companies selling LLMs to you? Will the LLMs they peddle continue to work in 10 years' time as well as they do today? If they enshittify, will you be prepared? Me, I'm deeply cynical about these companies even as LLMs themselves feel like a radical advance. I hope the world will not suffer from the value capture of AI companies the way it has suffered from the value capture of internet companies.
GMoromisato · 4h ago
Lots of interesting questions here.
> I think software is cheap to write but expensive to maintain
OK, but I think you're agreeing with me. Regardless of why it is expensive, it drives companies to bloat their products (to increase their market) and to exploit dark patterns (to increase unit revenue).
If software were very cheap to create and maintain, then it would break that cycle.
> if you don't keep adding features to it and you build on a substrate that's similarly not adding features and putting you on a perpetual treadmill of autoupdates, then software can be much less expensive
In the 90s Microsoft found that people only used 10% of the features of Microsoft Excel. Unfortunately, everyone used a different 10%. At the limit, you would have to create a separate product for each feature permutation to cover the whole market.
And of course, creating and maintaining 10 different products is more expensive than 1 product with all the features.
> I plan to just put small durable things out into the world
This is great! Actions speak louder than words and you'll learn a lot in the process.
> Will LLMs yield custom software that does exactly what you need and stabilizes?
I agree that this is the critical question. No one knows (certainly I don't). But let's say the goal is to create custom software that does exactly what you need. Is there a practical path to that other than via LLMs? I don't think so.
> do you trust the real AIs here, the tech companies selling LLMs to you
I think this is orthogonal to whether the tech works at all. But, in general, yes, I trust most tech companies to provide value greater than the cost of their products. Pretty much by definition, for all the software I pay for, I trust the companies to deliver greater value. When that changes, I stop paying and switch.
And, of course, I support all the usual government regulators and public/private watchdogs to hold corporations accountable.
akkartik · 2h ago
I think the differing stances towards tech companies might be the crucial axiomatic difference between our positions. I've just lived through 30 years of reduced regulation of Tech, and it's hard to imagine a world that reliably prevents that from recurring.
> In the 90s Microsoft found that people only used 10% of the features of Microsoft Excel. Unfortunately, everyone used a different 10%. At the limit, you would have to create a separate product for each feature permutation to cover the whole market.
They were approaching this from the other side, though, of already having built a ton of features and then trying to fragment a unified market. It doesn't work because from Microsoft's perspective the goal of Excel is market control at the cheapest price, and giving each user their 10% is more expensive.
But if you shift perspective to the users of Excel, you don't need to care about market control. If everyone starts out focusing on just the 10% they care about, it might be tractable to just build that for themselves. The total cost in the market is greater, particularly because I'm not imagining everyone using the same 10% is banding together in a single fork. But that becomes this totally fake metric that nobody cares about.
My approach involves throwing an order of magnitude more attention at the problem than people currently devote to computing. But a single order of magnitude feels doable and positive ROI. If everyone needs to become a programmer, that's many orders of magnitude and likely negative ROI. That's not what I'm aiming for.
bevr1337 · 20h ago
> As long as the benefits outweigh the hazards, I use it.
You and the author may be in agreement but with differing risk tolerance.
> Ironically, the best answer to many of the article's suggestions... is to write your own software with LLMs.
I don't think it's ironic but I do think it's false. How do LLMs satisfy a single requirement from the author's punchline list?
nothrabannosir · 20h ago
LLMs don’t, but the programs you write for yourself using LLMs do.
GMoromisato · 19h ago
> You and the author may be in agreement but with differing risk tolerance.
I agree. And I'm not saying the author is wrong--they have their preferences. I'm just saying that for me the benefits outweigh the risks, and I'm betting most people are like me (at least outside HN).
> I don't think it's ironic but I do think it's false. How do LLMs satisfy a single requirement from the author's punchline list?
The main suggestion from the author is to write your own custom software tuned to your needs instead of relying on a mass-market, one-size-fits-all piece of complex, expensive software that has to be monetized by dark patterns.
I guess in a world where everyone knows how to program, and has the time and desire to do so, that would work. But in the real world, the only way to get the bulk of humanity to write their own software is with LLMs.
I think it's ironic because I bet the author does not like LLMs.
akkartik · 16h ago
Don't bet too much, because I'm still undecided on LLMs.
Interestingly, I have no memory of ever thinking about LLMs as I wrote this. Part of it is I slaved over this talk a lot more than my usual blog posts, for about six months, after starting the initial draft in Dec 2022 (https://akkartik.name/post/roundup22). ChatGPT came out in Nov 2022. So I was following (and starting to get annoyed by) the AI conversations in parallel with working on this talk, but perhaps they felt like separate threads in my head and I hadn't yet noticed that they can impact one another.
These days I've noticed the connections, and I feel the pressure to try to rationalize one in terms of the other. But I still don't feel confident enough to do so. And my training has always emphasized living with ambiguity until one comes up with a satisfying resolution.
It took us 200 years from the discovery of telescopes[1] to attain some measure of closure on all the questions they raised. There's no reason to think the discovery of LLMs will take any less time to work through. They'll still be remembered and debated in a hundred years. Your hot takes or mine about LLMs will be long forgotten. In the meantime, it seems a good idea for me to focus on what I seem uniquely qualified to talk about.
> And my training has always emphasized living with ambiguity until one comes up with a satisfying resolution.
I wish more people had this attitude, particularly towards LLMs. We're at an interesting point in time where we don't know how LLMs will evolve. Maybe we've hit the plateau and LLMs won't get any better. Even in that case, it will take decades for their effect to be felt in the entire industry, much less the world. Or maybe they will continue to improve on their way to ASI.
My point, though, is that if you're worried about user control over their computing environment (which I 100% understand), then LLMs might be the best solution. I think they could be the only practical solution, as all others seem like pipe dreams.
akkartik · 2h ago
It's not clear to me why you consider my approach a pipe dream. The only criticism I've heard is that people won't adopt it. Is that the only one?
One critical question for you is: can someone rebuild a piece of software for themselves, from scratch, using LLMs, without changing radically in terms of how many neurons in their brain are devoted to programming?
If they need to have real control here, they'll need to understand a lot about programming, build up a certain skeptical mindset in reviewing code, etc. That sort of requirement feels like it'll also affect adoption.
If they do so without learning much about programming, then I'd argue they don't have much control. It's not them rebuilding the thing for themselves, it's the LLM rebuilding the thing for them. And that comes with the same sorts of Principal Agent problems as depending on other sorts of AIs like tech companies to do so. They'll find themselves awash in eldritch bugs, etc.
So I think LLMs can't square this circle. If they want to not devote a lifetime to programming, user control using LLMs feels like more of a pipe dream than my approach. Because my approach depends crucially on personal relationships. There's no illusion that each person is getting personally customized software. Instead we're banding together in computational space as if we're travelling through medieval Asia -- using a caravan. Caravans had a natural limit in size, because larger caravans were easier to infiltrate with bandits. Managing those risks in a large caravan requires the geopolitical skills of a king to constantly see all the angles of who can screw you over at each point in time.
akkartik · 16h ago
> I'm betting most people are like me (at least outside HN).
Oh yes. If you want to be like most people, you should stay right where you are.
The question I'm interested in is: where should most people be? Are they where they should be? Is the current world the best we can do?
I have no illusions about converting a large following. I just have different priorities than you, it seems.
GMoromisato · 5h ago
> Oh yes. If you want to be like most people, you should stay right where you are.
I don't really have a choice. As a great philosopher said, I am what I am and that's all that I am.
> The question I'm interested in is: where should most people be? Are they where they should be? Is the current world the best we can do?
These questions can only be answered by actions. You're doing the right thing: you're creating the kind of software that you want to see in the world. That's really the highest service that a programmer can perform.
I actually think we probably agree on how crappy most software is. But whereas your answer seems to be "depend less on software", my answer is to rebuild the whole (software) world from scratch. We'll see which one of us is crazier.
safety1st · 19h ago
The fundamental tradeoff a lot of consumer software seems to be based on these days is that it will offer you stuff you want (i.e., be feature-rich) in exchange for stuff you don't (e.g., stealing your data, showing you ads). Whereas a FOSS author is a lot more likely to take the "do one thing well" approach.
What I settled on was an approach where I try to minimize the use of commercial software in my personal life, but in my business, if we need what the commercial software does, we'll just license it and get on with things. For the most part in my life I don't really NEED some feature or another - it might be nice to have - but with any commercial software or service there's always the risk that they'll push some update that shoves ads down my throat or introduces microtransactions, so I'm OK to just go without and use the FOSS alternative.
In business though, we'll be at a competitive disadvantage if I force everyone to use only FOSS. There are many times where I've looked at the open source equivalent of some big SaaS and it was just going to be more work to set up and maintain a less featureful open source equivalent. So, I'm more inclined to do a deal with the devil because at the end of the day our time and resources need to be focused elsewhere.
gsf_emergency_2 · 18h ago
That's because (on paper) B2Bs get much more out of their clients (sorry!)
gsf_emergency · 19h ago
>I get value out of
That's the rub. Does it benefit the developers much more than the users, in aggregate?
It's not an easy question to answer even for vaccines..
But I'd wager it's a no, because, e.g., Pfizer (plus or minus BioNTech) probably could not have learnt enough* from their deployment..
I think my decisions have panned out pretty well, in a /r/stallmanwasright sort of way. I caught a lot of side glances for using Linux back in the late 90s and early 00s (I wanted to load it onto some computers on LHD-4 during my tour aboard, but the command said it was a "hacker" operating system), but these days, looking at what Apple and Microsoft are doing, I'm thrilled to be using System76 machines for me and my kids.
Emacs has been a consistent friend over the years, and I still go back to it for anything text-centric. It's made the transition to the LLM-era quite gracefully. Tiddlywiki has also been a reliable source of value over the years.
I tend to not install apps for sites on my phones. They offer less control than a browser I can add uBlock to and just visit the site. Not always (I use the Amazon app, for example), but mostly.
In general, I've cultivated an attitude of reverse-entitlement: sometimes I really want things, but I have to stay real with myself that I don't need them. Some examples that folks will probably argue with, but are good illustrations of the idea:
I'm a huge fan of VR, and have had amazing times in Beat Saber and a few other games. I bought Quest and Quest 2, but when Meta locked me out due to a SNAFU with the Oculus/FB account mess up, and I was unable to file a ticket to get the account unlocked (because I couldn't log in), I lost $1000 in hardware and a couple thousand in VR software, but I just walked away. I realize the relationship was abusive, and that I didn't need Meta in my life. That was 2 years ago, and I still miss Beat Saber, but it was a good decision.
I had a LinkedIn account, and gave my name and email when I signed up. When my phone fried and I didn't have backup MFA, they demanded my state-issued ID to let me back in (rather than, say, verifying by email). I don't trust MS with my ID - they said they would delete it, but I didn't believe them (prior data breaches at ID vendors motivated me). But more importantly, it was an escalation: they didn't verify my identity when I signed up. So they should be trying to confirm that I'm the person who signed up. But they instead wanted me to verify I'm rpdillon, which is moving the goalposts. They're doing it as a transparent data grab. So I walked away. That was a few years ago; turns out I don't need LinkedIn!
There are probably dozens of examples like this, but I'll stop here, since this is already too long.
My core point here is: it turns out I don't need most of the stuff these companies offer, and they do seem to be getting increasingly abusive. I read about the WebRTC backdoor in Meta's apps last night, but I quit Facebook in 2009, because the writing was on the wall. I think the article offers a good perspective. This is quite at adds with opinions I read here all the time ("Libreoffice is a useless replacement for Excel", "It's literally impossible to program unless I have my liquid retina display", "Unless I'm rendering at 144Hz, it's like a slideshow", etc.), so it might be a _highly_ individual thing, but I thought it was worth mentioning, since it might be a fun discussion about how folks think of these tradeoffs.
Positively, I can confirm. Cory Doctorow's Lecture on Enshittification describes it perfectly:
https://doctorow.medium.com/my-mcluhan-lecture-on-enshittifi...
Maybe for the user, but for the corporation, they offer more control..
I get value out of (and even enjoy) lots of software, commercial and otherwise (except for Microsoft Teams--that's an abomination).
Ultimately, everything (not just software) is a trade-off. It has benefits and hazards. As long as the benefits outweigh the hazards, I use it. [The one frustration is, of course, when an employer forces a negative-value trade-off on you--that sucks.]
I'm suspicious of articles that talk about drawbacks in isolation, without weighing the benefits: "vaccines have side-effects", "police arrest the wrong people", "electric cars harm the environment".
Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs. The future everyone wants is, I think, one where users can ask the computer to do anything, and the computer immediately complies. Will that bring about a software paradise free from the buggy, one-size-fits-none, extractive software of today? I don't know. I guess we'll see.
We live in interesting times.
Not sure exactly irony you mean here, but I’ll bite on the anti-LLM bait …
Surely it matters where the LLM sits against these values, no? Even if you’ve got your own program from the LLM that’s yours, so long as you may need alterations, maintenance, debugging or even understanding its nuances, the nature of the originating LLM, as a program, matters too … right?
And in that sense, are we at all likely to get to a place where LLMs aren’t simply the new mega-platforms (while we await the year of the local-only/open-weights AI)?
Yes, I agree, but it's all trade-offs. The core problem is this:
1. Software is very expensive to write
2. So, you need to sell to as many people as possible
3. So, you need to add lots of features to attract as many people as possible
4. And you need to monetize it with ads, data-selling, and SaaS subscriptions.
5. But that makes software complicated, brittle, and frustrating.
LLMs can break the cycle if they make it cheap to write software. Instead of buying a mass-market product with 10x more features than you need, you create custom software that does exactly what you need and no more.
But aren't we trading one master of another? Instead of bowing down to Microsoft/Meta/Google, we bow down to OpenAI/Anthropic/Meta/Google? Maybe, but when an LLM writes code for you, you own the code. The code runs outside of the LLM (usually) on an open platform.
But what if you have to modify the code? Then you ask an LLM (maybe not the original LLM) to modify the code. That's far easier than asking Google to modify Gmail.
If you believe in the suggestions of the author, then I don't think there is a better answer than LLMs. We don't live in a world where everyone can solve their software problems by forking some code, much less modifying it themselves.
And the reason I think it's ironic is because I suspect the author hates LLMs.
I disagree with this, right at the start. I think software is cheap to write but expensive to maintain when you try to sell to as many people as possible. It's the OpEx that kills you, not the CapEx. I go into this more in the current state of https://akkartik.name/about
So I wrote OP to encourage more exploration of the alternative path. If you build something and don't keep adding features to it in a futile attempt at land-grabbing "users" who will for the most part fail to pay you back for the over-investment your current VC-based milieu causes you to think is the only way to feel a sense of meaning from creating software -- if you don't keep adding features to it and you build on a substrate that's similarly not adding features and putting you on a perpetual treadmill of autoupdates, then software can be much less expensive.
I plan to just put small durable things out into the world, and to take a small measure of satisfaction in their accumulation over the course of my life. The ideal is a rock: it just sits inert until you pick it up, and it remains true to its nature when you do pick it up.
> LLMs can break the cycle if they make it cheap to write software. Instead of buying a mass-market product with 10x more features than you need, you create custom software that does exactly what you need and no more.
That's the critical question, isn't it. Will LLMs yield custom software that does exactly what you need and stabilizes? Or will they addict people to needing to endlessly tweak their output so AI companies can juice their revenue streams?
What skills does it take to nudge an LLM to create something durable for you? How much do people need to know, what skills do they need to develop? I don't know, but I feel certain that we will need new skills most people don't currently have.
Another way to rephrase the critical question: do you trust the real AIs here, the tech companies selling LLMs to you. Will the LLMs they peddle continue to work in 10 years time as well as they do today? If they enshittify, will you be prepared? Me, I'm deeply cynical about these companies even as LLMs themselves feel like a radical advance. I hope the world will not suffer from the value capture of AI companies the way it has suffered from the value capture of internet companies.
> I think software is cheap to write but expensive to maintain
OK, but I think you're agreeing with me. Regardless of why it is expensive, it drives companies to bloat their products (to increase their market) and to exploit dark patterns (to increase unit revenue).
If software were very cheap to create and maintain, then it would break that cycle.
> if you don't keep adding features to it and you build on a substrate that's similarly not adding features and putting you on a perpetual treadmill of autoupdates, then software can be much less expensive
In the 90s Microsoft found that people only used 10% of the features of Microsoft Excel. Unfortunately, everyone used a different 10%. At the limit, you would have to create a separate product for each feature permutation to cover the whole market.
And of course, creating and maintaining 10 different products is more expensive than 1 product with all the features.
> I plan to just put small durable things out into the world
This is great! Actions speak louder than words and you'll learn a lot in the process.
> Will LLMs yield custom software that does exactly what you need and stabilizes?
I agree that this is the critical question. No one knows (certainly I don't). But let's say the goal is to create custom software that does exactly what you need. Is there a practical path to that other than via LLMs? I don't think so.
> do you trust the real AIs here, the tech companies selling LLMs to you
I think this is orthogonal to whether the tech works at all. But, in general, yes, I trust most tech companies to provide value greater than the cost of their products. Pretty much by definition, for all the software I pay for, I trust the companies to deliver greater value. When that changes, I stop paying and switch.
And, of course, I support all the usual government regulators and public/private watchdogs to hold corporations accountable.
> In the 90s Microsoft found that people only used 10% of the features of Microsoft Excel. Unfortunately, everyone used a different 10%. At the limit, you would have to create a separate product for each feature permutation to cover the whole market.
They were approaching this from the other side, though, of already having built a ton of features and then trying to fragment a unified market. It doesn't work because from Microsoft's perspective the goal of Excel is market control at the cheapest price, and giving each user their 10% is more expensive.
But if you shift perspective to the users of Excel, you don't need to care about market control. If everyone starts out focusing on just the 10% they care about, it might be tractable for each of them to build just that for themselves. The total cost across the market is greater, particularly because I'm not imagining everyone who uses the same 10% banding together in a single fork. But total market cost becomes a totally fake metric that nobody cares about.
My approach involves throwing an order of magnitude more attention at the problem than people currently devote to computing. But a single order of magnitude feels doable and positive ROI. If everyone needs to become a programmer, that's many orders of magnitude and likely negative ROI. That's not what I'm aiming for.
You and the author may be in agreement but with differing risk tolerance.
> Ironically, the best answer to many of the article's suggestions... is to write your own software with LLMs.
I don't think it's ironic but I do think it's false. How do LLMs satisfy a single requirement from the author's punchline list?
I agree. And I'm not saying the author is wrong--they have their preferences. I'm just saying that for me the benefits outweigh the risks, and I'm betting most people are like me (at least outside HN).
> I don't think it's ironic but I do think it's false. How do LLMs satisfy a single requirement from the author's punchline list?
The main suggestion from the author is to write your own custom software tuned to your needs instead of relying on a mass-market, one-size-fits-all piece of complex, expensive software that has to be monetized by dark patterns.
I guess in a world where everyone knows how to program, and has the time and desire to do so, that would work. But in the real world, the only way to get the bulk of humanity to write their own software is with LLMs.
I think it's ironic because I bet the author does not like LLMs.
Interestingly, I have no memory of ever thinking about LLMs as I wrote this. Part of it is that I slaved over this talk a lot more than my usual blog posts, for about six months after starting the initial draft in Dec 2022 (https://akkartik.name/post/roundup22). ChatGPT came out in Nov 2022. So I was following (and starting to get annoyed by) the AI conversations in parallel with working on this talk, but perhaps they felt like separate threads in my head, and I hadn't yet noticed that they could impact one another.
These days I've noticed the connections, and I feel the pressure to try to rationalize one in terms of the other. But I still don't feel confident enough to do so. And my training has always emphasized living with ambiguity until one comes up with a satisfying resolution.
It took us 200 years from the discovery of telescopes[1] to attain some measure of closure on all the questions they raised. There's no reason to think the discovery of LLMs will take any less time to work through. They'll still be remembered and debated in a hundred years. Your hot takes or mine about LLMs will be long forgotten. In the meantime, it seems a good idea for me to focus on what I seem uniquely qualified to talk about.
[1] https://web.archive.org/web/20140310031503/http://tofspot.bl... is a fantastic resource.
I wish more people had this attitude, particularly towards LLMs. We're at an interesting point in time where we don't know how LLMs will evolve. Maybe we've hit the plateau and LLMs won't get any better. Even in that case, it will take decades for their effect to be felt in the entire industry, much less the world. Or maybe they will continue to improve on their way to ASI.
My point, though, is that if you're worried about user control over their computing environment (which I 100% understand), then LLMs might be the best solution. I think they could be the only practical solution, as all others seem like pipe dreams.
One critical question for you is: can someone rebuild a piece of software for themselves, from scratch, using LLMs, without radically changing how many neurons in their brain are devoted to programming?
If they need to have real control here, they'll need to understand a lot about programming, build up a certain skeptical mindset in reviewing code, etc. That sort of requirement feels like it'll also affect adoption.
If they do so without learning much about programming, then I'd argue they don't have much control. It's not them rebuilding the thing for themselves; it's the LLM rebuilding the thing for them. And that comes with the same sorts of principal-agent problems as depending on other kinds of AIs, like tech companies, to do so. They'll find themselves awash in eldritch bugs, etc.
So I think LLMs can't square this circle. If they want to not devote a lifetime to programming, user control using LLMs feels like more of a pipe dream than my approach. Because my approach depends crucially on personal relationships. There's no illusion that each person is getting personally customized software. Instead we're banding together in computational space as if we're travelling through medieval Asia -- using a caravan. Caravans had a natural limit in size, because larger caravans were easier to infiltrate with bandits. Managing those risks in a large caravan requires the geopolitical skills of a king to constantly see all the angles of who can screw you over at each point in time.
Oh yes. If you want to be like most people, you should stay right where you are.
The question I'm interested in is: where should most people be? Are they where they should be? Is the current world the best we can do?
I have no illusions about converting a large following. I just have different priorities than you, it seems.
I don't really have a choice. As a great philosopher said, I am what I am and that's all that I am.
> The question I'm interested in is: where should most people be? Are they where they should be? Is the current world the best we can do?
These questions can only be answered by actions. You're doing the right thing: you're creating the kind of software that you want to see in the world. That's really the highest service that a programmer can perform.
I actually think we probably agree on how crappy most software is. But whereas your answer seems to be "depend less on software", my answer is to rebuild the whole (software) world from scratch. We'll see which one of us is crazier.
What I settled on was an approach where I try to minimize the use of commercial software in my personal life, but in my business, if we need what the commercial software does, we'll just license it and get on with things. For the most part, in my personal life I don't really NEED some feature or another; it might be nice to have. But with any kind of commercial software or service, there's always the risk that they'll push an update that shoves ads down my throat or introduces microtransactions or something, so I'm OK just going without and using the FOSS alternative.
In business, though, we'd be at a competitive disadvantage if I forced everyone to use only FOSS. There have been many times when I've looked at the open source equivalent of some big SaaS, and it was just going to be more work to set up and maintain a less featureful alternative. So I'm more inclined to do a deal with the devil, because at the end of the day our time and resources need to be focused elsewhere.
That's the rub. Does it benefit the developers much more than the users, in aggregate?
It's not an easy question to answer, even for vaccines.
But I'd wager it's a no, because, e.g., Pfizer (plus or minus BioNTech) probably could not have learnt enough* from their deployment.
* I.e., gained nonfungible know-how.