This is a little too drawn out, especially at the beginning, but I see some good points.
> LLMs are an affordance for producing more text, faster. How is that going to shape us?
In essence I agree here. Current LLMs do not guarantee correctness and are clearly not constructing syntax through logical deduction. They generate a lot of predicted text, much of it correct but plenty of it not, which is why they're good at boilerplate and at domains well represented in the training data. In other words, what they mainly do is help us write text faster, whether natural or computer language. The main contention here is that writing down the syntax of a solution is the easy part of programming; the hard part is understanding the problem and engineering a solution. Once you have the technical requirements and a candidate solution, typing out the literal syntax is the least effortful step of the entire process.
Even with hugely verbose languages like Java, you can get the boilerplate out of the way quickly, and then you're left with problems that boil down to a few lines of code (unless you need a large feature, in which case the process repeats fractally: you add some boilerplate, and the hard part of the solution again ends up being a few lines of code).
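To make that concrete, here's a minimal sketch in Java (a hypothetical `RateLimiter` class I made up for illustration, not from the original post): almost every line is mechanical boilerplate that an LLM or an IDE can spit out instantly, while the part that required actually understanding the problem is the handful of lines inside `allow()`.

```java
// Hypothetical example: a fixed-window rate limiter.
// Most of the class is boilerplate (fields, constructor, main);
// the engineering thought lives in the few lines of allow().
public class RateLimiter {
    // Boilerplate: field declarations
    private final int maxRequests;
    private final long windowMillis;
    private long windowStart;
    private int count;

    // Boilerplate: constructor wiring
    public RateLimiter(int maxRequests, long windowMillis) {
        this.maxRequests = maxRequests;
        this.windowMillis = windowMillis;
        this.windowStart = System.currentTimeMillis();
    }

    // The actual "hard part" of the solution: a few lines that
    // reset the window when it expires and count requests within it.
    public synchronized boolean allow() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;
            count = 0;
        }
        return ++count <= maxRequests;
    }

    public static void main(String[] args) {
        RateLimiter rl = new RateLimiter(2, 1000);
        System.out.println(rl.allow()); // first request in window: true
        System.out.println(rl.allow()); // second request: true
        System.out.println(rl.allow()); // third exceeds the limit: false
    }
}
```

The ratio here is roughly what the comment describes: dozens of lines of scaffolding around maybe five lines of real decision-making.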
> The first form of genAI resistance to experiment is that every discussion is a motte-and-bailey.
I can see this as well. I think a lot of people are encountering the idea of an actual AGI for the first time, and we're also closer to one than ever, even if it's not as close as it appears. Personally, I think there's a good amount of both hype and real-world application. There might be a crash, but it will probably just be an adjustment once we realize humans are still better at solving novel problems, which is where most of the useful work lies. GenAI has already had a huge impact, and there's plenty of low-hanging-fruit optimization still to come, but whether it fulfills the hype remains to be seen.
As soon as I reached this line I was like "OK, I'm done!" LOL!