You know that teammate who makes more work for everyone else because they do what they're asked, but in the most buggy and incomprehensible way? The one where, once you finally get them to move on to another team, you realize how much time you spent corralling them and fixing their subtle bugs, and now that they're gone, work doesn't seem like so much of a chore?
That's AI.
DavidPiper · 1h ago
We need to update Hanlon's Razor: Never attribute to AI that which is adequately explained by incompetence.
xerox13ster · 1h ago
And just like the original Hanlon’s Razor, this is not an excuse to be stupid or incompetent.
It is not a reason to accept stupidity or incompetence. We should reject these things and demand better.
blibble · 1h ago
> You know that teammate
now imagine he can be scaled indefinitely
you thought software was bad today?
imagine Microsoft Teams in 5 years time
ThatMedicIsASpy · 41m ago
I only need to look at the past 5 years of Windows
Spooky23 · 46m ago
Just like a poorly managed team, you need to learn how to manage AI to get value from it. All ambiguous processes are like this.
In my case, I find the value of LLMs with respect to writing is consolidation. Use them to make outlines, not prose. For example, I record voice memos while driving or jogging and turn them into documents that can be the basis for all sorts of things. At the end of the day it saves me a lot of time and arguably makes me more effective.
AI goes bad because it’s not smart, and it will pretend that it is. Figure out the things it does well for your scenario and exploit it.
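The memo-to-outline workflow above can be sketched in a few lines. This is a minimal, hypothetical sketch: the prompt wording and function names are my own assumptions, and the model call is left pluggable (with a trivial offline fallback) rather than tied to any particular vendor's API.

```python
# Sketch of the "voice memo -> document" consolidation flow.
# The model call is injectable; names and prompt text are illustrative only.

def build_outline_prompt(transcript: str) -> str:
    """Wrap a raw transcript in an instruction asking for an outline, not prose."""
    return (
        "Consolidate this voice-memo transcript into a short outline. "
        "Bullet points only, no prose:\n\n" + transcript
    )

def outline(transcript: str, ask_model=None) -> str:
    """Turn a transcript into outline bullets via a model, or a naive fallback."""
    if ask_model is not None:
        return ask_model(build_outline_prompt(transcript))
    # Offline fallback so the sketch runs anywhere: one bullet per sentence.
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return "\n".join(f"- {s}" for s in sentences)

print(outline("Ship the v2 draft. Fix the login bug. Email the reviewers."))
```

Keeping the prompt builder separate from the model call means the instruction stays inspectable, and the flow is testable before any model is wired in.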
bdangubic · 31m ago
smart people are reading comments like this and going “I am glad I am in the same market as people making such comments” :)
bambax · 1h ago
I'm extremely wary of AI myself, especially for creative tasks like writing or making images, etc., but this feels a little over the top. If you let it run wild then yes the result is disaster, but for well defined jobs with a small perimeter AI can save a lot of time.
0xEF · 1h ago
You are not wrong, but I pose the argument that too many people approach Gen AI as a replacement instead of a tool, and therein lies the root of the problem.
When I use Claude for code, for example, I am not asking it to write my code. I'm asking it to review what I have written and either suggest improvements or ways to troubleshoot a problem I am having. I also don't always follow its advice, either, but that depends on how much I understand the reply. Sometimes it outputs something that makes sense based on my current skill level, sometimes it proposes things that I know nothing about, in which case I ask it to break it down further so I can go search the Internet for more info and see if I can learn more, which pushes the limits of my skill level.
It works well, since my goal is to improve what I bring to the table and I have learned a lot, both about coding and about prompt engineering.
When I talk to other people, they accuse me of having the AI do all the work for me because that's how they approach their use of it. They want the AI to produce the whole project, as opposed to just using it as a second brain to offload some mental chunking. That's where Gen AI fails and the user spends all their time correcting convoluted mistakes caused by confabulation, unless they're making a simple monolithic program or script, but even then there's often hiccups.
Point is, Gen AI is a great tool, if you approach it with the right mindset. The hammer does not build the whole house, but it can certainly help.
cardanome · 1h ago
Generative AI is like micromanaging a talented junior dev who never improves. And I mean micromanaging to such a toxic degree that no human would ever put up with it.
It works, but it's simply not what most people want. If you love to code, you've just abstracted away the most fun parts and now only get to do the boring ones. If you love to manage, well, managing actual humans and watching them grow and become independent is much more fulfilling.
On a side note, I feel like prompting and context management come easier to me personally, as a person with ADHD, because I am already used to working with forms of intelligence different from my own. I am used to having to explicitly state my needs. My neurotypical co-workers get frustrated that the LLM can't read their minds and always tell me that it should know what they want. When I nudge them to give it more context and explain better what they need, they often resist and say they shouldn't have to. Of course I am stereotyping a bit here, but it's still an interesting observation.
Prompting is indeed a skill. Though I believe the skill ceiling will lower once tools get better so I wouldn't bank too much on it. What is going to be valuable for a long time is probably general software architecture skills.
scarecrowbob · 20m ago
"Gen AI is a great tool, if you approach it with the right mindset."
People keep writing this sentence as if they aren't talking to the most tool-ed up group of humans in history.
I have no problems learning tools, from chorded key shortcuts to awk/sed/grep to configuring all three of my text editors (vim, sublime, and my IDE) to work for their various tasks.
Hell, I have preferred ligature fonts for different languages.
Sometimes tools aren't great and make your life harder, and it's not because folks aren't willing to learn the tool.
threatripper · 1h ago
You sound bitter. Did you try using more AI for the bug fixing? It gets better and better.
ryeats · 1h ago
My interests tend to be bleeding edge, where there is little training data. I do use AI to rubber duck, but I can rarely use its output directly.
threatripper · 1h ago
I see. In my experience current LLMs are great for generating boilerplate code for basic UIs but fail at polishing UI and business logic. If it's important you need to rewrite the core logic completely because they may introduce subtle bugs due to misunderstandings or sloppiness.
ryeats · 1h ago
Yep, you are also right. Some amount of boilerplate code is perfectly reasonable, since some problems are similar but just different enough, and unique enough, that they don't merit designing an architecture that gets rid of the boilerplate. This is probably the most useful thing AI could do for us. My worry as a maintainer is that we won't notice we are copying all that boilerplate too often, that its subtle bugs are multiplied, and that now we have to maintain all that code, because AI doesn't do that yet.
skydhash · 1h ago
Cognitive load isn't related to the difficulty of a task. It's about how much mental energy is spent monitoring it. To reduce cognitive load, you either build confidence or stop caring. You can't have confidence in AI output, and most people proposing it sound like they're preaching not to care about quality (because quantity, yay).
threatripper · 41m ago
But quality is going up a lot. Granted, it's not at human level yet, but it is going up fast. We will also see more complex quality control for AI output, tailored to specific use cases and sold at a premium. Right now those don't exist, and if they did, it would be too expensive to run 100x the requests for the same amount of output. So humans are stuck doing quality control, for now.
billy99k · 1h ago
You can think that... and you will eventually be left behind. AI is not going anywhere and can be used as a performance booster. Eventually, it will be a requirement for most tech-based jobs.
andersmurphy · 1h ago
This reminds me of crypto’s “have fun being poor”. Except now it’s “have fun being left behind/being unemployed”. The more things change the more things stay the same.
ryeats · 1h ago
I was being a bit melodramatic. I'll use it occasionally, and if AI gets better it can join my team again. I don't love writing boilerplate; I just know it's not good at writing maintainable code yet.
BrouteMinou · 1h ago
When all you got is pontificating...
sampl3username · 1h ago
Left behind what? Consumeristic trash?
dragontamer · 1h ago
Don't you see that the future is XML SOAP RPCs? If you don't master this new technology now, you'll be left behind!!
Then again, maybe I'm too old now and being left behind if I remember the old hype like this....
The entirety of the tech field is constantly hyping the current technology out of FOMO. Whether or not it works out in the future it's always the same damn argument.
rsynnott · 1h ago
I mean, the promoters of every allegedly productivity improving fad have been saying this sort of thing for all of the twenty-odd years I’ve been in the industry.
If LLMs eventually become useful to me, I'll adopt LLMs, I suppose. Until then, well, fool me once…
ants_everywhere · 1h ago
My writing style is pretty labor intensive [0]. I go through a lot of drafts and read things out loud to make sure they work well etc. And I tend to have a high standard for making sure I source things.
I personally think an LLM could help with some of this, and this is something I've been thinking about the past few days. But I'd have to build a pipeline and figure out a way to make it amplify what I like about my voice rather than have me speak through its voice.
I used to have a sort of puritanical view of art. And I think a younger version of myself would have been low key horrified at the amount of work in great art that was delegated to assistants. E.g. a sculptor (say Michelangelo) would typically make a miniature to get approval from patrons and the final sculpture would be scaled up. Hopefully for major works, the master was closely involved in the scaling up. But I would bet that for minor works (or maybe even the typical work) assistants did a lot of the final piece.
The same happens (and has always happened) with successful authors. Having assistants do bits here or there. Maybe some research, maybe some corrections, maybe some drafts. Possibly relying on them increasingly as you get later in your career or if you're commercially successful enough to need to produce at greater scale.
I think LLMs will obviously fit into these existing processes. They'll also be used to generate content that is never checked by a human before shipping. I think the right balance is yet to be seen, and there will always be people who insist on more deliberate and slower practices over mass production.
[0] Aside from internet comments of course, which are mostly stream of consciousness.
bgwalter · 48m ago
Michelangelo worked alone on the David for more than two years: https://en.wikipedia.org/wiki/David_(Michelangelo)#Process
Maybe later he got lazier. I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).
Even research many authors simply could not afford.
netule · 33m ago
James Patterson comes to mind. He simply writes detailed outlines for the plots of his novels and has other authors write them for him. The books are then published under his name, which is more like a brand at that point.
BolexNOLA · 52m ago
At its most basic level I just like throwing things I’ve written at ChatGPT and telling it to rewrite it in “x” voice or tone, maybe condense it or expand on some element, and I just pick whatever word comes to mind for the style. Half the time I don’t really use what it spits out. I am a much stronger editor than I am a writer, so when I see things written a different way it really helps me break through writer’s block or just the inertia of moving forward on something. I just treat it like a mediocre sounding board and frankly it’s been great for that.
When I was in high school I really leaned on friends for edits. Not just because of the changes they would make (though they often did make great suggestions), but for the changes I would make to their changes after. That’s what would inevitably turn my papers from a B into an A. It’s basically the same thing in principle. I need to see something written in a way I would not write it or I start talking in circles/get too wordy. And yes this comment is an example of that haha
mrbluecoat · 1h ago
I avoided cell phones too when they first came out. I didn't want the distraction or "digital leash". Now it's a stable fixture in my life. Some technology is simply transformational and is just a matter of time until almost everyone comes to accept it at some level. Time will tell if AI breaks through the hype curve but my gut feeling is it will within 5 years.
GlacierFox · 1h ago
My phone is a fixture in my life, but I actually spend a lot of effort trying to rid myself of it.
The thing for me, currently, on the receiving end, is that I just don't read anything (apart from books) as if it has any semblance of authenticity anymore. My immediate assumption is that a large chunk of it, or sometimes the entire piece, has been written or substantially altered by AI. Seeing this transfer into the publishing and writing domain is simply depressing.
uludag · 1h ago
I avoided web3/crypto/bitcoin altogether when they came out. I'm happy I did and I don't see myself diving into this world anytime soon. I've also never used VR/AR, never owned a headset, never even tried one. Again, I don't see this changing any time soon.
Some technology is just capital trying to find growth in new markets and doesn't represent a fundamental value add.
ApeWithCompiler · 1h ago
True, but at least for me also true: Smartphones are a stable fixture in my life and by now I try to get rid of them as much as possible.
cheschire · 1h ago
smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.
I expect this will be around the time that websites are no longer a thing and we see companies directly pumping information into AI agents which are then postured as the only mechanism for receiving certain information.
As an example, imagine Fandango becoming such a powerful movie agent that theaters no longer need websites. You don't ask it questions. Instead, it notifies YOU based on what it knows about your schedule, your preferences, your income, etc. Right around 5pm it says "Hey did you know F1 is showing down the street from you at Regal Cinema in IMAX tonight at 7:30? That will give you time to finish your 30 minute commute and pickup your girlfriend! Want me to send her a notification that you want to do this?"
People install a litany of agents on their smartphones, and they train their agents based on their personal preferences etc, and the agents then become the advertisers directly feeding relevant and timely information to you that maximizes your spend.
MCP will probably kill the web as we know it.
TheOtherHobbes · 1h ago
That's not what will happen. The ad tech companies will pivot and start selling these services as neutral helpers, when in fact they'll use their knowledge of your schedule, preferences, and income to spend money on goods and services you don't really want.
It will be controlling and disempowering - manipulative personality-profiled "suggestions" with a much higher click rate than anything we have today.
And the richer you are, the more freedom you'll have to opt out and manage your own decisions.
sampl3username · 1h ago
>smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.
This. I need access to banking, maps, and 2FA. If I could use a dumb phone with just a camera, GPS, and WhatsApp, I would.
wright-goes · 1h ago
Access to banking is indeed critical, but when? And for 2FA, which accounts, and when? As bank apps become more invasive and they also fail to offer substantive 2FA (e.g. the forcing of text messaging as a 2FA option falls outside my risk tolerance), I've segmented my devices' access.
The ability to transfer funds is something I'm now fine doing via a dedicated device with a dedicated password manager account, and I'm fine uninstalling banks' apps from my phone and dis-enrolling cell phone numbers.
Given the wanton collection and sale of my data by many entities I hadn't expected (naivety on my part), I've restricted access to critical services by device and or web browser only. It's had the added bonus of making me more purposeful in what I'm doing, albeit at the expense of a convenience. Ultimately, I'm not saying my approach is right for everyone, but for me it's felt great to take stock of historical behavior and act accordingly.
coliveira · 1h ago
If this happens I have an excellent business strategy. Human concierges that will help people with specific areas of their lives. Sell a premium service where paid humans will interact with all this noise so clients will never have to talk to machines.
threatripper · 1h ago
What AI currently lacks is mainly context. A well trained, experienced human knows their reader very well: they know what they don't need to write, and for what they do write, they know the tone to hit. I fully expect this to turn around in the future. The author will write the facts and framework with the help of AI, and your AI will extract and condense it for your consumption. Your AI knows everything about you, everything you've ever consumed, how you think, and what it needs to tell you, in which tone, to give you the best experience. You will be informed better than ever before. The future of AI will be bright!
tolerance · 2h ago
For things like coding LLMs are useful and DEVONThink's recent AI integrations allow me to use local models as something like an encyclopedia or thesaurus to summarize unfamiliar blocks of text. At best I use it like scratch paper.
I formed the habit of exporting entire chats to Markdown and found them useless. Whatever I found useful in a given response either sparked a superseding thought of my own or was just a reiteration of my own intuition.
I've moved from ChatGPT to Claude. The results are practically the same as far as I can tell (although my gut tells me I get better code from Claude), but I think Anthropic has a better feel for response readability. Sometimes processing a ChatGPT response is like reading a white paper.
Other than that, LLMs get predictable to me after a while and I get why people suspect that they're starting to plateau.
mobeets · 2h ago
I'm with you. I think you did a good job of summarizing all the places where LLMs are super practical/useful, but agreed that for prose (as someone who considers themselves a proficient writer), they just never seem to contribute anything useful. For those who are not proficient writers, I'm sure it can be helpful, but it certainly doesn't contribute any new ideas if you're not providing them.
jml78 · 2h ago
I am not a writer. My oldest son, 16, started writing short stories. He did not use AI for any of the words on the page. I did, however, recommend that he feed his stories to an LLM and ask for feedback on things that are confusing or unclear, or holes in the plot.
Not to take any words it gives, but to read what it says and decide whether those things are true, and if so, make edits. I am not saying it is a great editor, but it is better than any other resource he has access to as a teenager. Yeah, better than me or his mom.
moregrist · 2h ago
Have you looked for:
- Writing groups. They often have sessions that provide feedback and also help writers find/build a sense of community. Your son would also get to listen to other writers talk about their work, problems they’ve run into and overcome, and other aspects of their craft.
- School (sometimes library) writing workshops. This helps students develop bonds with their peers and helps both students: the ones giving feedback are learning to be better editors.
Both of these offer a lot of value in terms of community building and also getting feedback from people vested in the craft of writing.
jml78 · 1h ago
Good feedback. We live a somewhat unusual lifestyle: we are digital nomads who live on a sailboat. I think some of that is possible, and I will recommend he look for some online writing groups, but the places we generally sail to are countries where schools/libraries aren't going to have those kinds of things. It's challenge enough flying him back to the US to take AP exams.
ryeats · 1h ago
The open question is whether someone who learns this way will actually develop taste and mastery. I think the answer is mixed: some will use it as a crutch, but it will also give them a little insight beyond what they could learn by reading, and inquisitive minds will be able to grow discerning.
zB2sj38WHAjYnvm · 2h ago
This is very sad.
endemic · 2h ago
Why? Seems like a good idea, relying on the LLM to write for you won’t develop your skills, but using it as an editor is a good middle ground. Also there’s no shame in saying an LLM is “better” than you at a task.
ryanblakeley · 1h ago
Creative expression is also about relationships with other people and connecting with an audience. Treating it like product optimization seems hollow and lonely. There's friction to asking another person to read and give feedback on something you wrote, but it's the kind of friction that helps you grow.
sampl3username · 1h ago
Art is fundamentally a human activity. No amount of artistic work can be delegated to a machine, or else the art is dehumanised.
strken · 45m ago
This seems like it would ban drawing tablets, musical instruments, and a lot of other things which seem silly to ban.
zaphod420 · 2h ago
It's not sad, it's using modern tools to learn. People that don't embrace the future get left behind.
DanHulton · 1h ago
You say that as if it's a justification, not an observation.
For one, the world doesn't need to be that way, i.e. we don't need to "leave behind" anyone who doesn't immediately adopt every single piece of new technology. That's simple callousness and doesn't need to be ruthlessly obeyed.
And for two, it's provably false. What is "the future?" VR? The metaverse? Blockchain? NFTs? Hydrogen cells? Driverless cars? There has been exactly ZERO penalty for not embracing any of these, all sold to us by hucksters as "the future".
We're going to have to keep using a classic piece of technology for a while yet, the Mark 1 Human Brain, to properly evaluate new technology and its place in our society, and we oughtn't rely on profound-seeming but overly simplistic quotes like that.
Be a little more discerning, and think for yourself before you lose the ability to.
jml78 · 1h ago
Dan,
Do you have kids? Outside of discipline, and even there, I want to have a positive relationship with my sons.
My oldest knows that I am not a writer. There are a ton of areas where I can give legit good advice, and I can have a fun conversation about his stories, but I have no qualifications to tell him what he might want to change. I can say what I like, but my likes/dislikes are not what an editor offers. I actually stay away from dislikes of his writing, because who cares what I don't like.
I would rather encourage him to write, write more, and get some level of feedback even if I don’t think my feedback is valuable.
LLMs have been trained on likely all published books, it IS more qualified than me.
If he continues to write and gets good enough, sure, he should seek a human editor.
But I never want to be the reason he backs away from something because my feedback was wrong. It is easier for people to take critical feedback from a computer than from their parents. Kids want to please, and I don't want him writing stuff because he thinks it will be up my alley.
stefanka · 36m ago
> But I never want me to be a reason he backs away from something because my feedback was wrong.
Do you want an LLM to be the reason? You can explain that your feedback is opinionated or biased. And you know him better than any machine ever will.
skydhash · 1h ago
> LLMs have been trained on likely all published books, it IS more qualified than me.
It has also been trained on worthless comments on the internet, so that's not a great indicator.
jml78 · 2h ago
Exactly, I would rather read his stories and discuss them with him. My advice on anything outside of pure opinion is invalid
IanCal · 1h ago
Having something else help doesn’t preclude reading with them - it also may have better advice. Very rarely is anyone suggesting an all or nothing approach when talking about adding a tool.
SV_BubbleTime · 1h ago
Large Language Model, not Large Fact Model.
tombarys · 4h ago
I am a book publisher & I love technology. It can empower people. I have been using LLM chatbots since they became widely available. I regularly test machine translation at our publishing house in collaboration with our translators. I have just completed two courses in artificial intelligence and machine learning at my alma mater, Masaryk University, and I am training my own experimental models (for predicting bestsellers :). I consider machine learning to be a remarkable invention and catalyst for progress. Despite all this, I have my doubts.
esjeon · 21m ago
I know a publisher who translates books (English to Korean). He works alone these days. Using GPT, he can produce a decent-quality first draft within a day or two. His later steps are also vastly accelerated because GPT reliably catches typos and grammar errors. It doesn't take more than a month to translate and print a book from scratch. Marvelous.
But I still don't like that the same model struggles w/ my projects...
metalrain · 46m ago
Pretty similar view to what others have expressed, in the vein of "LLMs can be good, just not at my [area of expertise]".
esjeon · 41m ago
I'm pretty sure they were generally (if not completely) correct when they said that.
Either the tech is advancing so quickly that many people can't keep up, or the cost of adapting simply outweighs the potential profit over their remaining careers, even when taking the new tech into account.
johnnyfived · 11m ago
What's interesting about thinking of code as art is that there is rarely a variety of implementations that are all optimal. So if you decide on the implementation and have an LLM code it, you likely won't need to make major changes, given the right guidelines.
Writing is entirely different, and for some reason generic writing, even when polished (the ChatGPT-esque tone), is so much more intolerable than, say, AI-generated imagery. Images can blend into the background; reading takes active processing, so we're much more sensitive. And the end user of a product cares zero, or next to zero, about AI code.
kelvinjps10 · 1h ago
What about grammar and spelling corrections?
shakna · 44m ago
Not the author, but another author here and...
Well, it has a problem with my use of the Oxford comma, for one. Because a huge amount of the corpus is American English, and mine ain't. So it fails on grammar repeatedly.
And if you introduce any words of your own, it will sometimes randomly correct them to something else, and randomly correct other words to the made up ones. And it can't always tell when it's made such a change. And sometimes it does that even if you're just mixing existing languages like French or English. So you can make it useless for spellcheck by touching more than one language.
I do keep trying, despite the fact my stuff has been stolen and is in the training data, because of all the proselytising, but right now... No AI is useful for my writing. Not even just for grammar and spelling.
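The made-up-words failure described above has a conventional workaround worth noting: keep a personal word list and filter those tokens out before any checker sees them, so invented or mixed-language words can never be "corrected." A minimal sketch, with hypothetical names and an invented word list; no real spellchecker's API is assumed:

```python
import re

# Hypothetical personal dictionary: invented and foreign words the
# downstream checker must never touch.
PERSONAL_DICT = {"skyhollow", "brouillard"}

def tokens(text: str) -> list[str]:
    """Split text into word-like tokens, keeping apostrophes and hyphens."""
    return re.findall(r"[A-Za-z'-]+", text)

def words_to_check(text: str, personal=PERSONAL_DICT) -> list[str]:
    """Return only the tokens a downstream spellchecker should see."""
    return [t for t in tokens(text) if t.lower() not in personal]
```

The filtering happens before the checker runs, which is the key design choice: a checker that never sees a word can neither flag it nor "fix" other words into it.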
mrits · 1h ago
I think there are a lot of good reasons to be cognitively lazy. Now might not be the time to learn how something works.
That's AI.
It is not a reason to accept stupidity or incompetence. We should reject these things and demand better.
now imagine he can be scaled indefinitely
you thought software was bad today?
imagine Microsoft Teams in 5 years time
In my case, I find the value with LLMs with respect to writing is consolidation. Use it to make outlines, not writing. One example is I record voice memos when driving or jogging and turn them into documents that can be the basis for all sorts of things. End of the day it saves me alot of time and arguably makes me more effective.
AI goes bad because it’s not smart, and it will pretend that it is. Figure out the things it does well for your scenario and exploit it.
When I use Claude for code, for example, I am not asking it to write my code. I'm asking it to review what I have written and either suggest improvements or ways to troubleshoot a problem I am having. I also don't always follow its advice, either, but that depends on how much I understand the reply. Sometimes it outputs something that makes sense based on my current skill level, sometimes it proposes things that I know nothing about, in which case I ask it to break it down further so I can go search the Internet for more info and see if I can learn more, which pushes the limits of my skill level.
It works well, since my goal is to improve what I bring to the table and I have learned a lot, both about coding and about prompt engineering.
When I talk to other people, they accuse me of having the AI do all the work for me because that's how they approach their use of it. They want the AI to produce the whole project, as opposed to just using it as a second brain to offload some mental chunking. That's where Gen AI fails and the user spends all their time correcting convoluted mistakes caused by confabulation, unless they're making a simple monolithic program or script, but even then there's often hiccups.
Point is, Gen AI is a great tool, if you approach it with the right mindset. The hammer does not build the whole house, but it can certainly help.
It works but it simply not what most people want. If you love to code then you just abstracted away the most fun parts and have to only do the boring parts now. If you love to manage, well managing actual humans and seeing them grow and become independent is much more fulfilling.
On a side note, I feel like prompting and context management is something that is easier for me personally as a person with ADHD as I am already used to working with forms of intelligence that are different to my own. I am used to having to explicitly state my needs. My neurotypical co-workers get frustrated that the LLM can't read their minds and always tell me that it should know what they want. When it nudge them to give it more context and explain better what they need they often resist and say they shouldn't have to. Of course I am stereotyping a bit here but still an interesting observation.
Prompting is indeed a skill. Though I believe the skill ceiling will lower once tools get better so I wouldn't bank too much on it. What is going to be valuable for a long time is probably general software architecture skills.
People keep writing this sentence as if they aren't talking to the most tool-ed up group of humans in history.
I have no problems learning tools, from chorded key shortcuts to awk/sed/grep to configuring all three of my text editors (vim, sublime, and my IDE) to work for their various tasks.
Hell, I have preferred ligature fonts for different languages.
Sometimes tools aren't great and make your life harder, and it's not because folks aren't willing to learn the tool.
Then again, maybe I'm too old now and being left behind if I remember the old hype like this....
The entirety of the tech field is constantly hyping the current technology out of FOMO. Whether or not it works out in the future it's always the same damn argument.
If LLMs eventually become useful to me, I’ll adopt LLMs, I suppose. Until then, we’ll, fool me once…
I personally think an LLM could help with some of this, and this is something I've been thinking about the past few days. But I'd have to build a pipeline and figure out a way to make it amplify what I like about my voice rather than have me speak through its voice.
I used to have a sort of puritanical view of art. And I think a younger version of myself would have been low key horrified at the amount of work in great art that was delegated to assistants. E.g. a sculptor (say Michelangelo) would typically make a miniature to get approval from patrons and the final sculpture would be scaled up. Hopefully for major works, the master was closely involved in the scaling up. But I would bet that for minor works (or maybe even the typical work) assistants did a lot of the final piece.
The same happens (and has always happened) with successful authors. Having assistants do bits here or there. Maybe some research, maybe some corrections, maybe some drafts. Possibly relying on them increasingly as you get later in your career or if you're commercially successful enough to need to produce at greater scale.
I think LLMs will obviously fit into these existing processes. They'll also be used to generate content that is never checked by a human before shipping. I think the right balance is yet to be seen, and there will always be people who insist on more deliberate and slower practices over mass production.
[0] Aside from internet comments of course, which are mostly stream of consciousness.
https://en.wikipedia.org/wiki/David_(Michelangelo)#Process
Maybe later he got lazier. I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).
Many authors simply could not afford even research help.
When I was in high school I really leaned on friends for edits. Not just because of the changes they would make (though they often did make great suggestions), but for the changes I would make to their changes after. That’s what would inevitably turn my papers from a B into an A. It’s basically the same thing in principle. I need to see something written in a way I would not write it or I start talking in circles/get too wordy. And yes this comment is an example of that haha
Some technology is just capital trying to find growth in new markets and doesn't represent a fundamental value add.
I expect this will be around the time that websites are no longer a thing and we see companies directly pumping information into AI agents which are then postured as the only mechanism for receiving certain information.
As an example, imagine Fandango becoming such a powerful movie agent that theaters no longer need websites. You don't ask it questions. Instead, it notifies YOU based on what it knows about your schedule, your preferences, your income, etc. Right around 5pm it says "Hey did you know F1 is showing down the street from you at Regal Cinema in IMAX tonight at 7:30? That will give you time to finish your 30 minute commute and pickup your girlfriend! Want me to send her a notification that you want to do this?"
People install a litany of agents on their smartphones, and they train their agents based on their personal preferences etc, and the agents then become the advertisers directly feeding relevant and timely information to you that maximizes your spend.
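The scenario above can be sketched as a toy decision rule. Everything here is hypothetical and invented for illustration (the function name, the preference and movie data) — just a minimal picture of the kind of check such an agent might run before pinging you:

```python
from datetime import datetime, timedelta

def should_notify(now, showtime, commute, prefs, movie):
    """Hypothetical agent rule: suggest a showing only if the user
    likes the genre and can arrive on time after their commute."""
    arrival = now + commute
    return movie["genre"] in prefs["genres"] and arrival <= showtime

# Invented sample data matching the Fandango example above.
now = datetime(2025, 1, 1, 17, 0)          # 5:00 pm, end of workday
showtime = datetime(2025, 1, 1, 19, 30)    # 7:30 pm IMAX showing
commute = timedelta(minutes=30)            # 30-minute drive
prefs = {"genres": {"racing", "drama"}}
movie = {"title": "F1", "genre": "racing"}

print(should_notify(now, showtime, commute, prefs, movie))  # True
```

A real agent would of course fold in far more signals (calendar, income, social graph), which is exactly what makes the business model above so attractive to advertisers.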
MCP will probably kill the web as we know it.
It will be controlling and disempowering - manipulative personality-profiled "suggestions" with a much higher click rate than anything we have today.
And the richer you are, the more freedom you'll have to opt out and manage your own decisions.
This. I need access to banking, maps, and 2FA. If I could use a dumb phone with just a camera, GPS, and WhatsApp, I would use it.
The ability to transfer funds is something I'm now fine doing via a dedicated device with a dedicated password manager account, and I'm fine uninstalling banks' apps from my phone and dis-enrolling cell phone numbers.
Given the wanton collection and sale of my data by many entities I hadn't expected (naivety on my part), I've restricted access to critical services by device and or web browser only. It's had the added bonus of making me more purposeful in what I'm doing, albeit at the expense of a convenience. Ultimately, I'm not saying my approach is right for everyone, but for me it's felt great to take stock of historical behavior and act accordingly.
I formed the habit of exporting entire chats to Markdown and found them useless. Whatever I found useful in a given response either sparked a superseding thought of my own or was just a reiteration of my own intuitive thoughts.
I've moved from ChatGPT to Claude. The results are practically the same as far as I can tell (although my gut tells me I get better code from Claude), but I think Anthropic has a better feel for response readability. Sometimes processing a ChatGPT response is like reading a white paper.
Other than that, LLMs get predictable to me after a while and I get why people suspect that they're starting to plateau.
Not to take any words it gives, but to read what it says and decide whether those things are true, and if so, make edits. I am not saying it is a great editor, but it is better than any other resource he has access to as a teenager. Yeah, better than me or his mom.
- Writing groups. They often have sessions that provide feedback and also help writers find/build a sense of community. Your son would also get to listen to other writers talk about their work, problems they’ve run into and overcome, and other aspects of their craft.
- School (sometimes library) writing workshops. This helps students develop bonds with their peers and helps both students: the ones giving feedback are learning to be better editors.
Both of these offer a lot of value in terms of community building and also getting feedback from people vested in the craft of writing.
For one, the world doesn't need to be that way, i.e., we don't need to "leave behind" anyone who doesn't immediately adopt every single piece of new technology. That's simple callousness and doesn't need to be ruthlessly obeyed.
And for two, it's provably false. What is "the future?" VR? The metaverse? Blockchain? NFTs? Hydrogen cells? Driverless cars? There has been exactly ZERO penalty for not embracing any of these, all sold to us by hucksters as "the future".
We're going to have to keep using a classic piece of technology for a while now, the Mark 1 Human Brain, to properly evaluate new technology and what its place in our society is, and we oughtn't be reliant on profound-seeming but overly simplistic quotes like that.
Be a little more discerning, and think for yourself before you lose the ability to.
Do you have kids? Outside of discipline, and even there, I want to have a positive relationship with my sons.
My oldest knows that I am not a writer, but there are a ton of areas where I can give legit good advice. I can actually have a fun conversation about his stories, but I have no qualifications to tell him what he might want to change. I can say what I like, but my likes/dislikes are not what an editor does. I actually stay away from dislikes on his writing, because who cares what I don't like.
I would rather encourage him to write, write more, and get some level of feedback even if I don’t think my feedback is valuable.
LLMs have likely been trained on all published books; an LLM IS more qualified than me.
If he continues to write and gets good enough should he seek a human editor sure.
But I never want to be the reason he backs away from something because my feedback was wrong. It is easier for people to take critical feedback from a computer than from their parents. Kids want to please, and I don't want him writing stuff because he thinks it will be up my alley.
Do you want an LLM to be the reason? You can explain that your feedback is opinionated or biased. And you know him better than any machine ever will.
It has also been trained on worthless comments on the internet, so that’s not a great indicator.
But I still don't like that the same model struggles w/ my projects...
Either the tech is advancing so quickly that many people can't keep up, or the cost of adapting simply outweighs the potential profit from their remaining careers, even when taking the new tech into account.
Writing is entirely different, and for some reason, generic writing, even when polished (the ChatGPT-esque tone), is so much more intolerable than, say, AI-generated imagery. Images can blend into the background; reading takes active processing, so we're much more sensitive. And the end users of a product care zero, or next to zero, about AI code.
Well, it has a problem with my use of the Oxford comma, for one. Because a huge amount of the corpus is American English, and mine ain't. So it fails on grammar repeatedly.
And if you introduce any words of your own, it will sometimes randomly correct them to something else, and randomly correct other words to the made up ones. And it can't always tell when it's made such a change. And sometimes it does that even if you're just mixing existing languages like French or English. So you can make it useless for spellcheck by touching more than one language.
I do keep trying, despite the fact my stuff has been stolen and is in the training data, because of all the proselytising, but right now... No AI is useful for my writing. Not even just for grammar and spelling.