I avoid using LLMs as a publisher and writer

152 points | tombarys | 98 comments | 7/19/2025, 10:51:59 AM | lifehacky.net ↗

Comments (98)

ryeats · 4h ago
You know that teammate who makes more work for everyone else on the team? The one who does what they're asked, but in the most buggy and incomprehensible way, so that when you finally get them to move on to another team you realize how much time you spent corralling them and fixing their subtle bugs, and now that they're gone work doesn't seem like so much of a chore.

That's AI.

Spooky23 · 3h ago
Just as with a poorly managed team, you need to learn how to manage AI to get value from it. All ambiguous processes are like this.

In my case, the value I find in LLMs for writing is consolidation. Use them to make outlines, not prose. For example, I record voice memos when driving or jogging and turn them into documents that can be the basis for all sorts of things. At the end of the day it saves me a lot of time and arguably makes me more effective.

AI goes bad because it’s not smart, and it will pretend that it is. Figure out the things it does well for your scenario and exploit it.
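
For the curious, here is a minimal sketch of that memo-to-outline pipeline, assuming the OpenAI Python SDK; the model names, prompt, and file name are illustrative, not a recommendation:

    # Sketch: turn a voice memo into an outline (consolidate, don't compose).
    # Assumes the OpenAI Python SDK; models, prompt, and file are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def memo_to_outline(audio_path: str) -> str:
        # Step 1: transcribe the voice memo.
        with open(audio_path, "rb") as f:
            transcript = client.audio.transcriptions.create(
                model="whisper-1", file=f
            )
        # Step 2: ask for an outline, not prose.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Turn this rambling transcript into a terse "
                            "outline. Keep the speaker's ideas and wording; "
                            "add nothing of your own."},
                {"role": "user", "content": transcript.text},
            ],
        )
        return response.choices[0].message.content

    print(memo_to_outline("jog-memo.m4a"))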

blibble · 4h ago
> You know that teammate

now imagine he can be scaled indefinitely

you thought software was bad today?

imagine Microsoft Teams in 5 years' time

darthcircuit · 2h ago
I’m not even looking forward to Microsoft teams on Monday.
ThatMedicIsASpy · 3h ago
I only need to look at the past 5 years of Windows
DavidPiper · 4h ago
We need to update Hanlon's Razor: Never attribute to AI that which is adequately explained by incompetence.
xerox13ster · 4h ago
And just like the original Hanlon’s Razor, this is not an excuse to be stupid or incompetent.

It is not a reason to accept stupidity or incompetence. We should reject these things and demand better.

chistev · 2h ago
Thank you.
bdangubic · 3h ago
smart people are reading comments like this and going “I am glad I am in the same market as people making such comments” :)
ookblah · 1h ago
seriously, the near future is going to be:

1) People who reject it completely, for whatever reason.

2) People who use it lazily and produce a lot of garbage (let's be honest, this is probably going to happen a lot, which may be why group #1 hates this future; it reminds me of the outsourcing era).

3) People who selectively use it to their advantage.

no point in groups 1 and 3 trying to convince each other of anything.

cgriswald · 50m ago
I think that has been the state of affairs for a while now.

I think your explanation for group 1 is true to a degree, but I have two additional explanations: (1) some element of group 1 is ideologically opposed, whether over copyright, or Luddism, or some other concern for our fellow humans; (2) some are deluded into thinking there are only two groups and that group 3 people are all delusional.

Although it is probably an uphill battle I do think both groups 1 and 3 have things to learn from each other.

IAmGraydon · 2h ago
I’m glad for now. Understanding how to utilize AI to your advantage is still an edge at the moment, but it won’t be long before almost everyone figures it out.
raincole · 1h ago
Yeah. Interestingly enough, I've found utilizing AI is a very shallow skill that anyone should be able to learn in days. But (luckily) people have some tendency that prevents them from doing so.
bdangubic · 2h ago
it’ll be years, because 87.93% of SWEs are subpar, like the post I commented on.
bambax · 4h ago
I'm extremely wary of AI myself, especially for creative tasks like writing or making images, etc., but this feels a little over the top. If you let it run wild then yes the result is disaster, but for well defined jobs with a small perimeter AI can save a lot of time.
runiq · 1h ago
In the context of code, where review bandwidth is the bottleneck, I think it's spot on. In the arts, comparatively -- be they writing, drawing, or music -- you can feel almost at a glance that something is off. There's a bit of a vibe check thing going on, and if that doesn't pass, it's back to the drawing board. You don't inherit technical debt like you do with code.
0xEF · 4h ago
You are not wrong, but I pose the argument that too many people approach Gen AI as a replacement instead of a tool, and therein lies the root of the problem.

When I use Claude for code, for example, I am not asking it to write my code. I'm asking it to review what I have written and either suggest improvements or ways to troubleshoot a problem I am having. I don't always follow its advice, either; that depends on how much of the reply I understand. Sometimes it outputs something that makes sense at my current skill level; sometimes it proposes things I know nothing about, in which case I ask it to break things down further so I can go search the Internet for more info and see if I can learn more, which pushes the limits of my skill level.

It works well, since my goal is to improve what I bring to the table and I have learned a lot, both about coding and about prompt engineering.

When I talk to other people, they accuse me of having the AI do all the work for me, because that's how they approach their own use of it. They want the AI to produce the whole project, as opposed to using it as a second brain to offload some mental chunking. That's where Gen AI fails, and the user ends up spending all their time correcting convoluted mistakes caused by confabulation, unless they're making a simple monolithic program or script, and even then there are often hiccups.

Point is, Gen AI is a great tool, if you approach it with the right mindset. The hammer does not build the whole house, but it can certainly help.
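
For concreteness, here is a minimal sketch of that review-only loop, assuming the Anthropic Python SDK; the model name, prompt, and file name are illustrative:

    # Sketch: ask Claude to review code I wrote, not to write it for me.
    # Assumes the Anthropic Python SDK; model name and prompt are illustrative.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def review_my_code(source: str) -> str:
        message = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            system="Review this code. Suggest improvements and point out "
                   "likely bugs, but do not rewrite it for me.",
            messages=[{"role": "user", "content": source}],
        )
        return message.content[0].text

    with open("my_module.py") as f:
        print(review_my_code(f.read()))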

cardanome · 4h ago
Generative AI is like micromanaging a talented junior dev who never improves. And I mean micromanaging to such a toxic degree that no human would ever put up with it.

It works, but it's simply not what most people want. If you love to code, then you've just abstracted away the most fun parts and now only do the boring parts. If you love to manage, well, managing actual humans and seeing them grow and become independent is much more fulfilling.

On a side note, I feel like prompting and context management are easier for me personally, as a person with ADHD, because I am already used to working with forms of intelligence that are different to my own. I am used to having to explicitly state my needs. My neurotypical co-workers get frustrated that the LLM can't read their minds and always tell me that it should know what they want. When I nudge them to give it more context and explain better what they need, they often resist and say they shouldn't have to. Of course I am stereotyping a bit here, but it's still an interesting observation.

Prompting is indeed a skill. Though I believe the skill ceiling will lower once tools get better so I wouldn't bank too much on it. What is going to be valuable for a long time is probably general software architecture skills.

nathan_douglas · 1h ago
I don't disagree with anything you've said, but I _do_ think I'm starting to enjoy this workflow. I don't mind the micromanagement because it's usually the ideas that appeal most to me, not the line-level details of writing code. I suppose I fit in somewhere between the "love to code" and "love to manage" dichotomy you've presented. Perhaps I love to make it look like I have coded? :)

I set up SSH certificates in my homelab last night with Claude Code. It was a somewhat aggravating process - I had to remind it a couple times of some syntax issues, and I'm not sure that it actually took less time than I would've taken to do it myself. And it also locked me out of my cluster when it YOLO'ed some changes it should not have. On the whole, one of the worst AI experiences I've had recently.

But I'm thrilled with it, TBH, because it got done, it works, I didn't have to beat my head against the wall for each little increment of progress, and while Claude Code was beating its own head against the wall, I was able to relax and 1) practice my French, and 2) read my book (Steven Levy's _Artificial Life_, which I recently saw excerpted on HN).

The general state of things is probably still pretty terrible. I know there's no end of irritations I have with Claude Code, and everything else I've looked at is even less pleasant. But I feel like this might be going in a good direction.

*EDIT*: It should go without saying that I'd much rather be mentoring a junior person, though, as you said.

scarecrowbob · 3h ago
"Gen AI is a great tool, if you approach it with the right mindset."

People keep writing this sentence as if they aren't talking to the most tool-ed up group of humans in history.

I have no problems learning tools, from chorded key shortcuts to awk/sed/grep to configuring all three of my text editors (vim, sublime, and my IDE) to work for their various tasks.

Hell, I have preferred ligature fonts for different languages.

Sometimes tools aren't great and make your life harder, and it's not because folks aren't willing to learn the tool.

ninetyninenine · 2h ago
They write that sentence because gen ai has been effective for them.

We have intelligent people using ai and claiming it’s useful.

And we have other intelligent people saying it’s not useful.

I’m inclined to believe the former. You can’t be deluded about usefulness. But you can be deluded about the negative, simply by using the LLM in a half-assed way and picking the most convenient conclusion without nuance.

runiq · 1h ago
> You can’t be deluded about usefulness.

If you honestly believe that, I've got a bridge to sell you.

ninetyninenine · 38m ago
How can you be deluded? Everyone has used it. They literally see the positive results. It’s not speculative.

But you can miss the positive results if you haven’t used LLMs recently or used agentic AI like Cursor. It’s easy to miss the positives.

billy99k · 4h ago
You can think that... and you will eventually be left behind. AI is not going anywhere and can be used as a performance booster. Eventually, it will be a requirement for most tech-based jobs.
andersmurphy · 4h ago
This reminds me of crypto’s “have fun being poor”. Except now it’s “have fun being left behind/being unemployed”. The more things change the more things stay the same.
billy99k · 1h ago
A bit different when you actually see the results.

A guy I went to high school with complains endlessly about AI-generated art and graphics (he's an artist) and, like you, just wants to bury his head in the sand.

Consumers don't care if art is generated by AI or humans and in a short period of time, you won't be able to tell the difference.

With the money being poured into AI by all major tech companies, you will be unemployed if you don't keep up with AI.

abenga · 1h ago
We care. If I get a video recommendation on YouTube and it is AI-created, I blacklist the channel. I will never listen to AI music. Even with articles, the only way I will keep reading someone's writing is if I never find out they use it. I consume media and art to commune with my fellow man, not to look at pretty bitmaps and read mere strings of prose.
billy99k · 1h ago
You are not the average consumer.
andersmurphy · 1h ago
If the last few years of the AI hype cycle have taught me anything, it's that there's a massive late-mover advantage.

Anyone who spent time learning the AI tools over that period of time has basically wasted their time. Working with agents is nothing like prompt engineering. I imagine whatever comes after will be nothing like agents etc. Sounds like those who try to keep up with AI will be equally unemployed.

billy99k · 1h ago
If the HN community is an example of this, they will be left behind regardless, because they will avoid all tooling and the benefits that come along with it.

I suppose I shouldn't care too much. Less competition for people like me that have embraced the change.

andersmurphy · 30m ago
Thing is, short/medium-term VC subsidies require lots of users to embrace AI. If they don't, the money dries up and you end up paying the full price for these models, which are currently heavily discounted (this is an understatement). How much are you currently paying for your usage: $20/m? $200/m? How does that look when it's $2,000/m? $20,000/m?
billy99k · 13m ago
With all of the competition in big tech, prices will go down.
mwigdahl · 2h ago
Yes, and it was exactly the same with compilers. All hype and fad -- everyone who's serious about software development writes in assembly.
andersmurphy · 1h ago
It's a false comparison: compilers are deterministic. The only probabilistic behavior I've seen has been for performance (query planning/branch prediction).

I mean, you're not wrong that the serious people drop into assembly when they need to. Even if you work in a context where you can't or don't drop down into assembly, being able to make your own compilers is incredibly useful.

sampl3username · 4h ago
Left behind what? Consumeristic trash?
dragontamer · 4h ago
Don't you see that the future is XML SOAP RPCs? If you don't master this new technology now, you'll be left behind!!

Then again, maybe I'm too old now and being left behind if I remember the old hype like this....

The entirety of the tech field is constantly hyping the current technology out of FOMO. Whether or not it works out in the future it's always the same damn argument.

ryeats · 4h ago
I was being a bit melodramatic. I'll use it occasionally, and if AI gets better it can join my team again. I don't love writing boilerplate; I just know it's not good at writing maintainable code yet.
rsynnott · 4h ago
I mean, the promoters of every allegedly productivity-improving fad have been saying this sort of thing for all of the twenty-odd years I’ve been in the industry.

If LLMs eventually become useful to me, I’ll adopt LLMs, I suppose. Until then, well, fool me once…

BrouteMinou · 4h ago
When all you got is pontificating...
threatripper · 4h ago
You sound bitter. Did you try using more AI for the bug fixing? It gets better and better.
ryeats · 4h ago
My interests tend to be bleeding edge, where there is little training data. I do use AI to rubber-duck but can rarely use its output directly.
threatripper · 4h ago
I see. In my experience current LLMs are great for generating boilerplate code for basic UIs but fail at polishing UI and business logic. If it's important you need to rewrite the core logic completely because they may introduce subtle bugs due to misunderstandings or sloppiness.
ryeats · 4h ago
Yep, you are also right. Some amount of boilerplate code is perfectly reasonable, since some problems are similar but just different enough, and unique enough, that they don't merit designing an architecture that gets rid of the boilerplate. This is probably the most useful thing AI could do for us. As a maintainer, though, I worry that we won't notice we are copying all that boilerplate too often, that its subtle bugs are multiplied, and that now we have to maintain all that code, because AI doesn't yet do that.
skydhash · 4h ago
Cognitive load isn’t related to the difficulty of a task; it’s about how much mental energy is spent monitoring it. To reduce cognitive load, you either boost confidence or stop caring. You can’t have confidence in AI output, and most people proposing it seem to be preaching not caring about quality (because quantity, yay).
threatripper · 3h ago
But quality is going up a lot. Granted, it's not up to human levels yet, but it is going up fast. Also we will see more complex quality control in AI output, tailored to specific use cases and sold at a premium. Right now these don't exist and if they existed it would be too expensive to run 100x requests for the same amount of output. So humans are stuck in quality control, for now.
Arainach · 59m ago
One of the biggest problems with AI is that it doesn't get better and better. It makes the same mistakes over and over instead of learning like a junior eng would.

AI is like the absolute worst outsourced devs I've ever worked with - enthusiastically saying "yes I can do that" to everything and then delivering absolute garbage that takes me longer to fix/convince them to do right than it would have taken for me to just do it myself.

ants_everywhere · 4h ago
My writing style is pretty labor intensive [0]. I go through a lot of drafts and read things out loud to make sure they work well etc. And I tend to have a high standard for making sure I source things.

I personally think an LLM could help with some of this, and this is something I've been thinking about the past few days. But I'd have to build a pipeline and figure out a way to make it amplify what I like about my voice rather than have me speak through its voice.

I used to have a sort of puritanical view of art. And I think a younger version of myself would have been low key horrified at the amount of work in great art that was delegated to assistants. E.g. a sculptor (say Michelangelo) would typically make a miniature to get approval from patrons and the final sculpture would be scaled up. Hopefully for major works, the master was closely involved in the scaling up. But I would bet that for minor works (or maybe even the typical work) assistants did a lot of the final piece.

The same happens (and has always happened) with successful authors. Having assistants do bits here or there. Maybe some research, maybe some corrections, maybe some drafts. Possibly relying on them increasingly as you get later in your career or if you're commercially successful enough to need to produce at greater scale.

I think LLMs will obviously fit into these existing processes. They'll also be used to generate content that is never checked by a human before shipping. I think the right balance is yet to be seen, and there will always be people who insist on more deliberate and slower practices over mass production.

[0] Aside from internet comments of course, which are mostly stream of consciousness.

bgwalter · 3h ago
Michelangelo worked alone on the David for more than two years:

https://en.wikipedia.org/wiki/David_(Michelangelo)#Process

Maybe later he got lazier. I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).

Even research many authors simply could not afford.

ants_everywhere · 53m ago
Maybe Michelangelo was a bad choice, but I hope it's clear from my wording that I was using Michelangelo as an example and not saying anything specific about his use of assistants compared to his peers. And David is a masterpiece, not a minor work.

I don't see where the article says he worked alone on David. It does seem that he used a miniature (bozzetto) and then scaled up with a pointing machine. One possibility is he made the miniature and had assistants rough out the upscaled copy before doing the fine work himself. Essentially, using the assistants to do the work you'd do on a band saw if you were carving out of wood.

> I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).

Restricting to non-commercial authors would narrow it down, since hiring assistants to write drafts probably only makes financial sense if the cost of the assistant is less than the value of the time you would otherwise spend drafting.

Alexandre Dumas is maybe a bit higher-brow than Stephen King:

> He founded a production studio, staffed with writers who turned out hundreds of stories, all subject to his personal direction, editing, and additions. From 1839 to 1841, Dumas, with the assistance of several friends, compiled Celebrated Crimes, an eight-volume collection of essays on famous criminals and crimes from European history. https://en.wikipedia.org/wiki/Alexandre_Dumas

But in general I agree, drafts are often the heart of the work and it's where I'd expect masters to spend a lot of their time. Similarly with the statue miniatures.

netule · 3h ago
James Patterson comes to mind. He simply writes detailed outlines for the plots of his novels and has other authors write them for him. The books are then published under his name, which is more like a brand at that point.
BolexNOLA · 3h ago
At its most basic level I just like throwing things I’ve written at ChatGPT and telling it to rewrite it in “x” voice or tone, maybe condense it or expand on some element, and I just pick whatever word comes to mind for the style. Half the time I don’t really use what it spits out. I am a much stronger editor than I am a writer, so when I see things written a different way it really helps me break through writer’s block or just the inertia of moving forward on something. I just treat it like a mediocre sounding board and frankly it’s been great for that.

When I was in high school I really leaned on friends for edits. Not just because of the changes they would make (though they often did make great suggestions), but for the changes I would make to their changes after. That’s what would inevitably turn my papers from a B into an A. It’s basically the same thing in principle. I need to see something written in a way I would not write it or I start talking in circles/get too wordy. And yes this comment is an example of that haha

mrbluecoat · 4h ago
I avoided cell phones too when they first came out. I didn't want the distraction or "digital leash". Now it's a stable fixture in my life. Some technology is simply transformational and is just a matter of time until almost everyone comes to accept it at some level. Time will tell if AI breaks through the hype curve but my gut feeling is it will within 5 years.
GlacierFox · 4h ago
My phone is a fixture in my life, but I actually spend a lot of effort trying to rid myself of it. The thing for me, currently on the receiving end, is that I just don't read anything (apart from books) as though it has any semblance of authenticity anymore. My immediate assumption is that a large chunk of it, or sometimes the entire piece, has been written or substantially altered by AI. Seeing this transfer into the publishing and writing domain is simply depressing.
uludag · 4h ago
I avoided web3/crypto/bitcoin altogether when they came out. I'm happy I did and I don't see myself diving into this world anytime soon. I've also never used VR/AR, never owned a headset, never even tried one. Again, I don't see this changing any time soon.

Some technology is just capital trying to find growth in new markets and doesn't represent a fundamental value add.

cheschire · 4h ago
smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.

I expect this will be around the time that websites are no longer a thing and we see companies directly pumping information into AI agents which are then postured as the only mechanism for receiving certain information.

As an example, imagine Fandango becoming such a powerful movie agent that theaters no longer need websites. You don't ask it questions. Instead, it notifies YOU based on what it knows about your schedule, your preferences, your income, etc. Right around 5pm it says "Hey, did you know F1 is showing down the street from you at Regal Cinema in IMAX tonight at 7:30? That will give you time to finish your 30-minute commute and pick up your girlfriend! Want me to send her a notification that you want to do this?"

People install a litany of agents on their smartphones, and they train their agents based on their personal preferences etc, and the agents then become the advertisers directly feeding relevant and timely information to you that maximizes your spend.

MCP will probably kill the web as we know it.
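
For the curious, here is a toy sketch of what "pumping information into agents" can already look like with the MCP Python SDK; the server name, tool, and showtime data are all made up:

    # Sketch: a theater exposing showtimes to agents instead of a website,
    # via the MCP Python SDK (FastMCP). All names and data are made up.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("regal-cinema")

    @mcp.tool()
    def showtimes(date: str) -> list[dict]:
        """Return showtimes for the given date (YYYY-MM-DD)."""
        # A real server would query the theater's scheduling backend.
        return [{"film": "F1", "format": "IMAX", "date": date, "time": "19:30"}]

    if __name__ == "__main__":
        mcp.run()  # agents connect over stdio and call the tool directly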

TheOtherHobbes · 4h ago
That's not what will happen. The ad tech companies will pivot and start selling these services as neutral helpers, when in fact they'll use their knowledge of your schedule, preferences, and income to steer your money toward goods and services you don't really want.

It will be controlling and disempowering - manipulative personality-profiled "suggestions" with a much higher click rate than anything we have today.

And the richer you are, the more freedom you'll have to opt out and manage your own decisions.

sampl3username · 4h ago
>smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.

This. I need access to banking, maps, and 2FA. If I could use a dumb phone with just a camera, GPS, and WhatsApp, I would use it.

wright-goes · 4h ago
Access to banking is indeed critical, but when? And for 2FA, which accounts, and when? As bank apps become more invasive and they also fail to offer substantive 2FA (e.g. the forcing of text messaging as a 2FA option falls outside my risk tolerance), I've segmented my devices' access.

The ability to transfer funds is something I'm now fine doing via a dedicated device with a dedicated password manager account, and I'm fine uninstalling banks' apps from my phone and dis-enrolling cell phone numbers.

Given the wanton collection and sale of my data by many entities I hadn't expected (naivety on my part), I've restricted access to critical services by device and or web browser only. It's had the added bonus of making me more purposeful in what I'm doing, albeit at the expense of a convenience. Ultimately, I'm not saying my approach is right for everyone, but for me it's felt great to take stock of historical behavior and act accordingly.

Findecanor · 2h ago
I bought my first smartphone in 2020 after my old compact camera died, and I couldn't find a replacement to buy because they had been supplanted by smartphones.
coliveira · 4h ago
If this happens I have an excellent business strategy. Human concierges that will help people with specific areas of their lives. Sell a premium service where paid humans will interact with all this noise so clients will never have to talk to machines.
ApeWithCompiler · 4h ago
True, but at least for me this is also true: smartphones are a stable fixture in my life, and by now I try to get rid of them as much as possible.
threatripper · 4h ago
What AI currently lacks is mainly context. A well-trained, experienced human knows their reader very well and knows what they don't need to write; for what they do write, they know the tone they need to hit. I fully expect that in the future this will turn around: the author will write the facts and framework with the help of AI, and your AI will extract and condense it for your consumption. Your AI knows everything about you. It knows everything you ever consumed, how you think, and what it needs to tell you in which tone to give you the best experience. You will be informed better than ever before. The future of AI will be bright!
timeon · 2h ago
Analogies are not arguments.
magic_hamster · 35m ago
There have been quite a few skeptical blog posts about LLMs recently. Some say they won't use them for coding, others for getting creative ideas, and others won't use them for editing and publishing. However, the silent issue all these posts have in common is that resistance is futile.

To be fair, I also don't like using Copilot when working on code. In many cases it turns into a weird experience where the agent generates the next line(s) and I basically become a discriminator, judging whether the thing really understands my problem and solution. To be honest, it's boring, even if it might eventually make me turn in code faster.

With that said, I cannot ignore that LLMs are happening, and this is the future. The models keep improving but more importantly, the ecosystem keeps improving with things like MCP and better defined context for LLM tools.

We might be looking at a somewhat grim prospect. But like it or not, this is the future. Adapt and survive.

tolerance · 5h ago
For things like coding LLMs are useful and DEVONThink's recent AI integrations allow me to use local models as something like an encyclopedia or thesaurus to summarize unfamiliar blocks of text. At best I use it like scratch paper.

I formed the habit of exporting entire chats to Markdown and found them useless. Whatever I found useful in a given response either sparked a superseding thought of my own or was just a reiteration of my own intuitive thoughts.

I've moved from ChatGPT to Claude. The results are practically the same as far as I can tell (although my gut tells me I get better code from Claude), but I think Anthropic has a better feel for response readability. Sometimes processing a ChatGPT response is like reading a white paper.

Other than that, LLMs get predictable to me after a while and I get why people suspect that they're starting to plateau.

mobeets · 5h ago
I’m with you. I think you did a good job of summarizing all the places where LLMs are super practical/useful, but agreed that for prose (as someone who considers themselves a proficient writer), they just never seem to contribute anything useful. And for those who are not proficient writers, I’m sure they can be helpful, but they certainly don’t contribute any new ideas if you’re not providing them.
jml78 · 5h ago
I am not a writer. My oldest son, 16, started writing short stories. He did not use AI in any aspect of the words on the page. I did, however, recommend that he feed his stories to an LLM and ask for feedback on things that are confusing or unclear, or on holes in the plot.

Not to take any words it gives, but to read what it says, decide whether those things are true, and if so, make edits. I am not saying it is a great editor, but it is better than any other resource he has access to as a teenager. Yeah, better than me or his mom.

moregrist · 5h ago
Have you looked for:

- Writing groups. They often have sessions that provide feedback and also help writers find/build a sense of community. Your son would also get to listen to other writers talk about their work, problems they’ve run into and overcome, and other aspects of their craft.

- School (sometimes library) writing workshops. These help students develop bonds with their peers and help both sides: the students giving feedback are learning to be better editors.

Both of these offer a lot of value in terms of community building and also getting feedback from people vested in the craft of writing.

jml78 · 4h ago
Good feedback. We live a somewhat unusual lifestyle: we are digital nomads living on a sailboat. I think some of that is possible, and I will recommend he look for some online writing groups, but the places we generally sail to are countries where schools/libraries aren’t going to have those types of things. It’s challenge enough flying him back to the US to take AP exams.


ryeats · 4h ago
The open question is whether someone who learns this way will actually develop taste and mastery. I think the answer is mixed: some will use it as a crutch, but it will also give them a little insight beyond what they could learn by reading, and inquisitive minds will be able to grow discerning.
zB2sj38WHAjYnvm · 5h ago
This is very sad.
endemic · 5h ago
Why? Seems like a good idea, relying on the LLM to write for you won’t develop your skills, but using it as an editor is a good middle ground. Also there’s no shame in saying an LLM is “better” than you at a task.
ryanblakeley · 4h ago
Creative expression is also about relationships with other people and connecting with an audience. Treating it like product optimization seems hollow and lonely. There's friction to asking another person to read and give feedback on something you wrote, but it's the kind of friction that helps you grow.
sampl3username · 4h ago
Art is fundamentally a human activity. No amount of artistic work can be delegated to a machine, or else the art is dehumanised.
strken · 3h ago
This seems like it would ban drawing tablets, musical instruments, and a lot of other things which seem silly to ban.
GeoAtreides · 23m ago
In this particular instance the medium is not the message, or the art.
zaphod420 · 5h ago
It's not sad, it's using modern tools to learn. People that don't embrace the future get left behind.
DanHulton · 4h ago
You say that as if it's a justification, not an observation.

For one, the world doesn't need to be that way, i.e. we don't need to "leave behind" anyone who doesn't immediately adopt every single piece of new technology. That's simple callousness and doesn't need to be ruthlessly obeyed.

And for two, it's provably false. What is "the future?" VR? The metaverse? Blockchain? NFTs? Hydrogen cells? Driverless cars? There has been exactly ZERO penalty for not embracing any of these, all sold to us by hucksters as "the future".

We're going to have to keep using a classic piece of technology for a while yet, the Mark 1 Human Brain, to properly evaluate new technology and what its place in our society is, and we oughtn't rely on profound-seeming but overly simplistic quotes like that.

Be a little more discerning, and think for yourself before you lose the ability to.

jml78 · 4h ago
Dan,

Do you have kids? Outside of discipline, and even there, I want to have a positive relationship with my sons.

My oldest knows that I am not a writer. There are a ton of areas where I can give legit good advice, and I can actually have a fun conversation about his stories, but I have no qualifications to tell him what he might want to change. I can say what I like, but my likes/dislikes are not what an editor offers. I actually stay away from dislikes on his writing, because who cares what I don’t like.

I would rather encourage him to write, write more, and get some level of feedback even if I don’t think my feedback is valuable.

LLMs have likely been trained on all published books; it IS more qualified than me.

If he continues to write and gets good enough, then sure, he should seek a human editor.

But I never want to be the reason he backs away from something because my feedback was wrong. It is easier for people to take critical feedback from a computer than from their parents. Kids want to please, and I don’t want him writing stuff because he thinks it will be up my alley.

whoisyc · 2h ago
There is something deeply disturbing about your attitude towards making mistakes.

You think you shouldn’t give advice because your feedback is not valuable and may even cause your son to give up writing, but you have so far given no reason why AI wouldn’t. The whole ChatGPT “glazing” incident shows that AI can also give bad feedback. Heck, most mainstream models are fine-tuned to sound like a secretary that never says no.

Sorry if this sounds rude, but it feels like the real reason you ask your son to get AI feedback is to avoid being personally responsible for mistakes. You are not using AI as a tool; you are using it as a scapegoat in case anything goes wrong.

stefanka · 3h ago
> But I never want me to be a reason he backs away from something because my feedback was wrong.

Do you want an LLM to be the reason? You can explain that your feedback is opinionated or biased. And you know him better than any machine ever will.

skydhash · 4h ago
> LLMs have likely been trained on all published books; it IS more qualified than me.

It has also been trained on worthless comments from the internet, so that’s not a great indicator.

jml78 · 5h ago
Exactly. I would rather read his stories and discuss them with him. My advice on anything outside of pure opinion is invalid.
IanCal · 4h ago
Having something else help doesn’t preclude reading with them - it also may have better advice. Very rarely is anyone suggesting an all or nothing approach when talking about adding a tool.
SV_BubbleTime · 4h ago
Large Language Model, not Large Fact Model.
tombarys · 7h ago
I am a book publisher & I love technology. It can empower people. I have been using LLM chatbots since they became widely available. I regularly test machine translation at our publishing house in collaboration with our translators. I have just completed two courses in artificial intelligence and machine learning at my alma mater, Masaryk University, and I am training my own experimental models (for predicting bestsellers :). I consider machine learning to be a remarkable invention and catalyst for progress. Despite all this, I have my doubts.
esjeon · 3h ago
I know a publisher who translates books (English to Korean). He works alone these days. Using GPT, he can produce a decent-quality first draft within a day or two. His later steps are also vastly accelerated because GPT reliably catches typos and grammar errors. It doesn't take more than a month to translate and print a book from scratch. Marvelous.

But I still don't like that the same model struggles w/ my projects...

jdietrich · 1h ago
As a professional writer, the author of this post is likely a better writer than 99.99% of the population. A quick skim of his blog suggests that he's comfortably more intelligent than 99% of people. I think it's totally unsurprising that he isn't fully satisfied with the output of LLMs; what is remarkable is that someone in that position still finds plenty of reasons to use them.

Now consider someone further down the scale - someone at the 75th, 50th or 25th percentile. The output of an LLM very quickly goes from "much worse than what I could produce" to "as good as anything I could produce" to "immeasurably better than anything I could hope to ever produce".

nerevarthelame · 1h ago
I'm worried that an increasing number of people are relying on LLMs for things as fundamental to daily life as expressing themselves verbally or critical thinking.

Perhaps LLMs can move someone's results from the 25th percentile to the 50th for a single task. (Although there's probably a much more nuanced discussion to be had about that: people with poor writing skills can still have unique, valuable, and interesting perspectives that get destroyed in the median-ization of current LLM output.) But I fear that after a couple of years of using LLMs regularly, whatever actual talent they have will atrophy below their starting point.

antegamisou · 1h ago
Idk, LLM writing style somehow almost always ends up sounding like an insufferable smartass Redditor spiel. Maybe it's only appealing to the respective audience.
K0balt · 2h ago
AI is useful in closed-loop applications; often it can even do a decent job of closing the loop itself. But you need to understand that it is a fundamentally extractive, not creative, process. The body of human cultural knowledge is the underlying resource, and AI is the drill with which we pull out the parts we want.

Coding, robotics, navigation of constrained data spaces such as translation, tagging, indexing, logging, parsing, data transformations… those are all strong target candidates for transformer architecture automation.

Creative thought is not.

metalrain · 3h ago
Pretty similar view to what others have expressed, in the vein of "LLMs can be good, just not at my [area of expertise]".
esjeon · 3h ago
I'm pretty sure they were generally (if not completely) correct when they said that.

Either the tech is advancing so quickly that many people can't keep up, or the cost of adapting simply outweighs the potential profit over their remaining careers, even taking the new tech into account.

romarioj2h · 2h ago
AI is a tool like any other: it can be used well or poorly. It's important to know its limits, and, like any tool, it must be studied for proper use.
kelvinjps10 · 4h ago
What about grammar and spelling corrections?
shakna · 3h ago
Not the author, but another author here and...

Well, it has a problem with my use of the Oxford comma, for one. Because a huge amount of the corpus is American English, and mine ain't. So it fails on grammar repeatedly.

And if you introduce any words of your own, it will sometimes randomly correct them to something else, and randomly correct other words to the made-up ones. And it can't always tell when it's made such a change. Sometimes it does this even when you're just mixing existing languages like French and English. So you can make it useless for spellcheck by touching more than one language.

I do keep trying, despite the fact my stuff has been stolen and is in the training data, because of all the proselytising, but right now... No AI is useful for my writing. Not even just for grammar and spelling.

johnnyfived · 3h ago
What's interesting about thinking of code as art is that there is rarely a variety of optimal ways to implement a given piece of logic. So if you decide on the implementation and have an LLM code it, you likely won't need to make major changes, given the right guidelines (I just mean a single script, for the sake of comparison).

Writing is entirely different, and for some reason generic writing, even when polished (the ChatGPT-esque tone), is so much more intolerable than, say, AI-generated imagery. Images can blend into the background; reading takes active processing, so we're much more sensitive. And the end user of a product cares zero, or next to zero, about AI code.

mrits · 4h ago
I think there are a lot of good reasons to be cognitively lazy. Now might not be the time to learn about how something works.