I just had an interesting experience this morning with my oldest child. He wanted to write a letter to the Splatoon development team to show his appreciation for the game. He printed it out and brought it to me with a proud expression on his face. It took about five seconds to catch the AI smell.
We then proceeded to have a conversation about how some people might feel if an appreciation letter was given to them that was clearly written by AI. I had to explain how it feels sort of cold and impersonal which leads to losing the effect he's looking to have.
To be honest, though, it really got me thinking about what the future is going to look like. What do I know? Maybe people will just stop caring about the human touch. It seems like a massive loss to me, but I'm also getting older.
I let him send the letter anyways. Maybe in an ironic twist an AI will respond to it.
klabb3 · 11h ago
> Maybe people will just stop caring about the human touch.
No they won’t. This has happened many times in the past already with live theatre -> movies, live music -> radio etc. They don’t replace, but break into different categories where the new thing is cheap and abundant. When a corporation writes you a shit letter with ”We miss you, Derek” all reasonable people know what they’re looking at.
Look, it’s about basic economics. It doesn’t matter how ”good” the generated song for someone’s birthday is. What matters is the time, money and effort. In some cases writing a prompt for one-time use is not bad. If you’re generating individual output at scale without any human attention, nobody will appreciate the ”gesture”.
What bothers me is the content farm for X shit-startups and tech-cos thinking they’re replacing humans without side effects. It’ll work just as well as those fake signatures by the CEO in junk mail: it’ll deceive only for a short time, and maybe older people, who may be permanently screwed. It’ll just be yet another comms channel saturated with spam, which is entirely fungible with the heaps of all other spam. A classic race to the bottom.
disambiguation · 3h ago
There's the tool and then there's how you use it. We're all still learning how to live with rapidly changing tech, but it sounds like your kid tried to pass off someone (something) else's work as their own. Cold and impersonal is a problem too, but this situation touches on ethical concepts like fraud and deception by omission - not to make a big deal out of a kid's fan letter :) but seems like an opportunity to teach a moral lesson too.
jraph · 11h ago
> Maybe people will just stop caring about the human touch
I doubt it. We human beings seem intrinsically motivated and enthusiastic about human connections. I believe we are wired like this. I know things change, but I would need some strong evidence before even playing with the idea that we'll stop caring about the human touch.
Now, as much as I hate AI, the human touch doesn't necessarily mean AI-free, or even handwritten. It just needs to be some human touch. I would enjoy a handwritten letter but wouldn't mind an email at all. But maybe someone else would find it lazy and tasteless, just as I would find an AI-generated text lazy and tasteless.
Maybe even the prompt you can guess the sender used for their AI-generated text can already be perceived as some human touch. Maybe there is a threshold, though.
Now, could it be that your child wanted to impress you with a perfectly written letter, or even with their AI prompt mastery?
Anyway, good anecdote, good perspective, good for you to have had the conversation and let them proceed anyway. Thanks for sharing.
sepositus · 11h ago
> I believe we are wired like this
Absolutely, and I think because of this we'll never see the desire go away completely. However, I'm imagining some dystopian future where human touch is so rare that people _forget_ how much it means to them. It's like scrolling through the endless slop of Netflix and then coming to some rare gem of a film where you're reminded what genuine art is.
krick · 10h ago
I don't think that people forget, but it definitely gets normalized. It's disgusting, but many people obviously embrace it, and since ordering ChatGPT to do some lame stuff for them is by design orders of magnitude faster than actually creating something, the internet is getting filled up with lame stuff every passing minute now.
But it's not like this only happens because of LLMs. If you've worked in corporate culture you most definitely received some automated HR emails congratulating you on spending half of your life at the workplace, or something like that. I always felt almost insulted by these; they are literally just spam at best. It's kinda mocking: these are generic, depersonalized texts that no one actually wrote for you, yet they always speak about "gratitude", about you being "valued" and such. In fact, that's the only thing they are meant to express: you being valued. It's so cynical.
But, I mean, it's just me. Ostensibly, these folks in the HR department do know their job? Maybe most people don't feel like vomiting when they get these emails? Maybe it brings them joy? I never stopped wondering about that. I can't just ask my closest coworkers, because of course they feel the same as me. But maybe there are others? Another social bubble, where this thing is normal, and it is bigger than mine?
Anyway, everyone is kinda used to it. What I am trying to say is that the phenomenon is not entirely new, and LLMs don't change the essence of it. Even back when people sent paper mail to each other, I remember those pre-printed birthday/Christmas cards, which are ok, because the entire point is that they are not automated and that you remembered to send one to someone; yet it was always considered a bit poor taste not to add a sentence of your own by hand.
xandrius · 11h ago
Handwritten letters are still better than typed. That's why we still get authors to actually sign their book and not just put a print of it.
So, no, there is no evidence that AI will change stuff. We had canned responses and template answers for a long time but people still like talking to a real human being.
P.S. I think you should have told them to write a thank you letter themselves as a fun game to compare with the AI one and send that one instead.
perching_aix · 10h ago
Maybe this is a generational difference, but I really don't like handwritten anything. Something being handwritten doesn't evoke anything inside me - if anything, it only brings frustration, since having to decipher someone's handwriting (especially mine) can be no mean feat. There are also countless examples of cards and such with blatantly printed-on signatures, and there are signature plotting machines (autopens [0]) that further make automated signatures impossible to tell apart.
AI has already changed stuff. I have already seen several related examples of distasteful AI use in corporate settings. One example was management promising that feedback received during a townhall would be reviewed, only to later proudly announce that they had AI-summarized it. I'll readily admit that doing that is actually a very sensible use of AI, just maybe the messaging around it should have been a bit less out of touch. Another example was my coworker expressing his gratitude to the team, while simultaneously passing the milestone of producing more than 10 consecutive words of coherent English for the first time in his life. He was awfully proud of it too.
And to finish it off, talking to real human beings on the internet is increasingly miserable by the day. Without going too far off into the weeds, let me give you a practical, older example. I've participated in a Discord server of a FOSS project, specifically in their support channels, for a couple years - walked away a very different person, with great appreciation for service workers. I'm sure the people coming there loved being able to torment, I mean ask help from, real human beings. By the end, this feeling was very much not reciprocated. I was not alone in this either of course, and the mask would fall off of people increasingly often. Those very real human beings looking for help were not too happy either, especially when said masks fell off. So it was mostly just miserable for everyone involved. AI can substitute in such situations very handily, and everyone is honestly plain better off. Having to explain the same thing over and over to varyingly difficult people is not something a lot of people are cut out for, but the need is ever present and expanding, and AI has zero problems with filling those shoes. It can provide people with the assistance they need and deserve. It can provide even those with that help that do not need it, nor deserve it. Everyone's happier.
We've concocted a lot of inhuman(e) systems and social dynamics for ourselves over time. I have some skepticism towards the future of AI myself, but it has a very legitimate chance of counteracting many of these dynamics, and restoring some much-needed balance.
[0] https://en.wikipedia.org/wiki/Autopen
When coding in a new environment, I like to go fast and break things - this is how I learn best (this is not a good way in general, but it works well for me in my amateur dev work).
I ask ChatGPT questions that would drive me crazy if I were the one being asked: they are a bit chaotic, a bit repetitive, and give the impression of someone chaotic and slightly dumb (me, the asker, not the AI).
I worry that with time people may start to interact with other people the same way and that would be atrocious.
mulmen · 11h ago
I’m glad you let him send his message.
AI is the future whether you like it or not. Teaching him to use that tool effectively will serve him far better than shaming him for engaging the world in a way you find uncomfortable but is acceptable to society.
Consider whether you would prefer he write the letter by hand to give the script that literal human touch. If not, why is it ok for the computer to make the letters but not the words?
In this case the meaningful gesture is sending the message at all. He asked the AI to do a thing. That was his idea. AI just did the boring work of making it palatable to humans.
Much like driving and everything else automation takes away, writing is something most people are profoundly bad at. Nothing is lost when an AI generates a message a human requested.
AaronAPU · 10h ago
The meaningful gesture isn’t clicking a button and pressing send. The meaningful gesture is taking time out of your day where you are focused and authentically thinking about the person and expressing those positive thoughts as they come with your own words.
It is a very sad and cynical view to equate these very different things.
swat535 · 9h ago
I'm a bit conflicted here... I think we're mixing up the tool with the _intent_ behind it.
To me, this feels less like outsourcing creativity and more like using a writing assistant to shape your thoughts. Kind of like how we all rely on spellcheck or Grammarly now without thinking twice. People were saying the same thing back then too, that tools were "diluting" writing.
I personally don't see the harm. Not everyone is a native English speaker.
metalman · 10h ago
AI is creating a now that is dull, productive, boring, predictable, profitable, and alluring in a smug, snide, gotcha kinda way. Which would be fine, except that it is doomed to be self-referential and will force universal adoption, rendering the whole thing a fancy spambot that consumes 25% of the world's energy budget.
And the statement "like it or not" keeps some very ugly company in its associations.
Clumsily written by someone with a phone using one finger, and zero reconsideration or evaluation of whatever it is I just wrote.
Hope you like it :)
ash_091 · 11h ago
Last week I got an email from a manager about the number of free beverage taps in our office being reduced.
They'd clearly dumped the memo they got about the reduction into some AI with a prompt to write a "fun" announcement. The result was a mess of weird AI positivity and a fun "fact" which was so ludicrous that the manager can't have read it before sending.
I don't mind reading stuff that has been written with assistance from AI, but I definitely think it's concerning that people are apparently willing to send purely AI generated copy without at least reviewing for correctness and tone first.
pizzafeelsright · 10h ago
Tone policing? I'm fine with it, although I too got an email from corporate about an event with the same type of fun energy. My impression of the event changed, as it was no longer personal but mechanical.
There's always been some innate ability to recognize effort and experience. I don't know the word for it, but looking at a child's or an experienced artist's drawing, you just know whether they put in minimal or extra effort.
switch007 · 10h ago
My company would have spun that like
"We are excited to announce we are supporting our family in their health kick journey. To support them, we have taken the difficult decision to reduce the number of beverages available. We remain fully committed to unlimited delicious tap water, free of charge!"
krick · 9h ago
That's kinda the point. As silly as what you wrote is, it's still a whole level above what ChatGPT can make up. The fact that it produces human-like text doesn't make it any good, but somewhere there is a manager (and I bet he is not the only one) who just uses it to generate some nonsense announcements and gets paid for his work. Maybe he'll even get a promotion because of how effective he is.
Terr_ · 12h ago
> That’s when it dawned on me: we don’t have a vocabulary for this.
I'd like to highlight the words "counterfeiting" and "debasement", as vocabulary that could apply to the underlying cause of these interactions. To recycle an old comment [0]:
> Yeah, one of their most "effective" uses [of LLMs] is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight."
> Oh, sure, qualitatively speaking it's not new, people could have used form-letters, hired a ghostwriter, or simply sank time and effort into a good lie... but the quantitative change of "Bot, write something that appears heartfelt and clever" is huge.
> In some cases that's devastating--like trying to avert botting/sockpuppet operations online--and in others we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."
Debasement (in the currency sense). That's exactly what it is.
And then you get to Gresham's Law: "Bad money drives out good" (that is, drives it out of circulation)...
[0] https://news.ycombinator.com/item?id=41675602
SoftTalker · 11h ago
I like chatshit: bullshit, but written by AI.
mcherm · 11h ago
Oh, now THAT one seems likely to actually catch on!
doctorhandshake · 12h ago
Ironically (and perhaps proving the author’s point), when reading I couldn’t help feeling like this was at least AI-assisted writing, if not pasted directly.
jstanley · 11h ago
A big giveaway is that for most of the post the author uses hyphens instead of em-dashes, obviously because em-dashes are too difficult to type, but then uses em-dashes in a handful of places that sound exactly like the kind of thing ChatGPT would say.
Terr_ · 7h ago
> A big giveaway is that for most of the post the author uses hyphens instead of em-dashes
I hear this "tip" a lot, and I question whether it's statistically-meaningful.
After spending several decades learning the right ways—like ALT+0151 on Windows—it seems deeply unfair that people are going to mischaracterize care and attention to detail as "fake".
jstanley · 14m ago
In this case I wasn't trying to say that em-dashes themselves are evidence of AI use. I was trying to say that mixing incorrect hyphen usage with correct em-dash usage is evidence of AI use.
But...
Using em-dashes is a signal. It's not a smoking gun, but text that uses em-dashes is more likely to be AI-generated than text that doesn't!
Similarly, text that consistently uses correct spelling and punctuation is more likely to be AI-generated than text that doesn't.
So - yeah - if you use em-dashes your writing looks more like AI wrote it.
But that’s not a bad thing—it means your writing has the same strengths: clarity, rhythm, and elegance. AI learned from the best, and so did you.
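The "more likely" part is just Bayes' rule. A quick sketch in Python, with every number invented purely for illustration (nobody has measured these):

    # All figures below are made up to illustrate the direction of the update,
    # not to claim anything about real base rates.
    p_ai = 0.3                 # prior: fraction of posts that are AI-generated (invented)
    p_dash_given_ai = 0.6      # chance an AI-generated post contains em-dashes (invented)
    p_dash_given_human = 0.1   # chance a human-written post contains em-dashes (invented)

    p_dash = p_dash_given_ai * p_ai + p_dash_given_human * (1 - p_ai)
    p_ai_given_dash = p_dash_given_ai * p_ai / p_dash
    print(round(p_ai_given_dash, 2))   # 0.72: higher than the 0.3 prior, but not a smoking gun

Seeing an em-dash shifts the odds, nothing more; the same arithmetic applies to correct spelling or any other weak tell.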
L-4 · 4h ago
In this case the author mixes em dashes with hyphens surrounded by space. Both fine on their own, but it seems unlikely that someone with the attention to detail to use em dashes is going to be inconsistent here.
neom · 5h ago
I wonder if it's because my writing has always been so imperfect because of dyslexia/etc. that I basically really couldn't care less; for me, anyway, the less message-hunting I have to do the better. Also, if it means someone doesn't sit and hem and haw all day over sending a 4-sentence email to their boss because they're having a bad impostor syndrome day, who cares. ALSO... what to do about the fact you can't unread the AI?? Also... the proof is in the pudding. Oh, and also: I emailed dang the other day. I wanted to make a bunch of intersecting fine points, so the email got kinda contrived. I gave it to ChatGPT, but instead of replacing my email, I sent my own email and included the ChatGPT share link for him; he thanked me for leaving it in my own voice.
agentultra · 11h ago
Makes me wonder why people think I want to read what they send me if they haven’t even bothered to write it.
A4ET8a8uTh0_v2 · 12h ago
It is interesting. It is interesting in several different ways too:
- The timing is interesting, as Altman opened US branches of his 'prove humanness' project, which hides the biometrics-gathering effort
- The problem is interesting, because on HN alone the weight of traffic from various AI bots seems to have become a known (and reported) issue
- The fact that some still need to be convinced of this need (it is a real need, but as the first point notes, there are clear winners among the proposals out there), resulting in articles like these
- The interesting second- and third-order impacts on society mentioned (and not mentioned) in the article
Like I said. Interesting.
klabb3 · 12h ago
It’s futile. The first thing people will do if the ”written by a human” crypto-signature takes off is to wire it up with LLMs. You can’t make any reasonable guarantees on authorship unless you change the entire hardware stack and tack on loads of DRM.
Even if that happens, and say Apple integrates sigs all the way down through their system UI keyboards, secure enclaves, and TPM, you think they’re going to conform to some shitcoin spec? Nah man, they’ll use their own.
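To make the futility concrete, here is a minimal sketch, assuming nothing about any real proposal: just an ordinary Ed25519 keypair (via Python's cryptography package) standing in for whatever "written by a human" scheme gets adopted. The signature only proves that whoever holds the key approved the bytes; LLM output verifies exactly as well as hand-typed text.

    # Hypothetical "written by a human" signature, reduced to an ordinary keypair.
    # Nothing in the math can tell who (or what) actually wrote the text.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    key = Ed25519PrivateKey.generate()   # lives on "my" device
    pub = key.public_key()               # published as "my" human identity

    hand_typed = b"Thanks for the great game, it means a lot to my kid."
    llm_output = b"I am writing to express my heartfelt appreciation..."  # pasted from a chatbot

    for text in (hand_typed, llm_output):
        sig = key.sign(text)
        pub.verify(sig, text)            # raises InvalidSignature only if the bytes were altered
        print("verified:", text[:25])

Any scheme that stops short of locking down the entire input path hits the same wall: it authenticates the keyholder, not the authorship.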
pixl97 · 11h ago
>You can’t make any reasonable guarantees on authorship unless you change the entire hardware stack and tack on loads of DRM.
Even then you can't trust it. Companies write DRM and tend to have actual humans run the place. If the government where these humans live decides to point guns at them and demand access, most humans are going to give up the key before they give up their life.
A4ET8a8uTh0_v2 · 11h ago
Oddly, this is the weirdest thing about all the 'it's LLMs all the way down'. The problem starts with users trusting the providers to be dumb pipes. The moment they become smart pipes (and one could argue we are there already), all bets are off, because you have no assurance that, even if you sent the message, it was not altered by an overzealous company bot.
Edit:
Us tomorrow: Your honor, my device clearly shows timestamps and allegedly offending message, but you will note the discrepancy in style, time and message that suggests that is not the case.
LLM judge: Guilty.
Edit2:
Amusingly, the problem has been a human problem all along.
throwaway173738 · 11h ago
I love how Altman created the problem and is also selling a solution. “Getting you coming and going” as it were.
analog31 · 10h ago
Even before the advent of AI, I already realized that most of what's written by people isn't worth reading. This is why I don't read most of the e-mails that I receive.
Before AI, it was hard for many people to write literate text. I was OK with that, if the text was worth reading. I don't need to be entertained, just informed.
The thing that gets me about AI is not that what it generates is unoriginal, but that if it's trained on the bulk of human text, then what it generates is not worth reading.
michaeljx · 12h ago
I "prompt prong", i.e pass all the AI emails that I receive through an AI and ask it to write a response. At what point do we get the 2 AIs to email each other directly, without as bio-agents having to pretend that we wrote/read them?
turtleyacht · 12h ago
Hmm.
Almost makes sense to write emails
with deliberate newlines--not counting
the widths, but advancing the line as
it were (ourselves), like how we'd do
with a typewriter.
Because (in Outlook) I didn't like
seeing the message stretch out past
where my eyes could scroll too far
right; instead, to Enter when each
line sort of looked alright, some
length among their sentence homes.
nkrisc · 11h ago
Oh, come on, please, no. You have no idea how wide a window or screen I'm reading it in, or what my font size is.
Just write text, separate paragraphs, and let me format it how I like to read it.
turtleyacht · 11h ago
Yes, it's preferable to show text according to reader's formatting.
By the way, HN will flow jagged text when it's not prepended with two spaces (code markup). Had to mark it up on purpose.
So, you wouldn't know
this sentence was
separated into six
lines. But in editing,
it preserves the literal
format.
rorylaitila · 12h ago
It's becoming a big problem. But also, it's pretty easy now to filter for authentic people. In terms of AI in personal and business relationships, the maximum I'll do is short autocomplete. Superhuman email, which I use, just came out with AI-written replies, and it is too much.
I simply don't want to live in a world where all I am doing is talking back and forth with AI that are pretending to be people. What is the point in that? I am working on https://humancrm.io and I am explicitly not putting any AI into it. Hopefully there are more than a few people that feel like I do.
jsemrau · 11h ago
I thought the same about paying in cash vs. touchpay. It's just more human; I can take time to chat with the person.
Fast forward to 2025: I prefer touch and go because it's just more efficient.
I think most business email conversations will follow the same route.
We don't need all of this bla bla. Just "here is the information you need" -> "update me if you (a) need more info or (b) if the task is done"
What is notable here, though, is that we continue to reduce human-to-human interactions, and that will eventually lead to a desensitizing of human culture.
rorylaitila · 11h ago
Yeah, I am fine with efficiency. I guess it's a matter of where you draw the line. In terms of my personal interactions, I draw the line at autocomplete. If the valuable input from the person on the other end is simply to confirm and hit send on whatever the AI produces, we might as well remove their role entirely and let me solve the problem myself.
mulmen · 11h ago
I use generative AI in the exact opposite way. I write the simple facts and let GenAI make it palatable to humans.
xandrius · 11h ago
Big problem? Who are you talking to?
Really, if someone can't even type a response back, were you ever close to begin with? Unless they always had some level of anxiety when sending you a text, in which case it's good for them to still interact with you without feeling the negative effects (some people truly have issues sending even the simplest messages back to friends).
But in reality, no, this won't be a problem. We've had copy-paste and template systems for decades and nobody uses those. And at the end of the day, even if our AIs plan our meetings for us and we then end up meeting IRL, what's the problem?
rorylaitila · 11h ago
I should have added the context "it's a big problem in relation to sales," which is becoming quite an arms race of ever-greater automation and "personalization" one-upmanship. There is a lot of FOMO around not automating. In the long run I don't think most of the sales automation will actually work the way people think, so in that sense, it won't be a problem.
If you have that kind of anxiety you either need different friends or you need therapy.
satisfice · 7h ago
This is a big problem, but we all know the solution— cease taking anyone’s writing seriously, unless they develop a reputation for natural writing (not using AI in their writing).
This is what we will all do. We all are spam filters now.
ahowardm · 11h ago
In the era of AI every time I have to read a report I ask myself: if the author probably didn’t spend his time writing this, why should I waste mine reading it?
miragecraft · 11h ago
This is just people being lazy and asking AI to do their proverbial homework instead of using it as a teacher to help them improve.
dotslashmain · 12h ago
Some people (like the author) are going to seriously overthink this at first, and then we'll get used to it.
rpgwaiter · 11h ago
Idk why you’d want to get used to it. I’m very lucky to have a job that doesn’t mandate AI use, and as far as I can tell I haven’t been hit by any work AI emails. My social media bubble on Mastodon is extremely anti-AI, I pretty much never have to deal with slop.
On the rare occasion I see some GPT garbage, I either block the sender, or if I know a human is involved I explain how insulting it is and let them know they’re one slop message away from blocked.
Getting used to it is a surefire way to make your communication experience much worse.
dotslashmain · 9h ago
I'm not referring to spam. I'm thinking about how AI-enabled email/messaging/writing actually makes communication clearer. Many people are not very good at expressing themselves in writing, either due to language barriers or simply lack of writing skills, but I've seen a noticeable difference in how some of these people are now able to communicate with me over email. They leverage the LLM as a function to transform their naturally-poor and hard-to-understand writing into a clear, comprehensible message with proper grammar that I can easily consume and immediately understand what they need. The fact that the message has been clearly transcribed by an LLM is completely okay with me.
iugtmkbdfil834 · 4h ago
<< Many people are not very good at expressing themselves in writing
If they can't handle an email, what makes you think they can handle a prompt, which requires more, not less careful calibration?
et1337 · 11h ago
I DM’d someone a specific question recently and got back some generic, long-winded, impeccably formatted bullet points when previously this person had barely used any punctuation. I said something like, wow I like your formatting! Is that an extension or something? I always have a hard time with Slack formatting, etc. They replied that they were “trying something called structured communication.”
It probably made me angrier than it should have. Now I’m wondering if I’m the “old man yelling at cloud”.
noman-land · 11h ago
I hope you told this person that their experiment upset you.
et1337 · 11h ago
Maybe I should have been more direct, but this was someone at work that I don’t speak to that often.
noman-land · 11h ago
Yeah but they told you they were doing an experiment. Help them with their experiment.
analog31 · 10h ago
I consider an AI response to a question to be the new form of "let me Google that for you."
switch007 · 9h ago
I've definitely spammed some long AI responses to generic questions in DMs that can easily be googled or punched into the many AI tools we have!
jgalt212 · 10h ago
I guess this is an effective way to let the recipient know how important they are to you. AI Ghosting, if you will.
Der_Einzige · 11h ago
Sorry, but any trick you think you have for detecting AI generated text is defeated by high temperature sampling (which works now with good samplers like min_p/top-n sigma) and the anti-slop sampler: https://github.com/sam-paech/antislop-sampler
You'll be able to detect someone running base ChatGPT or something - but even base ChatGPT with a temperature of 2 has a very, very different response style - and that's before you simply get creative with the system prompt.
And yes, it's trivial to get the model to not use em dashes, or the wrong kind of quotes, or any other tell you think you have against it.
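For anyone unfamiliar with the jargon, here is roughly what min_p filtering plus a high temperature does to the next-token distribution. This is a from-scratch toy, not the linked anti-slop sampler and not any particular library's API:

    # Toy next-token sampler: temperature scaling followed by min-p filtering.
    # min-p keeps only tokens whose probability is at least min_p times the
    # probability of the single most likely token, then renormalizes.
    import numpy as np

    def sample_min_p(logits, temperature=2.0, min_p=0.1, rng=None):
        rng = rng or np.random.default_rng()
        scaled = logits / temperature              # high temperature flattens the distribution
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        keep = probs >= min_p * probs.max()        # relative cutoff, unlike a fixed top-k
        probs = np.where(keep, probs, 0.0)
        return rng.choice(len(probs), p=probs / probs.sum())

    logits = np.array([3.0, 2.5, 1.0, -1.0, -5.0])  # made-up 5-token vocabulary
    print([int(sample_min_p(logits)) for _ in range(10)])

The high temperature injects variety, which is what breaks stylistic fingerprints, while the min-p floor keeps the truly incoherent tail tokens out, so the output stays readable.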