"AI-first" is the new Return To Office

286 points by LorenDB | 175 comments | 4/30/2025, 1:38:57 PM | anildash.com ↗

Comments (175)

hbsbsbsndk · 7h ago
Having worked at Shopify, Tobi is 100% a try-hard Elon wannabe. Anil is correct that this is about performing "being a CEO" for the sake of his image among his peers.

My favorite stupid Shopify cult thing is the hiring page having a "skip the line" for "exceptional abilities" which explicitly lists being good at video games as a reason to skip the normal hiring process. The "other" category includes examples like "Olympic athlete".

kunzhi · 6h ago
I know some folks who work and have worked at Shopify and their experiences track with this statement.

I was told that within Shopify there's something called a "Tobi Tornado" - basically when Tobi swoops in on a program / feature and demands significant change in short order. Carefully planned initiatives can be blown up, and then maximum effort is expected to turn things around.

What everyone had in common was saying that Tobi is quite a smart person and often not wrong, but he's still human, and so there's simply no way he can make 100% good calls because he can't always have full context.

skrebbel · 6h ago
Fwiw this is exactly what I'd expect a product founder to do. If I wanted to work at a bank I'd go work at a bank. It's so easy for groups to over-cover-their-asses, go for maximum safety everywhere, design-by-committee all the things, etc. I think this is almost unavoidable in a large org. It's the task of a decent product minded founder to add some balance to this.

I've no idea whether Tobi gets it right, just.. this isn't necessarily a bad thing!

ketzo · 4h ago
I mostly agree that the benefit of a product-focused founder & CEO is the ability to make judgement calls, even corrections, and get things moving faster

However, I think the more work you blow up when you do this, the more it’s reflective of a poor management style; even if it’s the right call under the circumstances, that call should almost certainly have been made earlier.

Of course we don’t live in a perfect world, and if something’s 75% done but really bad, you press the red button and stop it, even if people will be upset.

But if you’re consistently being described as a “tornado”, that says to me you’re not applying your founder judgement early enough in your company’s development process.

jen729w · 2h ago
Shopify store owner here. It’s my entire income.

This is terrifying.

Shopify is well past its "move fast…" phase. It powers a vast percentage of ecommerce; if not by dollar volume, certainly by number of humans.

Please, I beg you, pretend like you work at a bank.

zachthewf · 2h ago
There are certainly slow-moving ecommerce platforms that I'm sure would be happy to take your business. Yet Shopify has the best product. Why do you think that is?
dismalaf · 42m ago
Amazon and Ebay operate more like banks. Which is how Shopify carved out their market share. This is a fairly mature market. It's just that the incumbents aren't great because of how slow-moving they are...

Shopify is good because of how they operate.

notfromhere · 3h ago
Blowing things up like that is mostly for people who want to feel important. It's a bad leadership model and fairly disrespectful to your employees IMO
hbsbsbsndk · 5h ago
There's a kind of survivorship bias to people saying "he's very smart". I was in a neglected part of the engineering org far away from his micromanaging. If he had been showing up and complaining about my projects I wouldn't have stayed for years.
rchaud · 6h ago
The PC term for this is "founder mode".
ryandrake · 5h ago
I've heard it called "Seagull Management." A manager swoops in unannounced, shits all over and disrupts everything, and then swoops out while the rank and file have to deal with the mess.
ohgr · 5h ago
Got one of them. I ignore him most of the time. This has generated more ROI than his initiatives would have. Evidenced by the teams who do them generating a loss every fucking time.

His nickname, which he hasn't worked out, is the first half of a sexual lubricant brand, because he's such a wanker.

mathgladiator · 2h ago
And that is the rank and file's job. Chaos is important for breaking organizational bones.
psunavy03 · 1h ago
And there's the bootlicking take of the day.
jasondigitized · 5h ago
This is also called "Swoop and Poop" or "Show up and Throw Up"
flappyeagle · 5h ago
What if he’s right more than 50% of the time?
senko · 5h ago
He can be right 100% of the time, this is no way to run a company.

You hire passionate people who pour their soul and overtime into a thing, then you parachute in, override half their decisions, micromanage the other half, and then leave, leaving them to live with the mess.

After a few of these stunts, you end up with disillusioned, cynical, burnt-out people who just don't care any more and either quiet quit or leave for greener pastures, or with the kind of folks who crack the game and fail upwards while caring nothing about the company and the products.

And as soon as word spreads that this is your modus operandi, smart folks who have been around the block a few times will avoid you like the plague.

It can work if you're willing to churn people (khm Elon), for some definition of "work".

phillryu · 4h ago
You could be right, but consider the hypothetical world where he's 100% right and the current path the project or feature is taking is heading towards a dead end or is broken in some way. I feel there is a real cost that is more easily ignored: working on a dead end or slowly failing thing that isn't set on the right trajectory is ultimately demoralizing and, in hindsight, ends up having been an opportunity cost and a waste of your time vs. working on something else. And conversely, if he's right about something like this, shouldn't there be positive feedback for the team involved when it is turned around and starts performing better?
senko · 4h ago
Yeah I don't mean to say founders, CEOs, etc, should ignore problems when they see one (let's assume they correctly identify the problem, ie 100% right): quite the opposite!

But there's a (slower, harder?) way to right the ship and make the team better, and a (quicker, easier?) way to swoop in like a Marvel Avenger and break everything (and everyone) in the process.

I feel Founder Mode should in theory be the former, but is in fact an excuse for many to do the latter (I've no evidence for this, just what it looks like to me).

phillryu · 3h ago
Ah ok yeah I see where you're coming from there. It's kind of like the question of did Steve Jobs need to be an asshole to be as successful as he was. And it can be tempting to think that they are intrinsically linked, but I also like to believe there is a world where he grew more on the empathy side but was still able to lead Apple perhaps even better.
flappyeagle · 58m ago
Shrug? I disagree strongly. If he was 100% right I would just double the estimate of every project and build in the assumption of a pivot
jimbokun · 4h ago
> (khm Elon), for some definition of "work".

In that case the definition of "work" being "become the wealthiest person in the world".

geodel · 2h ago
Not good enough until I get to fire tens of thousands, turn their lives upside down, and gaslight them on their way out. And I am sad because these laid-off people can't even praise my genius while cleaning out their desks. So I am just leaving.
senko · 4h ago
Bah, that's overrated, I'd be quite happy with just a few $B :D
dghlsakjg · 5h ago
Seagull management too.

Fly in, make a ton of noise, shit on everything, fly away.

TZubiri · 56m ago
This sounds like any normal privately owned company or hierarchical organization to be honest.
skrebbel · 7h ago
Another counterpoint, he's been a pretty pro-nuance voice for a long time, very un-Elon like. Eg I still frequently quote some of this thread to people: https://threadreaderapp.com/thread/1210242184341000192.html

My favorite part:

> I've never worked through a night. The only times I worked more than 40 hours in a week was when I had the burning desire to do so. I need 8ish hours of sleep a night. Same with everybody else, whether we admit it or not.

> For creative work, you can't cheat. My believe is that there are 5 creative hours in everyone's day. All I ask of people at Shopify is that 4 of those are channeled into the company.

Obviously, as I'm replying to someone with first-hand Shopify experience, which I don't have, take all this as you wish. I only know the Twitter Tobi. (and I think his "AI first" memo is ridiculous, to the point that I struggle to imagine that the same person wrote this twitter thread)

hbsbsbsndk · 5h ago
2019 public-facing Tobi is probably the least insane version. He has gotten worse and more egocentric (and more explicitly right-wing) since then.

Early (pre-IPO) Shopify had a pretty toxic internal culture with a lot of drinking and sexual harassment. I was lucky to be there around the post-IPO, pre-pandemic era when there was a bit of structure and the techbros were getting reined in a bit. Once the pandemic hit I think he just lost his mind.

remich · 5h ago
Feels like "loses mind during COVID" is a common thread in the VC class. Wonder why.
daveguy · 5h ago
Because so many of them are spoiled brats that can't deal with even the thought of putting others before themselves? Just guessing. Effective Altruism is a hell of a drug.
ed · 7h ago
Counterpoint: externally, at least, Shopify seems to be a well run organization and that has to come from somewhere. (Partner for about 8 years.)
pera · 7h ago
Not from my point of view. Last year I paid for a product from a shop that was using Shopify. You would think that, given this was a one-time transaction, they wouldn't keep storing your PII forever once the payment had been processed, but guess what? A couple of months ago I was about to buy a product from a totally unrelated shop, also using Shopify, and as soon as I typed my email address they sent an OTP to my phone to autocomplete all my personal details. So paying for a product amounts to creating an account with Shopify.

This is incredibly shady and I wonder if it's even legal here in Europe.

OJFord · 5h ago
You may have thought you were saving your details with that shop (or not realised it at all, of course), but yes, this is a recentish feature I think; at least I haven't noticed it for long. It's branded 'Shop Pay', iirc.

As for legality in the EU/UK, it's just like everything else, on some level they technically asked for consent and you gave it, but yes, dark patterns abound.

pera · 5h ago
It may have been stated somewhere in their T&C, but just to be clear: I did not explicitly consent to this.
OJFord · 1h ago
Right, but the enforcement on this stuff is terrible, you probably also started receiving mailing list spam that you didn't knowingly opt in to, and nothing's going to change or come of it even if you do report it to the ICO.
jen729w · 2h ago
Shopify store owner here. Credit to them, they make deleting customer data trivial. One obvious button.

This is interesting though: is that data deleted everywhere? It makes no sense just to delete from ‘my store’. But I can delete any customer data at any time.

Perhaps this is a nice example of complexity. From the outside it's easy for us to say "why don't they just…", but as soon as you scratch the surface…

bcrosby95 · 6h ago
I think it's shady to use cross-shop information unless the customer explicitly opts into it.

But Shopify isn't just a payment processing service. It's a full-blown ecommerce suite. Do you think there's an online store out there that gets rid of all PII once an order is paid for, or even after it's fulfilled?

We've had people try to return/replace things (or even credit card disputes) years after they bought it. How exactly would that work if we got rid of all information about their order shortly after they made it?

blahaj · 7h ago
You could do a GDPR request for the data they have on you. I would be curious what they save. Keep us updated if you want to.
ysavir · 7h ago
> My favorite stupid Shopify cult thing is the hiring page having a "skip the line" for "exceptional abilities" which explicitly lists being good at video games as a reason to skip the normal hiring process.

Hah! Now you have my curiosity. What do they replace the normal hiring process with? A game of LoL?

mjburgess · 7h ago
Some weak evidence for why "exceptional abilities" is not a bad idea, even if gimmicky, is that performance "at the extreme" is highly correlated, so people who tend, e.g., to be concert pianists also tend to be very accomplished artists, and the like.

So if you're hitting (a verifiable) top 0-0.5% in some field, there's a reasonable bias towards assuming a high general competence.

I did once hit the top 0.5% in a multinational PHP exam in my teenage years; however, I did have a second window open with an interpreter running for the most fringe questions. Who knows what that means.

glitchc · 7h ago
Wait, you expect a concert pianist to be a good software developer... on average? Most of the time? Not at all?
jimbokun · 4h ago
Because a lot of the skills and personality traits involved in becoming elite at something are transferable to becoming elite at other things.
mjburgess · 7h ago
I expect a concert pianist who's applying for a software developer position to be worthy of an initial consideration.

I know a software developer who could well be a concert pianist, for example. I.e., the people in that overlap are probably extraordinarily talented.

evantbyrne · 6h ago
It only makes sense if you assume that people can generally brute force their way into the top fraction of a percent. Not a view that I agree with.
butlike · 6h ago
Yeah, I'm kind of with you. Just because they have the capacity to excel doesn't mean they're going to for YOUR company.

Also it smells like a false metric. People who are in the 0.05% of excellence are probably still heavily invested in the thing they're excelling at.

Spivak · 7h ago
Yes! Well specifically someone who is a concert pianist and then who also set out to be a software developer. The ability to get very good at a difficult thing is highly correlated to being able to get very good at another really difficult thing.

Case in point: I have a friend who is a top-32 Magic player in NA. She recently, not even a year ago, made it her goal to become a chess grandmaster and she's already 2000 Elo. You could argue that maybe some skills transfer, but it's pretty shaky reasoning.

michaelbarton · 6h ago
Exactly. Olympic athletes in one sport have a much higher chance of being an Olympian in another. It's not that they're inherently "sporty" that makes the difference, but that they're willing to put in the 100s of hours of training and get up at 5am regularly. You could say it's discipline that would make them a good SWE?

See: https://www.nine.com.au/sport/olympics/olympians-who-changed...

dismalaf · 4h ago
No, you'd expect them to have a capacity to learn at a high level. Which is a good trait for those who are also developers.
DrillShopper · 7h ago
They can't possibly be worse than half of my coworkers
WJW · 7h ago
Starcraft 2 is I think the preferred one.
Lammy · 5h ago
Make it Brood War and I'm in
MarcelOlsz · 3h ago
I've played with BoxeR and trained with team IM from Korea. Hmm I should apply.
morkalork · 6h ago
I wonder how it worked out for that kid.
0x1ceb00da · 6h ago
You gotta beat the boss to get a job.
spacemadness · 7h ago
That is one of the stupidest things I’ve read all day, and that’s saying something. Why are so many tech CEOs this childish?
rchaud · 6h ago
They've been slathered with money, PR and attention for years before even making a single penny in profit. At some point that creates a sense of cognitive dissonance that's hard to shake.
colechristensen · 6h ago
Every group has these people. Whether it's churchgoers, world leaders, tech CEOs, high schoolers, beer league softball, or even chimpanzees. Social bullshitters who do most things for the image among a small group of peers. It's a good skill to recognize this kind of pattern and how so many kinds of patterns are the same things repeating themselves in a different environment.
MarcelOlsz · 6h ago
Dinner party economics.
BoGoToTo · 7h ago
I remember one town hall where he was asked "If you had to start over again, what sort of company would you start?" and his answer was "I wouldn't, all the good ideas are already taken."
JohnFen · 7h ago
That's genuinely astonishing in its degree of ignorance.
flappyeagle · 56m ago
Why? It’s the same thing DHH says. He’s never going to have a better idea than rails and basecamp. Tobi is the same with Shopify
remich · 39m ago
There's a difference between having the humility to admit that you might not be able to hit another home run and claiming that "all the good ideas are taken." At best, the latter is an admission of a lack of desire to even try anymore; at worst it shows a stunning lack of curiosity and creativity.
sulam · 6h ago
That sounds too dumb to be true. As in, is there some missing context that this quote was forklifted from?
Kalabasa · 34m ago
Shopify has an esports team called Shopify Rebellion
ferguess_k · 6h ago
Actually, the investment banking industry seems to have been doing this for ages.

And if people think about it, it's actually not too different from Leetcoding.


tschellenbach · 6h ago
When I watch Messi play I think his technique is sloppy to be honest.
TZubiri · 53m ago
It does say Top 100, and implies competitive gaming.

You made it sound stupid, but being Top 100 in something with a huge global competitive base is neither useless nor easy.

If you are offered a kid who spends 16 hrs per day competing and studying to be the best at something, and they can channel that energy at your company (with probably a shitty salary) wouldn't you take it?

TZubiri · 57m ago
The skip-the-line concept sounds interesting. I'm interviewing, and some companies telling me they'll schedule an interview in 2 weeks feels like a light rejection / a recognition that the skill is in abundant supply.

It reminds me of the time I wanted to go out with a girl and she scheduled a date with me in 2 weeks, not a good outlook. I was happy to have a date so I just counted the days. When this happened with another girl I was less invested in, I told her to forget it, and she literally removed the guy she was seeing that week to go out with me.

I think that when the queue is too long, the solution is to cut the line or find another one (or participate in a meat market as a commodity amongst 100, with a low probability of advancing, for a low salary).

guywithahat · 5h ago
Well, having a fast reaction time is directly correlated with your IQ, so you could make the argument that being good (like professionally good) at video games means you're exceptionally intelligent. I certainly wouldn't list it the way you've described, though.
cadamsdotcom · 5h ago
That statement, "having a fast reaction time is directly correlated with your IQ", is fascinating.

Care to share the evidence you’d use to back it up?

guywithahat · 4h ago
Yeah, the first (and most famous) studies were done by Ian Deary in the '80s and '90s, where he would have subjects perform simple tasks (such as checking how quickly you can cover a light with your finger), and he found a correlation between reaction time and IQ, which grew stronger as the task became more complex. There are more recent, higher-quality studies, but Deary did a lot of the early work and is who I'm most familiar with, since his work was mentioned in The Bell Curve. Should be enough info for you to google it.
grotted · 4h ago
The Bell Curve has been widely debunked. A quick glance at the Wikipedia article gets you countless citations and counter arguments.

For the lazy, here’s a fun video summarizing them: https://youtu.be/UBc7qBS1Ujo?feature=shared

dismalaf · 4h ago
Dunno, Shopify is the second largest Canadian corporation, in a country ruled by a bunch of monopolies enforced by the government. Only RBC, a 161 year old bank, is worth more.

Tobi's doing something right.

JimDabell · 7h ago
> This is unusual — did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack? I'm actually old enough that I was at different workplaces when they started using spreadsheets and email and the web, and I can tell you, they absolutely didn't have to drive adoption by making people fill out paperwork about how they were definitely using the cool new technology.

I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody. An awful lot of people are very set in their ways and will refuse to change unless given a firm push.

For instance, if you weren’t around before version control became the norm, then you probably missed the legions of developers who said things like “Ugh, why do I have to use this stupid thing? It just slows me down and gets in my way! Why can’t I just focus on writing code?” Those developers had to be dragged into modern software development when they were certain it was a stupid waste of time.

AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try. Using AI well is a skill you need to learn and if you don’t see positive results on your first couple of attempts that doesn’t necessarily mean it’s bad, it just means you are a beginner. If you tried a new language and didn’t get very far at first, would you blame the language or recognise that you lack experience?

An awful lot of people are stuck in a rut where they tried an early model, got poor results to begin with, and refused to use it again. These people do need a firm, top-down push, or they will be left behind.

This has happened before, many times. Contrary to the article’s claims, sometimes top-down pushes have been necessary even for things we now consider near universally good and productive.

mjr00 · 7h ago
> I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody. An awful lot of people are very set in their ways and will refuse to change unless given a firm push.

There was never any widespread resistance to "the Internet", let's be real here.

In any case, adoption of all those things was bottom-up rather than top-down. CEOs were not mandating that tech teams use version control or ORMs or automated testing. It was tech leadership, with a lot of support from ICs in their department.

Tech people in particular are excited about trying new things. I never heard CEOs mandating top-down that teams use Kubernetes and adding people's Kubernetes usage into their performance reviews, yet Kubernetes spread like wildfire--to the point where many software companies which had no business using something as complicated as Kubernetes started using it. Same with other flavor-of-the-month tools and approaches like event sourcing, NoSQL/MongoDB, etc.

If anything, as a leader you need to slow down adoption of new technology rather than force it upon people. The idea that senior leadership needs to push to get AI used is highly unusual, to say the least.

brummm · 7h ago
Isn't Amazon's APIs everywhere another example of just this that came right from the top? In some companies CEOs double as the tech lead, no?
mjr00 · 6h ago
The API mandate notably specified what rather than how. "It doesn’t matter what technology [you] use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter." In some ways it's quite the opposite of CEO mandates to use AI, which specify how you must build things (using AI!) rather than what.

The equivalent of the API mandate for AI would be if CEOs were demanding that all products include a "Summarize Content" button. Or that all code repositories contain a summary of their contents in a README. The use of AI to solve these problems would be an implementation detail.

ryandrake · 5h ago
Or if CEOs were demanding that everything be written in Python. Programming language should also be an implementation detail, not something a CEO would worry about. Just like "using AI."
jimbokun · 4h ago
My recollection is that AWS was extremely popular among developers very early on.
skwirl · 1h ago
Do you just mean within Amazon? Because outside of Amazon, there was major resistance to AWS/cloud computing in general from older devs highly invested in the status quo. I have spent a significant amount of effort in my career fighting for cloud adoption.
dowager_dan99 · 6h ago
to me this was more about guiding towards a desired outcome. An opinionated bet, but not overly prescriptive. "AI first" is saying do everything with AI and then hope you find some efficiencies, almost by accident.
trilobyte · 7h ago
I see you speak from experience. I feel like I'm watching the same cycle play out over and over again, which is that a new, transformative technology lands, people with a vested interest spend a lot of time denouncing it (your examples mostly land for me), the new technology gets over-hyped and fails to meet some bar and the haters all start crowing about how it's just B.S. and won't ever be useful, etc. etc.

Meanwhile, people are quietly poking around figuring out the boundaries of what the technology really can do and pushing it a little further along.

With the A.I. hype I've been keeping my message pretty consistent for all of the people who work for me: "There's a lot of promise, and there are likely a lot of changes that could come if things keep going the way they are with A.I., but even if the technology hits a wall right now that stops it from advancing things have already changed and it's important to embrace where we are and adapt".

Centigonal · 7h ago
Such a sane, nuanced take on new technologies. I wish more people were outspoken about holding these types of opinions.

It feels like the AI discourse is often dominated by irrationally exuberant AI boosters and people with an overwhelming, knee-jerk hatred of the technology, and I often feel like reading tech news is like watching two people who are both wrong argue with one another.

pixl97 · 7h ago
Moderates typically have a lot less to say than extremists and don't feel a need to have their passion heard by the world. The discussion ends up being controlled by the haters and hypers.

New technologies in companies commonly have the same pitfalls that burn out users. The companies have very little ability to tell if a technology is good or bad at the purchasing level. The c-levels that approve the invoices are commonly swayed not by the merits of the technology, but the persuasion of the salespeople or the fears of others in the same industries. This leads to a lot of technology that could/should be good being just absolute crap for the end user.

Quite often the 'best' or at least most useful technology shows up via shadow IT.

remich · 6h ago
And a subgroup (or cousin?) of the exuberant AI boosters are the people absolutely convinced that LLM research leads to the singularity in the next 18-24 months.

I really do wish we could get to a place where the general consensus was something similar to what Anil wrote - the greatest gains and biggest pitfalls are realized by people who aren't experienced in whatever domain they're using it for.

The more experience you have in a given domain, the more narrow your use-cases for AI will be (because you can do a lot of things on your own faster than the time spent coming up with the right prompts and context mods), but paradoxically the better you will be at using the tools because of your increased ability to spot errors.

*Note: by "narrow" I don't mean useless, I just mean benefits typically accrue as speed gains rather than knowledge + speed gains.

ryandrake · 7h ago
Unfortunately, thoughtful, nuanced takes don't make headlines, don't get into Harvard Business Review, and don't end up as memos on the CEO's desk. Breathless advocacy and knee-jerk dismissals get the clicks and those are the takes that end up bubbling to the top and influencing the decision makers.
n4r9 · 7h ago
> Those developers had to be dragged into modern software development when they were certain it was a stupid waste of time.

But why do they have to fill out some paperwork? If the new technology is a genuine productivity boost and any sort of meaningful performance review is undertaken, then it will show up if they're performing sub-par compared to colleagues.

The real problem is that senior management are lazily passing down mandates in lieu of trusting middle management to do effective performance reviews. Just as it was with Return To Office.

Faark · 2h ago
Social responsibility?

I have a few colleagues who like the way they work and would prefer everything to stay the way it is. Such "skilled artisans" might be on the way out, replaced by "Ai factory" mass production.

Sure, they could just be kicked out and replaced. But they worked with the company, in some cases for a decade plus. Giving them a fair picture of what seems to be down the road is the very least I'd expect of a company treating its workers as more than just replaceable cogwheels.

SpicyLemonZest · 7h ago
Some people are good enough that they'll do well on performance reviews anyway, and if there's a new technology that's acting as a force multiplier those are exactly the people who the company most wants to adopt it.
n4r9 · 6h ago
Fair point about comparing performance reviews. It's also practically impossible to judge someone's performance on novel tasks, or whether they're caring enough about tech debt. Even as I wrote my first post there was a nagging voice in my head saying "Almost no one does performance reviews well enough for that".

In my (limited) experience, the tasks you want to assign to elite devs are less amenable to AI in the first place.

DrillShopper · 6h ago
Perhaps then they should focus on getting their 0.1x developers using it to get competent rather than trying to get their 1x developers to 10x using it.
JohnFen · 7h ago
> if you weren’t around before version control became the norm, then you probably missed the legions of developers

I was around before version control and I don't remember that reaction from more than an insignificant percentage of devs. Most devs reacted to the advent of version control with glee because it eased a real pain point.

closeparen · 6h ago
It is incredibly early. Copilot and Cursor are both incapable of writing a mapping between two structs with identical fields - some of the most menial coding imaginable - because they either don’t have or won’t use basic coding mechanics like looking up the signature of a thing before writing code about it. This is the technology that should be making me 10x more productive? This honestly feels like an emperor has no clothes situation. Being charitable, maybe the hype is all from people generating code into empty projects with no existing context?
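
For anyone wondering what kind of task is meant, here is a minimal Python sketch of that sort of field-for-field mapping; the OrderRecord/OrderDTO names are invented for illustration, and the point is just how menial the copying is:

    from dataclasses import dataclass

    @dataclass
    class OrderRecord:        # e.g. what the storage layer returns
        id: int
        customer_email: str
        total_cents: int

    @dataclass
    class OrderDTO:           # e.g. what the API layer expects; identical fields
        id: int
        customer_email: str
        total_cents: int

    def to_dto(rec: OrderRecord) -> OrderDTO:
        # The "menial" part: copy each field across, one by one.
        return OrderDTO(
            id=rec.id,
            customer_email=rec.customer_email,
            total_cents=rec.total_cents,
        )
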
jayGlow · 2h ago
That's weird; I'm pretty sure I've done that exact thing multiple times with ChatGPT. I've noticed Copilot doesn't always work well, but it's still frequently useful for me.
dowager_dan99 · 6h ago
>> AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try.

My take-away was that this is exactly what the OP is targeting. Management's job is to convince you to try it and help you make it demonstrate value; mandating "thou shalt be AI-first" does neither of these effectively. Ironically, some of your best developers will require the most evidence to be convinced, fight the hardest, and have the best options to jump ship if you push far enough. It's just weak management when there's an obvious alternative. Dash is a developer relations/evangelist, so it's not surprising he bristles at this approach.

bootsmann · 1h ago
I think it would be a worthwhile exercise for yourself to find and replace every mention of AI in your post with blockchain or metaverse. Just because something is new doesn't mean it's useful, and if you're having to force knowledge workers to adopt something supposedly making them more productive then it's probably a bad sign.
bluefirebrand · 4h ago
> AI can be extremely useful and there’s a lot of people out there who refuse to give it a proper try. Using AI well is a skill you need to learn and if you don’t see positive results on your first couple of attempts that doesn’t necessarily mean it’s bad, it just means you are a beginner

I'm not a beginner though. In fact I'm actually very experienced at doing my job

Which is why I don't need non-technical management and AI consultants to be telling me what tools I should be using

If I thought AI was going to be a useful tool for me then I would use it

But so far it hasn't, so I don't

I'm not investing my time and energy into a "skill" that doesn't seem like it is going to pay off

tbrownaw · 6h ago
> If you tried a new language and didn’t get very far at first, would you blame the language or recognise that you lack experience?

This way of phrasing it rejects the possibility that maybe the new thing really does suck, and that this can sometimes be identified pretty quickly.

gwbas1c · 2h ago
> These people do need a firm, top-down push, or they will be left behind.

> even for things we now consider near universally good and productive

We aren't at the point where AI tools provide a major productivity boost. Sometimes they help, sometimes they don't, sometimes working with AI has negative productivity.

Assuming AI improves to the point where employees who use it are significantly more productive... They'll excel relative to their peers. The people who can't figure it out will underperform.

suddenlybananas · 7h ago
There have also been many fads that were forced on people which fizzled out and didn't amount to much.
andy99 · 7h ago

  did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack? 
I see this is already a favorite quote amongst commenters. It's mine too: I had a job ~15 years ago where the company had introduced an internal social network that was obviously trying to ride the coattails of Facebook et al. without understanding why people liked social networks.

Nobody used it because it was useless, but management was evidently invested in it, because your profile and use of that internal site did in fact factor into performance reviews.

This didn't last long, maybe only one review cycle before everyone realized it was irretrievably lost. The parallel with the article is very apt though. The stick instead of the carrot is basically an indication that a dumb management idea is in its death throes.

alabastervlog · 7h ago
That "Facebook for business" fad was so fucking stupid and managed to last what, about a decade?

It's a great example of how executive group-think can drive whole multi-industry initiatives that are very-obviously, to anyone outside that bubble, pure waste.

pimlottc · 6h ago
Nobody wants Facebook at work, what they want is a comprehensive org chart that's actually up-to-date.
rsynnott · 3h ago
Longer; a lot of the metaverse hype was pretty much a reframing of it.
alabastervlog · 2h ago
I once worked at a company that'd acquired a virtual-office company. I wasn't around for the acquisition, but it really smelled like one rich person bailing out another's failed investment (or someone on the board had a bunch of money in the acquisition target, or something along those lines).

To justify owning the useless damn thing, they insisted everyone use it, basically like Slack if it ate 3-4x the resources (really saying something, given Electron already eating 5-10x the resources it ought to need for any given task), monopolized a screen when in use, and added all the awkward elements of physical environments to virtual ones for no reason ("is it weird if 'I' 'sit' in this chair 'next to' this other 'person' when there are other chairs available in the room?", or "oh shit where's that meeting room 'physically' located, again? I think I'm lost...") while removing none of the awkwardness of virtual interactions.

Truly, bizarrely pointless. It was like some shit out of the Silicon Valley TV show, so absurd it was hard to believe it was real. I swear to god, I'm not making this up, they even had in-world presentations, so you could add all the fun of having a bad angle on a screen or being too far away to comfortably read the text to the joy of a Zoom screen-share. Totally nuts. Luckily you could also maximize whatever was being presented, but... hooray, your best feature is that I can ignore all the things that make your dumb crap distinctive? What a win.

This is what I think of every time I see anyone trying to promote Zuckerberg's weird, bad idea. I assure you, being in VR goggles would not have made the experience either more productive or more pleasant. Nobody who's ever tried to work like this even for one week could possibly think it's a good idea to invest in it.

nkrisc · 7h ago
I would say we might have worked at the same company, but there were so many companies trying that at the time we may very well not have.

Where I worked, it was an open secret that the CEO had an alter ego he used on the site. I have no idea if he knew that we all knew who that really was (I have to assume he did), but everyone played along.

By the time I had worked there it had been around for a few years already and once a quarter the head of our group set time aside for everyone to "engage" with it for an hour so that no one would be dinged on their performance review.

romellem · 4h ago
Oh wow, totally forgot about [Yammer][1]. What a waste of time.

[1]: https://en.wikipedia.org/wiki/Viva_Engage

ramesh31 · 7h ago
A social network never did a weeks worth of work for me in 10 minutes. Keep swinging that hammer, John Henry.
skybrian · 7h ago
If it makes that much of a difference and quality is the same, nobody will care whether you used AI or not. It’s an implementation detail.


srveale · 9h ago
I don't necessarily disagree with the main argument, but

> did your boss ever have to send you a memo demanding that you use a smartphone

Yes, there were tons of jobs that required you to have a smartphone, and still do. I remember my second job, they'd give out Blackberries - debatably not smartphones, but still - to the managers and require work communication on them. I know this was true for many companies.

This isn't the perfect analogy anyway, since one major reason companies did this was to increase security, while forcing AI onto begrudging workers feels like it could have the opposite effect. The commonality is efficiency, or at least the perception of it by upper management.

One example I can think of where there was worker pushback but it makes total sense is the use of electronic medical records. Doctors/nurses originally didn't want to, and there are certainly a lot of problems with the tech, but I don't think anyone is suggesting now that we should go back to paper.

You can make the argument that an "AI first" mandate will backfire, but the notion that workers will collectively gravitate towards new tech is not true in general.

Uehreka · 7h ago
> Yes, there were tons of jobs that required you to have a smartphone, and still do. I remember my second job, they'd give out Blackberries - debatably not smartphones, but still - to the managers and require work communication on them.

Anil is referring specifically to the way that people who were told to use a Blackberry would bring an iPhone to work anyway and demand that IT support it because it was so much better. In the late 2000s Blackberries were a top-down mandate that failed because iPhones were a bottom-up revolution that was too successful to ban.

So look for situations where employees are using their personal AI subscriptions for work and are starting to demand that IT budget for it so they don’t have to pay out of pocket. I’m seeing this right now at my job with GitHub Copilot.

pxx · 7h ago
I don't think your example is really a counterexample. Work-provided Blackberries allowed you to be more responsive to work messages while communicating over an ostensibly secure medium.

on the other hand, making sure that people use AI for performance reviews would be akin to measuring the percentage of work days that you used your blackberry. that's not something that anyone sane ever did.

somewhat in the same vein, nobody ever sent a directive saying that all interoffice memoranda must be typed in via blackberry.

ryandrake · 7h ago
Yea, the point is, if a product or technology is useful, people will want to use it. They'll bang down your door to be allowed to use it. They'll even surreptitiously use it if you don't allow it. If you have to mandate that they use it, what does that really say about the tool?

A better example is probably source control. It might not have been true in the past, but these days, nobody has to mandate that you use source control. We all know the benefits, and if we're starting a new software business, we're going to use source control by default from day one.

anildash · 7h ago
Yeah, I think that's fair, but those bosses that made us get Blackberries were mostly doing that because they wanted to be able to call us and make us work, not because we had to be convinced that smartphones had value, right? We all ended up buying smartphones on our own as well.
srveale · 5h ago
You may underestimate how many people do not need to be convinced. Again, I'll refrain from making a value judgment, but the hard numbers show that LLMs have been one of the most quickly adopted technologies in the history of mankind, including the time before anyone was forced to use them.

Not sure if these are the best stats to illustrate the point, but ChatGPT was released November 2022, 2.5 years ago, and they currently claim ~1 billion users [1]

By comparison, iPhone sales were something like 30 million over the same time period, June 2007 through 2009. [2]

In other words, what took ChatGPT several months took smartphones several years.

Of course there are problems with the comparison (iPhones are expensive, but many people bought each version of the iPhone making the raw user count go down, Sam Altman is exaggerating, people use LLMs other than ChatGPT, blah blah blah), so maybe let's not concentrate on this particular analogy. The point is: even a very skeptical view of how many people use LLMs day-to-day has to acknowledge they are relatively popular, for better or worse.

I think we're better served trying to keep the cat from scratching us rather than trying to put it back in the bag. Ham-fisted megalomaniac CEOs forcing a dangerous technology on workers before we all understand the danger is a big problem, that's for sure. To the original point, "AI-first is the new RTO", there's definitely some juice there, but it's not because the general public is anti-AI.

[1] https://www.forbes.com/sites/martineparis/2025/04/12/chatgpt...

[2] https://www.globaldata.com/data-insights/technology--media-a...

bluefirebrand · 4h ago
> In other words, what took ChatGPT several months took smartphones several years

You are comparing a cheap subscription service to an expensive piece of hardware that would replace hardware that most people already owned

Of course smartphones were slower to adopt. Everyone had phones already, and they were expensive!

ChatGPT is *free*

srveale · 3h ago
Do you have any thoughts on the second half of my comment?
remich · 6h ago
Well, we all ended up buying smartphones eventually. But the delta between when Blackberries first were adopted in corporate environments and when iPhones/Androids were can't-miss technologies wasn't small.
hintymad · 5h ago
I have a more charitable view on the "AI-first" movement. When you're a CEO, the last thing you want is for your company to miss a major tech shift. We've seen this story play out repeatedly - companies that didn't adapt to Windows, to internet, or to mobile, ended up struggling or dying.

The tricky part is that you can't just think or talk your way into a new paradigm - the entire company has to act. After all, good ideas and breakthroughs often come from individuals in the trenches instead of from executives. This means exploring new possibilities, running experiments, and constantly iterating based on what you learn. But the reality is that most people naturally resist change. They get comfortable with how things work today. In many companies, you're lucky if employees don't actively fight against new approaches.

This is why CEOs sometimes need to declare a company-wide mandate. Microsoft did this in the mid-90s with their famous "Internet Tidal Wave" pivot, when Bill Gates sent that memo redirecting the entire company. Intel forced its "right-hand turn" when the CPU business was still nascent.

Without these top-down pushes, organizations tend to keep doing what they've always done. Or, to say the least, such a top-down mandate sends a clear message to the entire company, potentially triggering a cultural shift. The "AI-first" thing may well be overhyped, but it's probably just leaders trying to make sure their companies don't get left behind in what looks like a significant shift. Even if the mandate fails, at least the company can learn something valuable. Note I'm talking about directions; the mandate can fail badly due to poor execution, but that's a different topic.

remich · 5h ago
Yeah but there's a significant difference in the leader of a company shifting the goals, product, or market of a company in a top-down way and a leader attempting to shift the implementation details of goals in a top-down way. The former is the entire job of a CEO, the latter is micro-managing of the first order.
oddthink · 3h ago
Yes, but the message from Shopify leadership is "it's part of your job to mess around with this stuff and see what works". Not "use AI at all costs".

The general feeling I'm getting is that using this AI stuff is important, but it's a learned skill, and we want as many people as possible to get familiar enough with it to have actual opinions.

I find that pretty unobjectionable.

remich · 1h ago
That's one reading, and if that happens to be the correct reading then I agree it's unobjectionable. To me, though, making it part of a performance review process makes it closer to the "use AI at all costs" requirement than a request for devs to mess around with new technologies.
Workaccount2 · 7h ago
I know this post is about tech, and obviously HN is a tech centered site, and most people here work in tech, but let me say:

Outside of tech, AI has been phenomenally helpful. I know many tech folk are falling over themselves for non-tech industry problems that can be software-solved and then leased out monthly, and there are tons of these problems out there, but they are very hard to locate and model if you are outside the industry.

But with the current crop of LLMs, people who don't know how to program, but recognize that a program could do this task, finally can now summon that program to do the task. The path still has a tech-ability moat, but I can only imagine the AI titans racing to get programming ability into Supply Chain Technician Kim's hands. Think Steve Jobs designing an IDE for your mother to use.

I believe it will be the CEOs of these non-tech companies that will be pushing "AI first" and having people come in to show non-techy non-tech workers how to leverage LLMs to automate tasks. You guys have to keep in mind that if you walk into most offices in most places of the world, most workers will say "What the hell is a macro? I just go down the list line by line..."

dowager_dan99 · 6h ago
>> But with the current crop of LLMs, people who don't know how to program, but recognize that a program could do this task, finally can now summon that program to do the task. The path still has a tech-ability moat, but I can only imagine the AI titans racing to get programming ability into Supply Chain Technician Kim's hands. Think Steve Jobs designing an IDE for your mother to use

Brooks has yet to be proven wrong; even if this appears to be the silver bullet it could be just as likely that this widens the tech moat when non-programmers paint themselves into corners where they can't do their jobs without all the brittle, impossible to maintain code they've written. Think of the skilled trades vacuum we have in much of the Western world. Can Supply Chain Technician Kim Jr do her job without AI if she's never seen that before?

Workaccount2 · 5h ago
I think the core disconnect is that the current tech paradigm is writing broad scope feature packed programs (and it kind of needs to be that way). One stop shops for every need that the broadest possible customer base could have, while minimizing time that needs to be spent on support.

But I don't see that being the future. Instead I think people will just spin up bespoke ultra narrow scope programs (maybe scripts is more fitting here, but people like GUIs - scripts with GUIs?) that are generally under 3K LOC.

You don't need an AI to one-shot Excel.exe if you just want a simple way to track how many plastic pellets came in today and how many went out. A simple program with a GUI on top of a SQLite database will do that, no problem. And you can ditch that bloated Excel doc you have been using for years.
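
A minimal sketch of what that kind of tracker might look like, assuming a single SQLite table with invented column names; deliveries are logged as positive quantities and shipments as negative ones:

    import sqlite3

    conn = sqlite3.connect("pellets.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS movements (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            moved_at TEXT DEFAULT CURRENT_TIMESTAMP,
            quantity_kg REAL NOT NULL,
            note TEXT
        )
    """)

    def record_movement(quantity_kg, note=""):
        # Log pellets in (positive) or out (negative).
        conn.execute(
            "INSERT INTO movements (quantity_kg, note) VALUES (?, ?)",
            (quantity_kg, note),
        )
        conn.commit()

    def on_hand_kg():
        # Current balance: everything in minus everything out.
        (total,) = conn.execute(
            "SELECT COALESCE(SUM(quantity_kg), 0) FROM movements"
        ).fetchone()
        return total

    record_movement(500.0, "delivery from supplier")
    record_movement(-120.0, "shipped to line 2")
    print(on_hand_kg())  # 380.0

Wrap that in tkinter or a tiny web page and you have the "GUI on top of a SQLite database" described above.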

At my own company we forwent a proprietary CAD package because Claude could decode the files we had, make a GUI for doing the transformation we needed to do, and properly reencode the file.

remich · 6h ago
Re your third paragraph - can they though? I'm not trying to be combative, it's just that what you wrote has been the promise of all no-code/low-code technologies for literal decades.

The exclamation "finally, all of the back-office people can write their own software!" is just the other side of the "finally, we can get rid of all the software engineers!" coin.

But, so far, every single other time this has been tried it runs into the ease-of-use / customizability problem. The easier a tool is to use/learn, the harder it is (or impossible) to use for specific use-cases. And, vice versa, the more flexible / customizable a tool is, the harder it is to use/learn (looking at you Jira).

Maybe this time is actually different, but I'll believe it when I see it.

ctkhn · 6h ago
Yep - the easier it is to go way down a hole the harder it is to dig yourself out.
lsy · 8h ago
The point is well-made that truly revolutionary technologies don't need top-down mandates to drive adoption, and it's certainly a bad sign for the AI industry that we are in this phase of the hype cycle. But it also seems likely that top-down mandates will create perverse incentives for product development around these technologies, preventing genuinely valuable use cases from emerging as people determine what does and doesn't work.

If everyone, to satisfy their CEO's emotional attachment to AI, is forced to type into a chat box to get dreck out and then massage it into something usable for their work, we'll see that ineffective mode persist longer, and probably miss out on better modes of interaction and more well-targeted use cases.

ryandrake · 7h ago
I wonder about the opportunity cost of going all-in on AI. What great products are not being developed, simply because every CEO is currently under this AI spell and is insisting that everything everyone does must be about AI? AI is sucking all the oxygen out of the room and probably causing actually useful projects to be canceled or de-prioritized.
mulmen · 6h ago
It’s not every CEO. If you have a better idea do that and outcompete the companies that are making what you think is a bad bet.
resolutefunctor · 7h ago
> stating plainly that he doesn't see AI replacing his employees. (Though that does immediately raise the "who brought that up?" question...)

Almost everyone who isn't highly informed in this field is worried about this. This is a completely reasonable thing to include in a memo about "forced" adoption of AI. Because excluding it induces panic in the workforce.

It is funny that this post calls out groupthink, while failing to acknowledge that they're falling into the groupthink of "CEO dumb" and "AI bad"

Forced AI adoption is nothing more than a strategy, a gamble, etc from company leadership. It may work out great, it may not, and anyone stating with conviction one way or another is lying to themselves and everyone they're shouting to. It is no different than companies going "internet-first" years ago. Doesn't have to mean that the people making the decision are "performing" for each other or that they are fascists, my god.

Imo it's a great way of allowing high performers to create even more impact. A great developer typing syntax isn't valuable; their ability to engineer solutions to challenges and problems is. Scaling that out to an entire company that believes in its people is no different: less time spent on the time-consuming functions of a job that are low-value in isolation, and more time spent on the high-value functions.

The Twitter/Reddit-style "snark-for-clicks" approach is disappointing to see so high on a site like this that is largely comprised of intelligent and thoughtful people.

dorian-graph · 6h ago
> .. and "AI bad"

He's not saying that though, is he?

He's quite literally said that people have found AI useful, and that's great! For example:

> We don't actually have to follow along with the narratives that tech tycoons make up for each other. We choose the tools that we use, based on the utility that they have for us. It's strange to have to say it, but... there are people picking up and adopting AI tools on their own, because they find them useful.

And:

> The strangest part is, the AI pushers don't have to lie about what AI can do! If, as they say, AI tools are going to get better quickly, then let them do so and trust that smart people will pick them up and use them. If you think your workers and colleagues are too stupid to recognize good tools that will help them do their jobs better, then ..

Anyway, how many layers of accused irony and snark can we go down? Am I the next?

resolutefunctor · 6h ago
I was keying in on the line:

> This is an important illustration: AI is really good for helping you if you're bad at something, or at least below average. But it's probably not the right tool if you're great at something.

Considering that the author's complaint is about having professionals (who would in theory be good at their jobs, because they are professionals) use AI, that puts it in "not the right tool" territory.

But I probably did stretch a bit there, and appreciate you calling it out.

anildash · 4h ago
Yeah, and to be clear, the context that I assumed most readers would have (on my site, where I think they'd know a bit about me, as opposed to here on HN) is that I'm a former CEO. So my disdain for CEO stupidity is higher, and my tendency to be critical is much stronger, because of that perspective.
jmtulloss · 2h ago
> AI is really good for helping you if you're bad at something, or at least below average. But it's probably not the right tool if you're great at something.

This shows the author’s lack of experience in working with AI on something they’re great at.

AI is great for experts (all the productivity gains, no tolerance for the bullshit)

AI is great for newbies (you can do the thing!!)

A more interesting take would be on the struggle to go from newbie to expert in a field dominated by AI. We’re too early to know how to do this.

Of course AI-first is the future. We’re just still learning how to do it right.

dullcrisp · 2h ago
I think if you’re such an expert at something that you need AI to do it for you then that’s a good opportunity to hand it off to someone more junior.
jmtulloss · 1h ago
It’s probably not productive for a human to do work that AI can do well. Of course the trick is figuring out how to identify and verify that work. Experts can do this quickly.
Lammy · 6h ago
> This is unusual — did your boss ever have to send you a memo demanding that you use a smartphone?

They did for Android testing actually. The biggest status symbol within the company was based around who got the latest iPhone model first, who was important enough to get a prioritized and yearly upgrade, and who was stuck with their older models for another year. This was back in the iPhone 3GS/4/4S/5 era. I took advantage of this by getting them to special-order me expensive niche Androids, because it was the only way they could get any employee to use one lol

Illniyar · 7h ago
"An average LLM won't even know that Drake's favorite MIME type is application/pdf"

Is this seeding for future AI models? If I ask ChatGPT a year from now what Drake's favorite MIME type is, would it confidently say "application/pdf"?

minimaxir · 6h ago
I tried that prompt against Google Gemini 2.5 Pro, which has a cutoff date of January 2025 (enough to have knowledge of the Kendrick feud / the pdf jokes). It couldn't do it, but it was self-aware enough to know that the question was absurd and had an unexpected answer.

> Joke/Wordplay: Is there a pun or play on words involving "Drake" and a MIME type?

> Trick question/Testing the AI: The user might be testing if the AI will invent an answer, hallucinate, or recognize the absurdity.

85392_school · 2h ago
It's just a joke about AI not getting a joke (that's really just self-censoring).
eat · 6h ago
You can see this on perplexity now ;)
hosh · 6h ago
The company I work for is getting bought out by another company. That acquiring company built their whole business and product on "people-first". They are making a lot of profit in an industry (real estate) that has been contracting. I don't know what their AI plans are, only that they have them, and I would not be surprised if they are building "people-first" AI into their products.

I guess people, not things, create value.

hyfgfh · 7h ago
> The return to office fad was a big part of this effort, often largely motivated by reacting to the show of worker power in the racial justice activism efforts of 2020.

Not against this point, but I don't get it (maybe because I don't live in the US): I see it as another way to "soft-fire" people, just like this AI craze. What am I missing?

mjheart · 7h ago
As someone who works at a larger corporation that takes the quoted "normal policy" approach, I can attest that it's extremely refreshing.

Incidentally, some people on my team have used Copilot for task management, but nobody has found it useful for coding / debugging / testing.

walleeee · 6h ago
What is "task management" in this context and why did your coworkers find llms useful for it?
alabastervlog · 6h ago
Heh, I can definitely imagine hooking up an LLM to our task board's API and having it suggest actions (say, marking something as ready for QA) based on my Git log, then execute those if approved. Probably save me a shitload of time. Hell, if I just let it run loose and do whatever it decided to do, daily, with no intervention, it'd likely do a better job than the median developer (and better than I do). Maybe suggest next-tasks for me. I'd worry about it damaging something, but I worry about that a ton every time I have to open up any of those confusing-UI shared-workspace sorts of things anyway.

That this would be a significant time savings mostly has to do with most task tracking systems being so very miserable and slow to work in for the majority of the people expected to use them, though. If we used something lighter and closer to where the work is happening (the code) it wouldn't really be that helpful.
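
A minimal sketch of what that could look like, purely to illustrate the shape of the idea (the LLM call and the tracker call below are hypothetical placeholders, not any real integration), with a human approving each suggested change before it's applied:

```python
# Rough sketch of the workflow described above (nothing actually built here).
# `call_llm` and `move_ticket` are hypothetical stand-ins for a real model
# call and a real task-tracker API; the approval prompt is the important part.
import subprocess

def recent_commits(n: int = 20) -> str:
    """Return the last n commit hashes and subjects as plain text."""
    return subprocess.run(
        ["git", "log", f"-{n}", "--pretty=format:%h %s"],
        capture_output=True, text=True, check=True,
    ).stdout

def call_llm(prompt: str) -> list[dict]:
    """Placeholder: ask a model for suggestions and parse its JSON reply into
    a list like [{"ticket": "PROJ-123", "action": "ready-for-qa"}]."""
    raise NotImplementedError

def move_ticket(ticket: str, action: str) -> None:
    """Placeholder: call the task tracker's REST API to apply the change."""
    raise NotImplementedError

def main() -> None:
    prompt = (
        "Given these recent commits, suggest ticket status changes as JSON:\n"
        + recent_commits()
    )
    for s in call_llm(prompt):
        answer = input(f"Apply '{s['action']}' to {s['ticket']}? [y/N] ")
        if answer.strip().lower() == "y":
            move_ticket(s["ticket"], s["action"])

if __name__ == "__main__":
    main()
```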

ctkhn · 6h ago
Is marking a Jira ticket ready for testing that complicated? It doesn't seem like something I spend so much unproductive time on that I'd want to outsource it to AI.
alabastervlog · 4h ago
Various administrative junk probably takes up about 20% of my actual wall-time as far as the work itself, but contributes enormously more than that to my mental fatigue, task-switching time loss, and general stress level. It subjectively feels like 50% of my job. Interacting with Jira or Azure Devops or Asana or what have you is far from everything, but is a decent chunk of that.

This does tend to be a much bigger problem at bigcos than smaller shops, though.

zkmon · 6h ago
Not surprising. We have seen online-first, mobile-first, cloud-first and now AI-first. Oh, and I remember someone talking about customer-first as well.
stpedgwdgfhgdd · 4h ago
“This is unusual — did your boss ever have to send you a memo demanding that you use a smartphone?”

AI holds the promise of multiplying workers' efficiency x-fold. That promise didn't exist with smartphones, Slack, etc.

And AI will change everyone’s work in years to come, especially for developers.

postalrat · 1h ago
AI-first is the evolution of WFH.
carterschonwald · 4h ago
I like the zingers: This would not get you invited to the fascist VC group chat, tho!
guywithahat · 5h ago
I feel like they're pretty conjoined. If you're working from home, why can't an AI just replace you? AI is great at performing narrow jobs; if you're part of the company, you're needed to build relationships and help grow the company in creative, meaningful ways that can't be done remotely. The author seems to have a pretty pessimistic view of everything.
remich · 5h ago
Pretty wild to claim that remote employees can't build relationships or help grow companies in creative or meaningful ways.
quantadev · 4h ago
AI use is definitely a double-edged sword. It can help us seasoned developers be much more productive, because we've seen all the anti-patterns, tarpits, and most classes of bad ideas and bad code you can imagine; so when we get AI to generate code, we older developers can look at it and, most of the time, know in an instant whether it's good or bad.

However, for the more junior devs (i.e. under 10 to 15 yrs experience), their judgement about generated code is often simply "does it appear to work or not?" That's a very big, very dangerous problem: AI may let them crank out tons of work, but lower-quality code creeps in and much of it is super buggy. Most everyone would agree we'd rather have simpler, less feature-rich products that are solid and reliable than products loaded with both features and bugs.

So to all you seasoned developers out there who have trouble getting hired because you're over 40: your value as an employee has just quadrupled compared to the less experienced. The big question, of course, is how long it will take the 20-ish to 30-ish hiring managers to realize that and start valuing experience and wisdom over youthfulness and good looks.

booleandilemma · 8h ago
I thought AI-first was the new blockchain?
cmrdporcupine · 7h ago
"did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack?"

In fact I remember very distinctly the Google TGIF All-Hands where Larry and Sergey stood up and told SWEs they should be trying to do development on tablets, because, y'know, mobile was ascendant, they were afraid of being left behind in mobile, and wanted to develop for "mobile first" (which ended up being on the whole "mobile only" but I'll put that aside for now).

It frankly had the same aura of... not getting it. A lack of vision pretending to be visionary.

In the end, the job of upper management is not to dictate tools to engineers to drive them to efficiency; we already have that motivation ourselves. If engineers are skeptical of "AI", it's mostly because we've already engaged with it and understand many of its limitations, not because we're being "luddites".

One sign of a healthy internal engineering culture is when the engineers actually doing the work pick their own tools for it together, rather than having them foisted on them.

When management sends memos out demanding people use AI, what they're actually reflecting is their own fear of being left behind in the buzzword cycle. Few of us doing the work have that fear. I've seen more projects damaged by excessive novelty and forced "innovation" than the other way around.

Devasta · 6h ago
With AI, all we will need is staff to write down a few paragraphs of their needs and constraints and the computer will be able to deliver the desired outcomes.

Also, despite the fact that we were all working remotely for years, we need you all to come into the office because water cooler chats are far better than writing down a few paragraphs outlining what you need and the constraints.

akomtu · 4h ago
It's the old trick where a company hires cheap labor offshore and makes employees train their replacements. Except that today the cheap labor is AI that lives in a remote land called data centers, and unlike employees, AI works for food and has no ethics.
renewiltord · 7h ago
Making sure everyone is using LLMs is smart. It’s a transformative technology and some people are just slow to adopt. Org wide process improvements often need mandates or they won’t happen.

No different from mandating version control, etc. There were, and are, engineers who would rather just rsync without the bookkeeping paperwork of `git commit`, but you mandate it nonetheless.

ChrisArchitect · 7h ago
Aside: where do the extra slashes in the submitted anildash URLs come from? It's a frequent recurrence. An RSS feed somewhere? Some scheme for him to track outside shares? heh

https://www.anildash.com//2025/04/19/ai-first-is-the-new-ret...

ctkhn · 7h ago
Was thinking the same thing. When I go through google to his blog posts I don't see the double slash and it isn't there when I go to his blog homepage and click through to the specific post.
anildash · 4h ago
I might have screwed something up in my RSS (well, technically Atom) feed. I'll take a look later.
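
For what it's worth, one common (and here purely hypothetical) way this happens is a feed template that joins a base URL ending in a slash with a permalink path that also starts with one:

```python
# Hypothetical illustration only; not a claim about how the actual Atom feed
# on anildash.com is generated.
base = "https://example.com/"      # base URL already ends with "/"
path = "/2025/04/19/some-post/"    # permalink path also starts with "/"

print(base + path)                 # https://example.com//2025/04/19/some-post/  <- double slash
print(base.rstrip("/") + path)     # https://example.com/2025/04/19/some-post/   <- normalized
```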
iLoveOncall · 7h ago
AI-first is the new blockchain, trying to fit a solution to a non-existent problem, that's it.
dowager_dan99 · 6h ago
Disagree: it shows some (maybe significant) value, which makes it far more dangerous in this AI-everything echo chamber.
iLoveOncall · 6h ago
I didn't say AI, or even more specifically GenAI is the same as blockchain. I said " AI-first" is.
teddy-smith · 8h ago
The article presents some good points, but to me, not being a jerk who calls Will.I.am "nobody's favorite rapper" seems like a selling point for A.I.
anildash · 7h ago
I will admit it was not my kindest joke, but on the other hand, I genuinely did not expect him to read a treatise on IoT infrastructure.
ctkhn · 7h ago
I think you have some leeway joking on a guy who wrote a song called "Let's get retarded"
anildash · 4h ago
Was genuinely my feeling at the time, too.
dkdbejwi383 · 7h ago
A selling point for humans is that some of them are actually funny, unlike LLMs, which just regurgitate syntactically competent but humorously incompetent waffle.
glitchc · 7h ago
I dunno. The latter part sounds like a number of humans I've interacted with. They're all nice people though.
flappyeagle · 5h ago
Anil is a smart guy so if this is his best set of arguments then I’m afraid Tobi is right.

Every advancement in tech I’ve used in my lifetime was at first deployed top-down

Smartphones (BlackBerrys), personal computers, version control (CVS), PowerPoint.

The personal adoption FOLLOWED