Are people's bosses making them use AI tools?

107 points by soraminazuki | 84 comments | 8/31/2025, 2:47:47 AM | piccalil.li ↗

Comments (84)

AlexandrB · 14h ago
LLMs are the first technology I've experienced where there's a lot of top-down pressure to adopt it ASAP. Most other technologies in my career, like VCS or static analysis or whatever else, were championed by colleagues or peers.
IMSAI8080 · 13h ago
It's all about the story that's sold to the higher ups. The higher you go up the corporate ladder, the vaguer the understanding of the technology. The big boss hears from a Microsoft salesman that AI = you can fire 20% of your workforce, but never questions exactly how that works. They probably never got sold static analysis in that way. That was just some kind of tool that somehow helps with that mumbo jumbo that developers spend all day typing. There's no story there that inspires a manager. AI = cut costs is music to the ears of the board. So then pressure gets applied to those lower down.

Something similar was going on with cloud a few years ago. The story was that if you adopted cloud you could get rid of those expensive infrastructure people and it would all be so much more reliable. So the big boss gets a cloud strategy and foists it on those lower down. There's also pressure to be an on-trend boss: if all the other bosses are getting into it, then you need to as well.

UncleMeat · 7h ago
I think it is worth really deeply understanding that the bosses hate us. Capital has only begrudgingly involved labor when forced to. It is no surprise to me that genai hype happened after the largest increase in general labor power in recent memory (post 2020 labor market) and a decade long increase in the labor power among software engineers.

The bosses have seen pay and benefits go up and up and up. They've seen people jump between companies, taking institutional knowledge with them. They need the job market to crater so they can re-exert control in the relationship. LLMs are fucking catnip to this belief system. "You mean I don't need to deal with those people that I have to hire and train and pay? I hate those guys! Awesome!"

ares623 · 14h ago
It’s like forcing your entire organisation to use Emacs because you switched to using Org-mode, literate programming, and Lisp and found your productivity shot up for specific use cases.

But oh yeah, it makes “the line” go up and to the right.

echelon · 13h ago
Business leaders were bamboozled into thinking they were going to get innovators dilemma'd if they didn't adopt AI. It was either that, or being told that they could go lean and fire their workforce.

That's all probably going to happen at the next phase of AI, but it's just not where the current slate of LLMs is. We've hit a wall in terms of utility. LLMs kind of suck for everything but search and code tab auto-suggest.

Generative images and video, on the other hand, are 100% going to disrupt incumbents. Everyone from Adobe to Disney and Netflix is scrambling to figure out how this tech changes the economics of production. I'm talking to production studios that are already underbidding each other by an order of magnitude. It's a bloodbath.

wiml · 12h ago
I've seen it before. Adopting Windows Server or IIS. Choosing Oracle for your RDBMS. That kind of thing. It has all the hallmarks of a decision based on a salesguy's pitch with no technical evaluation.
danaris · 8h ago
I agree that it looks like a mandate due to a sales pitch, but there's a difference between the higher-ups mandating adoption of a particular brand of a more generic technology—eg, choosing IIS over Apache, or Oracle over MySQL—and mandating the technology itself, when it's not something that's already required for the project (as a webserver is for a website).

While this isn't a completely unprecedented situation, it's definitely much less common than just having to use the Big Corporate version of a tech stack rather than an open-source version of the same, just because it came bundled with the package your CEO got sold.

muldvarp · 8h ago
Not that surprising. There's nothing in it for me:

If I get more productive, I don't get paid more. If I get less productive, I get told I'm not using it correctly. Worst case: I help train the LLM to eventually take my job.

Towaway69 · 11h ago
Nobody gets fired for buying OpenAI, AWS, IBM or Microsoft.

It's probably because there are few or no known long-term consequences of using AI. In management circles AI is the magic solution to all problems.

Hence it's safe to push onto users.

j4coh · 13h ago
They are imagining being able to fire all the people who are setting it up. It remains to be seen if this will actually happen.
soraminazuki · 4h ago
Oh, it'll happen. Whether the replacement will be any good is another matter entirely though. The likely outcome is that it'll resemble Google's "support." Mostly automated and good for nothing besides enraging already frustrated users. Wealthy investors and executives running the show just won't care.

Unless society puts preventative measures in place, people will lose jobs and consumers will get screwed. The increased exploitation will exacerbate the wealth gap, further intensify societal conflict, and lead to chaos. But don't worry, LLMs will flood the internet with narratives praising the state of affairs so that people can feel somewhat better about all of this.

ludicrousdispla · 12h ago
Cloud computing was similar in its top down push, at least in large organizations.
airstrike · 6h ago
I think the difference there is it was mostly contained in IT whereas AI made its way to front office, client facing parts of the organization very quickly
staplers · 13h ago

> there's a lot of top-down pressure to adopt it ASAP

Because you're training it to replace you. MBAs have found a tool that finally allows them to cut out pesky intellectuals and creatives, and they're champing at the bit to make that a reality. Look around: dark enlightenment is being embraced/tolerated at the top.
EFreethought · 13h ago
What a lot of these More Bad Advice pinheads are too stupid to understand is: If you do not need those pesky tech people to make an app for your clients, then your clients do not need you either.
realusername · 13h ago
People are generally surprised when I tell them that, the way it's going, we'd replace the CEO before we replace the developers.

And if you manage somehow to replace all your devs with AI, be worried about your business because the bar to compete has been lowered.

realusername · 13h ago
That's what they think they're doing, yes, but there's a big difference between what the marketing department tells us and the reality of those tools.
bowsamic · 13h ago
The pressure is from the investors. Then the upper management are basically obliged to fulfill that general desire
VariousPrograms · 13h ago
Among many small examples at my job, an incident report summary used to be hand written with a current status and pending actions. Then it was heavily encouraged to start with LLM output and edit by hand. Now it’s automatically generated by an LLM. No one bothers to read the summary anymore because they’re verbose, unfocused, and can be inaccurate. But we’re all hitting our AI metrics now.
mcv · 12h ago
The idea that there are even AI metrics to hit...

AI should not be a goal in itself, unless you make and sell AI. But for anyone else, you need to stick to your original quality and productivity metrics. If AI can help you improve those, that's great. But don't make AI use itself a goal.

I've got a coworker who complains she's getting pressured by management to use AI to write the documents she writes. She already uses AI to review them, and that works great, according to her. But they want her to use AI to write the whole thing, and she refuses, because the writing process is also how she organizes her own thinking around the content she's writing. If she does that, she's not building her own mental model of the processes she's describing, and soon she'd have no idea of what's going on anymore.

People ignore the importance of such mental models a lot. I recall a story of air traffic control that was automated, leading air traffic controllers to lose track in their heads of which plane was where. So they changed the system so they still had to manually move planes from one zone to another in an otherwise automated system, just to keep their mental models intact.

freehorse · 12h ago
Stories like this don’t surprise me. Ime a lot of managers don’t have a good understanding of what their employees actually do. Which is not that terrible in itself unless they try also to micromanage how they should do their work etc.
Towaway69 · 12h ago
Really well said - it has put something I've been sensing/feeling into words.

It's also how I utilise AIs: summarising or rewriting text to make it sound better, but never to create code or understand code. Nothing that requires deep understanding of the problem space.

It's the mental models in my head that don't gel with AI that prevent AI adoption for me.

BrenBarn · 12h ago
> AI should not be a goal in itself

This is true of all technology, and it's weird to me to see all this happening with AI because it just makes me wonder what other nonsense bosses were insisting people use for no reason other than cargo culting. It just seems so wild to imagine someone saying "other people are using this so we should use it too" without that recommendation actually being based in any substantive way on the tool's functionality.

clickety_clack · 12h ago
I think a general, informal rule of thumb should be that you put in as much effort to write a thing as you expect from someone to read the thing. If you think I’m going to spend an hour figuring out what happened to you, you’d better have spent at least an hour actually trying to figure it out yourself.
ludicrousdispla · 12h ago
Can't you just have the AI generate its own AI metrics?
incompatible · 13h ago
If a report can be generated by an LLM and nobody cares about inaccuracies, why was it ever produced in the first place?
VariousPrograms · 12h ago
People read the summary to see the actual action items instead of reading the whole case. Now the action plan has constant random bullet points like “John Smith will add Mohammad to the email thread. Target date: Tuesday, July 20 2025. This will ensure all critical resources are engaged on the outage and reduce business impact.” or whatever because it’s literally summarizing every email rather than understanding the core of the work that needs doing.
zdragnar · 12h ago
It's not that they weren't useful, it's that someone higher up has to justify the expensive enterprise contract that they've foisted upon everyone else with the vague promise of saving money by using it.

The consumers of the incident report aren't the ones who had any say in using LLMs so they're stuck with less certainty.

ironmagma · 12h ago
Perverse incentives.
dkiebd · 7h ago
It will be funny when one of those reports says that certain steps will be taken in the future to make sure the same incident doesn't occur again, nobody reads the report so nobody notices, and then when the same incident occurs again one of the clients sues.
aniforprez · 12h ago
Cargo culting
topkai22 · 14h ago
The answer, well documented in the article, is yes.

While the article presents cases that appear to be problematic in the particulars, I think concluding that bosses/managers shouldn't be pushing or mandating the use of AI tools in general is incorrect.

It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are. Great historical analogies are the adoption of PCs in the 80s and the adoption of the internet/web in the 90s. Not everything we tried back then was an improvement on existing technologies or processes, but in general, if you weren't experimenting across a broad swath of your business, you were going to get left behind.

It's easy to defend the utility of these tools so long as you caveat them. For example, I've had a lot of success in AI driven code generation for utility scripts, but it is less useful for full fledged feature development in our main code base. AI driven code summarization and its ability to do coding standards enforcement on PRs is a huge help.

Finally, I find the worries in the article about using these tools on sensitive data or scenarios such as ideation to be rather overdrawn. They are just SaaS services. You shouldn't use the free version of most tools for business purposes due to often-problematic licensing, but purchasing and legal should be able to help find an appropriate service. After all, if you are using Google Docs or Microsoft 365 to create and store your documents, why would you treat Gemini or Copilot (or their other LLM options) as presenting higher legal peril, at least with some due diligence that they don't retain or train on your input?

mgh95 · 14h ago
> It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are. Great historical analogies are the adoption of PCs in the 80s and the adoption of the internet/web in the 90s. Not everything we tried back then was an improvement on existing technologies or processes, but in general, if you weren't experimenting across a broad swath of your business, you were going to get left behind.

There is a difference between experimentation and mandated usage, however. In the former, you typically see "shadow IT" attempt to access useful tools outside the bounds of what is considered acceptable, as compared to mandated usage. This indicates a greater willingness to adopt.

There is also a difference between a technology replicating an existing functionality in a new medium (email vs usps) and introduction of a new technology. In the former, there is clear market demand, and only a matter of redirecting existing demand to new tools. In the latter, it is unclear if the technology will be useful.

I don't think that just because LLMs are a new technology which use computing makes them the internet and I don't think it's accurate to analyze them through the lens you propose.

beezlewax · 13h ago
> but it is unlikely all of them are

How so? I have access to a huge number of these tools and they're all pretty similar.

Azrael3000 · 13h ago
That's also what he writes in the article, i.e. LLMs are large language models, so the approach is generally flawed. A sentiment I agree with.
soraminazuki · 12h ago
The definition of insanity is doing the same thing over and over again and expecting a different result. The current hype is now officially insane.
bigstrat2003 · 11h ago
> I think coming to the conclusion that bosses/managers shouldn't be pushing or mandating the use of AI tools in general is incorrect. It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are.

If the tool is good, then management won't need to mandate it. People will be tripping over themselves to get access to the tool that helps them to do their job better. So perhaps you're right that some of the tools will be good (though I personally haven't yet had that experience), but I think that it is incorrect for managers to push for (let alone mandate) tool usage. Measure the result, not the path an employee takes to get there. If Bob uses AI tools to great effect, but Alice is doing just as well as him without using said tools, it's a mistake to force her to change her workflow thinking that the tools will be just as good for her as for Bob.

pmg101 · 11h ago
Somewhat true, but let's also recognise that all of us have a certain level of friction. Yes, Alice may be effective using tool A, due to her knowledge and experience, but not have the higher context to realise that she's at a local maximum and could, after a period of confusion and relearning, become even MORE effective using tool B.

However this is a subtle and nuanced situation requiring careful people management and helping to nudge or lead people, letting them take risks, letting them fail, giving them psychological safety, and praising their attempts. Blanket mandates are just a very tone deaf and stupid way to try to achieve this.

makeitdouble · 12h ago
> A great historical analogies are the adoption of PCs in the 80s

Another historical analogy is Scientific Management, pushed top down and widely adopted by the industry. It has many flavors and all of them were wrong.

We have samples in basically any direction one would like to argue for. Historical precedent isn't a good argument IMHO.

EagnaIonat · 12h ago
> It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are.

All of them can absolutely be wrong at the same time, but the tool isn't the main issue IMHO. It's the user.

For simple generic stuff it's not an issue, but where you need an expert, it has to be an expert in that field who uses the AI, so you know what is wrong.

A good recent example is the OpenAI Academy. Clearly the site content is generated by ChatGPT, and completely misses the point of the areas it claims to be training you in.

bitwize · 11h ago
Do you also believe that open-plan offices make people more productive by "fostering collaboration"?
marc_abonce · 14h ago
The most important advice for people in this situation, from the article:

> I’d say my overarching advice, based on how difficult tech recruitment is right now, is to sadly play along. But — and I cannot stress this enough — make sure you document everything.

> What I mean by that is every single time AI tools cause problems, slow-downs and other disappointing outcomes, document that outcome and who was responsible for that decision. Make sure you document your opposition and professional advice too.

Personally, I would just add a warning to be careful to blame the tool, not the person. Otherwise, you will be seen as the "bad" person in the story even if your report is technically correct.

mystifyingpoi · 13h ago
It's easy to blame a person in a roundabout way. I have to do this all the time. Instead of saying "The suggestion from Bob was wrong and now we are in trouble", go with "We complied with the suggestion from the email thread at this date and time, and now it seems we are in trouble". Whoever doesn't care, Bob is covered. Whoever does care will find the info they need, but that's on them now.
mkagenius · 14h ago
The question shouldn't be whether we should use it or not, but how we can use it so that we are more productive than before.

In the future, there will be some sort of "method", or should I say "art", to how we use AI tools.

It's very easy (and frustrating) to just say "try again, <error log>" without fully knowing what's going on. The transformation from a coder to a code manager will need some sort of learning to be good at it.

meristohm · 13h ago
Whatever I do for money isn't a huge part of my identity, so telling a boss (if/when I find myself in that situation) to stuff it with the AI nonsense isn't going to be difficult. Decoupling one's self-worth from the job makes it much easier to roll with being fired.

"Playing along" is a great way to be part of someone else's potentially-harmful project. Consider your values, and don't cross those lines. If the boss is upset about it, they have options. I don't do their work for them.

Collective action with your fellow workers against enshittification is a humanist way forward.

balder1991 · 14h ago
My company literally built its own ChatGPT Enterprise wrapper and forced us all to make X prompts in it per week. If we don’t meet that quota, our immediate leader will “strongly suggest” we do it or we might get a bad performance review eventually. It’s also tied to our yearly bonus now.
randycupertino · 14h ago
We have had a huge push for mandatory ChatGPT use as well at my 800 person company - 4 mandatory IT trainings about it, told how much we use it will be factored into our performance reviews, hyped up by management at all-hands, told if we don't use it we will be outperformed by those who do.

I am curious if they read what we're prompting into the system and considering our use case as well.

consp · 13h ago
This sounds like an interesting case for letting the LLM chat with itself by opening two instances and doing malicious compliance.
dylanowen · 13h ago
Sounds like a great way to script yourself a bonus
krackers · 11h ago
And you could even ask chatGPT to generate a set of prompts and the API calls for you!
bitwize · 11h ago
"I'm gonna proompt myself a minivan"
01HNNWZ0MV43FF · 14h ago
Just a few years ago I got in trouble for wasting the company bot's time trying to make it play Zork with me over lunch break
ulfw · 7h ago
It's okay. You're going to be laid off thanks to this newfound 'efficiency' soon anyway.

That's the whole point

skhameneh · 13h ago
> This is the thing about AI tools. They are by design going to honour your prompt, which often results in your AI tool agreeing with you, even if you’re wrong.

LLMs augment the input with their trained data. LLMs don't inherently agree if you set up context correctly for analysis.

I've arrived at the conclusion that the top-down push without adequate upskilling creates bad experiences and subpar results. It's like adopting a new methodology for something without actually training anyone on the new methodology, it leaves everyone scrambling trying to figure it out often with poor results.

I find LLMs to be a great multiplier. But that multiplier will take whatever you put in context. If one puts in bias and/or fragmented mess, it's far more difficult to steer the context to correct it than it was to add it to begin with.
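For what it's worth, "setting up context for analysis" can be as simple as a system prompt that asks for evaluation instead of elaboration. Here's a minimal sketch in Python; the prompt wording and the `build_analysis_messages` helper are my own illustration, not from any particular vendor:

```python
def build_analysis_messages(claim: str) -> list[dict]:
    """Build a chat payload that asks the model to evaluate a claim
    rather than elaborate on it as if it were already true."""
    system = (
        "You are a critical reviewer. Do not assume the user's claim is "
        "correct. List supporting evidence, counter-evidence, and an "
        "overall verdict, in that order."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Evaluate this claim: {claim}"},
    ]

# The payload can then be sent with any chat-completion client, e.g.:
# client.chat.completions.create(model="...", messages=build_analysis_messages(claim))
```

The point is that the frame ("evaluate this claim") is set before the model sees the claim, instead of the claim itself being the frame.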

radarsat1 · 10h ago
> It's like adopting a new methodology for something without actually training anyone on the new methodology, it leaves everyone scrambling trying to figure it out often with poor results.

Agree strongly. I've had to push back when the CEO didn't see the results he wanted, had to basically remind him, look, this technology is like.. 6 months old (at the time, and with respect to actually getting good results, Claude Code etc).. you can't expect everyone on the team to just "know" how to use it.. we're literally _all_ learning this very new thing right now, not just us, but everybody in the industry. It's a little crazy to expect immediate uptake and an overnight revolutionary productivity boost with this thing we barely know how to use properly, there's going to be a learning phase here whether you like it or not

BrenBarn · 12h ago
> The CTO at my previous job tried Claude Code and really liked it so he said that all the devs had to use Claude Code in our work

Imagine this sentence with "Claude Code" replaced by anything else and "CTO" and "devs" replaced by more generic terms like "boss" and "employee". It's just "The boss tried Tool X and liked it so he said all the employees have to use it". It just seems to me that that is a bad way to make decisions regardless of what tool we're talking about or even what industry we're in. It's certainly possible it could make sense with a few more steps in there ("the boss tried this and liked it because X so he said we have to work on using it in way Y to accomplish Z"). But the way this is described sounds like a fire-and-forget mentality where the boss tells people to do a thing a certain way and that's the extent of his involvement, which seems stupid.

AbbeFaria · 14h ago
I work at MSFT. There’s top down pressure to use LLMs everywhere. At this point, if you can convince your management about using LLMs anywhere, they would happily head nod and let you go do that. And management themselves are not that technical wrt LLMs, they are being fed the same AI hype slop that we are fed.

Most of these efforts have questionable returns. Most projects usually involve increasing test coverage or categorising customer incidents for better triage; apart from these low-hanging fruits, not much comes out of it.

People still play the visibility game though. Hey, look at what we did using LLMs. That’s so cool, now where’s my promotion? Business outcomes wise, there’s some low hanging fruits that have been plucked but otherwise it doesn’t live up to the hype.

Personally for me, it is helpful in a few scenarios,

1. Much better search interface than traditional search engines. If I want to ramp up on some new technology or product, it gives me a good broad overview and references to dive deep. No more 10 blue links.

2. Better autocomplete than before but it’s still not as groundbreaking as AI hype hucksters make it out to be

3. If I want to learn some concepts (say, how the ext4 FS works), it can give a good breakdown of the high-level concepts, and then I need to go study and come back with more Q's. This is the only genuine use case that I really like: where I can iteratively ask Q's to clarify and cement my understanding of a concept. I have used Claude Code and ChatGPT for this and I can barely see any difference between the two.

This is my balanced take.

bonzini · 13h ago
I have a similar mandate and a similar take, but slightly different use cases.

As to the search engine, my searches are often very narrow, like I want to recall a specific message from a mailing list, so I don't use that too much. On the other hand, I found Google's NotebookLM to be really good at recalling concepts from both source code and manuals (e.g. processor manuals in my case).

Code generators are incredible refactoring machines. In one case (not so easy to reproduce in general, but it did work) Claude Code did a Python to decently idiomatic Rust conversion in a matter of minutes; it added mypy annotations to 2000 lines of Python code (with 90% accuracy) in half an hour and got the entire job done with my assistance in about an hour. For the actual writing and debugging where the logic matters they're still not there even for small code bases (again 2000 lines of code ballpark). They're relatively good at writing and debugging testcases but IMO that's also where there's a risk of copyright taint. Anyhow it's something I would use maybe 2-3 times a month.

In one case I used it for natural language translation, with pretty good results, but I knew both languages because I needed to check the result. Ask it first to develop a glossary and then to translate.

For studying they're interesting too, though for now I have mostly tried that outside work. At work, Google Deep Research worked well compared to the time it takes and it's able to find a variety of sources (including HackerNews comments in one case :)) which is useful for cross-checking.

Neywiny · 7h ago
So what does 90% accuracy mean here? Is this like you ran it through a linter or language server and 90% had errors? Or just through a quick glance you felt it was that accurate?

I've found incorrect type hints to be one of the biggest issues when trying to use Python type-safely. Mostly (entirely?) with packages that get their own hints wrong, meaning methods aren't shown as existing within the class or the returned instance isn't the class it said it would be.
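To illustrate the kind of mismatch I mean (a made-up toy example, not from any real package): a function whose declared return type disagrees with what its body actually returns will satisfy editors and type checkers while misleading you at runtime:

```python
from typing import get_type_hints


class Connection:
    def query(self) -> str:
        return "ok"


def open_connection() -> Connection:
    # The hint promises a Connection, but the body returns a plain dict,
    # so tooling advertises a .query() method that doesn't exist at runtime.
    return {"host": "localhost"}  # type: ignore[return-value]


conn = open_connection()
declared = get_type_hints(open_connection)["return"]
print(declared is Connection)        # True: the hint says Connection
print(isinstance(conn, Connection))  # False: the runtime value disagrees
```

Running mypy against the un-suppressed version would catch this in the package's own code, but a consumer who only sees the published hints has no such warning.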

neumann · 11h ago
My company has no idea what it wants. It just released its new 5 year strategy with big bets on AI to enhance both our offerings to clients and internal efficiencies. Town halls around the globe about AI delivery.

Great. Especially because my team is the AI team in the region. Except half the AI-related sites are blocked by IT, who are gatekeeping access and demand that every "use" of AI be audited, and then you need a 5-step approval process. It's mental. They blocked access to Meta's models because they labeled them unlicensed platforms.

Havoc · 4h ago
Nothing yet here. In a financial corp though not dev.

Some casual attempts to lead from the front with AI summaries of meetings are the extent of it.

Having access without pressure to use it suits me well :)

thom · 12h ago
In their defence, your boss is probably getting pressure from the board level to define their AI strategy. A partner at a VC company sitting on that board is in turn getting pressure to find synergies between their larger portfolio companies and shiny new AI investments.
rednafi · 13h ago
Yes. My workplace soft-enforced it as well. I like using LLMs, but sparingly. I consider myself a better writer and find their tone bland and lifeless. Other than some minor proofreading, I almost never use them to generate text.

For coding, unless I’m writing trivial RPC endpoints, editing docs, or writing tests for an already hardened API, I find agents a complete waste of time. So my usage is mostly limited to chat sessions.

To use up the quota, I apply the provided tokens to a few personal projects here and there, but no one can make me push an actual production CL with them unless I find it useful myself.

m2f2 · 14h ago
I have used my company's LLM thingy. It's able to summarize and document code by leveraging comments and general code behavior, just because the LLM ingested the full Python docs.

As for generating things, well... it just copy-pastes the same snippets you could find on Stack Overflow, including bugs, if the task you throw at it has already been answered.

For complete and complex code... well it spews out the same useless advice you could get from a drunk non expert person while sitting at the bar.

Issue is... LLMs are too big to fail. Everyone just poured billions into this huge statistics bean counter, and... someone has to justify those expenses at board meetings.

Spooky_Fusion1 · 14h ago
If Google/Meta's 'AI' discrediting of 100,000 social media customers (including me) is anything to go by, then yes!
sys_64738 · 5h ago
You have to use AI for a minimum number of uses per week. Everything is metric driven now in companies.
lillesvin · 12h ago
Where I work it's a solution looking for a problem, and we're heavily encouraged to implement it even if there's no real problem for it to solve, because "we're obligated to give it a try".

I'm not sure why we have that obligation.

pjmlp · 14h ago
Yes, this is pretty much the reality around my bubble, with OKRs for how much is being used in practice, can hardly wait for the bubble to burst.

Using AI powered tooling is one thing, better IDE workflows, writing and voice recognition, translations and so forth.

Copying text around into and out of a chat window is worse than just writing COBOL, and at least COBOL is deterministic.

huflungdung · 13h ago
Yes. Our company is becoming “AI first engineering”. This is not a small company
jongjong · 13h ago
I don't care about software craftsmanship anymore in my day job. 2 years ago I would have been deeply upset about this sort of thing but now I'm intentionally apathetic.

Once I changed my goal from maximizing code quality to maximizing billable hours, I feel a lot more optimistic about the future and these AI tools are going to create so many opportunities for me.

I found it hard to compete with junior developers in my last job (before AI was mainstream) in terms of volume of code because some of them would write 1000 lines of low quality code per day... Now I can also do this. It gives me a lot of surplus energy to figure out how to play politics and shift blame... It gives me an actual competitive upper hand over the juniors. I can out-compete them both in meeting/debate and code/feature volume. I talk wisely and code foolishly. Win win. I couldn't do this before because I was writing the code myself and I had essentially lost the ability to write high volumes of dirty code. I had a kind of analysis paralysis due to trying to solve problems in an optimal, minimalist, most reliable way. No longer a problem. Bugs are a problem for someone else.

I have so much more time to think about career strategy now. I managed to avoid being assigned to any difficult projects... I feel bad for the other people who try to go above and beyond and end up wedging themselves into a difficult situation where the software is down all the time and they have to take the blame... The AI never gets the blame.

I hated playing politics before but AI has made playing politics a necessity. It's like the more apathetic you are about your output, the better off you are.

The ironic thing is that I know it's possible to produce high quality code with AI. I've had some really positive experiences with Claude Code on side projects... But that doesn't align with the reality of 99% of software projects. The foundation is not set up right to get these kinds of results. I could set up the foundation correctly but I'd have to be present in the project since the beginning and I'd have to be given a lot of decision power; but I never get such opportunities. Bad foundations beget bad code, especially with AI because the AI never gets the idea to refactor... If your codebase is unmaintainable, it will hallucinate dirty code which doesn't work. It keeps coming up with more and more hacks... Then it delivers hacks on top of its own hacks.

voidfunc · 13h ago
I'm being told by leadership to push it on my reports, because leadership is telling us that rewards in the future will be tied to AI adoption.

It's not forced per se, but it's definitely being heavily encouraged.

mrcsharp · 14h ago
> All participants are completely anonymised for their privacy and protection

Having to do this says a lot about how fragile the AI/LLM hype is.

kotaKat · 10h ago
Yes, and I’m sick of my corporate machine pushing Copilot every day even after I unpin it everywhere and I’m getting really close to just filing harassment complaints with HR to force some bullshit.

I never asked for this assault and ignorance to be shoved upon me but management has been made utterly stupid thanks to the snake tongues of Silicon Valley.

tomjen3 · 12h ago
Makes sense. I have seen far too many coworkers dismiss AI completely without trying it for their job.

At this point, you need to learn what AI can and cannot do, for the same reason you need to keep up with new versions of whatever framework you use. Since AI develops so fast (e.g. many image use cases that AI was terrible at 4 months ago, it now handles well), you need to repeat that exercise frequently.

There are 5 problems with adoption as I see it:

1) Hype. Some people overhype what AI can do, which causes people to dismiss them when they don't immediately work;

2) Plenty of people don't like to change what they do/feel threatened by change. Doubly so when that change is perceived (real or not) to impact their job.

3) AI is weird and so it sometimes fails spectacularly at simple things, while it works very well at more complex things;

4) People use ChatGPT's free tier or other free AIs. These are older/less capable models, which means people end up with the wrong expectations of what current models can and cannot do;

5) Who likes to be told what to do? Especially by a clueless boss.

Were I running a company, I would ensure that my employees had access to a top-of-the-line model and Cursor/Windsurf. I would monitor usage and have a talk with anyone whose usage was drastically lower than their peers'.

However, it would be a talk only, with the aim of figuring out why AI did not work for that employee and what we could do to fix it.

pmg101 · 11h ago
This isn't too bad but still takes a kind of panopticon style of people management for granted.

Instead of letting everyone do their own thing and then "talking to" certain people, why not get people to work together and see how others do or don't get value from LLMs, to build institutional confidence and skills?

bkircher · 14h ago
Forced or not, there’s observable slop in documentation and specs in my company’s Confluence.
ares623 · 14h ago
If someone used AI to generate some doc, I reserve the right to use AI to summarise it.

Any lossy mistakes won’t be my fault right?

iamacyborg · 12h ago
We have a new performance review and new company values at work (~1000 employees) which lean heavily towards us being required to use "AI" (LLMs) at work.

There seems to be some magical thinking at work that simply uttering the words “AI” and “automation” will somehow render us more productive.

polskibus · 12h ago
I find it interesting how quickly this article dropped from the front page. Is there a way for privileged users to actively push an article off it?
iwontberude · 14h ago
If anything my bosses are pointing out how LLMs and AI tools are not worth investing in. They have been fairly opposed to them and only have become more so as time goes on. But we are known to think different.
sethammons · 13h ago
Is this why siri is still terribad?

No comments yet

bitwize · 11h ago
I'm getting emails from the CEO that we need to take advantage of these tools and strong recommendations to try them out and leave feedback.

But my refusal to be AI-assisted at work is viewed as "healthy skepticism" by upper management. And my colleagues who have tried the tools are not particularly impressed.

jokethrowaway · 12h ago
After one of my clients forced all employees and contractors to use AI, my boss, who was previously reasonable, started:

- Regurgitating AI crap in every answer, often just replying with a ChatGPT / Claude screenshot
- Not being able to explain code, but "don't worry, I got Claude to generate some tests and the tests pass"
- Introducing random bots in Slack and GitHub which print tons of noise that humans just skip because it's not accurate enough

The effect on the team of developers with various level of experience started showing up as well:

The application architecture turned into a horrible mess, worse than what junior engineers produce. The application started exhibiting tons of hard-to-debug issues, because the generated code was too low-level and didn't cover corner cases.

Every attempt by the AI-assisted engineers to fix an issue generated one more class wrapping the existing code, with a fix that never worked (e.g. ConnectionManagerWithTimeouts).
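To give the pattern a concrete shape: only ConnectionManagerWithTimeouts is from the actual codebase; the other class names are invented for this sketch. Each "fix" wraps the previous layer instead of repairing the root cause, so the bug stays buried one level deeper each time.

```python
class ConnectionManager:
    """Original flaky class; the root-cause bug lives here."""
    def connect(self):
        pass  # flaky implementation, never actually repaired


class ConnectionManagerWithTimeouts:
    """'Fix' #1: wrap the manager instead of fixing it."""
    def __init__(self, inner, timeout=5.0):
        self.inner = inner
        self.timeout = timeout

    def connect(self):
        return self.inner.connect()  # a timeout bolted on, bug still inside


class ConnectionManagerWithTimeoutsAndRetries:
    """'Fix' #2 (hypothetical name): wrap the wrapper."""
    def __init__(self, inner, retries=3):
        self.inner = inner
        self.retries = retries

    def connect(self):
        return self.inner.connect()  # retries mask the symptom, nothing more
```

The tell is that each layer only delegates; none of them ever touches the code that is actually broken.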

Eventually we basically had to rewrite the application, throwing away most of the code twice: once just to get something working with the existing architecture without crashing every hour, and again to adopt a framework and eliminate the remaining intermittent bugs.

LLMs need to be in incredibly capable hands to be used safely, and engineers will have to fight their instincts and not get swayed by the LLM telling them they're right.