Are people's bosses making them use AI tools?
107 points by soraminazuki | 84 comments | 8/31/2025, 2:47:47 AM | piccalil.li ↗
Something similar was going on with cloud a few years ago. The story was that if you got cloud, you could get rid of those expensive infrastructure people and it would all be so much more reliable. So the big boss gets a cloud strategy and foists it on those lower down. There's also pressure to be an on-trend boss: if all the other bosses are getting into it, then you need to as well.
The bosses have seen pay and benefits go up and up and up. They've seen people jump between companies, taking institutional knowledge with them. They need the job market to crater so they can re-exert control in the relationship. LLMs are fucking catnip to this belief system. "You mean I don't need to deal with those people that I have to hire and train and pay? I hate those guys! Awesome!"
But oh yeah, it makes “the line” go up and to the right.
That's all probably going to happen at the next phase of AI, but it's just not where the current slate of LLMs is. We've hit a wall in terms of utility. LLMs kind of suck for everything but search and code tab auto-suggest.
Generative images and video, on the other hand, are 100% going to disrupt incumbents. Everyone from Adobe to Disney and Netflix is scrambling to figure out how this tech changes the economics of production. I'm talking to production studios that are already underbidding each other by an order of magnitude. It's a bloodbath.
While this isn't a completely unprecedented situation, it's definitely much less common than just having to use the Big Corporate version of a tech stack rather than an open-source version of the same thing, simply because it came bundled with the package your CEO got sold.
If I get more productive, I don't get paid more. If I get less productive, I get told I'm not using it correctly. Worst case: I help train the LLM to eventually take my job.
It's probably because there are few or no known long-term consequences of using AI. In management circles, AI is the magic solution to all problems.
Hence it's safe to push onto users.
Unless society puts preventative measures in place, people will lose jobs and consumers will get screwed. The increased exploitation will exacerbate the wealth gap, further intensify societal conflict, and lead to chaos. But don't worry, LLMs will flood the internet with narratives praising the state of affairs so that people can feel somewhat better about all of this.
And if you manage somehow to replace all your devs with AI, be worried about your business because the bar to compete has been lowered.
AI should not be a goal in itself, unless you make and sell AI. But for anyone else, you need to stick to your original quality and productivity metrics. If AI can help you improve those, that's great. But don't make AI use itself a goal.
I've got a coworker who complains she's getting pressured by management to use AI to write her documents. She already uses AI to review them, and that works great, according to her. But they want her to use AI to write the whole thing, and she refuses, because the writing process is also how she organizes her own thinking around the content. If she delegated that, she wouldn't be building her own mental model of the processes she's describing, and soon she'd have no idea what's going on anymore.
People ignore the importance of such mental models a lot. I recall a story of air traffic control that was automated, leading air traffic controllers to lose track in their heads of which plane was where. So they changed the system so they still had to manually move planes from one zone to another in an otherwise automated system, just to keep their mental models intact.
It's also how I use AI: to summarize or rewrite text to make it sound better, but never to create or understand code. Nothing that requires deep understanding of the problem space.
It's the mental models in my head that don't gel with AI that prevent AI adoption for me.
This is true of all technology, and it's weird to me to see all this happening with AI because it just makes me wonder what other nonsense bosses were insisting people use for no reason other than cargo culting. It just seems so wild to imagine someone saying "other people are using this so we should use it too" without that recommendation actually being based in any substantive way on the tool's functionality.
The consumers of the incident report aren't the ones who had any say in using LLMs so they're stuck with less certainty.
While the article presents cases that appear to be problematic in the particulars, I think coming to the conclusion that bosses/managers shouldn't be pushing or mandating the use of AI tools in general is incorrect.
It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are. A great historical analogy is the adoption of PCs in the 80s and of the internet/web in the 90s. Not everything we tried back then was an improvement on existing technologies or processes, but in general, if you weren't experimenting across a broad swath of your business, you were going to get left behind.
It's easy to defend the utility of these tools so long as you caveat them. For example, I've had a lot of success in AI driven code generation for utility scripts, but it is less useful for full fledged feature development in our main code base. AI driven code summarization and its ability to do coding standards enforcement on PRs is a huge help.
Finally, I find the worries in the article about using these tools on sensitive data or scenarios such as ideation to be rather overdrawn. They are just SaaS services. You shouldn't use the free version of most tools for business purposes due to often problematic licensing, but purchasing and legal should be able to help find an appropriate service. After all, if you are using Google Docs or Microsoft 365 to create and store your documents, why would you treat Gemini or Copilot (or their other LLM options) as presenting higher legal peril, at least with some due diligence that they don't retain or train on your input?
There is a difference between experimentation and mandated usage, however. With experimentation, you typically see "shadow IT" attempting to access useful tools outside the bounds of what is considered acceptable, which indicates a greater willingness to adopt than mandated usage does.
There is also a difference between a technology replicating an existing functionality in a new medium (email vs usps) and introduction of a new technology. In the former, there is clear market demand, and only a matter of redirecting existing demand to new tools. In the latter, it is unclear if the technology will be useful.
I don't think that just because LLMs are a new technology which use computing makes them the internet and I don't think it's accurate to analyze them through the lens you propose.
How so? I have access to a huge number of these tools and they're all pretty similar.
If the tool is good, then management won't need to mandate it. People will be tripping over themselves to get access to the tool that helps them to do their job better. So perhaps you're right that some of the tools will be good (though I personally haven't yet had that experience), but I think that it is incorrect for managers to push for (let alone mandate) tool usage. Measure the result, not the path an employee takes to get there. If Bob uses AI tools to great effect, but Alice is doing just as well as him without using said tools, it's a mistake to force her to change her workflow thinking that the tools will be just as good for her as for Bob.
However this is a subtle and nuanced situation requiring careful people management and helping to nudge or lead people, letting them take risks, letting them fail, giving them psychological safety, and praising their attempts. Blanket mandates are just a very tone deaf and stupid way to try to achieve this.
Another historical analogy is Scientific Management, pushed top down and widely adopted by the industry. It has many flavors and all of them were wrong.
We have samples in basically any direction one would like to argue for. Historical precedent isn't a good argument IMHO.
All can absolutely be wrong at the same time, but the tool isn't the main issue IMHO. It's the user.
For simple generic stuff it's not an issue, but where you need an expert, it has to be an expert in that field who uses the AI, so they know what is wrong.
A good recent example is the OpenAI Academy. Clearly the site content is generated by ChatGPT, and completely misses the point of the areas it claims to be training you in.
> I’d say my overarching advice, based on how difficult tech recruitment is right now, is to sadly play along. But — and I cannot stress this enough — make sure you document everything.
> What I mean by that is every single time AI tools cause problems, slow-downs and other disappointing outcomes, document that outcome and who was responsible for that decision. Make sure you document your opposition and professional advice too.
Personally, I would just add a warning to be careful to blame the tool, not the person. Otherwise, you will be seen as the "bad" person in the story even if your report is technically correct.
In the future, there will be some sort of "method", or should I say "art", to how we use AI tools.
It's very easy (and frustrating) to just say "try again, <error log>" without fully knowing what's going on. Transforming from a coder into a code manager takes some learning to be good at.
"Playing along" is a great way to be part of someone else's potentially-harmful project. Consider your values, and don't cross those lines. If the boss is upset about it, they have options. I don't do their work for them.
Collective action with your fellow workers against enshittification is a humanist way forward.
I am curious if they read what we're prompting into the system and considering our use case as well.
That's the whole point
LLMs augment the input with their trained data. They don't inherently agree with you if you set up the context correctly for analysis.
I've arrived at the conclusion that the top-down push without adequate upskilling creates bad experiences and subpar results. It's like adopting a new methodology without actually training anyone on it: everyone is left scrambling to figure it out, often with poor results.
I find LLMs to be a great multiplier. But that multiplier will take whatever you put in context. If one puts in bias and/or fragmented mess, it's far more difficult to steer the context to correct it than it was to add it to begin with.
Agree strongly. I've had to push back when the CEO didn't see the results he wanted, had to basically remind him, look, this technology is like.. 6 months old (at the time, and with respect to actually getting good results, Claude Code etc).. you can't expect everyone on the team to just "know" how to use it.. we're literally _all_ learning this very new thing right now, not just us, but everybody in the industry. It's a little crazy to expect immediate uptake and an overnight revolutionary productivity boost with this thing we barely know how to use properly, there's going to be a learning phase here whether you like it or not
Imagine this sentence with "Claude Code" replaced by anything else and "CTO" and "devs" replaced by more generic terms like "boss" and "employee". It's just "The boss tried Tool X and liked it so he said all the employees have to use it". It just seems to me that that is a bad way to make decisions regardless of what tool we're talking about or even what industry we're in. It's certainly possible it could make sense with a few more steps in there ("the boss tried this and liked it because X so he said we have to work on using it in way Y to accomplish Z"). But the way this is described sounds like a fire-and-forget mentality where the boss tells people to do a thing a certain way and that's the extent of his involvement, which seems stupid.
Most of these efforts have questionable returns. Most projects involve increasing test coverage or categorizing customer incidents for better triage; apart from these low-hanging fruit, not much comes out of it.
People still play the visibility game though. Hey, look at what we did using LLMs. That's so cool, now where's my promotion? Business-outcome-wise, some low-hanging fruit has been plucked, but otherwise it doesn't live up to the hype.
Personally, I find it helpful in a few scenarios:
1. Much better search interface than traditional search engines. If I want to ramp up on some new technology or product, it gives me a good broad overview and references to dive deep. No more 10 blue links.
2. Better autocomplete than before but it’s still not as groundbreaking as AI hype hucksters make it out to be
3. If I want to learn some concepts (say how the ext4 FS works), it can give a good breakdown of the high-level concepts, and then I need to go study and come back with more Q's. This is the only genuine use case that I really like: I can iteratively ask Q's to clarify and cement my understanding of a concept. I have used Claude Code and ChatGPT for this and I can barely see any difference between the two.
This is my balanced take.
As to the search engine, my searches are often very narrow, like I want to recall a specific message from a mailing list, so I don't use that too much. On the other hand, I found Google's NotebookLM to be really good at recalling concepts from both source code and manuals (e.g. processor manuals in my case).
Code generators are incredible refactoring machines. In one case (not so easy to reproduce in general, but it did work) Claude Code did a Python to decently idiomatic Rust conversion in a matter of minutes; it added mypy annotations to 2000 lines of Python code (with 90% accuracy) in half an hour and got the entire job done with my assistance in about an hour. For the actual writing and debugging where the logic matters they're still not there even for small code bases (again 2000 lines of code ballpark). They're relatively good at writing and debugging testcases but IMO that's also where there's a risk of copyright taint. Anyhow it's something I would use maybe 2-3 times a month.
In one case I used it for natural language translation, with pretty good results, but I knew both languages because I needed to check the result. Ask it first to develop a glossary and then to translate.
For studying they're interesting too, though for now I have mostly tried that outside work. At work, Google Deep Research worked well compared to the time it takes and it's able to find a variety of sources (including HackerNews comments in one case :)) which is useful for cross-checking.
I've found incorrect type hints to be one of the biggest issues when trying to use Python type-safely. Mostly (entirely?) with packages that get their own hints wrong, meaning methods aren't shown as existing within the class or the returned instance isn't the class it said it would be.
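To illustrate that failure mode, here is a minimal sketch (all names are hypothetical, not from any real package) of a library whose annotation promises one type while the code returns another, so a type checker happily accepts calls on methods that don't exist at runtime:

```python
# Hypothetical sketch of a package with a wrong type hint: the annotation
# claims a Session is returned, but the function actually returns a dict.

class Session:
    def close(self) -> None: ...

def connect(url: str) -> Session:        # annotation promises a Session...
    return {"url": url}                  # type: ignore[return-value]  # ...but returns a dict

conn = connect("db://example")
# A type checker, trusting the hint, would accept conn.close(),
# but the object has no such method at runtime:
print(hasattr(conn, "close"))  # False
```

Running mypy on such a package only helps if the package's own stubs are right; when they're wrong, the error surfaces at runtime instead.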
Great. Especially because my team is the AI team in the region. Except half the AI-related sites are blocked by IT, who are gatekeeping access and demand that every "use" of AI be audited, with a five-step approval process on top. It's mental. They blocked access to Meta's models because they labeled them unlicensed platforms.
Some casual attempts to lead from the front with AI summaries of meetings is the extent of it.
Having access without pressure to use it suits me well :)
For coding, unless I’m writing trivial RPC endpoints, editing docs, or writing tests for an already hardened API, I find agents a complete waste of time. So my usage is mostly limited to chat sessions.
To use up the quota, I apply the provided tokens to a few personal projects here and there, but no one can make me push an actual production CL with them unless I find it useful myself.
About generating things, well... it just copy-pastes the same snippets you could find on Stack Overflow, including bugs, if the task you throw at it has already been answered.
For complete and complex code... well it spews out the same useless advice you could get from a drunk non expert person while sitting at the bar.
Issue is... LLMs are too big to fail, everyone just poured billions in this huge statistics bean counter, and... someone has to justify those expenses at board meetings.
I'm not sure why we have that obligation.
Using AI powered tooling is one thing, better IDE workflows, writing and voice recognition, translations and so forth.
Copying text around into and out of a chat window is worse than just writing COBOL, and at least COBOL is deterministic.
Once I changed my goal from maximizing code quality to maximizing billable hours, I feel a lot more optimistic about the future and these AI tools are going to create so many opportunities for me.
I found it hard to compete with junior developers in my last job (before AI was mainstream) in terms of volume of code because some of them would write 1000 lines of low quality code per day... Now I can also do this. It gives me a lot of surplus energy to figure out how to play politics and shift blame... It gives me an actual competitive upper hand over the juniors. I can out-compete them both in meeting/debate and code/feature volume. I talk wisely and code foolishly. Win win. I couldn't do this before because I was writing the code myself and I had essentially lost the ability to write high volumes of dirty code. I had a kind of analysis paralysis due to trying to solve problems in an optimal, minimalist, most reliable way. No longer a problem. Bugs are a problem for someone else.
I have so much more time to think about career strategy now. I managed to avoid being assigned to any difficult projects... I feel bad for the other people who try to go above and beyond and end up wedging themselves into a difficult situation where the software is down all the time and they have to take the blame... The AI never gets the blame.
I hated playing politics before but AI has made playing politics a necessity. It's like the more apathetic you are about your output, the better off you are.
The ironic thing is that I know it's possible to produce high quality code with AI. I've had some really positive experiences with Claude Code on side projects... But that doesn't align with the reality of 99% of software projects. The foundation is not set up right to get these kinds of results. I could set up the foundation correctly but I'd have to be present in the project since the beginning and I'd have to be given a lot of decision power; but I never get such opportunities. Bad foundations beget bad code, especially with AI because the AI never gets the idea to refactor... If your codebase is unmaintainable, it will hallucinate dirty code which doesn't work. It keeps coming up with more and more hacks... Then it delivers hacks on top of its own hacks.
It's not forced per se, but it's definitely being heavily encouraged.
Having to do this says a lot about how fragile the state of AI/LLM hype is.
I never asked for this assault and ignorance to be shoved upon me but management has been made utterly stupid thanks to the snake tongues of Silicon Valley.
At this point, you need to learn what AI can and cannot do, for the same reason you need to keep up with new versions of whatever framework you use. Since AI develops so fast (e.g. many image use cases that AI would be terrible at 4 months ago now work perfectly), you need to repeat that exercise frequently.
There are five problems with adoption as I see them:
1) Hype. Some people overhype what AI can do, which causes people to dismiss them when they don't immediately work;
2) Plenty of people don't like to change what they do/feel threatened by change. Doubly so when that change is perceived (real or not) to impact their job.
3) AI is weird and so it sometimes fails spectacularly at simple things, while it works very well at more complex things;
4) People use ChatGPTs free model or other AIs that are free. These are older/less powerful models, which means people end up with wrong expectations of what they can and cannot do.
5) Who likes to be told what to do? Especially by a clueless boss.
Were I running a company, I would ensure that my employees had access to a top-of-the-line model and Cursor/Windsurf. I would monitor usage and have a talk with those whose usage was drastically lower than their peers'.
However it would be a talk only - with the aim of figuring out why AI did not work for that employee, and what we could do to fix it.
Instead of letting everyone do their own thing then "talking to" certain people why not get people to work together and see how each other do or don't get value from LLMs, to build institutional confidence and skills.
Any lossy mistakes won’t be my fault right?
There seems to be some magical thinking at work that simply uttering the words “AI” and “automation” will somehow render us more productive.
But my refusal to be AI-assisted at work is viewed as "healthy skepticism" by upper management. And my colleagues who have tried the tools are not particularly impressed.
The effect on the team of developers with various levels of experience started showing up as well:
The application architecture turned into a horrible mess, worse than what junior engineers would produce. The application started exhibiting tons of hard-to-debug issues, because the generated code was too low-level and didn't cover corner cases.
Every attempt of the AI engineers to fix the issue generated one more class wrapping the existing codebase - with a fix which never worked (eg. ConnectionManagerWithTimeouts).
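As an illustration of that anti-pattern (the only class name taken from the comment is ConnectionManagerWithTimeouts; everything else is a hypothetical sketch), each "fix" layers a new delegating wrapper on top of the old code instead of repairing it, so the behavior never actually changes:

```python
# Hypothetical sketch of the wrapper-on-wrapper anti-pattern: each layer
# stores a new "fix" parameter but delegates straight through, leaving
# the original bug untouched.

class ConnectionManager:
    def fetch(self, url: str) -> str:
        # original code: no timeout handling at all
        return f"response from {url}"

class ConnectionManagerWithTimeouts:
    """Wrapper added to 'fix' timeouts; the underlying bug remains."""
    def __init__(self, inner: ConnectionManager, timeout: float) -> None:
        self.inner = inner
        self.timeout = timeout          # stored but never applied

    def fetch(self, url: str) -> str:
        return self.inner.fetch(url)    # delegates straight through

class ConnectionManagerWithTimeoutsAndRetries:
    """Second wrapper layered on the first: same story."""
    def __init__(self, inner: ConnectionManagerWithTimeouts, retries: int) -> None:
        self.inner = inner
        self.retries = retries          # also never used

    def fetch(self, url: str) -> str:
        return self.inner.fetch(url)

# Three classes deep, yet behaviorally identical to the original:
mgr = ConnectionManagerWithTimeoutsAndRetries(
    ConnectionManagerWithTimeouts(ConnectionManager(), timeout=5.0),
    retries=3,
)
print(mgr.fetch("http://example.com"))  # response from http://example.com
```

The refactor an experienced engineer would make, putting the timeout logic inside the original class, is exactly the step the LLM never proposes on its own.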
Eventually we basically had to rewrite the application, throwing away most of the code twice: once just to get something working with the existing architecture without crashing every hour, and again to move to a framework and eliminate the last bugs that occurred every once in a while.
An LLM needs to be in incredibly capable hands in order to be used safely, and engineers will have to fight their instincts and not get swayed by the LLM telling them they're right.