Ask HN: Is anyone else burnt out on AI?
Every week there are several new things, most examples are cherry-picked, benchmarks are blown out, etc.
I run a software company that has not added any AI features, mostly because the usefulness seems to fall below the bar for value that most of our features try to clear.
I use ChatGPT personally for random searches, and Cursor + whatever model is deemed best at the time, but even with that it takes a lot of work to get something valuable out of it day to day.
I feel like I’m losing my mind when I see startups posting $10M ARR numbers in just six months or whatever.
I’m hearing from VCs that churn is 15-30% at many of these companies, and they’re far from profitable, but the growth is just wild.
Do I succumb and just add yet another text-generation feature that maps to an object, like the fill-in-the-blank app of the week?
It feels disingenuous but even companies I know and respect are adding questionable “agent” features that rarely work.
Anyway, how are you feeling?
I’m not working on AI, but working with AI.
The leverage feels dramatic for a solo founder in my shoes. I think it’s all the cross-domain context switching. Gemini 2.5 Pro for academic type research, ChatGPT 4o for rapid fire creative exploration, o1-pro for one-shot snippets. Copilot for auto-complete.
It’s exciting honestly. I don’t know where we’re going but I do feel free and in a solid strategic position having my own company and not still being a cog in the machine.
If AI lives up to its hype, which is a whole other subject, then I expect to see the two things I like about my job vanishing quickly - the pay, and the problem-solving.
It has at least been interesting to me to reflect on how I can still appreciate media that humans make when I find AI media so repulsive. I did not think I cared so much about what was behind the picture or video I was watching, or that someone spent real effort to make something. To be honest I still don't understand it - maybe it's none of those things.
A relevant analog is the arguments about whether independent bands were more real than those who had signed with labels -- about whether money/popularity corrupts art. I never took a side on that, but I do think that most music isn't worth listening to, simply because it's so saccharine and cliched.
So to generalize, Sturgeon's Law is evenly distributed: 90% of everything -- including sci-fi, music, and AI-generated stuff -- is worthless. 90% of AI content is slop because it is prompted by people who have no taste whatsoever; not bad taste, just zero taste; not everyone is gifted. People with refined taste (whether good or bad) can use AI to produce that 10% of worthwhile AI stuff; and those with refined taste know how to keep AI content from distracting from the larger work, so you never know they are using AI at all.
I don't think society-wide refinement of taste is possible; Sturgeon's Law is here to stay. Instead, we need a corollary to Sturgeon's Law, one that provides a solution to the problem: you can't overturn Sturgeon's Law; you can only build filters to avoid the crap. I can't say how to build such filters, but we can start thinking about them.
It’s ironic though. VBA macros in Excel were a major productivity win: point-and-click forms an MBA could whip up in 20 minutes. Development libraries used to be much faster to develop against, with far less boilerplate.
Another problem is that we're turning any problem into a black-box, which takes the fun out of problem-solving.
We can use OpenRouter to build agents with any LLM and switching your agent to a new model is a one-line code change. We can write MCP tools that work with most of the decent models.
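A rough sketch of what that one-line swap looks like, in Python via the OpenAI SDK pointed at OpenRouter’s OpenAI-compatible endpoint (the model IDs and prompt here are just illustrative, not anything official):

  from openai import OpenAI

  # OpenRouter speaks the OpenAI wire format, so the stock SDK works as-is.
  client = OpenAI(
      base_url="https://openrouter.ai/api/v1",
      api_key="YOUR_OPENROUTER_KEY",  # placeholder
  )

  # The "one-line change": swap this string for any model OpenRouter hosts.
  model = "anthropic/claude-3.5-sonnet"
  # model = "google/gemini-2.5-pro"

  resp = client.chat.completions.create(
      model=model,
      messages=[{"role": "user", "content": "Draft a commit message for this diff."}],
  )
  print(resp.choices[0].message.content)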
Honestly, I think we may be entering a period where things start to decentralize and money starts to move towards startups building interesting tools and agent workflows instead of a handful of giant companies training frontier models.
It’s cheaper to move from one model to another than it is to train a general-purpose model yourself (to say nothing of domain-specific smaller models or anything open source).
I’m not sure about problems turning into black boxes; LLMs are pretty explicit in my experience when producing a solution (good or bad). _How_ they came about that solution _is_ a black box, but that’s not a new problem.
- Who does this move the needle for? How does it compare to how things are done now?
- How does the regular person benefit, if at all?
- What's likely to happen to pricing after the initial investor subsidization ends? What does the price history of other 'unicorns' tell us? Airbnb and Uber used to be cheap once too.
- What is the valuation of "AI-first Company X" based on? Who are the insiders and what is their work background?
Too much AI news today is just parroting corporate press releases and CEO keynotes.
LLMs are inherently non-deterministic. In my anecdotal experience, most software boils down to an attempt to codify some sort of decision tree into automation that can produce a reliable result. So the “reliable” part isn’t there yet (and may never be?).
Then you have the problem of motivation. Where is the motivation to get better at what you do when your manager just wants you to babysit copilot and skim over diffs as quickly as possible?
Not a great epoch to be a tech worker right now, imo.
I'm not an ML guy but I was curious about this recently. There is some parameter that can be tuned to produce determinism but currently it also produces worse results. Big [citation needed], but worth a google if it's of interest. Otherwise in agreement with your post.
https://www.ibm.com/think/topics/llm-temperature
Not good results necessarily, but consistent.
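If it helps, a minimal sketch of what tuning it looks like (Python, OpenAI SDK; assumes an API key in the environment). Temperature 0 plus a best-effort seed buys you consistency, not guaranteed bit-identical output across runs:

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  resp = client.chat.completions.create(
      model="gpt-4o",
      temperature=0,  # near-greedy decoding: favor the top token at every step
      seed=42,        # best-effort reproducibility, not a hard guarantee
      messages=[{"role": "user", "content": "Classify this log line: ..."}],
  )
  print(resp.choices[0].message.content)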
Just relax and realize it's mostly FOMO: https://www.theregister.com/2025/05/06/ibm_ai_investments/
Frankly, as a "user", not a potential employee, I don't give much of a fuck about anything more than what I can do with the thing right now. (Which is quite a bit in fact.)
Word. I’m not a huge Microsoft fan, but it feels like they’re just shoving ChatGPT/Copilot down your throat every chance they get. It’s integrated into everything - it’s useful when it’s useful, okay, but it generally isn’t. It’s one more Microsoft-ism that you have to learn to tolerate or simply ignore.
I can’t tell if Nadella’s really betting the farm on it all, or if he’s just trying to leave his mark.
I recommend ignoring them. Despite VCs trying to spend it into existence, we aren’t going to have another internet level event in information technology and the smartphone+laptop combo is peak personal computing.
I think we are near the crest of this wave, but that just means the next one is coming.
I am having a lot of fun learning about generative AI. It is just a bit thankless because I know the stuff I am building will be dead on arrival. So, I will not get any praise regardless of how well I do my job, and may even get blamed.
But hey, after all the junior devs have been starved because no one wants to hire them, I will make bank once the next AI winter comes and companies desperately look for people who can actually code.
If you have your own company you can just weather it out and invest in good talent. Really a good position to be in.
I think there are situations where AI as it currently exists is absolutely a value add, but often it does seem like it's been shoehorned into an existing product just to ride the latest trend.
LLMs as coding assistants are undeniably time-saving devices, especially when working in languages/libraries/platforms/frameworks you aren't already very familiar with, or when needing to generate something very boilerplatey as a one-off.
I am not calling the technology useless by any stretch of the imagination, but it's still just so wildly overhyped right now.
It is a pretty common occurrence these days for me to have a blog post open from some "AI industry thought leader" talking about how all developers will be out of work in a year, while at the same time I have a Gemini window open and I'm watching it absolutely flail on relatively simple things, like generating a database query or a regex that is novel and not something scattered all over its training set like a simple email validator.
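To make the regex point concrete, a hypothetical Python illustration (not the actual prompts involved; the project-ID format is made up):

  import re

  # The kind of regex models nail, because it saturates their training data:
  EMAIL = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

  # A "novel" ask -- say, a project-specific ID: three uppercase letters,
  # a dash, then an even count of digits -- is where they tend to flail:
  PROJECT_ID = re.compile(r"^[A-Z]{3}-(\d\d)+$")

  assert EMAIL.match("dev@example.com")
  assert PROJECT_ID.match("ABC-1234")
  assert not PROJECT_ID.match("ABC-123")  # odd digit count fails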
And Gemini 2.5 is, IMO, the best of the models when it comes to programming assistance (having replaced Claude 3.5 which was IMO previously the best), at least for the areas I touch (lots of kotlin/KMP/Android/etc).
As goofy as Gemini sometimes gets it is far less frustrating than asking Claude 4 a question and watching it write out a whole ass answer but then correct itself like 7 times before finally coming to a shitty answer that is worse than 2 of the ones it wiped out while blowing through most of its context window on its loop of indecisiveness.
And relatedly... color me completely unsurprised that this thread got dumpstered off the front page so quickly. Gotta keep pretending like the singularity is going to happen next week.
:D