Nvidia CEO criticizes Anthropic boss over his statements on AI
48 points by 01-_- on 6/15/2025, 3:03:24 PM | 25 comments | tomshardware.com
Cursor, Windsurf, Roo Code / Cline, they're fine but nothing feels as thorough and useful to me as Claude Code.
The Codex CLI from OpenAI is not bad either, there's just something satisfying about the LLM straight up using the CLI
I know about the context window advantage and Cursor RAG-ing the codebase, but isn't IDE integration a true force multiplier?
Or does Claude Code do something similar with "send to chat" / smart autocomplete (Cursor's TAB feature), etc.?
I fired it up, but it seemed like just Claude in the terminal, with a lot more manual copy-pasting expected?
I tried all the usual suspects in AI-assisted programming, and Cursor's TAB is too good to give up versus Roo / Cline.
I do agree Claude's the best for programming, so I'd love to use its full-featured version.
- It generates slop in high volume if not carefully managed. It's still working, tested code, but it can easily be illogical. This tool scares me if put in the hands of someone who "just wants it to work".
- It has proven to be a great mental-block remover for me. A tactic I've often used in my career when I'm stuck is to build the most obvious, worst implementation I can, because I find it easier to find flaws in something and iterate than to build a perfect implementation right away. Claude makes it easy to straw-man a build and iterate on it.
- All the low-stakes projects I want to work on but am too tired to touch after real work have gotten new life. It's updated library usage (Bevy updates were always a slog for me), cleaned up tooling and system configs, etc.
- It seems incapable of seeing the larger picture of why classes of bugs happen. E.g., on a project I'm Claude Code "vibing" on, it made a handful of design mistakes that started to cause bugs. It will happily try to fix individual issues all day rather than re-architect toward a less error-prone API, despite being capable of actually fixing the API woes if prompted to. I'm still toying with the memory feature, though, so perhaps I can get it to reconsider this behavior.
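To make the "less error-prone API" point concrete, here is a minimal, hypothetical sketch (not from the thread, and not tied to any particular project): a function that accepts free-form strings lets a whole class of typo bugs reach runtime, so an assistant can "fix" each bad call site forever; encoding the valid states in a type removes the class of bugs instead.

```python
from enum import Enum

# Error-prone design: every caller must remember the valid mode strings.
# A typo like "asycn" only fails at runtime, deep inside the function,
# so each bad call site becomes its own bug to patch.
def run_job_v1(mode: str) -> str:
    if mode == "sync":
        return "ran synchronously"
    elif mode == "async":
        return "queued"
    raise ValueError(f"unknown mode: {mode}")

# Re-architected design: invalid modes cannot be constructed at all,
# so the "bad string" class of bugs disappears rather than being
# fixed one occurrence at a time.
class Mode(Enum):
    SYNC = "sync"
    ASYNC = "async"

def run_job_v2(mode: Mode) -> str:
    return "ran synchronously" if mode is Mode.SYNC else "queued"
```

The second version is the kind of small re-architecture the comment describes: nothing about the behavior changes, but the API no longer admits the inputs that were causing the bugs.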
To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't speculation at this point - it's a bald-faced lie.
When AI finally does cause massive disruption to white-collar work, what happens then? Do we really think most of the American economy is just going to downshift into living off a meager universal basic income allotment (assuming we could ever muster the political will to create a social safety net)? Who gets the nice car and the vacation home?
Once people are robbed of what remaining opportunities they have to exercise agency and improve their life, it isn't hard to imagine that we see some form of swift and punitive backlash, politically or otherwise.
These AI solutions are great, but I have yet to see any solution that makes me fear for my career. It seems pretty clear that no LLM actually has a "mental model" of how things work that would let it avoid the obvious pitfalls among the reams of buggy C++ code.
Maybe this is different for JS and Python code?
Still, sometimes it can solve a problem like magic. But since it does not have a world model it is very unreliable, and you need to be able to fall back to real intelligence (i.e., yourself).
It's early days and nobody knows how things will go, but to me it looks like over the next century or so humans are going the way of the horse, at least when it comes to jobs. And if our society doesn't change radically, let's remember that the only way most people have of feeding and clothing themselves is to sell their labor.
I'm an AI pessimist-pragmatist. If things get really bad for wage slaves like me, I would prefer to have enough savings to put AIs to work in some profitable business of mine, or to handle my healthcare when disease strikes.
How is it early days? AI has been talked about since at least the 50s, neural networks have been a thing since the 80s.
If you are worried about how technology will be in a century, why stop right here? Why not take the state of computers in the 60s and stop there?
Chances are, if the current wave does not achieve strong AI, then there will be another AI winter, and what people research in 30, 40, or 100 years is not something our current choices can affect.
Therefore the interesting question is what happens short-term, not what happens long-term.
We will manage. Hey, we can always eat the rich!
No surprises here.
> that 50% of all entry-level white-collar jobs could be wiped out by artificial intelligence, causing unemployment to jump to 20% within the next five years
I'm not a betting woman but I feel extremely confident taking the other end of this bet.
AFAICT this is a complete article of faith. Or insofar as it's true, it's true because doing it in the open allows other stakeholders to criticize and shape its direction - which is precisely the dialogue Jensen seems allergic to (makes sense given his incentives, of course).
It feels very akin to the Uber vs. Lyft situation: two companies with very different public perceptions pursuing identical business models.
The biggest long term competitor to Anthropic isn't OpenAI, or Google... it's open source. That's the real target of Amodei's call for regulation.