Nvidia CEO criticizes Anthropic boss over his statements on AI

48 | 01-_- | 6/15/2025, 3:03:24 PM | tomshardware.com ↗

Comments (25)

imperialdrive · 31m ago
Finally gave Claude a go after trying OpenAI for a while and feeling pretty _meh_ about the coding ability... Wow, it's a whole other level or two ahead, at least for my daily flavor, which is PowerShell. No way a double-digit percentage of jobs isn't at stake. This stuff feels like it is really starting to take off. Incredible time to be in tech, but you gotta be clever and work hard every day to stay on the ride. Many folks got comfortable and/or lazy. AI may be a kick in the pants. It is for me anyway.
WXLCKNO · 21m ago
I've been trying every flavor of AI powered development and after trying Claude Code for two days with an API key, I upgraded to the full Max 20x plan.

Cursor, Windsurf, Roo Code / Cline, they're fine but nothing feels as thorough and useful to me as Claude Code.

The Codex CLI from OpenAI is not bad either; there's just something satisfying about the LLM straight up using the CLI.

solumunus · 4m ago
It really is night and day. Most of them feel like cool toys; Claude Code is a genuine workhorse. It immediately became completely integral to my workflow. I own a small business and I can say with absolute confidence this will reduce the number of devs I need to hire going forward.
wellthisisgreat · 1m ago
hey can you explain the appeal of Claude Code vs Cursor?

I know about the context window part and Cursor RAG-ing it, but isn't IDE integration a true force multiplier?

Or does Claude Code do something similar with "send to chat" / smart autocomplete (Cursor's TAB feature), etc.?

I fired it up but it seemed like just Claude in terminal with a lot more manual copy-pasting expected?

I tried all the usual suspects in AI-assisted programming, and Cursor's TAB is too good to give up vs Roo / Cline.

I do agree Claude's the best for programming, so I'd love to use a full-featured version of it.

unshavedyak · 4m ago
I purchased Max a week ago and have been using it a lot. Few experiences so far:

- It generates slop in high volume if not carefully managed. It's still working, tested code, but it can easily be illogical. This tool scares me if put in the hands of someone who "just wants it to work".

- It has proven to be a great mental block remover for me. A tactic i've often used in my career is to just build the most obvious, worst implementation i can if i'm stuck, because i find it easier to find flaws in something and iterate than it is to build a perfect impl right away. Claude makes it easy to straw man a build and iterate on it.

- All the low stakes projects i want to work on but i'm too tired to after real work have gotten new life. It's updated library usage (Bevy updates were always a slog for me), cleaned up tooling and system configs, etc.

- It seems incapable of seeing the larger picture of why classes of bugs happen. E.g. on a project i'm Claude Code "vibing" on, it's made a handful of design decisions that started to cause bugs. It will happily try to fix individual issues all day rather than re-architect to make a less error-prone API, despite being capable of actually fixing the API woes if prompted to. I'm still toying with the memory though, so perhaps i can get it to reconsider this behavior.

kbos87 · 4m ago
Companies like Nvidia and OpenAI base their answers to any questions on economic risk on their own best interests and a pretty short view of history. They are fighting like hell to make sure they are among a small set of winners while waving away the risk or claiming that there's some better future for the majority of people on the other side of all this.

To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't speculation at this point - it's a bald-faced lie.

When AI finally does cause massive disruption to white collar work, what happens then? Do we really think that most of the American economy is just going to downshift into living off a meager universal basic income allotment (assuming we could ever muster the political will to create a social safety net)? Who gets the nice car and the vacation home?

Once people are robbed of what remaining opportunities they have to exercise agency and improve their life, it isn't hard to imagine that we see some form of swift and punitive backlash, politically or otherwise.

scuol · 9m ago
Just this morning, I had Claude come up with a C++ solution with undefined behavior (it assumed iterator stability in a vector that was being modified) that even a mid-level C++ dev could have easily caught just by reading the code.
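A minimal sketch of the class of bug being described (a hypothetical `remove_evens` example, not the code Claude actually produced): calling `erase` on a `std::vector` invalidates the iterator you passed in, so continuing to use it is undefined behavior unless you take the next-valid iterator that `erase` returns.

```cpp
#include <cassert>
#include <vector>

// Buggy pattern an LLM might produce: `erase` invalidates `it`, so the
// loop's subsequent `++it` is undefined behavior.
//
//   for (auto it = v.begin(); it != v.end(); ++it)
//       if (*it % 2 == 0) v.erase(it);   // UB on the next iteration
//
// Correct pattern: advance via the iterator that erase() returns.
std::vector<int> remove_evens(std::vector<int> v) {
    for (auto it = v.begin(); it != v.end(); /* advanced in the body */) {
        if (*it % 2 == 0)
            it = v.erase(it);  // erase returns an iterator to the next element
        else
            ++it;
    }
    return v;
}
```

In C++20, `std::erase_if(v, pred)` expresses this directly and sidesteps the invalidation hazard entirely.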

These AI solutions are great, but I have yet to see any solution that makes me fear for my career. It just seems pretty clear that no LLM actually has a "mental model" of how things work that can avoid the obvious pitfalls amongst the reams of buggy C++ code.

Maybe this is different for JS and Python code?

jsrozner · 3m ago
This is exactly right. LLMs do not build appropriate world models. And no... Python and JS have similar failure cases.

Still, sometimes it can solve a problem like magic. But since it does not have a world model it is very unreliable, and you need to be able to fall back to real intelligence (i.e., yourself).

ddaud · 5m ago
I agree. That mental model is precisely why I don’t use LLMs for programming.
mistrial9 · 4m ago
a difference emerges when an agent can run code and examine the results. Most platforms are very cautious about this extension. Recent MCP does define toolsets and can enable these feedback loops in a way that can be adopted by markets and software ecosystems.
rectang · 19m ago
The Anthropic CEO wants companies to lay off workers and pay Anthropic to do the work instead. Is Anthropic capable enough to replace those workers, and will it actually happen? Such pronouncements should be treated with the skepticism you'd apply to any sales pitch.
dsign · 20m ago
Why look at five years and say "everything is gonna be fine in five years, thus, everything is gonna be fine and we should keep this AI thing going"?

It's early days and nobody knows how things will go, but to me it looks like in the next century or so humans are going the way of the horse, at least when it comes to jobs. And if our society doesn't change radically, let's remember that the only way most people have of feeding and clothing themselves is to sell their labor.

I'm an AI pessimist-pragmatist. If the thing with AI gets really bad for wage slaves like me, I would prefer to have enough savings to put AIs to work in some profitable business of mine, or to do my healthcare when disease strikes.

quonn · 13m ago
> It's early days and nobody knows how things will go, but to me it looks that in the next century or so

How is it early days? AI has been talked about since at least the 50s, neural networks have been a thing since the 80s.

If you are worried about how technology will be in a century, why stop right here? Why not take the state of computers in the 60s and stop there?

Chances are, if the current wave does not achieve strong AI, then there will be another AI winter, and what people will research in 30 or 40 or 100 years is not something that our current choices can affect.

Therefore the interesting question is what happens short-term not what happens long-term.

fmbb · 17m ago
We have only been selling our labor for a couple of hundred years. Humanity has been around for hundreds of thousands of years.

We will manage. Hey, we can always eat the rich!

dsign · 5m ago
As long as they are not made of silicon....
leetrout · 28m ago
Anthropic warns unemployment is a serious risk. Nvidia has an inflated stock and knows how to play the game so of course they deny any such thing with a view not much past the next quarterly earnings call.

No surprises here.

levocardia · 11m ago
Nvidia is also very mad about Anthropic's advocacy for chip export controls, which is not mentioned in this article. Dario has an entire blog post explaining why preventing China from getting Nvidia's top of the line chips is a critical national security issue, and Jensen is, at least by his public statements, furious about the export controls. As it currently stands, Anthropic is winning in terms of what the actual US policy is, but it may not stay that way.
KerrAvon · 3m ago
Jensen is right, though. If we force China to develop their own technology they’ll do that! We don’t have a monopoly on talent or resources. The US can have a stake at the table or nothing at all. The time when we, the US, could do protectionism without shooting ourselves in the foot is well and truly over. The most we can do is inconvenience China in the short term.
Spivak · 2m ago
Hey now, let's not criticize the Anthropic CEO just yet. He made a totally-not-just-pulling-a-number-out-of-his-ass prediction, but a prediction that's nonetheless falsifiable.

> that 50% of all entry-level white-collar jobs could be wiped out by artificial intelligence, causing unemployment to jump to 20% within the next five years

I'm not a betting woman but I feel extremely confident taking the other end of this bet.

sorcerer-mar · 34m ago
> If you want things to be done safely and responsibly, you do it in the open

AFAICT this is a complete article of faith. Or insofar as it's true, it's true because doing it in the open allows other stakeholders to criticize and shape its direction – which is precisely the dialogue Jensen seems allergic to (makes sense given his incentives, of course).

artemsokolov · 23m ago
Strange to see such unfounded criticism directed at Anthropic and Dario. So far, they seem to be the most transparent and responsible in the AI race.
jjfoooo4 · 1m ago
Anthropic's marketing has been successful at positioning them as the responsible alternative to OpenAI, but what, concretely, do they actually do differently from any other model provider?

It feels very akin to the Uber vs Lyft situation, two companies with very different perceptions pursuing identical business models

sorcerer-mar · 21m ago
Because this has nothing to do with wanting more transparency or responsibility – he just doesn't want a chilling of demand, export controls, or other regulation on chips.
qoez · 19m ago
Definitely responsible, but transparent? Definitely not. Anyway, he's probably just saying this because it's beneficial for Nvidia's bottom line if it's open.
jjfoooo4 · 4m ago
The AI-executives-predicting-AI-doomsday trend has been pretty tiresome, and I'm glad it's getting pushback. It's impossible to take it seriously given the Anthropic CEO's incentives: to thrill investors and to shape regulation of competitors.

The biggest long term competitor to Anthropic isn't OpenAI, or Google... it's open source. That's the real target of Amodei's call for regulation.