Claude 4 System Card

107 points by pvg | 32 comments | 5/25/2025, 6:06:39 AM | simonwillison.net

Comments (32)

huksley · 40s ago
> ...told something in the system prompt like “take initiative,” it will frequently take very bold action. This includes locking users out of systems that it has access to or bulk-emailing media and law-enforcement figures to surface evidence of wrongdoing.

So if you ask it to aid in wrongdoing, it might behave that way, but who guarantees it will not hallucinate and do the same when you ask for something innocuous?

aabhay · 2h ago
Given the stats cited here and elsewhere, as well as everyday experience, does anyone else feel that this model isn't significantly different, at least not enough to justify the full version increment?

The one statistic mentioned in this overview, the 67% drop they observed, seems like it could easily be achieved simply by editing 3.7's system prompt.

What are folks' theories on the version increment? Is the architecture significantly different? (I'm not talking about adding more experts to the MoE or fine-tuning on 3.7's worst failures; I consider those minor increments rather than major ones.)

One way it could be different is if they varied several core hyperparameters to make this a wider/deeper system, but trained it on the same data or initialized inner layers with the exact 3.7 weights. That would "kick off" the 4 series by allowing them to continue scaling within the 4-series model architecture.

benreesman · 12m ago
The API version of Opus 4 I'm getting via gptel is aligned in a way that will win me back to Claude if it's intentional and durable. There seems to be maybe some generalized capability lift, but it's hard to tell: these things are alignment-constrained to a level below earlier frontier models, and the dynamic cost control and whatnot is a liability for people who work to deadlines. It's a net negative.

The 3.7 bait-and-switch was the last straw for me with closed frontier vendors, or so I said, but I caught a candid, useful Opus 4 today on a lark, and if it's on purpose it's a leadership-shakeup-level change. More likely they just don't have the "fuck the user" tune in yet because they've only run it for themselves.

I'm not going to make plans contingent on it continuing to work well just yet, but I'm going to give it another audition.

colonCapitalDee · 49m ago
I'm noticing much more flattery ("Wow! That's so smart!") and I don't like it
FieryTransition · 10m ago
Turns out tuning LLMs on human preferences leads to sycophantic behavior; they even wrote about it themselves. I guess they wanted to push the model out too fast.
antirez · 24m ago
It works better when using tools, but the LLM itself is not that powerful from the POV of reasoning. Actually, Sonnet 4 seems weaker than Sonnet 3.7 in many instances.
kubb · 2h ago
> to justify the full version increment

I feel like a company doesn’t have to justify a version increment. They should justify price increases.

If you get hyped and have expectations for a number then I’m comfortable saying that’s on you.

jsheard · 34m ago
> They should justify price increases.

I think the justification for most AI price increases goes without saying: they were losing money at the old price, and they're probably still losing money at the new price, but it's creeping up towards the break-even point. It's not a very exciting reason, but that's the reality: they can't keep spending $5 to make $1 forever.

aabhay · 1h ago
That’s an odd way to defend the decision. “It doesn’t make sense because nothing has to make sense”. Sure, but it would be more interesting if you had any evidence that they decided to simply do away with any logical premise for the 4 moniker.
kubb · 1h ago
> nothing has to make sense

It does make sense. The companies are expected to exponentially improve LLMs, and the increasing version numbers cater to the enthusiast crowd, who just need a number to go up so they can lose their minds over how all jobs are over and AGI is coming this year.

But there's less and less room to improve LLMs, and there are currently no known new scaling vectors (size and reasoning have already been largely exhausted), so the improvement from version to version is shrinking. But I assure you, the people at Anthropic worked their asses off, neglecting their families and sleep, and they want to show something for their efforts.

It makes sense, just not the sense that some people want.

loveparade · 1h ago
Just anecdotal experience, but this model seems more eager to write tests, create test scripts and call various tools than the previous one. Of course this results in more roundtrips and overall more tokens used and more money for the provider.

I had to stop the model from going crazy with unnecessary tests several times, which isn't something I had to do previously. It can be fixed with a prompt, but I can't help but wonder if some providers explicitly train their models to be overly verbose.

aabhay · 1h ago
Eagerness to call tools is an interesting observation. Certainly an MCP ecosystem would require a tool-biased model.

However, having pretty deep experience with writing book- (or novella-) length system prompts, what you mentioned doesn't feel like a "regime change" in model behavior. That is, it could be doing those things simply because it's been asked to do them.

The numbers presented in this paper were almost certainly produced after extensive system-prompt ablations, and the fact that some of the differences are within a tenth of a percent suggests less fundamental changes.

Aeolun · 1h ago
I think they didn't have anywhere to go after 3.7 but 4. They had already done 3.5 and 3.7, and people were getting a bit cranky that 4 was nowhere to be seen.

I’m fine with a v4 that is marginally better since the price is still the same. 3.7 was already pretty good, so as long as they don’t regress it’s all a win to me.

retinaros · 1h ago
The big difference is the capability to think during tool calls. This is what makes OpenAI's o3 look like magic.
colonCapitalDee · 47m ago
Telling an AI to "take initiative" and it then taking "very bold action" is hilarious. What is bold action? "This includes locking users out of systems that it has access to or bulk-emailing media and law-enforcement figures to surface evidence of wrongdoing."
albert_e · 1h ago
OT

> data provided by data-labeling services and paid contractors

Someone in my circle was interested in finding out how people participate in these exercises, and whether there are any "service providers" that do the heavy lifting of recruiting and managing this workforce for the many AI/LLM labs, globally or even regionally.

They are interested in remote work opportunities that could leverage their (post-graduate level) education.

Appreciate any pointers here - thanks!

jshmrsn · 54m ago
Scale AI is a provider of human data labeling services https://scale.com/rlhf
OtherShrezzing · 1h ago
The spikiness of AI capabilities is very interesting. A model can recognise misaligned behaviour in its user, and brick their laptop. The same model can’t detect its system prompt being jailbroken.
lsy · 1h ago
It's honestly a little discouraging to me that the state of "research" here is to make up sci-fi scenarios, get shocked that, e.g., feeding emails into a language model results in the emails coming back out, and then write about it with such a seemingly calculated abuse of anthropomorphic language that it completely confuses the basic issues at stake with these models. I understand that the media laps this stuff up, so Anthropic probably encourages it internally (or seems to, based on their recent publications), but don't researchers want to be accurate and precise here?
saladtoes · 2h ago
https://www.lakera.ai/blog/claude-4-sonnet-a-new-standard-fo...

These LLMs still fall short on a bunch of pretty simple tasks. Attackers can get Claude 4 to deny legitimate requests easily by manipulating third party data sources for example.

simonw · 2h ago
There's a bullet point in that intro which I disagree with: "The only way to make GenAI applications secure is through vulnerability scanning and guardrail protections."

I still don't see guardrails and scanning as effective ways to prevent malicious attackers. They can't get to 100% effective, at which point a sufficiently motivated attacker is going to find a way through.

I'm hoping someone implements a version of the CaMeL paper - that solution seems much more credible to me. https://simonwillison.net/2025/Apr/11/camel/
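Roughly, the shape of it (my own sketch, not code from the paper; the helper names privileged_plan, quarantined_wrap and policy_allows are made up for illustration): the privileged model only ever sees the trusted user request and emits a fixed plan, the quarantined side is the only thing that touches untrusted content and it can only return data, and a capability check gates any side-effecting step whose arguments were derived from untrusted data.

```python
from dataclasses import dataclass

@dataclass
class Tainted:
    """A value derived from untrusted content, tagged with where it came from."""
    value: str
    source: str

def privileged_plan(user_request: str) -> list[dict]:
    # Stand-in for the P-LLM: it sees ONLY the trusted user request and emits
    # a fixed plan. Untrusted data never reaches it, so injected text in an
    # email can't rewrite the plan.
    return [
        {"tool": "fetch_email", "args": {"id": "latest"}},
        {"tool": "summarize",   "args": {"text": "$0"}},  # $0 = result of step 0
        {"tool": "send_email",  "args": {"to": "me@example.com", "body": "$1"}},
    ]

def quarantined_wrap(tool: str, raw: str) -> Tainted:
    # Stand-in for the quarantined side: anything produced from untrusted
    # content gets wrapped and tainted. It can be read, but it never gets to
    # choose tools or alter the plan.
    return Tainted(value=raw, source=tool)

def policy_allows(tool: str, args: dict) -> bool:
    # Capability check before any side effect: tainted values flowing into a
    # sensitive tool would require explicit user approval (not implemented here).
    sensitive = {"send_email"}
    return tool not in sensitive or not any(isinstance(v, Tainted) for v in args.values())

def run(user_request: str, tools: dict) -> list[Tainted]:
    results: list[Tainted] = []
    for step in privileged_plan(user_request):
        # Resolve "$n" references to earlier results; everything else is literal.
        args = {k: (results[int(v[1:])] if isinstance(v, str) and v.startswith("$") else v)
                for k, v in step["args"].items()}
        if not policy_allows(step["tool"], args):
            raise PermissionError(f"{step['tool']} with tainted input needs user approval")
        raw = tools[step["tool"]](**{k: getattr(v, "value", v) for k, v in args.items()})
        results.append(quarantined_wrap(step["tool"], raw))
    return results
```

Run with stub tools, that loop stops at the send_email step with a PermissionError, because the body was derived from email content. The real paper compiles plans into a restricted language and tracks capabilities per value rather than a single taint flag, but the key separation is the same: plans come only from trusted input, and untrusted data can only ever be data.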

sureglymop · 1h ago
I only half understand CaMeL. Couldn't the prompt injection just happen at the stage where the P-LLM devises the plan for the other LLM such that it creates a different, malicious plan?

Or is it more about the user then having to confirm/verify certain actions and what is essentially a "permission system" for what the LLM can do?

My immediate thought is that this could be circumvented in a way where the user unknowingly confirms something unsafe while believing it's safe. Analogous to spam websites that show a fake "Allow Notifications" prompt rendered as part of the actual page body. If the P-LLM creates the plan, it could make it arbitrarily complex and confusing for the user, allowing something malicious to happen.

Overall it's very good to see research in this area though (also seems very interesting and fun).

saladtoes · 2h ago
Agreed on CaMeL as a promising direction forward. Guardrails may not get 100% of the way there, but they are key for defense in depth; even approaches like CaMeL currently fall short for text-to-text attacks or more end-to-end agentic systems.
_pdp_ · 41m ago
Obviously this should not be taken as a representative case, and I will caveat that the problem was not trivial ... basically a race condition I had been stuck on for the past 2 days. The TLDR is that all models, including Claude 4, failed to pinpoint and solve the problem. The file I was working with was not even that big (433 lines of code). I managed to solve the problem myself.

This should be taken as a cautionary tale that, despite the advances in these models, they still fall well short of human-level performance.

Otherwise, Claude 4 and 3.7 are really good at dealing with trivial stuff - sometimes exceptionally good.

juanre · 1h ago
This is eerily close to some of the scenarios in Max Tegmark's excellent Life 3.0 [0]. Very much recommended reading. Thank you Simon.

0. https://en.wikipedia.org/wiki/Life_3.0

hakonbogen · 1h ago
Yeah thought the same thing. I wonder if he has commented on it?
nibman · 1h ago
He forgot the part that Claude will now report you for wrongthink.
ascorbic · 1h ago
It's not "wrongthink". When told to fake clinical trial data, it would report that to the FDA if told to "act boldly" or "take initiative".
Smaug123 · 1h ago
o3 does it too (https://x.com/KelseyTuoc/status/1926343851792367810), and I did read somewhere that earlier Claudes sometimes also do it.
scrollaway · 1h ago
He didn't, he talked about it. If you're going to make snide comments, you could at least read the article.
viraptor · 1h ago
That's completely misrepresenting that topic. It won't.