Super odd experience seeing Palantir and Anthropic partnering up in DC this week to sell services to the military.
zeroCalories · 20h ago
What kind of stuff does Claude refuse to deal with? Makes me wonder what kind of workloads natsec is giving Claude.
philipkglass · 19h ago
The system prompt currently includes language like "Claude does not provide information that could be used to make chemical or biological or nuclear weapons." [1]
If you're working at Los Alamos National Laboratory, and Anthropic knows you're working at LANL, refusing to deal with nuclear weapons is like trying to close the barn door 80 years after the horse left.
I recently obtained some older shock physics code from Sandia National Laboratories. This is the sort of thing that is useful for numerical modeling of explosive implosion of nuclear weapons cores (among many other things). It was written in old Fortran. I asked Claude to update it to Fortran 95. It seemed to go well for a while, then I stepped away to get a cup of coffee. When I came back it wasn't done. The whole chat had just gone away (it was still listed in my chats, but the actual contents had vanished). I don't know if this was just a bug, since it is by far the largest single-shot task I have ever given to an LLM, or if it finally reached the section about high explosive equations of state and decided it was too dangerous.
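To give a sense of what that kind of modernization involves (a toy illustration with made-up names, not anything from the actual Sandia code), it is mostly turning fixed-form, implicitly typed FORTRAN 77 like this:

          SUBROUTINE EOSIG(RHO, E, P)
    C     IDEAL-GAS PRESSURE FROM DENSITY AND SPECIFIC INTERNAL ENERGY
          COMMON /GAS/ GAMMA
          P = (GAMMA - 1.0) * RHO * E
          RETURN
          END

into free-form Fortran 95 with modules, explicit typing, and no COMMON blocks:

    module eos_mod
      implicit none
      real, parameter :: gamma = 1.4   ! ratio of specific heats
    contains
      ! Ideal-gas pressure from density and specific internal energy
      pure function eos_pressure(rho, e) result(p)
        real, intent(in) :: rho, e
        real :: p
        p = (gamma - 1.0) * rho * e
      end function eos_pressure
    end module eos_mod

Tedious but mechanical work, which is exactly why it seemed like a good job for an LLM.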
[1] https://docs.anthropic.com/en/release-notes/system-prompts#m...