It’s interesting to see this type of redaction as plain text in the shared document (at least on mobile it’s indistinguishable from the response text).
bananapub · 2h ago
gpt-5 doesn't "think" anything, and all LLMs routinely return incorrect things in their output, which is why you're meant to have a human who knows what they're doing review it before doing anything with it/wasting anyone else's time with it/drinking that bottle of stuff you found under the sink.