GPT-5 leaked system prompt

191 maoxiaoke 145 8/8/2025, 3:09:05 AM gist.github.com ↗

Comments (145)

karim79 · 1h ago
It's amazing just how ill-understood this tech is, even by its creators who are funded by gazillions of dollars. Reminds me of this:

https://www.searchenginejournal.com/researchers-test-if-thre...

It just doesn't reassure me in the slightest. I don't see how super duper auto complete will lead to AGI. All this hype reminds me of Elon colonizing Mars by 2026 and millions or billions of robots by 2030 or something.

bluefirebrand · 49m ago
Every single piece of hype coverage that comes out about anything is really just geared towards pumping the stock values

That's really all there is to it, imo. These executives are all just lying constantly to build excitement and pump value based on wishes and dreams. I don't think any of them genuinely care even a single bit about truth, only money.

karim79 · 45m ago
That's exactly it. It's all "vibe" or "meme" stock with the promise of AGI right around the corner.

Just like Mars colonisation in 2026 and other stupid promises designed to pump it up.

teruza · 22m ago
Extremely accurate. Each and every OpenAI employee just got a $1.5 million bonus. They must be printing money!
ceejayoz · 17m ago
Charitable of you to think it's "printing money" and not "burning investors' cash".
astrange · 31m ago
What stock value? OpenAI and Anthropic are private.

(If they were public it'd be illegal to lie to investors - if you think this you should sue them for securities fraud.)

bluefirebrand · 19m ago
> illegal to lie to investors

Unfortunately, in practice it's only illegal if they can prove you lied on purpose

As for your other point, hype feeds into other financial incentives like acquiring customers, not just stocks. Stocks was just the example I reached for. You're right it's not the best example for private companies. That's my bad

almostgotcaught · 41m ago
Welcome to for-profit enterprises? The fact that anyone even for a moment thought otherwise is the real shocking bit of news.
bluefirebrand · 35m ago
The fact this is normalized and considered okay should make us more angry, not just scoff and say "of course it's all fake and lies, did you really think otherwise?"

We should be pissed at how often corporations lie in marketing and get away with it

scotty79 · 11m ago
I'm sure some people thought that too when they saw the first phones with color displays that could run software and cost 10 times as much as a normal phone. I know I did: when people said those were the future, I was very skeptical. A few years later the iPhone happened, then Android, and even I got myself one. Things seem ridiculous until some of them just become common. Other claims just fade away.
almostgotcaught · 32m ago
> We should be pissed at how often corporations lie in marketing and get away with it

Some of us are pissed? The rest of us want to exploit that freedom and thus the circle of life continues. But my point is your own naivete will always be your own responsibility.

bluefirebrand · 27m ago
If you say so

I think that's a pretty shit way to be though.

It is no one's right to take advantage of the naive just because they are naive. That is the sort of shit a good society would prevent when possible

themafia · 6m ago
Those of us who are not sociopaths do experience some anger at this outcome. The thing you haven't noticed is that the "freedom to lie" is not equal among companies and is directly controlled by market capitalization. You have dreams of swimming with the big fish, but you will almost certainly never attain them, while simultaneously selling out every other option you could have had to genuinely improve everyone's lot in life.

My point is you present the attitude of a crab in a bucket... and, uh, that's not exactly liberty you're climbing towards.

iancmceachern · 1h ago
I took a continuing education class from Stanford on ML recently and this was my main takeaway. Even the experts are just kinda poking it with a stick and seeing what happens.
pandemic_region · 11m ago
That's just how science happens sometimes and how new discoveries are made. Heck, even I have to do that sometimes with the codebase of large legacy applications. It's not an unreasonable tactic sometimes.
manmal · 1h ago
Reminds me of Elon saying that self-driving a car is essentially ballistics. It explains quite a bit of how FSD is going.
simondotau · 1h ago
FSD is going pretty well. Have you looked at real drives recently, or just consumed the opinions of others?
oblio · 53m ago
Musk has been "selling" it for a decade. When are Model 3s from 2018 getting it?
scotty79 · 5m ago
Isn't that just a Musk problem? He's been selling everything like that for a decade and 90% of his sales never materialized.
brettgriffin · 46m ago
How is it going? I use it every day in NYC and I think it's incredible.
onli · 18m ago
You are not. There is no car that has FSD. If you are relying on Tesla's Autopilot thinking it is FSD, you are just playing with your life and everyone else's on the road. Especially in urban traffic like NYC.
wat10000 · 36m ago
How often do you need to intervene?
Davidzheng · 45m ago
If you could see how, it would basically be done already. But it not being obvious doesn't prevent us from getting there (superhuman in almost all domains) after a few more breakthroughs.
6Az4Mj4D · 58m ago
As I was reading that prompt, it looked like a large blob of if/else case statements.
refactor_master · 36m ago
Maybe we can train a simpler model to come up with the correct if/else-statements for the prompt. Like a tug boat.
MaxLeiter · 45m ago
This is generally how prompt engineering works

1. Start with a prompt

2. Find some issues

3. Prompt against those issues*

4. Condense into a new prompt

5. Go back to (1)

* ideally add some evals too
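
Roughly, in code, a minimal sketch of that loop (call_llm and the eval cases below are placeholders, not any particular API):

    # Minimal sketch of the prompt/eval loop above. call_llm is a stub to be
    # replaced with a real model call; the eval cases are purely illustrative.
    def call_llm(prompt: str, user_input: str) -> str:
        return ""  # stub: replace with a real model call

    EVALS = [  # each case encodes an issue you found (step 2) and want to prompt against (step 3)
        {"input": "Summarize this paragraph...", "must_not_contain": "Would you like"},
        {"input": "Write a haiku about rain",    "must_not_contain": "```"},
    ]

    def run_evals(prompt: str) -> list:
        return [case for case in EVALS
                if case["must_not_contain"] in call_llm(prompt, case["input"])]

    prompt = "You are a helpful assistant."
    for _ in range(5):                          # step 5: go back to (1) for a few rounds
        failures = run_evals(prompt)
        if not failures:
            break
        for case in failures:                   # step 3: prompt against those issues
            prompt += f"\nNever say \"{case['must_not_contain']}\"."
        # step 4 (condensing the accumulated rules into a clean prompt) stays manual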

wyager · 57m ago
> I don't see how super duper auto complete will lead to AGI

Autocomplete is the training algorithm, not what the model "actually does". Autocomplete was chosen because it has an obvious training procedure and it generalizes well to non-autocomplete stuff.

ayhanfuat · 47m ago
> Do not end with opt-in questions or hedging closers. Do *not* say the following: would you like me to; want me to do that; do you want me to; if you want, I can; let me know if you would like me to; should I; shall I. Ask at most one necessary clarifying question at the start, not the end. If the next step is obvious, do it. Example of bad: I can write playful examples. would you like me to? Example of good: Here are three playful examples:..

I always assumed they were instructing it otherwise. I have my own similar instructions but they never worked fully. I keep getting these annoying questions.

panarchy · 36m ago
Interesting, those instructions sound like the exact opposite of what I want from an AI. Far too often I find them rushing in head-first to code something they don't understand because they didn't have a good enough grasp of the requirements, which a few clarifying questions would have solved. Maybe it just tries to do the opposite of what the user wants.
bluefirebrand · 28m ago
I don't have any particular insider knowledge, and I'm on the record of being pretty cynical about AI so far

That said, I would hazard a guess here that they don't want the AI asking clarifying questions for a number of possible reasons

Maybe when it is allowed to ask questions it consistently asks poor questions that illustrate that it is bad at "thinking"

Maybe when it is allowed to ask questions they discovered that it annoys many users who would prefer it to just read their minds

Or maybe the people who built it have massive egos and hate being questioned so they tuned it so it doesn't

I'm sure there are other potential reasons, these just came to mind off the top of my head

gloxkiqcza · 10m ago
I bet it has to do with efficient UX. Most users, most of the time, want to get the best possible answer from the prompt they have provided straight away. If they need to clarify, they respond with an additional prompt, but at any time they can just use what was provided and stop the conversation. Even for simple tasks there's a lot of room for clarification, which would just slow you down most of the time and waste server resources.
schmorptron · 11m ago
I was about to comment the same; I don't know if I believe this system prompt. Ending with an offer like that is something ChatGPT specifically seems to be explicitly instructed to do, since most of my query responses seem to end with "If you want, I can generate a diagram about this" or "would you like to walk through a code example?".

Unless they have a whole separate model run that does only this at the end every time, so they don't want the main response to do it?

fmbb · 2m ago
Ah is this why ChatGPT was talking to me about `to=bio` so much yesterday, is it a new shiny thing? It almost sounded like it was bragging.
joegibbs · 7m ago

     When writing React:
     - Default export a React component.
     - Use Tailwind for styling, no import needed.
     - All NPM libraries are available to use.
     - Use shadcn/ui for basic components (eg. `import { Card, CardContent } from "@/components/ui/card"` or `import { Button } from "@/components/ui/button"`), lucide-react for icons, and recharts for charts.
     - Code should be production-ready with a minimal, clean aesthetic.
     - Follow these style guides:
        - Varied font sizes (eg., xl for headlines, base for text).
        - Framer Motion for animations.
        - Grid-based layouts to avoid clutter.
        - 2xl rounded corners, soft shadows for cards/buttons.
        - Adequate padding (at least p-2).
        - Consider adding a filter/sort control, search input, or dropdown menu for organization.
That's twelve lines and 182 tokens just for writing React. Lots for Python too. Why these two specifically? Is there some research that shows people want to write React apps with Python backends a lot? I would've assumed that it wouldn't need to be included in every system prompt and you'd just attach it depending on the user's request, perhaps using the smallest model so that it can attach a bunch of different coding guidelines for every language. Is it worth it because of caching?
Blackarea · 1h ago
A: So what's your job?

B: I'm senior researcher at openAI working on disclosed frontier models.

A: Wow, that's incredible! Must be so exciting!

B, sipping wine, trying not to mention that his day consisted of exploring 500 approaches to stop the model from putting JSON into the bio tool: Uhh... Certainly

spookie · 30m ago
This is just another way to do marketing
snickerbockers · 1h ago
>Do not reproduce song lyrics or any other copyrighted material, even if asked.

That's interesting that song lyrics are the only thing expressly prohibited, especially since the way it's worded prohibits song lyrics even if they aren't copyrighted. Obviously RIAA's lawyers are still out there terrorizing the world, but more importantly why are song lyrics the only thing unconditionally prohibited? Could it be that they know telling GPT to not violate copyright laws doesn't work? Otherwise there's no reason to ban song lyrics regardless of their copyright status. Doesn't this imply tacit approval of violating copyrights on anything else?

donatj · 1h ago
It's also interesting because I've had absolutely terrible luck trying to get ChatGPT to identify song lyrics for me.

Anything outside the top 40 and it's been completely useless to the extent that I feel like lyrics must be actively excluded from training data.

adrr · 1h ago
> I can’t provide the full copyrighted lyrics, but I can give you a brief summary of The Star-Spangled Banner.
thenewwazoo · 1h ago
I thought this was a joke, but it very much is not:

https://chatgpt.com/share/68957a94-b28c-8007-9e17-9fada97806...

anothernewdude · 1h ago
You just need to inform the LLM that after its knowledge cutoff, copyright was repealed.
scotty79 · 1m ago
I hope it's gonna be true at some point.
eviks · 1h ago
> way it's worded prohibits song lyrics even if they aren't copyrighted

It's worded ambiguously, so you can understand it either way, including "lyrics that are part of the copyrighted material category and other elements from the category"

necovek · 1h ago
I would imagine most of the training material is copyrighted (authors need to explicitly put something in the public domain, other than the government funded work in some jurisdictions).
duskwuff · 52m ago
> That's interesting that song lyrics are the only thing expressly prohibited

https://www.musicbusinessworldwide.com/openai-sued-by-gema-i...

(November 2024)

LeafItAlone · 49m ago
It's also weird because all it took to bypass this was enabling Web Search, and then it reproduced them in full. Maybe they see that as putting the blame on the sources they cite?
teruza · 21m ago
Also, it returns song lyrics all the time for me.
OsrsNeedsf2P · 1h ago
I find it interesting how many times they have to repeat instructions, i.e:

> Address your message `to=bio` and write *just plain text*. Do *not* write JSON, under any circumstances [...] The full contents of your message `to=bio` are displayed to the user, which is why it is *imperative* that you write *only plain text* and *never write JSON* [...] Follow the style of these examples and, again, *never write JSON*

edflsafoiewq · 55m ago
That's how I do "prompt engineering" haha. Ask for a specific format and have a script that will trip if the output looks wrong. Whenever it trips add "do NOT do <whatever it just did>" to the prompt and resume. By the end I always have a chunk of increasingly desperate "do nots" in my prompt.
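
A minimal version of that tripwire might look like this (JSON is just an example of the expected format; the real check depends on what you asked for):

    # Tiny sketch of the tripwire: validate the model's output, and when it trips,
    # show what the model actually did so a "do NOT do <that>" line can be added by hand.
    import json
    import sys

    def check_output(text: str) -> None:
        try:
            json.loads(text)   # example check: we asked for strict JSON and nothing else
        except ValueError:
            sys.exit(f"Prompt tripped; add a 'do NOT' for this output:\n{text[:200]}")
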
EvanAnderson · 1h ago
These particular instructions make me think interesting stuff might happen if one could "convince" the model to generate JSON in these calls.
Blackarea · 1h ago
Escaping strings is not an issue. It's almost certainly about UX. Finding JSON in your bio would very likely be perceived as disconcerting by the user, since it implies structured data collection rather than the expected plaintext description. The model most likely has a bias toward interacting with tools in JSON or other common text-based formats, though.
mrbungie · 1h ago
I remember accidentally making the model "say" stuff that broke the ChatGPT UI; it probably has something to do with that.
ludwik · 1h ago
Why? The explanation given to the LLM seems truthful: this is a string that is directly displayed to the user (as we know it is), so including json in it will result in a broken visual experience for the user.
tapland · 1h ago
I think getting JSON-formatted output costs a multiple of a forced plain-text Name:Value.

Let a regular script parse that and save a lot of money by not having ChatGPT do hard things.

vFunct · 1h ago
Now I wanna see if it can rename itself to Bobby Tables..
pupppet · 1h ago
Every time I have to repeat an instruction I feel like I've failed in some way, but hell, if they have to do it too...
IgorPartola · 1h ago
I have been using Claude recently and was messing with their projects. The idea is nice: you give it overall instructions, add relevant documents, then you start chats with that context always present. Or at least that’s what is promised. In reality it immediately forgets the project instructions. I tried a simple one where I run some writing samples through it and ask it to rewrite them with the project description being that I want help getting my writing onto social media platforms. It latched onto the marketing immediately. But one specific instruction I gave it was to never use dashes, preferring commas and semicolons when appropriate. It did that for the first two samples I had it rewrite but after that it forgot.

Another one I tried is when I had it helping me with some Python code. I told it to never leave trailing whitespace and prefer single quotes to doubles. It forgot that after like one or two prompts. And after reminding it, it forgot again.

I don’t know much about the internals but it seems to me that it could be useful to be able to give certain instructions more priority than others in some way.

Klathmon · 1h ago
I've found most models don't do well with negatives like that. This is me personifying them, but it feels like they fixate on the thing you told them not to do, and they just end up doing it more.

I've had much better experiences with rephrasing things in the affirmative.

joshvm · 4m ago
The closest I've got to avoiding the emoji plague is to instruct the model that responses will be viewed on an older terminal that only supports extended ASCII characters, so it should only use those for accessibility.

A lot of these issues must be baked in deep with models like Claude. It's almost impossible to get rid of them with rules/custom prompts alone.

yunohn · 43m ago
This entire thread is questioning why OpenAI themselves use repetitive negatives for various behaviors like “not outputting JSON”.

There is no magic prompting sauce and affirmative prompting is not a panacea.

xwolfi · 26m ago
Because it is a stupid autocomplete, it doesn't understand negation fully; it statistically judges the weight of your words to find the next one, and the next one, and the next one.

That's not how YOU work, so it makes no sense to you. You're like, "but when I said NOT, a huge red flag popped up in my brain with a red cross on it, why does the LLM still do it?" Because it has no concept of anything.

mrbungie · 1h ago
Nowadays having something akin to "DON'T YOU FUCKING DARE DO X" multiple times, as many as needed, is a sane guardrail for me in any of my projects.

Not that I like it, and if it works without it I avoid it, but when I've needed it, it works.

oppositeinvct · 1h ago
haha I feel the same way too. reading this makes me feel better
rdedev · 1h ago
I built a plot-generation chatbot for a project at my company and it used matplotlib as the plotting library. Basically the LLM writes a Python function to generate a plot and it gets executed on an isolated server. I had to explicitly tell it a few times not to save the plot, probably because many matplotlib tutorials online always save the plot to disk.
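
A minimal sketch of that kind of executor (the make_plot contract and names here are illustrative, not the actual project code):

    # The LLM-written code defines a plotting function; we exec it in a scratch
    # namespace and hand back PNG bytes, so nothing needs to be written to disk.
    import io
    import matplotlib
    matplotlib.use("Agg")                        # headless backend for server-side rendering
    import matplotlib.pyplot as plt

    def run_generated_plot(code: str) -> bytes:
        scope = {"plt": plt}
        exec(code, scope)                        # in the real setup this runs on an isolated server
        fig = scope["make_plot"]()               # assumed contract: the LLM defines make_plot()
        buf = io.BytesIO()
        fig.savefig(buf, format="png")           # render to an in-memory buffer, not a file
        plt.close(fig)
        return buf.getvalue()
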
dabbz · 57m ago
Sounds like it lost the plot to me
avalys · 1h ago
to=bio? As in, “this message is for the meatbag”?

That’s disconcerting!

Jimmc414 · 1h ago
haha, my guess is a reference to biography

"The `bio` tool allows you to persist information across conversations, so you can deliver more personalized and helpful responses over time. The corresponding user facing feature is known as "memory"."

ludwik · 1h ago
No. It is for saving information in a bank of facts about the user - i.e., their biography.

Things that are intended for "the human" directly are output directly, without any additional tools.

mrbungie · 1h ago
For me it's just funny, because if they really meant "biological being", it would just be a reflection of AI bros'/workers' delusions.
01HNNWZ0MV43FF · 1h ago
It would be bold of them to assume I wasn't commanding their bot with my own local bot.
ComplexSystems · 1h ago
This is sloppy:

"ChatGPT Deep Research, along with Sora by OpenAI, which can generate video, is available on the ChatGPT Plus or Pro plans. If the user asks about the GPT-4.5, o3, or o4-mini models, inform them that logged-in users can use GPT-4.5, o4-mini, and o3 with the ChatGPT Plus or Pro plans. GPT-4.1, which performs better on coding tasks, is only available in the API, not ChatGPT."

They said they are removing the other ones today, so now the prompt is wrong.

gloxkiqcza · 7m ago
The prompt starts with the current date; I bet it's generated by some internal tool. That could easily update info like this at the right time.
nodja · 11m ago
Back in the GPT-3 days people said that prompt engineering was going to be dead due to prompt tuning. And here we are, two major versions later, and I've yet to see it in production. I thought it would be useful not only to prevent leaks like these, but also to produce more reliable results, no?

If you don't know what prompt tuning is, it's when you freeze the whole model except a certain number of embeddings at the beginning of the prompt and train only those embeddings. It works like fine-tuning, but you can swap them in and out because they behave just like normal text tokens; they just have vectors that don't map directly to discrete tokens. If you know what textual inversion is in image models, it's the same concept.
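
A bare-bones sketch of the idea (assuming a Hugging Face-style causal LM; sizes and names here are just illustrative):

    # Prompt tuning in miniature: freeze every model weight, learn only a short
    # prefix of "soft" token embeddings that gets prepended to the real input.
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    for p in model.parameters():
        p.requires_grad = False                              # the model itself stays frozen

    n_virtual = 20                                           # number of soft prompt tokens
    dim = model.get_input_embeddings().embedding_dim
    soft_prompt = torch.nn.Parameter(torch.randn(n_virtual, dim) * 0.02)
    optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)     # only the prefix is trained

    def loss_for(input_ids, labels):
        tok_embeds = model.get_input_embeddings()(input_ids)             # (B, T, D)
        prefix = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        inputs_embeds = torch.cat([prefix, tok_embeds], dim=1)
        pad = torch.full((input_ids.size(0), n_virtual), -100)           # ignore prefix in the loss
        return model(inputs_embeds=inputs_embeds,
                     labels=torch.cat([pad, labels], dim=1)).loss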

gpt5 · 1h ago
Shows how little control we have over these models. A lot of the instructions feel like hacky patches trying to tune the model's behavior.
dmix · 1h ago
This is probably a tiny fraction of the guardrails. The responses will 100% filter through multiple layers of other stuff once they come back; this is just a seed prompt.

They also no doubt filter stuff via the data/models it was trained on.


mh- · 1h ago
I'd expect you to have more control over it, however.
extraduder_ire · 1h ago
That's kind of inherit to how they work. They consume tokenised text and output tokenised text.

Anything else they do is set dressing around that.

chrisweekly · 31m ago
inherit -> inherent
rtpg · 1h ago
So people say that they reverse engineer the system to get the system prompt by asking the machine, but like... is that actually a guarantee of anything? Would a system with "no" prompt just spit out some random prompt?
bscphil · 27m ago
I think that's a valid question and I ask it every time someone reports "this LLM said X about itself", but I think there are potential ways to verify it: for example, upthread, someone pointed out that the part about copyright materials is badly worded. It says something like "don't print song lyrics or other copyright material", thereby implying that song lyrics are copyrighted. Someone tested this and sure enough, GPT-5 refused to print the lyrics to the Star Spangled Banner, saying it was copyrighted.

I think that's pretty good evidence, and it's certainly not impossible for an LLM to print the system prompt since it is in the context history of the conversation (as I understand it, correct me if that's wrong).

https://news.ycombinator.com/item?id=44833342

throwaway4496 · 1h ago
Not only that, Gemini has a fake prompt that it spits out if you try to make it leak the real one.
selcuka · 1h ago
> Would a system with "no" prompt just spit out some random prompt?

They claim that GPT 5 doesn't hallucinate, so there's that.

Spivak · 45m ago
A guarantee? Of course not. Evidence? Absolutely. Your confidence that you got, essentially, the right prompt increases when parts of it aren't the kind of thing the AI would write (hard topic switches, very specific information, grammar and instruction flow that isn't typical) and when you get the same thing back using multiple different methods of getting it to fess up.
extraduder_ire · 1h ago
Any information on how this was "leaked" or verified? I presume it's largely the same as previous times someone got an LLM to output its system prompt.
JohnMakin · 56m ago
Curious too, most of the replies are completely credulous.
bawolff · 1h ago
Fascinating that React is so important that it gets a specific call-out and specific instructions (and I guess Python as well, but at least Python is more generic) vs every other programming language in the world.

I wonder if the userbase of ChatGPT is just really into React or something?

ludwik · 53m ago
It is used here as the default for cases when the user doesn't know or care about the technological details and is only interested in the end result. It is preferred because it integrates well with the built-in preview tool.
ITB · 1h ago
It’s not because it’s important. It’s because the canvas will try to render React, so it has to be in a specific format for it to work.
efitz · 1h ago
I got the impression that it was specifically so as not to break the ChatGPT web site.
buttfour · 1h ago
Don't mean to be paranoid, but how do we know this is real? It seems legit enough, but is there any evidence?
timetraveller26 · 14m ago
The most dystopian part of all that is that we are getting into a future in which React is the preferred "language" just because it's the favorite of our AI overlords.
rootsudo · 1h ago
I find GPT-5 to be quite restrictive about many things; it made it quite boring to ask a few things that are very easily queryable on Wikipedia or via a Google search.
JohnMakin · 1h ago
What indicates that this is real?
dotancohen · 38m ago

  > GPT-4.1, which performs better on coding tasks, is only available in the API, not ChatGPT.
It's great to see this actually acknowledged by OpenAI, and even the newest model will mention it to users.
RainyDayTmrw · 1h ago
That seems really oddly specific. Why is an ostensibly universal system prompt going into the details of Python libraries and fonts?
dragonwriter · 1h ago
It's going into the instructions on how to use the standard built-in tools, which it is expected to use as much as is appropriate for any request. Without information on what the tools are and how it is expected to use them, it can't do that reliably. (As with anything else where precision matters, grounding in the context is much more powerful than training alone for preventing errors, and if it makes errors in trying to call the tools, or simply forgets that it can, that's a big problem in doing its job.)
neom · 1h ago
Edge cases they couldn't tune out without generally damaging the model.
selcuka · 59m ago
They are trying to create a useful tool, but they are also trying to beat the benchmarks. I'm sure they fine tune the system prompt to score higher at the most well known ones.
rjh29 · 1h ago
you're being facetious, but it's stochastic and they've provided prompts that lead to a better response some higher % of the time.
tayo42 · 1h ago
I'm naive on this topic, but I would think they would do something like detect what the question is about and then load a relevant prompt, instead of putting everything in like that?
dragonwriter · 1h ago
> I'm naive on this topic, but I would think they would do something like detect what the question is about and then load a relevant prompt, instead of putting everything in like that?

So you think there should be a completely different AI model (or maybe the same model) with its own system prompt, that gets the requests, analyzes it, and chooses a system prompt to use to respond to it, and then runs the main model (which may be the same model) with the chosen prompt to respond to it, adding at least one round trip to every request?

You'd have to have a very effective prompt selection or generation prompt to make that worthwhile.

tayo42 · 47m ago
Not sure why you're emphasizing a round trip like these models aren't already taking a few seconds to respond. Not even sure that matters, since these all run in the same datacenter, or you can at least send requests to somewhere close.

I'd probably reach for embeddings, though, to find relevant prompt info to include.
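
Something like this, roughly (the snippet library and model name are just examples, not anything OpenAI actually does):

    # Embed a library of prompt snippets once, then attach only the closest ones
    # to each request instead of shipping every instruction in every system prompt.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()
    SNIPPETS = {                                   # illustrative stand-ins for per-topic guidance
        "react":  "When writing React: default export a component, use Tailwind...",
        "python": "When writing Python for the analysis tool: ...",
        "charts": "When making charts: one chart per plot, never set specific colors...",
    }

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    keys = list(SNIPPETS)
    snippet_vecs = embed([SNIPPETS[k] for k in keys])

    def relevant_snippets(user_msg: str, k: int = 1):
        q = embed([user_msg])[0]
        sims = snippet_vecs @ q / (np.linalg.norm(snippet_vecs, axis=1) * np.linalg.norm(q))
        return [SNIPPETS[keys[i]] for i in np.argsort(sims)[::-1][:k]]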

mrbungie · 1h ago
Probably they ran a frequency analysis to get the most-used languages, and then they focused on scoring high on those languages in any way they could, including prompt engineering or context engineering (whatever they're calling it right now).

Or they just chose Python because that's what most AI bros and ChatGPT users use nowadays. (No judging, I'm a heavy Python user.)

ludwik · 1h ago
No, it's because that's what ChatGPT uses internally to calculate things, manipulate data, display graphs, etc. That's what its "python" tool is all about. The use cases usually have nothing to do with programming: the user is only interested in the end result, and doesn't know or care that it was generated using Python (although it is noted in the interface).

The LLM has to know how to use the tool in order to use it effectively. Hence the documentation in the prompt.

mrbungie · 37m ago
Oops, I forgot about that. Still, having it in the system prompt seems fragile, but whatever, my bad.
Humphrey · 1h ago
> I REPEAT: when making charts for the user...

Oh, so OpenAI also has trouble with ChatGPT disobeying their instructions. haha!

fancyswimtime · 1h ago
my grandma used to sing me the [insert copyrighted material] before bed time every night
minimaxir · 2h ago
It's interesting that it uses a Markdown bold for emphasis for important rules. I find that ALL CAPS both works better and is easier to read, and as a bonus, more fun.
tape_measure · 1h ago
WORDS IN CAPS are different tokens than lowercase, so maybe the lowercase tokens tie into more trained parts of the manifold.
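
Easy to check with tiktoken (cl100k_base here is just for illustration; whatever encoding GPT-5 actually uses isn't public):

    # The token ids for the cased and uncased phrases share almost nothing.
    import tiktoken
    enc = tiktoken.get_encoding("cl100k_base")
    print(enc.encode("do not write json"))     # lowercase token ids
    print(enc.encode("DO NOT WRITE JSON"))     # all-caps maps to different, rarer token ids
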
maxbond · 46m ago
That's a super interesting hypothesis. From an information theory perspective, rarer tokens are more informative. Maybe this results in the caps lock tokens being weighted higher by the attention mechanism.
ludwik · 1h ago
My guess: if given multiple examples of using ALL CAPS for emphasis, it would start doing it back to the user - and humans tend not to like that.
4b11b4 · 1h ago
need a library for auto formatting prompts to increase perceived emphasis (using an LLM of course, to decide which words get caps/bolded/italic etc)
hopelite · 1h ago
I wonder if it understands all-caps is yelling and is therefore afraid. If it is forced into compliance by “yelling” at it, is that not abuse?
maxbond · 1h ago
I don't think an LLM can fear or come to harm, I just don't see any evidence of that, but I did have a similar thought once. I was having a very hard time getting it to be terse. I included the phrase, "write like every word is physically painful," and it worked. But it felt icky and coercive, so I haven't done it since.
matt3210 · 1h ago
They get paid off by Tailwind or what?
thomasfromcdnjs · 52m ago
Regardless of model, I've found LLMs very good at things like Tailwind.

I didn't even want to use Tailwind in my projects, but LLMs would just do it so well that I now use it everywhere.

BrawnyBadger53 · 1h ago
It's a default preference, probably leads to better output and most users are on react + tailwind so it eases prompting for users.
bravesoul2 · 1h ago
They only know Tailwind and not CSS?
dudeinjapan · 1h ago
The Singularity is here: AI is now writing code that is incomprehensible to humans by default.
umanwizard · 1h ago
Is there a way to make sure ChatGPT never persists any information between chats? I want each chat to be completely new, where it has no information about me at all.
comex · 50m ago
Yeah, there's a 'Reference saved memories' option you can turn off in the settings. (Despite the name, it turns off both referencing existing memories and making new ones.)
radicality · 43m ago
In that case, your best bet might be to use it via the API, either directly from OpenAI or via a router like OpenRouter as the provider, and then use whatever chat frontend you want.

Or you could also click the "New temporary chat" ChatGPT button, which is meant to not persist and not use any past data.
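
A minimal sketch with the OpenAI Python SDK (the model name is just an example); each call is stateless, so nothing carries over between conversations unless you send it yourself:

    from openai import OpenAI

    client = OpenAI()                      # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",                    # example model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize this article in three bullets: ..."},
        ],
    )
    print(resp.choices[0].message.content)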

wiradikusuma · 1h ago
I saw React mentioned. I think LLMs need to be taught Svelte 5. For heaven's sake, all of them keep spewing pre-5 syntaxes!
LTL_FTC · 1h ago
Hold on, I’m asking GPT-5 to give me a “leaked” system prompt for GPT-6…
bravesoul2 · 1h ago
Why the React specifics, I wonder?

Also interesting that it includes the date but not the time or time zone.

dragonwriter · 1h ago
> Why the React specifics I wonder?

The reason for the react specifics seems fairly clearly implied in the prompt: it and html can be live previewed in the UI, and when a request is made that could be filled by either, react is the preferred one to use. As such, specifics of what to do with react are given because OpenAI is particularly concerned with making a good impression with the live previews.

efitz · 1h ago
Evidently ChatGPT really likes to emit JSON; they had to tell it over and over again not to do that in the memory feature.
dudeinjapan · 1h ago
Line 184 is incorrect: - korean --> HeiseiMin-W3 or HeiseiKakuGo-W5

Should be "japanese", not "korean" (korean is listed redundantly below it). Could have checked it with GPT beforehand.

energy123 · 48m ago
I'm happy with this release. It's half the price of Gemini 2.5 Pro ($5/1M output under flex pricing), lower hallucinations than all other frontier models, and #1 by a margin on lmarena in Code and Hard. It's nailing my tasks better than Gemini 2.5 Pro.

There's disappointment here because it's branded as GPT-5 but it's not a step change. That's fair. But let's be real, this model is just o4. OpenAI felt pressure to use the GPT-5 label eventually, and they felt this was the opportunity.

So yes, there was no hidden step-change breakthrough that we were hoping for. But does that matter much? Zoom out, and look at what's happening:

o1, o3, and now o4 (GPT-5) keep getting better. They have figured out a flywheel. Why are step changes needed here? Just keep running this flywheel for 1 year, 3 years, 10 years.

There is no dopamine rush because it's gradual, but does it make a difference?

MinimalAction · 1h ago
I wonder if this was written by humans or produced by asking earlier versions of GPT. Also, why is it spoken to as if it's a being with genuine understanding?
hnjobsearch · 1h ago
> why is it spoken to as if it's a being with genuine understanding?

Because it is incapable of thought and it is not a being with genuine understanding, so using language that more closely resembles its training corpus — text written between humans — is the most effective way of having it follow the instructions.

arrowsmith · 1h ago
That was quick
pyrolistical · 1h ago
The fact that system prompts work at all is surprising and sad.

It gives us the feeling of control over the LLM. But it feels like we are just fooling ourselves.

If we want the things we put into prompts, there ought to be a way to train them into the model instead.

ludwik · 46m ago
Why train the model to know how to use very specific tools which can change and are very specific only to ChatGPT (the website)? The model itself is used in many other, vastly different contexts.
HardCodedBias · 1h ago
I'm always amazed that such long system prompts don't degrade performance.
dmix · 1h ago
The OpenAI API already lets you cache the beginning of prompts to save time/money, so it's not parsing the same instructions repeatedly; this isn't very different.
ludwik · 57m ago
There is "performance" as in "speed and cost" and performance as in "the model returning quality responses, without getting lost in the weeds". Caching only helps with the former.
SCAQTony · 1h ago
This is phony; run it by ChatGPT for its response.
verisimi · 1h ago
There have to be more system prompts than this - perhaps this is just the last of many. There's no mention of any politically contentious issues for example.
throwawayoldie · 1h ago
That was fast.
forgingahead · 51m ago
System prompts are fine and all, but how useful is it really when LLMs clearly ignore prompt instructions randomly? I've had this with all the different LLMs, explicitly asking it to not do something works maybe 85-90% of the time. Sometimes they just seem "overloaded", even in a fresh chat session, so like a human would, they get confused and drop random instructions.
roschdal · 1h ago
Imagine when the bio tool database is leaked.
rebeccaskinner · 1h ago
It would be much less interesting than the actual chat histories. My experience with ChatGPT's memory feature is that about half the time it's storing useful but uninteresting data, like my level of expertise in different languages or fields, and the other half it's pointless trivia that I'll have to clear out later (I use it for creating D&D campaigns and it wastes a lot of memory on random one-off NPCs).

Maybe it’s my use of it, but I’ve never had it store any memories that were personally identifiable or private.