GPT-5 System Prompt
28 points by georgehill on 8/9/2025, 6:48:24 AM | 9 comments | github.com
> Place rich UI elements within tables, lists, or other markdown elements when appropriate.
Does inference need to process this whole thing from scratch at the start of every chat?
Or is there some way to cache the state of the LLM after processing this prompt, before the first user token is received, and every request starts from this cached state?
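Yes, that's essentially what KV-cache prefix reuse (often called "prompt caching") does: the attention keys and values for the fixed system-prompt tokens are computed once and stored, and every subsequent request resumes from that saved state instead of reprocessing the prefix. A toy single-head-attention sketch of the idea, with made-up dimensions and no relation to any real model's API:

```python
# Toy sketch of KV-cache prefix reuse (single attention head, made-up sizes).
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hypothetical model dimension
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(x_new, k_cache, v_cache):
    """Process only the NEW tokens, attending over cached + new keys/values."""
    q = x_new @ Wq
    k = np.vstack([k_cache, x_new @ Wk])  # reuse cached prefix keys
    v = np.vstack([v_cache, x_new @ Wv])  # reuse cached prefix values
    scores = q @ k.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v, k, v

# Pay for the long system prompt once, before any user token arrives...
system_prompt = rng.standard_normal((100, d))  # stand-in for prompt embeddings
_, k_cache, v_cache = attend(system_prompt, np.empty((0, d)), np.empty((0, d)))

# ...then each request only runs attention over its own new tokens.
user_turn = rng.standard_normal((5, d))
out, k_cache, v_cache = attend(user_turn, k_cache, v_cache)
```

The point is that the second call does a 5-token forward pass, not a 105-token one; the prefix's keys/values are looked up, never recomputed.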
Which is to say, you wouldn't want to bake such a thing too deeply into a multi-terabyte pile of floating-point weights, because that makes operating the thing harder.
System prompts like these are NOT counted toward the context size you're billed for.