Ask HN: How does structured output from LLMs work under the hood?

2 points by dcreater on 5/7/2025, 5:48:25 PM | 0 comments
The OpenAI, Ollama, and LiteLLM Python packages allow you to set the response format to a Pydantic model. As I understand it, this just serializes the model into a JSON schema. Is that schema then simply passed as context, with the LLM asked to adhere to it? Or is something more technical/deterministic happening that constrains the output to the provided format?
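For what it's worth, the second guess is closer to how the major implementations describe themselves: the Pydantic model is turned into a JSON Schema (Pydantic v2 exposes this as `model_json_schema()`), and constrained-decoding engines compile that schema into a grammar or automaton used to mask invalid tokens at every decoding step, rather than merely prompting and hoping. Below is a toy, character-level sketch of the masking idea, not any real engine's code: the schema, the `allowed_next` grammar, and the `chatty_model` stand-in are all hypothetical, and real systems operate over the tokenizer's vocabulary and logits instead of single characters.

```python
import json

# Toy sketch of grammar-constrained decoding for a hypothetical one-field
# schema: {"age": <digit>}. Real engines compile a JSON Schema into an
# automaton over the tokenizer's vocabulary and zero out logits of invalid
# tokens; the core idea is the same -- at each step, only continuations
# that keep the partial output valid are allowed.

LITERAL = '{"age": '  # structural prefix forced verbatim by the grammar

def allowed_next(output: str) -> set:
    """Characters permitted after the partial output `output`."""
    if len(output) < len(LITERAL):
        return {LITERAL[len(output)]}   # structural text: exactly one choice
    body = output[len(LITERAL):]
    if body == "":
        return set("0123456789")        # the value: one digit in this toy
    if body[-1].isdigit():
        return {"}"}                    # then the closing brace is forced
    return set()                        # output complete; decoding stops

def constrained_decode(model, max_steps=20) -> str:
    """Greedy decoding, except the model may only pick from the allowed set."""
    out = ""
    for _ in range(max_steps):
        allowed = allowed_next(out)
        if not allowed:
            break
        out += model(out, allowed)
    return out

def chatty_model(prefix, allowed):
    """A stand-in 'model' that wants to answer in prose; the mask overrules it."""
    for ch in "The answer is 42!":
        if ch in allowed:
            return ch
    return sorted(allowed)[0]           # fall back to any legal character

result = constrained_decode(chatty_model)
print(result)              # {"age": 4}
print(json.loads(result))  # always parses, by construction
```

Note that the "model" here never agrees to emit JSON; the sampler simply refuses every character that would break the grammar, which is why the output is valid by construction rather than by the model's cooperation.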
