Building your own CLI coding agent with Pydantic-AI
123 points by vinhnx 8/28/2025 28 comments (martinfowler.com)
Its agent model feels similar to OpenAI's: flexible and dynamic without needing to predefine a DAG. Execution is automatically traced and can be exported to Logfire, which makes observability pretty smooth too. Looking forward to their upcoming V1 release.
Shameless plug: I've been working on a DBOS [2] integration into Pydantic-AI as a lightweight durable agent solution.
[1] https://github.com/pydantic/pydantic-ai/pull/2638
[2] https://github.com/dbos-inc/dbos-transact-py
On coding agents: I wonder if there are any good/efficient ways to measure how well different implementations actually perform on coding? SWE-bench seems good, but expensive to run. Effectively I'm curious about things like: given tool definition X vs Y (eg. diff vs full file edit), prompt for tool X vs Y (how it's described, does it use examples), model choice (eg. MCP with Claude, but python-exec inline with GPT-5), sub-agents, todo lists, etc., how much does each ablation matter? And measure not just success, but cost to success too (efficiency).
Overall, it seems like in the phase space of options, everything “kinda works” but I’m very curious if there are any major lifts, big gotchas, etc.
I ask because it feels like the Claude Code CLI always does a little bit better, subjectively, for me, but I haven't seen an LMArena-style or clear A-vs-B comparison or measurement.
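The ablation idea above can be sketched as a toy scoring harness. Everything here (config names, outcomes, per-run costs, the `evaluate` helper) is invented for illustration; a real harness would replace the canned `outcomes` with actual agent runs on a benchmark like SWE-bench and `cost_per_run` with measured token spend.

```python
from dataclasses import dataclass

@dataclass
class AblationResult:
    config: str
    successes: int
    runs: int
    total_cost: float

    @property
    def success_rate(self) -> float:
        return self.successes / self.runs

    @property
    def cost_per_success(self) -> float:
        # Efficiency metric: a config that succeeds more often but costs far
        # more per run can still lose on this number.
        return self.total_cost / self.successes if self.successes else float("inf")

def evaluate(config: str, outcomes: list[bool], cost_per_run: float) -> AblationResult:
    """Score one configuration on a fixed task set (stand-in for real agent runs)."""
    return AblationResult(config, sum(outcomes), len(outcomes), cost_per_run * len(outcomes))

# Hypothetical ablation: diff-based edits vs. full-file rewrites on 10 tasks.
diff_edit = evaluate("diff-edit", [True] * 6 + [False] * 4, cost_per_run=0.04)
full_file = evaluate("full-file", [True] * 7 + [False] * 3, cost_per_run=0.09)

for r in sorted([diff_edit, full_file], key=lambda r: r.cost_per_success):
    print(f"{r.config}: {r.success_rate:.0%} success, ${r.cost_per_success:.3f}/success")
```

In this made-up run the full-file config wins on raw success rate but loses on cost per success, which is exactly the kind of trade-off a success-only leaderboard hides.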
I haven't looked deeply into pydantic's implementation, but this might be related to tool-usage vs completion [0], the backend LLM model, etc. All I know is that with the same LLM models, `openai.client.chat.completions` + a custom prompt that passes in the pydantic JSON schema + post-processing to instantiate `SomePydanticModel(**json)` creates objects successfully, whereas vanilla pydantic-ai rarely does, regardless of the number of retries.
I went with what works in my code, but didn't remove the pydantic-ai dependency completely because I'm hoping something changes. I'd say that getting dynamic prompt context by leveraging JSON schemas, model-and-field docs from pydantic, plus maybe other results from runtime-inspection (like the actual source-code) is obviously a very good idea. Many people want something like "fuzzy compilers" with structured output, not magical oracles that might return anything.
Documentation is context, and even very fuzzy context is becoming a force multiplier. Similarly, languages/frameworks that have good support for runtime inspection/reflection and an ecosystem with strong tooling for things like ASTs really should be the best pairings for AI and agents.
[0]: https://github.com/pydantic/pydantic-ai/issues/582
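A minimal sketch of the workaround described above: embed the model's own JSON schema in the prompt, then validate the raw LLM reply with the Pydantic model. The nested `Person`/`Address` models and the canned `llm_reply` are invented for illustration; in real use the reply would come from a chat completions call.

```python
import json
from pydantic import BaseModel, Field

class Address(BaseModel):
    """Postal address (a nested model, not a flat top-level structure)."""
    city: str = Field(description="City name")
    country: str = Field(description="Country name")

class Person(BaseModel):
    """Docstrings and field descriptions land in the generated schema,
    so they double as prompt context for free."""
    name: str = Field(description="Full name")
    address: Address

# Pydantic generates the JSON Schema, including nested $defs and descriptions.
schema = json.dumps(Person.model_json_schema(), indent=2)
prompt = (
    "Reply with ONLY a JSON object matching this JSON Schema:\n"
    f"{schema}\n\nExtract the person from: 'Ada lives in London, UK.'"
)

# Pretend this string came back from the chat completions endpoint:
llm_reply = '{"name": "Ada", "address": {"city": "London", "country": "United Kingdom"}}'

# Post-processing step: validation raises ValidationError on malformed output,
# which is the natural hook for a retry loop.
person = Person.model_validate_json(llm_reply)
print(person.address.city)  # London
```

The appeal of the "fuzzy compiler" framing is that the failure mode is a typed `ValidationError` you can retry on, rather than silently malformed output.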
That's very odd, would you mind sharing the Pydantic model / schema so I can have a look? (I'm a maintainer) What you're doing with a custom prompt that includes the schema sounds like our Prompted output mode (https://ai.pydantic.dev/output/#prompted-output), but you should get better performance still with the Native or Tool output modes (https://ai.pydantic.dev/output/#native-output, https://ai.pydantic.dev/output/#tool-output) which leverage the APIs' native strict JSON schema enforcement.
One thing I can say, though: my models differ from the docs examples mostly in that they are not "flat" with simple top-level data structures. They have lots of nested models-as-fields.
Thanks, this is a very interesting thread on multiple levels. It does seem related to my problem and I also learned about field docstrings :) I'll try moving my dependency closer to the bleeding edge
Nevertheless I would strongly recommend not using the AI providers' libraries directly, as you quickly get locked in, in an extremely fast-paced market where today's king can change weekly.
The vast majority of bugs we encounter are not in Pydantic AI itself but rather in having to deal with supposedly OpenAI Chat Completions-compatible APIs that aren't really, and with local models run through e.g. Ollama or vLLM, which tend not to be the best at tool calling.
The big three model providers (OpenAI, Claude, Gemini) and enterprise platforms (Bedrock, Vertex, Azure) see the vast majority of usage and our support for them is very stable. It remains a challenge to keep up with their pace of shipping new features and models, but thanks to our 200+ contributors we're usually not far behind the bleeding edge in terms of LLM API feature coverage, and as you may have seen we're very responsive to issues and PRs on GitHub, and questions on Slack.
We do have a proprietary observability and evals product Pydantic Logfire (https://pydantic.dev/logfire), but Pydantic AI works with other observability tools as well, and Logfire works with other agent frameworks.
https://github.com/caesarnine/rune-code
Part of the reason I switched to it initially wasn't so much its niceties as being disgusted at how poor the documentation/experience of using LiteLLM was; I figured the folks who make Pydantic would do a better job of the "universal" interface.
"AI" and vibe coding fits very well into that list.
I've been using it a lot lately and anything beyond basic usage is an absolute chore.
Pydantic still sees multiple commits per week, which is less than it was at one point, but I'd say that's a sign of its maturity and stability more than a lack of attention.
If the issue is still showing on the latest version, seeing the Pydantic model/schema would be very helpful.
Thanks for being a proactive kind of maintainer. The world is better because of people like you.