Alternatively, if you had to go with JSON, you could consider using JSONL. But I think I'd start by evaluating whether this is a good application for JSON at all; I tend to only want to use it for small files. Binary formats are usually much better in this scenario.
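For what it's worth, a minimal sketch of why JSONL streams well, using only the stdlib (the filename is made up):

import json

# One record per line: peak memory stays near the size of a single
# record, not the whole file.
with open("records.jsonl") as f:
    for line in f:
        record = json.loads(line)
        print(record)  # stand-in for real per-record processing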
fidotron · 2h ago
Having only recently encountered this, does anyone have any insight as to why it takes 2GB to handle a 100MB file?
This looks highly reminiscent (though not exactly the same, pedants) of why people used to get excited about using SAX instead of DOM for xml parsing.
itamarst · 1h ago
I talk about this more explicitly in the PyCon talk (https://pythonspeed.com/pycon2025/slides/ - video soon) though that's not specifically about Pydantic, but basically:
1. Inefficient parser implementation.
It's just... very easy to allocate way too much memory if you don't think about large-scale documents, and very difficult to measure. Common problem with many (but not all) JSON parsers.
2. CPython in-memory representation is large compared to compiled languages.
So e.g. a 4-digit integer is 5-6 bytes in JSON, 8 bytes in Rust if you use an i64, and 25ish in CPython. An empty dictionary is 64 bytes.
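You can check these numbers yourself; the exact sizes vary slightly by CPython version and platform:

import sys

print(sys.getsizeof(1234))  # ~28 bytes for a small int object
print(sys.getsizeof({}))    # 64 bytes for an empty dict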
cozzyd · 13m ago
Funny to see awkward array in this context! (And... do people really store giant datasets in json?!?).
jmugan · 6h ago
My problem isn't running out of memory; it's loading a complex model where the fields are BaseModels and unions of BaseModels multiple levels deep. It doesn't load all the way and leaves some of the deeper parts as dictionaries. I need something almost like a parser to search the space of different loads. Anyone have ideas for software that does that?
enragedcacti · 6h ago
The only reason I can think of for the behavior you are describing is if one of the unioned types at some level of the hierarchy is equivalent to Dict[str, Any]. My understanding is that Pydantic will explore every option provided recursively and raise a ValidationError if none match but will never just give up and hand you a partially validated object.
Are you able to share a snippet that reproduces what you're seeing?
jmugan · 3h ago
That's an interesting idea. It's possible there's a Dict[str,Any] in there. And yeah, my assumption was that it tried everything recursively, but I just wasn't seeing that, and my LLM council said that it did not. But I'll check for a Dict[str,Any]. Unfortunately, I don't have a minimal example, but making one should be my next step.
enragedcacti · 2h ago
One thing to watch out for while you debug is that the default 'smart' mode for union discrimination can be very unintuitive. As you can see in the example below, an int vs. a string can cause a different model to be chosen two layers up, even though both are valid. You may have perfectly valid uses of Dict within your model that are being chosen in error because they result in less type coercion. left_to_right mode (or, ideally, discriminated unions if your data has easy discriminators) will be much more consistent.
>>> from typing import Any, Dict
>>> from pydantic import BaseModel, Field
>>> class A(BaseModel):
...     a: int
...
>>> class B(BaseModel):
...     b: A
...
>>> class C(BaseModel):
...     c: B | Dict[str, Any]
...
>>> C.model_validate({'c': {'b': {'a': 1}}})
C(c=B(b=A(a=1)))
>>> C.model_validate({'c': {'b': {'a': "1"}}})
C(c={'b': {'a': '1'}})
>>> class C(BaseModel):
...     c: B | Dict[str, Any] = Field(union_mode='left_to_right')
...
>>> C.model_validate({'c': {'b': {'a': "1"}}})
C(c=B(b=A(a=1)))
You can have nested dataclasses, as well as specify custom serializers/loaders for things which aren't natively supported by json.
jmugan · 3h ago
Ah, but I need something JSON-based.
not_skynet · 1h ago
It does allow dumping to/recovering from json, apologies if that isn't well documented.
Calling `x: str = json.dumps(MyClass(...).serialize())` will get you json you can recover to the original object, nested classes and custom types and all, with `MyClass.load(json.loads(x))`
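The library isn't named here, so this is only a sketch of the round-trip contract being described, with a hypothetical MyClass standing in:

import json
from dataclasses import dataclass

@dataclass
class MyClass:
    name: str
    tags: list[str]

    def serialize(self) -> dict:
        # Reduce the object to plain JSON-compatible types.
        return {"name": self.name, "tags": self.tags}

    @classmethod
    def load(cls, data: dict) -> "MyClass":
        # Rebuild the object from the parsed JSON dict.
        return cls(name=data["name"], tags=data["tags"])

x: str = json.dumps(MyClass("demo", ["a", "b"]).serialize())
restored = MyClass.load(json.loads(x))
assert restored == MyClass("demo", ["a", "b"])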
cbcoutinho · 6h ago
At some point, we have to admit we're asking too much from our tools.
I know nothing about your context, but in what context would a single model need to support so many permutations of a data structure? Just because software can, doesn't mean it should.
shakna · 4h ago
Anything multi-tenant? There's a reason Salesforce is used by so many large organisations. The multi-nesting lets you account for all the discrepancies that come with scale.
Just tracking payments through multiple tax regions will explode the places where things need to be tweaked.
fjasdfas · 7h ago
So are there downsides to just always setting slots=True on all of my python data types?
itamarst · 7h ago
You can't add extra attributes that weren't part of the original dataclass definition:
>>> from dataclasses import dataclass
>>> @dataclass
... class C: pass
...
>>> C().x = 1
>>> @dataclass(slots=True)
... class D: pass
...
>>> D().x = 1
Traceback (most recent call last):
File "<python-input-4>", line 1, in <module>
D().x = 1
^^^^^
AttributeError: 'D' object has no attribute 'x' and no __dict__ for setting new attributes
Most of the time this is not a thing you actually need to do.
masklinn · 6h ago
Also some of the introspection stops working, e.g. vars().
If you're using dataclasses it's less of an issue, because dataclasses.asdict still works.
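A quick illustration of both points (the Point class is made up):

from dataclasses import dataclass, asdict

@dataclass(slots=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
# vars(p) raises TypeError: vars() argument must have __dict__ attribute
print(asdict(p))  # {'x': 1, 'y': 2} -- still works, via the dataclass fields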
monomial · 5h ago
I rarely need to dynamically add attributes to dataclasses like this myself, but unfortunately this also means things like `@cached_property` won't work, because there's nowhere for it to internally cache the method result.
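A sketch of the failure mode (class name made up): functools.cached_property stores its result in the instance __dict__, which a slotted class doesn't have.

from dataclasses import dataclass
from functools import cached_property

@dataclass(slots=True)
class Circle:
    radius: float

    @cached_property  # needs an instance __dict__ to stash the result
    def area(self) -> float:
        return 3.14159 * self.radius ** 2

Circle(2.0).area  # TypeError: no '__dict__' attribute to cache 'area'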
zxilly · 5h ago
Maybe using mmap would also save some memory; I'm not quite sure if this can be implemented in Python.
itamarst · 5h ago
Once you switch to ijson it will not save any memory, no, because ijson essentially uses zero memory for the parsing. You're just left with the in-memory representation.
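For the curious, the core ijson pattern looks like this (filename made up; ijson.items is part of ijson's documented API):

import ijson

# Incrementally parse a large top-level JSON array: only the current
# element is materialized, so parsing itself adds almost no memory.
with open("big.json", "rb") as f:
    for record in ijson.items(f, "item"):
        print(record)  # stand-in for real per-record processing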
dgan · 5h ago
I gave up on Python dataclasses & JSON. I'm using protobuf objects within the application itself. I also have a "...Mixin" class for almost every wire model, with extra methods.
Automatic, statically typed deserialization is worth the trouble, in my opinion.
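A minimal sketch of that setup, assuming a hypothetical user_pb2 module generated by protoc from a User message with name and age fields:

from user_pb2 import User  # hypothetical protoc-generated module

class UserMixin:
    # Extra behavior kept beside, not inside, the generated wire model.
    @staticmethod
    def greeting(user: User) -> str:
        return f"Hello, {user.name}!"

wire = User(name="Ada", age=36).SerializeToString()  # bytes on the wire
user = User.FromString(wire)  # statically typed deserialization
print(UserMixin.greeting(user))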
thisguy47 · 7h ago
I'd like to see a comparison of ijson vs just `json.load(f)`. `ujson` would also be interesting to see.
A great feature of Pydantic is its validation hooks, which let you intercept serialization/deserialization of specific fields and augment behavior.
For example, if you are querying a DB that returns a column as a JSON string, it's trivial with Pydantic to JSON-parse the column as part of deserialization with an annotation.
Pydantic is definitely slower and not a 'zero cost abstraction', but you do get a lot for it.
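One way to do what's described, using Pydantic v2's field_validator with mode="before" (model and field names are made up):

import json
from typing import Any
from pydantic import BaseModel, field_validator

class Row(BaseModel):
    id: int
    payload: dict[str, Any]

    # The DB hands back `payload` as a JSON string; parse it
    # before normal field validation runs.
    @field_validator("payload", mode="before")
    @classmethod
    def _parse_json(cls, v: Any) -> Any:
        return json.loads(v) if isinstance(v, str) else v

row = Row(id=1, payload='{"a": 1}')
print(row.payload)  # {'a': 1}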
The ijson article linked from the original article was the inspiration for the talk: https://pythonspeed.com/articles/json-memory-streaming/