Show HN: TheorIA – An Open Curated Physics Dataset (Equations, Explanations, JSON)
Why? Physics is rich with beautiful, formal results — but most of them are trapped in PDFs, LaTeX, or lecture notes. That makes it hard to:
- train symbolic/physics-aware ML models,
- build derivation-checking tools,
- or even just teach physics interactively.
TheorIA fills that gap. Each entry includes:
- A result name (e.g., Lorentz transformations)
- Clean equations (AsciiMath)
- A straightforward step-by-step derivation with reasoning
- Symbol definitions & assumptions
- Programmatic validation using sympy
- References, arXiv-style domain tags, and contributor metadata
Everything is in open, self-contained JSON files. No scraping, no PDFs, just clear structured data for physics learners, teachers, and ML devs.
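To make the shape of an entry concrete, here is a minimal Python sketch. The field names are invented for illustration and are not the project's actual schema, but the sympy check is the kind of validation the entries describe:

    import sympy as sp

    # Illustrative only: the field names below are invented for this sketch
    # and are not the project's actual JSON schema.
    entry = {
        "result_name": "Time dilation",
        "equations": ["Delta t = gamma * Delta t_0"],  # AsciiMath-style strings
        "assumptions": ["two inertial frames", "constant relative velocity v < c"],
        "symbols": {"gamma": "Lorentz factor, 1/sqrt(1 - v^2/c^2)"},
    }

    # The kind of programmatic check a sympy validation step could run:
    # the Lorentz factor must reduce to 1 when the relative velocity is zero.
    v, c = sp.symbols("v c", positive=True)
    gamma = 1 / sp.sqrt(1 - v**2 / c**2)
    assert sp.simplify(gamma.subs(v, 0)) == 1
    print(entry["result_name"], "passes the v = 0 sanity check")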
Contributors Wanted: We’re tiny right now and trying to grow. If you’re into physics or symbolic ML:
- Add an entry (any result you love)
- Review others' derivations
- Build tools on top of the dataset
GitHub https://github.com/theoria-dataset/theoria-dataset/
Licensed under CC-BY 4.0, and we welcome educators, students, ML people, or just anyone who thinks physics deserves better data.
Not sure if it fits, but I still have ~20k curated step-by-step solutions for mathematics (pedagogical math) "lying" around from my previous startup. They are all hand-curated, and they could even be used for fine-tuning.
Here are some details: the dataset has 20,600 Abstract Exercises, which turn into 1,193,958 Concrete Exercises.
An Abstract Exercise looks like this: a + b = c
A Concrete Exercise looks like this: 2 + 3 = 5
Total compiled file size (JSONL): 11.6 GB
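Purely as an illustration of the abstract-to-concrete idea (not the actual file format or generation code), one abstract exercise could be expanded roughly like this:

    import itertools

    # Hypothetical sketch of the abstract -> concrete expansion described above;
    # the real dataset's format and generation code are not shown here.
    abstract_exercise = {
        "template": "{a} + {b} = {c}",
        "variables": {"a": range(1, 10), "b": range(1, 10)},
    }

    concrete_exercises = [
        abstract_exercise["template"].format(a=a, b=b, c=a + b)
        for a, b in itertools.product(
            abstract_exercise["variables"]["a"], abstract_exercise["variables"]["b"]
        )
    ]

    print(concrete_exercises[12])  # one instance, e.g. "2 + 4 = 6"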
And here is an explorer to see some of the data https://curriculum.amy.app/ToM
For example, imagine the entry for the standard equation: should all the derivations and symbolic implementations be done as a single entry? It will be difficult to separate it into logical entries that reference each other, and many physical ideas are fundamentally different, leading to divergences.
I have the impression that it would be easier to just parse reference books and format each paragraph/section as an entry, and maybe build a graph (treating the reference book as authoritative on the subject).
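As a rough sketch of what that graph of cross-referencing entries could look like (entry names and the "depends_on" field are hypothetical, not part of the dataset):

    # Minimal sketch of cross-referencing entries forming a dependency graph.
    entries = {
        "postulates_of_special_relativity": {"depends_on": []},
        "lorentz_transformations": {"depends_on": ["postulates_of_special_relativity"]},
        "time_dilation": {"depends_on": ["lorentz_transformations"]},
    }

    def prerequisites(name, seen=None):
        """Collect every entry that `name` transitively depends on."""
        seen = set() if seen is None else seen
        for dep in entries[name]["depends_on"]:
            if dep not in seen:
                seen.add(dep)
                prerequisites(dep, seen)
        return seen

    print(prerequisites("time_dilation"))
    # {'lorentz_transformations', 'postulates_of_special_relativity'}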
The idea of automatically parsing books is very nice and possibly faster, but note that:
- there are already various datasets of physics papers and similar content
- the result would be quite different from what we intend here, which is a high-quality dataset of physics results with clear derivations (whenever a derivation exists)
Maybe we can still use your idea to achieve the last point in some way… maybe there is a book that is already formatted like the dataset, and we could use it as a starting point. But I don't know of any.