Very cool, I'm going to try to play with this later. It looks like llm-consortium [0] but with some nice new features, like confidence gating and pluggable verifiers.
So, if a response's confidence is below a threshold, it is eliminated entirely? Is that the gating?
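For what it's worth, here's a minimal sketch of what I'd guess the gating means, i.e. candidate answers whose confidence falls under a cutoff are dropped before the rest get synthesized/verified. All the names here (Response, CONFIDENCE_THRESHOLD, gate) are placeholders I made up, not the tool's actual API:

    from dataclasses import dataclass

    @dataclass
    class Response:
        model: str
        text: str
        confidence: float  # assumed self-reported score in [0, 1]

    CONFIDENCE_THRESHOLD = 0.7  # made-up cutoff, presumably configurable

    def gate(responses):
        # Drop any candidate whose confidence falls below the threshold
        # before the survivors go on to synthesis / verification.
        return [r for r in responses if r.confidence >= CONFIDENCE_THRESHOLD]

    candidates = [
        Response("model-a", "Answer A", 0.9),
        Response("model-b", "Answer B", 0.4),  # would be eliminated
    ]
    print([r.model for r in gate(candidates)])  # -> ['model-a']

If that's roughly right, I'd be curious whether gated responses are discarded outright or still visible to the verifiers.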
[0] https://github.com/maitrix-org/llm-reasoners
[0] https://x.com/karpathy/status/1870692546969735361