Jeopardy Archive Trainer – Practice with 200K+ real questions + LLM answer validation
I built this as a trivia obsessive who wanted better practice than the existing online tools. It generates authentic Jeopardy rounds from the full historical dataset and uses LLMs to handle answer validation intelligently.
The interesting technical challenge: traditional string matching fails spectacularly for Jeopardy answers. Users might answer "France" when the correct response is "What is France?" or vice versa. Even worse, there are valid variations like "French Republic" or abbreviations. I solved this by integrating both Ollama and OpenAI for semantic answer comparison.
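A minimal sketch of how the LLM judging step can work: send the clue, the accepted response, and the contestant's answer to any chat-style model and parse a yes/no verdict. The prompt wording and function names here are my illustration, not the project's actual code:

```python
def build_judge_prompt(clue: str, correct: str, given: str) -> str:
    """Compose a yes/no judging prompt usable with OpenAI or Ollama."""
    return (
        "You are judging a Jeopardy answer.\n"
        f"Clue: {clue}\n"
        f"Accepted response: {correct}\n"
        f"Contestant said: {given}\n"
        "Do these refer to the same thing? Answer only YES or NO."
    )


def parse_verdict(raw_model_output: str) -> bool:
    """Treat a leading YES (any casing, surrounding whitespace) as correct."""
    return raw_model_output.strip().upper().startswith("YES")
```

Keeping the prompt provider-neutral like this is what makes it easy to route the same question to either backend.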
Stack:
- FastAPI backend with SQLite (200K+ questions from Kaggle)
- React frontend with responsive game UI
- Docker containerization with automatic model pulling
- Supports both Ollama (local) and OpenAI for answer validation
- Fallback to string comparison if no LLM available
Features:
- Generates realistic Jeopardy/Double Jeopardy/Final Jeopardy rounds
- Smart answer validation that understands context and variations
- Score tracking and clean game interface
- One-command Docker deployment
Had to architect the LLM integration to be provider-agnostic since Ollama performance on my hardware wasn't optimal. The backend automatically switches between OpenAI and Ollama based on environment configuration, with graceful fallback to string matching if neither is available.
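The provider selection and string-matching fallback can be sketched like this. The environment variable names (`OPENAI_API_KEY`, `OLLAMA_HOST`) and the normalization rules are assumptions for illustration, not necessarily what the project uses:

```python
import os
import re

# Words ignored when comparing answers: Jeopardy phrasing and articles.
_STOPWORDS = {"what", "who", "where", "is", "are", "was", "were", "the", "a", "an"}


def normalize(answer: str) -> str:
    """Lowercase, strip punctuation, and drop 'What is...?' boilerplate."""
    words = re.findall(r"[a-z0-9]+", answer.lower())
    return " ".join(w for w in words if w not in _STOPWORDS)


def string_match(given: str, correct: str) -> bool:
    """Last-resort validator when no LLM backend is configured."""
    return normalize(given) == normalize(correct)


def pick_provider() -> str:
    """Choose a validation backend from environment configuration."""
    if os.getenv("OPENAI_API_KEY"):
        return "openai"
    if os.getenv("OLLAMA_HOST"):
        return "ollama"
    return "string"  # graceful fallback: plain string comparison
```

Note the limitation this exposes: `string_match("French Republic", "France")` is False, which is exactly the case the LLM path handles.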
Currently using this for my own practice and it's been way more effective than memorizing question banks. The LLM validation makes it feel like playing the actual game.