This is a project I built during the YC hackathon. The idea was to make fine-tuning as simple as vibe coding, but with proper rails - no notebooks, no GPU setup, no PyTorch problems, no venv chaos, and no ML background required. You tell the system “train a model that does X” (or “make customer support sound like a surfer”), and it:
• searches Hugging Face for relevant datasets
• generates synthetic examples to fill the gaps
• runs a 3-epoch LoRA fine-tune
• streams logs/metrics live
• then saves the model + optionally pushes it to the Hub (it will export GGUF for mobile testing too); a rough sketch of the whole flow follows below
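To make that concrete, here is a minimal sketch of what the search + tune + save steps reduce to, using huggingface_hub, peft, and transformers. The base model and dataset names are placeholders for illustration, not what the tool actually picks:

    from datasets import load_dataset
    from huggingface_hub import HfApi
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    # Step 1: search the Hub for candidate datasets matching the task.
    for ds in HfApi().list_datasets(search="customer support", limit=5):
        print(ds.id)

    BASE = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder base model
    DATA = "databricks/databricks-dolly-15k"     # placeholder dataset pick

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(BASE)

    # Step 2: attach LoRA adapters - only the small low-rank matrices
    # train, which is what keeps the run coffee-break sized.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

    def tokenize(batch):
        return tokenizer(batch["instruction"], truncation=True, max_length=512)

    data = load_dataset(DATA, split="train[:1000]")
    data = data.map(tokenize, batched=True, remove_columns=data.column_names)

    # Step 3: the 3-epoch run, logging often so metrics can stream live.
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="adapter-out", num_train_epochs=3,
                               per_device_train_batch_size=4, logging_steps=10),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

    # Step 4: save locally; model.push_to_hub("user/repo") publishes the adapter.
    model.save_pretrained("adapter-out")

The actual pipeline wraps this in dataset selection, synthetic augmentation, and log streaming, but the LoRA config is the core of why a run is cheap: only the adapter matrices are trained.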
What I’m trying to explore is:
• Purpose-built models -> make it easy for any professional or researcher in any field to create one (classifiers, for example);
• Synthetic data -> everyone uses it, but there's no public pipeline for it yet (datasetdirector.com should become a standalone tool for that - still in development); see the sketch after this list;
• Fast iteration -> training in a coffee break instead of a weekend.
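On the synthetic-data point, the gap-filling step is conceptually just the following. The seed example and prompt are invented for illustration; the real pipeline adds quality filtering on top:

    from transformers import pipeline

    # Hypothetical seed example the dataset search didn't cover well.
    seed = {"q": "How do I reset my password?",
            "a": "Go to Settings > Security and click 'Reset password'."}

    generator = pipeline("text-generation",
                         model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    # Ask a small local model to rewrite the answer in the target style,
    # producing a new (question, styled answer) training pair.
    prompt = (f"Rewrite this support answer in a relaxed surfer tone.\n"
              f"Q: {seed['q']}\nA: {seed['a']}\nRewritten A:")
    out = generator(prompt, max_new_tokens=80, do_sample=True)[0]["generated_text"]
    synthetic_example = {"q": seed["q"], "a": out[len(prompt):].strip()}
    print(synthetic_example)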
What I need to know is whether this is interesting and valuable to anyone!