Show HN: Open-source AI prompt engineering workbench with systematic evaluation
I built PromptForge after getting frustrated with the manual trial-and-error approach to prompt engineering. Most tools are just text editors, but I wanted something that brings software engineering discipline to prompt development.
Key technical features: - AI-assisted prompt generation (stop starting from scratch) - Systematic evaluation engine that auto-generates test suites - Multi-model support (Claude, GPT-4, Azure OpenAI) - Go backend with SQLite for performance and simplicity - One-line Docker deployment
The evaluation system automatically tests for robustness, safety, factual accuracy, and creativity across different scenarios. Think unit testing but for prompts.
Currently #1 on Product Hunt today after 5+ hours, which validates that developers need better prompt engineering tools.
Open source (GPLv3): https://github.com/insaaniManav/prompt-forge Demo: https://demo.arcade.software/bFGTYb7AuRV33Kei7ZFQ PH URL : https://www.producthunt.com/products/promptforge-2 Would love feedback from the HN community, especially on the evaluation methodology and technical approach.
No comments yet