Show HN: Llmswap v3.0 – CLI and SDK for OpenAI, Claude, Gemini, Watsonx
Started this during a hackathon when I was constantly switching between Claude for code review and GPT-4 for debugging, and got tired of managing different API clients and keys.
After the hackathon, I kept building on it. Added a CLI when I wanted to pipe error logs directly to AI for analysis. Added caching when I realized I was testing the same prompts repeatedly during development and burning API credits. Added code-review features when our team started using it in CI/CD.
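The caching idea is conceptually simple: during development, identical prompts shouldn't trigger a second billed API call. A minimal sketch of that pattern (not llmswap's actual implementation; the provider call below is a stand-in):

```python
import hashlib

class CachedClient:
    """Memoize responses by prompt hash so repeated dev prompts aren't re-billed."""

    def __init__(self, call_provider):
        self._call = call_provider  # a real client would call OpenAI/Claude/etc.
        self._cache = {}

    def ask(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in self._cache:       # cache miss: pay for one API call
            self._cache[key] = self._call(prompt)
        return self._cache[key]          # cache hit: free

# Simulate a provider to show the effect:
calls = []
def fake_provider(prompt):
    calls.append(prompt)
    return f"answer to: {prompt}"

client = CachedClient(fake_provider)
client.ask("Why is this query slow?")
client.ask("Why is this query slow?")  # served from cache
print(len(calls))  # → 1
```

Hashing the prompt keeps cache keys small regardless of prompt length; a production cache would also key on provider, model, and parameters.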
New in v3.0 - Professional CLI:

  llmswap ask "Debug this SQL query performance issue"
  llmswap review app.py --focus security
  llmswap chat  # interactive AI sessions
  llmswap debug --error "ConnectionTimeout in production"
Python SDK features:

- Multi-provider support (OpenAI, Claude, Gemini, IBM watsonx, Ollama)
- Response caching for cost reduction (great for development/testing)
- Auto-fallback when providers fail
- Async support with streaming
- Thread-safe for production apps
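The auto-fallback behavior boils down to trying providers in order until one succeeds. A generic sketch of the pattern (hypothetical provider functions; llmswap's real API differs):

```python
class AllProvidersFailed(Exception):
    """Raised when every configured provider errored out."""

def ask_with_fallback(prompt, providers):
    """Try each (name, call) pair in order; return the first successful response."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            errors.append((name, exc))
    raise AllProvidersFailed(errors)

# Stand-ins for real provider clients:
def flaky_openai(prompt):
    raise TimeoutError("rate limited")

def working_claude(prompt):
    return f"Claude says: {prompt}"

name, answer = ask_with_fallback(
    "hello", [("openai", flaky_openai), ("anthropic", working_claude)]
)
print(name)  # → anthropic
```

Collecting the per-provider errors before raising makes the failure mode debuggable: you can see why each provider in the chain was skipped.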
Real-world use cases:

- Developers: code review, debugging, documentation
- Content teams: writing, analysis, research
- Startups: prototype quickly, switch providers based on cost/features
- Students: free local models (Ollama) + cloud when needed
- Enterprise: IBM watsonx integration for regulated environments
Three versions later, it's become our go-to tool for anything AI-related in development workflows. The CLI works great in automation - we use it for automated code review and log analysis in CI/CD.
Technical notes:

- Zero dependencies for basic CLI usage
- Works offline with local models (Ollama)
- Production-ready with error handling and retries
- Full backward compatibility with v1.0
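The retry handling mentioned above is the usual bounded-retry loop with exponential backoff; a generic sketch of the pattern (not llmswap's internals):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying transient failures with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:  # out of attempts: surface the error
                raise
            time.sleep(base_delay * (2 ** i))  # back off: 0.01s, 0.02s, ...

# Simulate a provider that fails twice, then succeeds:
attempts_made = []
def flaky():
    attempts_made.append(1)
    if len(attempts_made) < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky))  # → ok
```

Real implementations typically retry only on retryable errors (timeouts, 429s, 5xx) and add jitter to the delay to avoid synchronized retry storms.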
Real-world adoption: 5k+ PyPI downloads
Install: pip install llmswap