I built a full SaaS without writing a single line of code, using Cursor and Claude 4
3 baranoncel 3 5/29/2025, 12:05:00 PM
Over the past few weeks, I decided to test the limits of AI-assisted development by building PodGen.io - an AI podcast generator - without manually writing any code. Here's what I learned.
The Setup
Tools: Cursor IDE with Claude 4 (Sonnet) in "max mode"
Cost: ~$300 extra for max mode (worth every penny)
Product: Full-stack SaaS with Stripe payments, AI integrations, user auth, etc.
What Actually Works
The combination is genuinely impressive. I went from idea to deployed product entirely through natural language conversations. Claude handled:
Next.js/React frontend with complex state management
Supabase backend integration and database design
Stripe checkout flows and webhook handling
OpenAI API integration for script generation (see the sketch after this list)
FAL.ai integration for voice synthesis
User authentication and authorization
Responsive design and mobile optimization
SEO optimization and structured data
Multi-language internationalization
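To make the OpenAI piece concrete, the script-generation step reduces to roughly the call below. This is my own minimal sketch of the pattern, not PodGen's actual code; the model choice, prompt, and the generatePodcastScript helper are placeholders.

```typescript
// Sketch of the script-generation step (illustrative; not the generated code)
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Turn extracted source text (blog post, PDF, YouTube transcript) into a podcast script.
export async function generatePodcastScript(sourceText: string, language = "en"): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o", // model choice is an assumption, not stated in the post
    messages: [
      {
        role: "system",
        content: `You are a podcast scriptwriter. Write a conversational two-host script in ${language}.`,
      },
      { role: "user", content: sourceText },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```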
The Pain Points
Rate Limits: Hit Claude's limits constantly. Had to pace development and sometimes wait hours to continue.
Context Breaking: Long conversations completely broke Cursor four or five times. I had to start fresh chats and re-explain the entire codebase structure. This was the biggest productivity killer.
Debugging: When something broke, explaining the issue and getting the right fix took multiple iterations. A human developer would spot certain issues instantly.
Complex Logic: Some business logic required very detailed explanations and multiple refinement rounds.
What Surprised Me
Integration Complexity: Setting up Stripe webhooks, handling async operations, managing user states - all handled correctly (a webhook sketch follows this list)
Code Quality: The generated code follows best practices, includes error handling, and is genuinely maintainable
Architecture Decisions: Claude made sensible choices about file structure, component organization, and data flow
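For anyone wondering what "handled correctly" means on the Stripe side, the webhook handler is the part that's easy to get wrong. Here's a sketch of the pattern under assumed names (the route path, env vars, and grantCredits helper are hypothetical), not the code Claude actually produced:

```typescript
// app/api/stripe/webhook/route.ts - sketch of a Next.js App Router webhook handler (illustrative)
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(req: Request) {
  // Stripe signs the raw body, so read it as text before any JSON parsing.
  const payload = await req.text();
  const signature = req.headers.get("stripe-signature")!;

  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(payload, signature, process.env.STRIPE_WEBHOOK_SECRET!);
  } catch {
    return new Response("Invalid signature", { status: 400 });
  }

  if (event.type === "checkout.session.completed") {
    const session = event.data.object as Stripe.Checkout.Session;
    // Placeholder: record the purchase / top up the user's credits in the database.
    await grantCredits(session.client_reference_id, session.amount_total ?? 0);
  }

  return new Response("ok", { status: 200 });
}

// Hypothetical helper; the real handler would write to the database here.
async function grantCredits(userId: string | null, amountTotalCents: number) {
  if (!userId) return;
  console.log(`credit user ${userId} for ${amountTotalCents} cents`);
}
```

The subtle details (raw body, signature verification, picking the right event types) are exactly where a naive implementation usually breaks, and Claude got them right.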
The Result
PodGen.io converts any content (YouTube, PDFs, blogs, etc.) into AI-generated podcasts. It has:
Credit-based pricing system (see the sketch after this list)
50+ AI voices in 25+ languages
Multi-format content processing
Podcast distribution integration
Full user dashboard and history
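The credit-based pricing boils down to checking and debiting a balance before each generation job. A rough sketch of that check, with assumed table and column names (profiles/credits), not the real schema:

```typescript
// credits.ts - sketch of the credit-debit check (table and column names are assumptions)
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);

// Deduct `cost` credits from a user, refusing the job if the balance is too low.
export async function spendCredits(userId: string, cost: number): Promise<boolean> {
  const { data, error } = await supabase
    .from("profiles")
    .select("credits")
    .eq("id", userId)
    .single();
  if (error || !data || data.credits < cost) return false;

  const { error: updateError } = await supabase
    .from("profiles")
    .update({ credits: data.credits - cost })
    .eq("id", userId);
  return !updateError;
}
```

A read-then-update like this is racy under concurrent requests; in practice a single Postgres function called via supabase.rpc() is the safer way to make the decrement atomic.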
Total development time: ~3 weeks of conversations with Claude.
Key Takeaways
It actually works for building complete products, not just prototypes
Rate limits are the real bottleneck, not AI capability
Context management in long projects needs better tooling
Domain knowledge still matters - knowing what to ask for is crucial
The $300 was worth it - max mode's capabilities justify the cost
This isn't just "AI helping with coding" - it's AI doing the coding while you focus on product decisions and business logic. We're closer to natural language programming than I expected.
Would be curious to hear others' experiences with similar approaches. The limiting factors seem more infrastructural (rate limits, context windows) than fundamental AI capabilities.
Live at: https://podgen.io
But how does it maintain and adhere to the file structure, dependencies, code, the external libs, and so on... I can't imagine. By chance, do you have a memo of all the steps you've done?
I started with OpenAI's O3 Deep Search to create the initial prompts and cursor.rules file. This was crucial because O3 helped me think through the entire project architecture and create detailed prompts that would guide Claude throughout the development process.
First, I used O3 Deep Search to brainstorm the complete product concept. I asked it to help me define the user journey, technical requirements, and all the integrations I'd need. From this, O3 generated comprehensive prompts for each development phase and helped me create a cursor.rules file that would keep Claude focused on the right patterns and coding standards.
The cursor.rules file was key because it told Claude exactly how to structure code, what libraries to use, naming conventions, and architectural decisions. This prevented the context breaking issues that would normally happen in long conversations.
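For anyone who hasn't used one: the rules file is, roughly speaking, plain-text project instructions that Cursor includes in the model's context on every request. A trimmed-down, illustrative sketch of the kind of rules mine contained (not the actual file):

```
# cursor.rules (illustrative sketch, not the actual file)
- Stack: Next.js App Router, TypeScript, Tailwind, Supabase, Stripe.
- Components live in components/, route handlers in app/api/, shared logic in lib/.
- Reuse the single Supabase client exported from lib/supabase.ts; never instantiate new clients inline.
- Every user-facing string goes through the i18n helper; never hardcode English copy.
- API routes validate input, handle errors explicitly, and return typed JSON responses.
```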
Once I had the prompts and rules set up, I moved everything to Cursor with Claude 4. The workflow became really smooth because Claude already knew the full context from the rules file. I could just say "build the user authentication system" and it would follow all the established patterns. The file structure and dependencies were maintained because the cursor.rules file specified exactly how to organize everything. Claude knew to put components in the right folders, use the existing UI patterns, and maintain consistent imports across the codebase.
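As a concrete example of what "build the user authentication system" produces once the rules pin down the client and file layout, the core Supabase auth calls are only a few lines. This is my own sketch of the pattern (file path and env var names assumed), not the code Claude generated:

```typescript
// lib/auth.ts - sketch of the Supabase auth helpers (illustrative; not the generated code)
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export async function signUp(email: string, password: string) {
  const { data, error } = await supabase.auth.signUp({ email, password });
  if (error) throw error;
  return data.user;
}

export async function signIn(email: string, password: string) {
  const { data, error } = await supabase.auth.signInWithPassword({ email, password });
  if (error) throw error;
  return data.session;
}
```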
When I hit rate limits or had to start new conversations, I just referenced back to the original prompts from O3 and the cursor.rules file. This kept everything coherent even when switching contexts.
The external libraries and integrations worked because O3 had helped me plan out almost all of the API connections upfront; for voice synthesis I just gave it the FAL.ai models to use. Claude knew exactly which APIs to use for each feature because it was all defined in the initial architecture.
The process was basically: O3 Deep Search for planning and prompt creation, then Cursor with Claude 4 for all the actual coding. The O3 planning phase was what made the difference - without that upfront architecture work, the long development process would have fallen apart.
So my advice is start with O3 to create your master plan and cursor.rules, then move to Cursor for execution. The planning phase is what makes the magic happen.
I had already thought about https://codemap4ai.com/, which, in my understanding, creates something similar to the cursor.rules file that defines the project structure, etc., so I could use that here. But it still doesn't solve the problem of a good prompt! Which, of course, can be achieved with LLMs themselves.
OK, I'll lock my door and throw the key out of the window with a note attached: "if you find it, use it in three weeks." Then I'll spend those three weeks doing nothing but talking through ideas with Claude, hoping to finish one "big beautiful prompt" for everything that follows... :)
Thank you very much for taking the time!