Show HN: Overengineering Linksie – a link paywall generator
I took a different approach for Linksie. I made a conscious choice to "over-engineer" it, not for complexity's sake, but to build a stable, scalable foundation and to aggressively upskill in areas where I was weak.
For me, over-engineering meant shifting from the typical startup mindset of "ship features at all costs" to asking: with a little extra time, could I build something that scales more elegantly and remains stable for longer? It was about thinking like a founding CTO: if I had to hire engineers tomorrow, what foundation would I want in place for them?
It ended up as a monorepo containing containerized services and shared packages.
Application services:
- Frontend: Next.js (Pages Router)
- Backend: HonoJS API
- Key libraries: BetterAuth, Tailwind CSS, Headless UI, TanStack Query/Form, Stripe
Worker services:
- A dedicated container for node-pg-migrate database migrations
- A job queue worker for asynchronous tasks (e.g., our referral system)
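To make the API/worker split concrete, here is a minimal sketch of the kind of shared contract the referral job might use (all names are hypothetical; the post doesn't show the real code). The API would enqueue this payload and the worker would validate it before processing:

```typescript
// Hypothetical shared package: packages/shared/src/referral.ts
// Both the API (producer) and the worker (consumer) import this,
// so the queue payload can't silently drift between services.

export interface ReferralJob {
  referrerId: string;
  referredEmail: string;
  createdAt: string; // ISO 8601 timestamp
}

// Runtime guard so the worker rejects malformed queue messages
// instead of crashing mid-job.
export function isReferralJob(value: unknown): value is ReferralJob {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.referrerId === "string" &&
    typeof v.referredEmail === "string" &&
    typeof v.createdAt === "string" &&
    !Number.isNaN(Date.parse(v.createdAt))
  );
}
```

Keeping the type and its runtime guard in one shared package is what "ensure consistency between the API and workers" buys you in practice.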
Shared packages:
- Internal libraries for shared types and database clients (PostgreSQL, Redis) to ensure consistency between the API and workers

The entire stack is containerized with Docker and spins up locally with a single docker-compose up command. The codebase is currently around 30k lines of code.
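The local setup might look roughly like this (an illustrative sketch, not the actual file; service names and paths are assumptions):

```yaml
# docker-compose.yml (sketch)
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
  redis:
    image: redis:7
  migrate:
    build: ./services/migrate   # runs node-pg-migrate, then exits
    depends_on: [postgres]
  api:
    build: ./services/api       # HonoJS
    depends_on: [migrate, redis]
    ports: ["3001:3001"]
  worker:
    build: ./services/worker    # job queue consumer
    depends_on: [migrate, redis]
  web:
    build: ./services/web       # Next.js
    depends_on: [api]
    ports: ["3000:3000"]
```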
The SDLC: Automation from Day One
I wanted a professional software development lifecycle from the start.

CI/CD: On merge to main, a GitHub Actions pipeline runs tests, builds all container images, pushes them to Google Artifact Registry, and deploys everything to a dedicated staging environment. This includes running database migrations automatically.

Production: After verification on staging, a manual approval in the GitHub UI triggers the exact same pipeline targeted at the production environment.
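One way to model that manual gate in GitHub Actions is a `production` environment with required reviewers, so the final job pauses until approved in the UI. A rough sketch (job names, secrets, and steps are illustrative, not Linksie's actual workflow):

```yaml
# .github/workflows/deploy.yml (sketch)
name: deploy
on:
  push:
    branches: [main]

jobs:
  test-build-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
      # auth to GCP, then build and push images to Artifact Registry
      - run: |
          docker build -t "$IMAGE:$GITHUB_SHA" .
          docker push "$IMAGE:$GITHUB_SHA"

  deploy-staging:
    needs: test-build-push
    runs-on: ubuntu-latest
    steps:
      - run: ./scripts/deploy.sh staging   # runs migrations, then deploys

  deploy-production:
    needs: deploy-staging
    environment: production   # required reviewers here = manual approval
    runs-on: ubuntu-latest
    steps:
      - run: ./scripts/deploy.sh production
```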
The Infrastructure: 90% Terraform
I chose GCP over AWS primarily for Cloud Run's developer experience and auto-scaling. The entire infrastructure is provisioned with Terraform.
- Compute: Cloud Run for all services and jobs (with min/max instances set)
- Data: Cloud SQL (Postgres) and Memorystore (Redis)
- Networking: VPC, Cloud Load Balancer, Cloud DNS
- Secrets & artifacts: Secret Manager and Artifact Registry
- External: Cloudflare for public DNS and R2 for storage
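As a sketch, a single Cloud Run service with scaling bounds and a secret-backed env var might look like this in Terraform (resource names and variables are illustrative, not taken from the real config):

```hcl
resource "google_cloud_run_v2_service" "api" {
  name     = "api"
  location = var.region

  template {
    scaling {
      min_instance_count = 1 # avoid cold starts
      max_instance_count = 5 # cap spend
    }
    containers {
      image = "${var.region}-docker.pkg.dev/${var.project_id}/app/api:${var.image_tag}"
      env {
        name = "DATABASE_URL"
        value_source {
          secret_key_ref {
            secret  = google_secret_manager_secret.database_url.secret_id
            version = "latest"
          }
        }
      }
    }
  }
}
```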
The "Why": Justifying the Upfront Investment
I know the common wisdom is to use Vercel, Supabase, etc., and get to market faster and cheaper. I chose this path for two main reasons:
1. I Despise Vendor Lock-in: PaaS providers like Vercel are fantastic, but they are for-profit entities that can and will change their pricing and priorities. We've all seen the horror stories of unexpected six-figure bills. We write modular code to avoid lock-in; I believe the same principle should apply to infrastructure. Owning the stack gives me control and predictable costs.
2. A Deliberate Opportunity for Growth: As a full-stack engineer, my DevOps and IaC knowledge was purely conceptual. This project forced me to learn Terraform, container networking, VPCs, and cloud architecture hands-on. The argument to "just hire someone later" is a weak one if you don't know how to evaluate their work. This experience filled a massive gap in my skillset. Even if the SaaS fails, the knowledge gained has been invaluable. I went from zero to proficient with Terraform in about a week, largely thanks to AI-assisted learning.
Retrospective & What I'd Do Differently
Would I do it the exact same way again? No. The current infrastructure has hefty costs for a pre-revenue project. My next iteration would be more pragmatic: drop the managed Redis cache, use a cheaper database option, and eliminate the dedicated staging environment.
Open to thoughts, suggestions, improvements!