Show HN: Building an AI-native mini-OS for developers
What this is

We built Vibemind because context switching is wasting everyone’s life. Instead of tabs and apps, you get a single workspace that spawns tiny agents (Planner, Coder, Researcher, Automator), links knowledge into a live graph, and can operate parts of your desktop through OCR + scripted flows. It’s part note-taking, part agent orchestra, part automation playground.
Why it might matter to you (technical folks)

• Agent-first architecture: each task is an agent with capabilities and a failure memory, so retries get smarter instead of repeating the same mistake.
• Knowledge graph at runtime: nodes are live (files, API responses, chat snippets), and queries return provenance, not guesses.
• OCR UI automation: select a UI region, teach an agent the flow once, and it repeats the actions reliably even on dynamic pages.
• Developer-first: a CLI + small SDK so you can extend agents, add custom fitness functions, or run components locally.
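To make the "failure memory" idea concrete, here is a minimal sketch of what an agent that skips known-bad strategies on retry could look like. All names (`Agent`, `FailureMemory`, `run`) are hypothetical illustrations, not our actual SDK, which is still in flux:

```python
class FailureMemory:
    """Remembers which strategies already failed for this agent."""

    def __init__(self):
        self.failed_strategies = set()

    def record(self, strategy):
        self.failed_strategies.add(strategy)

    def seen(self, strategy):
        return strategy in self.failed_strategies


class Agent:
    """A task agent that consults its failure memory before each attempt."""

    def __init__(self, name):
        self.name = name
        self.memory = FailureMemory()

    def run(self, task, strategies):
        for strategy in strategies:
            if self.memory.seen(strategy):
                continue  # don't repeat an approach that already failed
            try:
                return task(strategy)
            except Exception:
                self.memory.record(strategy)
        return None  # every strategy exhausted
```

The point is that a retry is not a blind re-run: the second invocation of `run` starts from the surviving strategies, so the agent converges on what works.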
Current status & numbers (honest)

• Prototype: frontend + working agent orchestration, a knowledge-graph POC, and an OCR automation demo.
• Team: small and product-driven; open to early contributors.
• Waitlist: limited early seats; beta invites will be staggered.

We’re not pretending we have millions of users; we have a focused demo and want feedback from people who break things.
Privacy & safety (short)

You can run agents locally or in our hosted environment. We log actions for reproducibility but plan fine-grained export/delete controls. We’re building with minimal data retention by default.
What we want from HN readers

• Try the demo if you’re curious.
• Join the waitlist if you want early access and can give feedback.
• Tell us what would make you replace a dozen tabs with one canvas.