Show HN: Kuse 2.0 – AI Visual Folder: Chaos In, Genius Out
After six months of exploration and user feedback, we completely rebuilt our AI whiteboard as a visual folder. Our first demo post is here for context: https://news.ycombinator.com/item?id=40776562
Why the change?
- Users loved dropping in files, videos, links, or spreadsheets → highlighting what matters → prompting → and getting structured, editable results. That was the core success point.
- In Kuse 1.0, everything (prompts + results) lived as cards on the canvas, which quickly became overwhelming.
What’s new in 2.0?
- Doubled down on the “success point”: only your sources + significant AI-generated results live on the canvas now.
- Introduced visual folders to organize context. These also provide spatial cues (“Visual Context Engineering”) that help the AI agent reason more effectively.
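To give a rough sense of what we mean by that (an illustrative sketch only, not our production code, and all of the type and function names here are made up for the example), the spatial grouping on the canvas can be serialized into structured context for the agent, so items a user places together are presented together, with highlighted items surfaced first:

    // Hypothetical sketch of "Visual Context Engineering": canvas layout
    // becomes structured model context. Names/fields are illustrative.
    interface CanvasItem {
      id: string;
      kind: "file" | "link" | "video" | "ai_result";
      title: string;
      summary: string;        // pre-extracted text or caption
      highlighted: boolean;   // user marked this as "what matters"
    }

    interface VisualFolder {
      label: string;          // e.g. "Competitor research"
      items: CanvasItem[];
    }

    // Turn the spatial layout into a prompt: highlighted items come first,
    // and folder labels are kept as section headers the model can reason over.
    function buildContext(folders: VisualFolder[]): string {
      return folders
        .map((folder) => {
          const ordered = [...folder.items].sort(
            (a, b) => Number(b.highlighted) - Number(a.highlighted)
          );
          const body = ordered
            .map((it) => `- [${it.kind}] ${it.title}: ${it.summary}`)
            .join("\n");
          return `## ${folder.label}\n${body}`;
        })
        .join("\n\n");
    }

The point of the example is only the principle: grouping and highlighting on the canvas carry over into how the agent sees your sources, instead of everything arriving as one flat dump.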
We’re seeing it used for study, research, reporting, content generation, and more. Some examples are on our X account: https://x.com/kuseHQ. Would love to hear your thoughts, feedback, or critiques!
Right now I’m using AI coding tools + Obsidian to handle some docs and manage my own context, haha — kind of feels like a scrappy MVP version of what you guys are building~
I like the idea of “Visual Context Engineering” — giving AI spatially organized cues instead of dumping everything in a flat thread. It reminds me of how humans use folders, tabs, or even physical desks to manage mental context. The ability to highlight what matters and collapse the rest seems like a real productivity multiplier.
One thought: how does Kuse handle collaboration? The folder metaphor works great for solo workflows, but I wonder if multiple users dropping in files, highlighting, and prompting would cause the same chaos you tried to solve in 1.0. Maybe the "visual folder" could extend into a shared canvas where provenance (who did what, and why) stays transparent.
Overall, this looks like a thoughtful iteration. Curious to see if the visual folder model becomes a general UX pattern for AI tools, the way "chat thread" did in the last wave.
Two key questions:
1. When you say “only your sources + significant AI-generated results live on the canvas now,” what criteria define “significant”? Is this determined by user feedback, an internal model, or do users manually flag outputs?
2. How customizable is the layout or structure of the canvas? Can users group, reorder, or annotate sections to reflect their own workflow or context?
Overall, I’m impressed with the evolution from messy inputs to structured deliverables—but I’d love clarity on how much control users retain over the final presentation.