Show HN: I built a playground to showcase what Flux Kontext is good at
After spending some time with the new `flux kontext dev` model, I realized its most powerful capabilities aren't immediately obvious, and it's easy to miss its potential by only scratching the surface.
I went deep and curated a collection of what I think are its most interesting use cases: targeted text removal, subtle photo restoration, and creative style transfers.
I felt that simply writing about them wasn't enough. The best way to understand the value is to see it and try it for yourself.
That's why I built FluxKontextLab (https://fluxkontextlab.com).
On the site, I've presented these curated examples with before-and-after comparisons. More importantly, there's an interactive playground right there, so you can immediately test these ideas or your own prompts on your own images.
My goal is to share what this model is capable of beyond the basics.
It's still an early project. I'd love for you to take a look and share your thoughts or any cool results you generate.
About a month ago I put together a quick before/after set of images that I used Kontext to edit. It even works on old grainy film footage.
https://specularrealms.com/ai-transcripts/experiments-with-f...
> My goal is to share what this model is capable of beyond the basics.
You might be interested to know that it appears to have limited support for uploading and compositing multiple images together.
https://fal.ai/models/fal-ai/flux-pro/kontext/max/multi
[1] https://github.com/timothybrooks/instruct-pix2pix
It's crazy how fast genAI moves; now you can do all of that with just Flux, and the end result looks extremely high quality.
These models look fantastic; we've finally got something solid in the public sphere that goes beyond Stable Diffusion-style word-vomit prompting. It was obviously coming sooner or later, but happily it seems to be here. It is unfortunate for the public that, as far as I can see, they didn't actually open the weights, since they aren't free for commercial use.
You're right: my backend logs show that most requests are succeeding, so the errors must be happening somewhere between the front end and the server, and I'm not catching them properly yet.
Based on this, implementing a more robust error logging system is now my top priority. I'll get on it right away so I can find and fix these issues for everyone. Thanks again for giving it a try.
To keep the project sustainable in the long run, I'm exploring some options, like potentially offering a paid tier for heavy users or more advanced features. For now, I'm focused on improving the core experience and will do my best to keep costs low so it remains accessible to as many people as possible.
However, my plan is to eventually deploy the model on my own server. I'll be sure to document the entire process, from setup to optimization, and share it as a detailed guide on the site for anyone interested!
I’ve got to admit, I chuckled to myself at the absurdity of the phrase “AI precision”, given how badly these things are known to go off the rails. Sure, sure, things have improved a lot in the last few years, and Kontext’s limitations make such problems far less likely to occur, but still, permit me to be amused. :-)
… but then too, compare https://fluxkontextlab.com/pages/home/showcase/2/1.jpg and https://fluxkontextlab.com/pages/home/showcase/2/0.webp closely; there are material differences. A few of the most notable ones: the picture is reframed, with a significant amount invented at the bottom (which raises realism concerns when you actually examine it); fog effects have been reduced (perhaps implied by "restore … its clear texture", which seems a weird instruction to me); and something's gone wrong with the right wing of the pigeon at the bottom that's facing the camera.
I think it would be nice to, in each case, align the two as well as possible (even the Product Display example) and present them in such a way that you can rigorously compare the beginning and end points, and see what modifications have been made, intended and unintended.
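As a rough illustration of what such a comparison could look like, here is a minimal sketch in Python using NumPy (not part of the site; the function names and the threshold value are my own assumptions). It assumes the two images have already been aligned and resized to the same dimensions, and it flags pixels whose intensity changed noticeably:

```python
import numpy as np

def diff_mask(before: np.ndarray, after: np.ndarray, threshold: int = 25) -> np.ndarray:
    """Return a boolean mask of pixels that changed noticeably.

    `before` and `after` are HxWx3 uint8 arrays of identical shape
    (i.e. already aligned/cropped); `threshold` is the per-channel
    intensity delta above which a pixel counts as modified.
    """
    if before.shape != after.shape:
        raise ValueError("images must be aligned to the same shape first")
    # Work in int16 so the subtraction can't wrap around uint8.
    delta = np.abs(before.astype(np.int16) - after.astype(np.int16))
    # A pixel is "changed" if any channel moved more than the threshold.
    return (delta > threshold).any(axis=-1)

def changed_fraction(before: np.ndarray, after: np.ndarray) -> float:
    """Fraction of pixels flagged as modified - a crude edit-size metric."""
    return float(diff_mask(before, after).mean())
```

For pairs where the model reframes or rescales the output (as in the pigeon example above), a plain pixel diff would flag everything, so you'd first need to register the two images, for instance with OpenCV's ECC alignment (`cv2.findTransformECC`), before diffing.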