In the era of AI image-generation tools, maintaining style consistency and character continuity across multiple rounds of editing has long been a pain point for users. On May 29, Black Forest Labs released FLUX.1 Kontext, claiming that a single unified model can handle image editing, generation, and style transfer while striking a balance between accuracy and consistency. I tested it in depth as soon as it was available and compiled the results into this article, in the hope of helping you judge whether it is worth trying.
proc0 · 21h ago
I only tried a few times. My test is a walk cycle for animation (I tried a spritesheet, but also separate images meant to be animated in sequence). Any animator can do this; it's basic, but somehow it has stumped every diffusion model I've tried.
It's still extremely impressive, but I just want some frames to use for animation, and you would think that Kontext would retain enough context to produce the same character with the left foot forward instead of the right; it just refuses. If anyone has tips on making animation frames, please share.
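One way to attempt a walk cycle with an editing model like Kontext is to feed each generated frame back as the reference for the next pose instruction, rather than asking for all poses from the original image. The sketch below shows what that request loop might look like. Note that the endpoint URL and payload field names (`prompt`, `input_image`) are assumptions about Black Forest Labs' HTTP API, not verified against current documentation; the network call itself is omitted.

```python
# Hypothetical sketch: building iterative frame-edit requests for a walk
# cycle. The endpoint URL and payload field names below are ASSUMPTIONS
# about the BFL API, not confirmed details; check the official docs.
import base64

BFL_ENDPOINT = "https://api.bfl.ml/v1/flux-kontext-pro"  # assumed endpoint

def build_frame_request(reference_png: bytes, instruction: str) -> dict:
    """Package one edit request: a reference frame plus a pose instruction."""
    return {
        "prompt": instruction,
        # Assumed field name; images are commonly sent base64-encoded.
        "input_image": base64.b64encode(reference_png).decode("ascii"),
    }

# Classic walk-cycle key poses, phrased as edit instructions. In a real
# loop, each response image would become the reference for the next call,
# which may help the model keep the character consistent.
poses = [
    "same character, same style, contact pose, left foot forward",
    "same character, same style, passing pose, weight on left leg",
    "same character, same style, contact pose, right foot forward",
]
reference = b"\x89PNG..."  # placeholder bytes standing in for a real frame
frame_requests = [build_frame_request(reference, p) for p in poses]
```

Chaining frames this way trades drift (small errors compound across frames) against the single-reference approach, where pose control tends to fail outright, as the comment above describes.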