LTXVideo 13B AI video generation

120 points by zoudong376 | 5/10/2025, 11:59:10 AM | 32 comments | ltxv.video

Comments (32)

coldcode · 1h ago
None of the browsers I tried on my Mac shows any of the videos; you only see the top animation.

The console also shows a warning: cdn.tailwindcss.com should not be used in production. To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation

There are a couple of JS errors, which I presume keep the videos from appearing.

jsheard · 35m ago
That's the least of the problems with how they've optimized their assets: there's about 250MB of animated GIFs on the HF page (actual 1989-vintage GIFs, not modern videos pretending to be GIFs). AI people just can't get enough of wasting bandwidth, apparently; at least this time they're just running up Huggingface's AWS bill rather than scraping random websites to death.
dingdingdang · 33m ago
Super AI tech to the rescue!
soared · 1h ago
On iOS the unmute button will unmute and play the video. The play button did not work for me.
hobs · 1h ago
Same here, but the video itself does work (https://pub.wanai.pro/ltxv_hero.mp4); it's just some problems with the site.
esafak · 39m ago
wanai.pro says it uses Alibaba's Wan2.1 video generation model. What's going on here? Is LTXV somehow related? https://huggingface.co/blog/LLMhacker/wanai-wan21
terhechte · 1h ago
It says `Coming Soon` for the quantized version's `inference.py`. Does anyone happen to know how to modify the non-quantized version [0] to work?

[0] https://github.com/Lightricks/LTX-Video/blob/main/configs/lt...

shakna · 1h ago
> Hi, I'm using the default image-to-video workflow with default settings and I'm getting pixelated image-to-video output full of squares. How do I fix this?

[0] https://github.com/Lightricks/LTX-Video/issues/163

pwillia7 · 2h ago
I'll have to test this out; it looks like it runs on consumer hardware, which is cool. I tried making a movie[1] with LTXV several months ago and had a good time, but 30x faster generation sounds necessary.

[1]: https://www.youtube.com/watch?v=_18NBAbJSqQ

jl6 · 28m ago
The example videos look very short, maybe 1-2 seconds each. Is that the limit?
snagadooker · 9m ago
The model supports both multi-scale rendering and autoregressive generation. With multi-scale rendering, you can generate a low-resolution preview of 200-300 frames and then upscale to higher resolutions (with or without tiling). The autoregressive generation feature allows you to condition new segments on previously generated content. A ComfyUI implementation example is available here: https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/e...
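Conceptually, the two-pass flow looks something like this (a hypothetical Python sketch; the `pipeline`, `generate`, `upscale`, and `condition_frames` names are illustrative, not the actual LTX-Video API):

```python
# Hypothetical sketch of the multi-scale + autoregressive flow
# described above; names are illustrative, not the real LTX-Video API.

def multi_scale_generate(pipeline, prompt, num_frames=240):
    # Pass 1: cheap low-resolution preview over the whole clip.
    preview = pipeline.generate(
        prompt=prompt,
        width=512, height=288,       # low-res draft
        num_frames=num_frames,
    )
    # Pass 2: upscale the preview to the target resolution,
    # tiling so each tile fits in VRAM.
    return pipeline.upscale(preview, width=1216, height=704, tiled=True)

def extend_clip(pipeline, prompt, clip, num_frames=120):
    # Autoregressive extension: condition the next segment on the
    # tail frames of the previous one to keep temporal continuity.
    return pipeline.generate(
        prompt=prompt,
        condition_frames=clip[-8:],  # hypothetical conditioning argument
        num_frames=num_frames,
    )
```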
givinguflac · 2h ago
The requirements say:

NVIDIA 4090/5090 GPU 8GB+ VRAM (Full Version)

I have a 3070 w 8GB of VRAM.

Is there any reason I couldn’t run it (albeit slower) on my card?

GTP · 2h ago
Just try it and see.
mycall · 1h ago
Will this work with ROCm instead of CUDA?
turnsout · 1h ago
Or MLX/Apple?
echelon · 1h ago
No way. AMD is lightyears behind in software support.
roenxi · 1h ago
That isn't really what being behind implies. We've known how to multiply matrices since... at least the 70s. And video processing isn't a wild new task for our friends at AMD. I'd expect this to run on an AMD card.

But I don't own an AMD card to check, because when I did it randomly crashed too often doing machine learning work.

blkhawk · 43m ago
I have a 9070 XT. ROCm is currently unoptimized for it, and the generation speed is less than it should be if AMD isn't fudging the specs. The memory management is also dire/buggy and will cause random OOMs on one run, then be fine the next; splitting the workflow helps, so you only get one OOM crash in between. VAEs also crash from OOM. These are all just software issues, because VRAM isn't released properly on AMD.

*OOM = Out Of Memory Error

snagadooker · 7m ago
The 2B model was running well on AMD, fingers crossed for the 13B too: https://www.reddit.com/user/kejos92/comments/1hjkkmx/ltxv_in...
Zambyte · 1h ago
Specifically for video? Ollama runs great on my 7900 XTX.
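FWIW, PyTorch's ROCm builds expose the same `torch.cuda` API surface as the CUDA builds, so a quick sanity check that the stack sees your GPU looks identical on both vendors (a minimal sketch):

```python
import torch

# On ROCm builds of PyTorch, HIP devices show up through the
# regular torch.cuda API, so this runs unchanged on AMD cards.
print("HIP runtime:", torch.version.hip)  # None on CUDA builds
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)
    print("VRAM:", props.total_memory // 2**20, "MiB")
else:
    print("No CUDA/ROCm device visible to PyTorch")
```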
unicornporn · 19m ago
Uncanny as always. What do people use these for? Except for flooding Meta platforms with slop, that is...
turnsout · 58m ago
I just tried out the model via LTX Studio, and it's extremely impressive for a 13B model, let alone one that allegedly performs in real-time.
october8140 · 1h ago
Videos are also not loading on GitHub.
moralestapia · 37m ago
It runs on a single consumer GPU.

Wow.

Invictus0 · 2h ago
Could've made a better website in Wix, lol. Did they forget to add the videos?
linsomniac · 2h ago
I got a bunch of videos on the page, it looked fine to me.
sergiotapia · 1h ago
FWIW I only see the hero video on the website and no other content except text. Is this a bug?
soared · 1h ago
I wish groups would stop following OpenAI/etc’s naming conventions of having things like “13B” in the product name.
strangescript · 1h ago
In open source it's super useful to be able to immediately get an idea of how big the model is and what kind of hardware it could potentially run on.
ericrallen · 54m ago
Seems a bit unfair (or maybe just ill-informed?) to lump this in with the confusing mess that is model naming at OpenAI.

The parameter count is much more useful and concrete information than anything OpenAI or their competitors have put into the names of their models.

The parameter count gives you a heuristic for estimating if you can run this model on your own hardware, and how capable you might expect it to be compared to the broader spectrum of smaller models.
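For example, a rough back-of-the-envelope check (weights only, ignoring activations and any latent buffers):

```python
params = 13e9  # the "13B" in the name
bytes_per_param = {"fp16": 2, "int8": 1, "int4": 0.5}

for dtype, nbytes in bytes_per_param.items():
    gib = params * nbytes / 2**30
    print(f"{dtype}: ~{gib:.0f} GiB just for the weights")
# fp16: ~24 GiB, int8: ~12 GiB, int4: ~6 GiB
```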

It also allows you to easily distinguish between different sizes of model trained in the same way, but with more parameters. It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.

drilbo · 21m ago
>It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.

In this case, it looks like this is the higher parameter count version; the 2B was released previously. (Not that it excludes them from making an even larger one in the future, although that seems atypical of video/image/audio models.)

re: GP: I sincerely wish 'Open'AI were this forthcoming with things like param count. If they have a 'b' in their naming, it's only to distinguish it from the previous 'a' version, and don't ask me what an 'o' is supposed to mean.

smlacy · 59m ago
Why? Isn't it one of the most important aspects of this product?