If you're looking for the official LTXV model and working ComfyUI flows, make sure to visit the right sources:
- Official site: https://www.lightricks.com
- Model + Playground: https://huggingface.co/Lightricks/LTX-Video
The LTXV model runs on consumer GPUs, and all ComfyUI flows should work reliably from these official resources. Some third-party sites (like ltxvideo.net or wanai.pro) are broken, misconfigured, or heavy on unnecessary assets, so stick to the official ones to avoid issues and missing content.
I am surprised that it can run on consumer hardware.
I assume it's for SEO or supply chain attacks or overcharging for subscriptions.
liuliu · 1h ago
Hi! Draw Things should be able to add support in the next 2 weeks, after we get the video feature a little more polished with the existing video models (Wan 2.1, Hunyuan, etc.).
ronreiter · 2h ago
I work with the Lightricks team.
This is not an official page created by Lightricks, and we do not know who the owner of this page is or why he created it.
For more information about the model, refer to these sources:
- Model repository: https://github.com/Lightricks/LTX-Video
- ComfyUI integration: https://github.com/Lightricks/ComfyUI-LTXVideo
- Early community LoRAs: https://huggingface.co/Lightricks/LTXV-LoRAs
- Banadoco Discord server, an excellent place for discussing LTXV and other open models (Wan/Hunyuan): https://discord.com/channels/1076117621407223829/13693260067...
xg15 · 2h ago
what's going on here?
simonw · 43m ago
This is so weird. The domain has whois information withheld and the site is hosted on Vercel.
OP seems to be making tons of these "fan" pages for AI tools according to his HN submission history. It's also the same design every time. Smells fishy.
Best hint is the submission history of https://news.ycombinator.com/submitted?id=zoudong376 which shows similar unofficial sites for other projects.
Best case, an overenthusiastic fan; worst case, some bad actor trying to establish a "sleeper page".
coldcode · 4h ago
None of the browsers I tried on my Mac shows any of the videos; you only see the top animation.
Also shown: cdn.tailwindcss.com should not be used in production. To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation
There are a couple of JS errors, which I presume keep the videos from appearing.
jsheard · 3h ago
That's the least of the problems with how they've optimized their assets: there's about 250MB of animated GIFs on the Huggingface page (actual 1989-vintage GIFs, not modern videos pretending to be GIFs). AI people just can't get enough of wasting bandwidth, apparently; at least this time it's another AI company footing the bill for all the expensive AWS egress they're burning through for no reason.
dingdingdang · 3h ago
Super AI tech to the rescue!
soared · 3h ago
On iOS the unmute button will unmute and play the video. The play button did not work for me.
> Yes, LTXV-13B is available under the LTXV Open Weights License. The model and its tools are open source, allowing for community development and customization.
UPDATE: This is text on an unofficial website unaffiliated with the project. BUT https://www.lightricks.com/ has "LTXV open source video model" in a big header at the top of the page, so my complaint still stands, even though the FAQ copy I'm critiquing here is likely not the fault of Lightricks themselves.
So it's open weights, not open source.
Open weights is great! No need to use the wrong term for it.
From https://static.lightricks.com/legal/LTXV-2B-Distilled-04-25-... it looks like the key non-open-source terms (by the OSI definition, which I consider to be canon) are:
- Section 2: entities with annual revenues of at least $10,000,000 (the “Commercial Entities”) are eligible to obtain a paid commercial use license, subject to the terms and provisions of a different license (the “Commercial Use Agreement”)
- Section 6: To the maximum extent permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this Agreement, update the Model through electronic means, or modify the Output of the Model based on updates
This is an easy fix: change that FAQ entry to:
> Is LTXV-13B open weights?
> Yes, LTXV-13B is available under the LTXV Open Weights License. The model is open weights and the underlying code is open source (Apache 2.0), allowing for community development and customization.
Here's where the code became Apache 2.0, 6 months ago: https://github.com/Lightricks/LTX-Video/commit/cfbb059629b99...
Are weights even copyrightable? I'm not sure what these licenses do, other than placate corporate legal or pretend to have some kind of open source equivalent for AI stuff.
terhechte · 4h ago
It says `Coming Soon` for the `inference.py` for the quantized version. Does anyone happen to know how to modify the non-quantized version [0] to work?
[0] https://github.com/Lightricks/LTX-Video/blob/main/configs/lt...
> NVIDIA 4090/5090 GPU 8GB+ VRAM (Full Version)
I have a 3070 with 8GB of VRAM. Is there any reason I couldn't run it (albeit slower) on my card?
washadjeffmad · 1h ago
Sure, just offload to system RAM, and don't rely on your system driver's automatic fallback; use a specific implementation like MultiGPU instead.
It won't speed it up, but using a quantization that fits in VRAM will prevent the offload penalty.
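To make that concrete, here is a rough, untested sketch of the offload approach via the diffusers LTXPipeline wrapper; the checkpoint name, the generation parameters, and whether the 13B weights are wired up there yet are all assumptions on my part, not the official inference.py:

```python
# Hedged sketch: CPU offload so a card with limited VRAM doesn't OOM.
# Assumes the diffusers LTXPipeline wrapper and the "Lightricks/LTX-Video"
# checkpoint; the repo's own inference.py may differ.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video",
    torch_dtype=torch.bfloat16,
)
# Keep only the active submodule on the GPU at any time: slower per step,
# but avoids needing the whole model in VRAM at once.
pipe.enable_model_cpu_offload()

frames = pipe(
    prompt="a red fox running through fresh snow, cinematic lighting",
    num_frames=65,
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "fox.mp4", fps=24)
```

As the parent comment says, a quantized checkpoint that fits entirely in VRAM would avoid the offload penalty altogether.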
mycall · 4h ago
Will this work with ROCm instead of CUDA?
turnsout · 4h ago
Or MLX/Apple?
echelon · 4h ago
No way. AMD is lightyears behind in software support.
roenxi · 3h ago
That isn't really what being behind implies. We've known how to multiply matrices since ... at least the 70s. And video processing isn't a wild new task for our friends at AMD. I'd expect that this would run on an AMD card.
But I don't own an AMD card to check, because when I did it randomly crashed too often doing machine learning work.
blkhawk · 3h ago
I have a 9070 XT... ROCm at the moment is unoptimized for it, and the generation speed is less than it should be, assuming AMD isn't fudging the specs. Also, the memory management is dire/buggy and will cause random OOMs on one run, then be fine the next. Splitting the workflow helps, so the OOM crash happens in between stages. VAEs also crash from OOM. These are all just software issues, because VRAM isn't released properly on AMD.
*OOM = Out Of Memory Error
zorgmonkey · 1h ago
Sometimes it is a little more work to get stuff set up, but it works fine. I've run plenty of models on my 7900 XTX: Wan 2.1 14B, Flux.1 dev, and Whisper (Wan and Flux were with ComfyUI, and Whisper with whisper.cpp).
Any idea how I could implement that for ComfyUI on the 9070? Going to try to apply what's in the Reddit post to my venv and see if it does anything.
Zambyte · 3h ago
Specifically for video? Ollama runs great on my 7900 XTX.
GTP · 4h ago
Just try it and see.
pwillia7 · 5h ago
Will have to test this out; it looks like it runs on consumer hardware, which is cool. I tried making a movie [1] with LTXV several months ago and had a good time, but 30x faster generation sounds necessary.
[1]: https://www.youtube.com/watch?v=_18NBAbJSqQ
> Hi , i'm using default image to video workflow with default settings and i'm getting pixalated image to video output full of squares , how to fix this ?
[0] https://github.com/Lightricks/LTX-Video/issues/163
The example videos look very short, maybe 1-2 seconds each. Is that the limit?
snagadooker · 2h ago
The model supports both multi-scale rendering and autoregressive generation. With multi-scale rendering, you can generate a low-resolution preview of 200-300 frames and then upscale to higher resolutions (with or without tiling).
The autoregressive generation feature allows you to condition new segments based on previously generated content. A ComfyUI implementation example is available here:
https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/e...
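For anyone who wants the gist outside ComfyUI, here is a rough, untested sketch of that autoregressive idea using the diffusers wrappers; seeding each segment with only the last frame is a simplification of the real conditioning, and the checkpoint name and 13B support are assumptions:

```python
# Hedged sketch: extend a clip by conditioning each new segment on the end of
# the previous one. Assumes the diffusers LTXPipeline / LTXImageToVideoPipeline
# wrappers; the real feature conditions on more than a single frame.
import torch
from diffusers import LTXPipeline, LTXImageToVideoPipeline
from diffusers.utils import export_to_video

t2v = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")
i2v = LTXImageToVideoPipeline.from_pipe(t2v)  # reuse the already-loaded weights

prompt = "a sailboat crossing a stormy sea, cinematic"
clip = list(t2v(prompt=prompt, num_frames=65).frames[0])

for _ in range(3):  # three additional segments
    # Seed the next segment with the last frame generated so far.
    segment = i2v(prompt=prompt, image=clip[-1], num_frames=65).frames[0]
    clip.extend(segment[1:])  # drop the (roughly) duplicated seed frame

export_to_video(clip, "extended.mp4", fps=24)
```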
Could've made a better website in Wix, lol. Did they forget to add the videos?
linsomniac · 4h ago
I got a bunch of videos on the page, it looked fine to me.
sergiotapia · 3h ago
FWIW I only see the hero video on the website and no other content except text. Is this a bug?
soared · 3h ago
I wish groups would stop following OpenAI/etc’s naming conventions of having things like “13B” in the product name.
strangescript · 3h ago
In open source, it's super useful to be able to immediately have an idea of how big the model is and what kind of hardware it could potentially run on.
ericrallen · 3h ago
Seems a bit unfair (or maybe just ill-informed?) to lump this in with the confusing mess that is model naming at OpenAI.
The parameter count is far more useful and concrete information than anything OpenAI or their competitors have put into the names of their models.
The parameter count gives you a heuristic for estimating if you can run this model on your own hardware, and how capable you might expect it to be compared to the broader spectrum of smaller models.
It also allows you to easily distinguish between different sizes of model trained in the same way, but with more parameters. It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.
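As a back-of-the-envelope version of that heuristic (my own arithmetic, not figures from any model card), the weights alone cost roughly parameter count times bytes per parameter:

```python
# Rough memory estimate from a parameter count. Weights only; activations,
# the text encoder, and the VAE add more on top of this.
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

for precision, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"13B @ {precision}: ~{weights_gb(13, bytes_per_param):.1f} GB")
# 13B @ fp16/bf16: ~24.2 GB, int8: ~12.1 GB, 4-bit: ~6.1 GB
```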
drilbo · 2h ago
>It's likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.
In this case it looks like this is the higher parameter count version; the 2B was released previously. (Not that it excludes them from making an even larger one in the future, although that seems atypical of video/image/audio models.)
re: GP: I sincerely wish 'Open'AI were this forthcoming with things like param count. If they have a 'b' in their naming, it's only to distinguish it from the previous 'a' version, and don't ask me what an 'o' is supposed to mean.
smlacy · 3h ago
Why? Isn't it one of the most important aspects of this product?
Wow.
https://x.com/lenscowboy/status/1920353671352623182
https://x.com/lenscowboy/status/1920513512679616600