Current stance: https://www.copyright.gov/newsnet/2025/1060.html
“It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements”.
If it isn’t covered (after all, it’s the AI that drew all the pictures), then anyone using such a service to produce a movie would be screwed: anyone could copy it or its characters.
I’m leaving out the problem of whether the service was trained on copyrighted material or not.
kachapopopow · 22m ago
Some of these are very obviously trained on webtoons and manga, probably pixiv as well; you can tell from the CG buildings and other misc artifacts. So this is clearly trained on copyrighted material.
Art is something that cannot be generated like synthetic text, so it will have to be powered by human artists nearly forever, or else you will continue to end up with artifacting. It makes me wonder if artists will just be downgraded to an "AI" training position, but that could be for the best: people can draw what they like instead and have that input feed into a model for training, which doesn't sound too bad.
While being very pro-AI in terms of any kind of trademarking and copyright, it still makes me wonder what will happen to all the people who provided us with entertainment, whether the quality will continue to increase, and whether we're going to start losing challenging styles because "it's too hard for AI" and everything will start 'feeling' the same.
It doesn't feel the same as people being replaced with computers and machines; this feels like the end of a road.
vunderba · 2h ago
From the paper:
> a variable-length training approach is adopted, with training durations ranging from 2 to 8 seconds. This strategy enables our model to generate 720p video clips with flexible lengths between 2 and 8 seconds.
I'd like to see it benched against FramePack, which in my experience also handles 2D animation pretty well and doesn't suffer from the usual duration limitations of other models:
https://lllyasviel.github.io/frame_pack_gitpage
We’re so close to finally being able to generate our own Haruhi season 3… what a time to be alive.
dvh · 28m ago
Or fix NGE
veonik · 2h ago
Dude… are you telling me it isn't actually finished? I am watching season 1 for the first time…
darylteo · 2m ago
No, it's not. 4 of 10 volumes.
The IP is likely DOA anyway, as it's on indefinite hiatus.
stonecharioteer · 49m ago
Shit, I haven't heard of this anime in over 10 years. That was a shot of nostalgia.
isaacimagine · 5h ago
I tested this out with a promotional illustration from Neon Genesis Evangelion. The model works quite well, but there are some temporal artifacts w.r.t. the animation of the hair as the head turns:
https://goto.isaac.sh/neon-anisora
Prompt: The giant head turns to face the two people sitting.
Oh, there is a docs page with more examples:
https://pwz4yo5eenw.feishu.cn/docx/XN9YdiOwCoqJuexLdCpcakSln...
I know there is a huge market for those excited for infinite anime music videos and all things anime.
This is great for an abundance of content and everyone will become anime artists now.
Japan truly is embracing AI, and there will be new jobs for everyone thanks to the boom AI is creating, as well as Jevons paradox, which will create huge demand.
Even better if this is open source.
smusamashah · 4h ago
There are so many glitches even in the very first example: the arm of the shirt glitching, moving hair disappearing and appearing out of nowhere. The rest is just a moving arm and clouds.
throwaway314155 · 6h ago
Says it's open source but I'm having trouble finding a link to weights and/or code?
Looks incredibly impressive btw. Not sure it's wise to call it `AniSora` but I don't really know.
https://huggingface.co/Disty0/Index-anisora-5B-diffusers
For the record, the dev branch of SD.Next already supports it.
> This model has 1 file scanned as unsafe. testvl-pre76-top187-rec69.pth
Hm, perhaps I'll wait for this to get cleared up?
userbinator · 4h ago
I wonder if the entropy of model weights and their size causes statistical false positives to appear often?
throwaway314155 · 3h ago
I imagine it has more to do with whether or not the file appears to have executable Python code in it, as a .pth file is usually just a pickled Python object, and these can be manipulated to execute arbitrary Python code when loaded.
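A minimal sketch of why that matters (hypothetical payload, nothing to do with this particular .pth file): pickle will happily call whatever __reduce__ hands it, and torch.load on a plain .pth goes through the same machinery.

    import pickle

    class Payload:
        # pickle calls __reduce__ to learn how to rebuild the object;
        # returning (os.system, args) means unpickling runs a shell
        # command instead of reconstructing harmless data.
        def __reduce__(self):
            import os
            return (os.system, ("echo this ran at load time",))

    blob = pickle.dumps(Payload())
    pickle.loads(blob)  # executes the command during deserialization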
echelon · 5h ago
This is not the first time I've heard of checkpoints being used to distribute malware. In fact, I've heard this was a popular vector from shady international groups.
I wouldn't expect this from Bilibili's Index Team, though, given how high profile they are. It's probably(?) a false positive. Though I wouldn't use it personally, just to be safe.
The safetensors format should be used by everyone. Raw pth files and pickle files should be shunned and abandoned by the industry. It's a bad format.
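For comparison, a rough sketch of the two load paths (file names are made up; assumes the safetensors package is installed): a .safetensors file is just a JSON header plus raw tensor bytes, so loading it can't run code, while a pickle-based checkpoint should at least be loaded with weights_only=True on recent PyTorch.

    import torch
    from safetensors.torch import save_file, load_file

    state = {"layer.weight": torch.zeros(4, 4)}

    # safetensors: plain tensor data, nothing executable to deserialize
    save_file(state, "model.safetensors")
    safe_state = load_file("model.safetensors")

    # pickle-based checkpoint: restrict the unpickler to tensors/primitives
    torch.save(state, "model.pth")
    weights = torch.load("model.pth", weights_only=True)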
echelon · 5h ago
> Not sure it's wise to call it `AniSora` but I don't really know.
Given that OpenAI call themselves "Open", I think it's great and hilarious that we're reusing their names.
There was OpenSora from around this time last year:
https://github.com/hpcaitech/Open-Sora
And there are a lot of other products calling themselves "Sora" as well.
It's also interesting to note that OpenAI recently redirected sora.com, which used to be its own domain, to sora.chatgpt.com.
pests · 1h ago
> OpenAI recently redirected sora.com, which used to be its own domain, to sora.chatgpt.com.
Probably to share cookies.
echelon · 1h ago
Cookies are such a mess.
We need cross-domain cookies. Google took them away so they could further entrench their analytics and ads platform. Abuse of monopoly power.
babuloseo · 4h ago
So we can finally remake Akame ga Kill?
MattRix · 1h ago
I might be missing something, but it feels weird that it’s named after Sora?
chii · 1h ago
Sora is the Japanese word for sky, and it's not that uncommon a name.
s0rr0wskill · 3h ago
Can I generate hentai?
topato · 2h ago
Inquisitive minds need to know!
But seriously, I had the same thought, considering the general lack of guardrails surrounding high-profile Chinese genAI models... Eventually, someone will know the answer... It's inevitable...
washadjeffmad · 5h ago
>Powered by the enhanced Wan2.1-14B foundation model for superior stability.
Wan2.1 is great. Does this mean AniSora is also 16 fps?
hatsunearu · 3h ago
So was this trained on existing anime? Ain't no way the corpus was licensed legally.
Lerc · 3h ago
The right to train models on copyrighted data has yet to be determined.
mythz · 2h ago
China doesn't know what you're talking about.
tonyhart7 · 3h ago
"animated video generation model presented by Bilibili."
You understand that China has a "different" view on copyright, licensing, etc., right?
yorwba · 1h ago
Not that different. Bilibili is a big, above-board video streaming service; they definitely have distribution rights to a large collection of anime content. (They also have YouTube-style user uploads where proper licensing is less likely.)
It's the equivalent of Crunchyroll putting out a video generation model. If the rightsholders disagree with this usage, it'll come up during the negotiations for new releases.
dbacar · 1h ago
Do you think that all the big guys just asked people while training their models?
SiempreViernes · 1h ago
Really? We've all seen the stories on how Meta sourced book content from Anna's Archive, and you still try to claim things are different in China?
mattigames · 1h ago
Not like ChatGPT and Sora, which, as we all know, are fully trained on publicly licensed, copyright-free content.
mitthrowaway2 · 53m ago
Exactly, that's why they aren't able to replicate the Studio Ghibli style.
ekianjo · 2h ago
There are very few models out there that are not trained on data protected by copyright. So nothing new for the past 3 years.