Available to the world except the European Union, the UK, and South Korea
Not sure what led to that choice. I'd have expected either the U.S. & Canada to be in there, or not these.
3. DISTRIBUTION.
[...]
c. You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent HunyuanWorld-Voyager Works; and (ii) mark the products or services developed by using the Tencent HunyuanWorld-Voyager Works to indicate that the product/service is “Powered by Tencent Hunyuan”; [...]
What's that doing in the license? What are the implications of a license-listed "encouragement"?
b3lvedere · 1m ago
"You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent HunyuanWorld-Voyager Works; and (ii) mark the products or services developed by using the Tencent HunyuanWorld-Voyager Works to indicate that the product/service is “Powered by Tencent Hunyuan” "
Is this the new 'please like and subscribe/feed us your info' method?
NitpickLawyer · 1h ago
> Not sure what led to that choice.
It's the EU AI Act. I tried their cute little app a week ago, designed to let you know if you comply, what you need to report and so on. I got a basically yes, but likely no, still have to register to bla-bla and announce yak-yak and do the dooby-doo, after selecting SME - open source - research - no client facing anything.
It was a mess when they proposed it, it was said to be better while they were working on it, turns out to be as unclear and as bureaucratic now that it's out.
flanked-evergl · 51m ago
If I were Russia and/or China and I wanted to eliminate the EU as a potential economic and military rival, I don't think I could have come up with a better way to do it than EU regulations. If it were not for the largesse of the US, the EU would become a vassal of Russia and/or China. And I think the US is running out of goodwill very rapidly. The EU could, of course, shape up, but it won't.
Cthulhu_ · 16m ago
I'd rather be free and my data safe than be an economic world leader. False dichotomy, I know, but I don't mind the people before money mindset.
flanked-evergl · 11m ago
This is a false dichotomy, you can have privacy and still be militarily and economically relevant.
But say that you were right and you had to choose between privacy and relevance: if you choose privacy, then once Europe is entirely economically dependent on Russia (Europe is still paying more in energy money to Russia than in aid to Ukraine) and China, once it is a vassal, it won't be able to make its own laws anymore.
The EU is fully invested in virtue signalling over actual tangible results. People keep saying how much stronger the EU's economy is than Russia's, and how Russia is basically a gas station with nukes, but the thing is, even with the EU's "strong" economy Russia has them by the balls. They have to go hat in hand begging the US to step in because they can't do anything themselves, and the US is not going to keep propping up the EU long term, especially not with how hostile the Europeans are towards Americans.
I live in Europe, I don't want Europe to become a vassal of China/Russia - but if something drastically does not change it will. Russia is Europe's Carthage, Russia must fall. There is no future with a Russia as it is today and a Europe as it is today in it, not because of Europe, but because of Russia. If Europe does not eliminate Russia, Russia will eliminate Europe. I have no doubts about this.
But as things stand, there just seems no way in which we practically can counter Russia at all. If Europe had determination, it would have sent Troops into Ukraine and created a no-fly zone — it should do that, but here we are.
mushufasa · 1h ago
The EU and the others listed are actively trying to regulate AI. Permissive OSS licenses' "one job" is to disclaim liability. It's interesting that they are simply prohibiting usage altogether in jurisdictions where the definition of liability is uncertain and worrying to the authors.
amelius · 1h ago
That would be an extremely lazy way of writing a license.
jandrewrogers · 42m ago
Unlikely laziness, since they went to the effort of writing a custom license in the first place.
A more plausible explanation is that the requirements and obligations of those markets are ambiguous or open-ended in such a way that they cannot be meaningfully limited by a license, per the lawyers they retain to create things like licenses. Lawyers don't like vague and uncertain risk, so they advised the company to reduce its risk exposure by opting out of those markets.
amelius · 25m ago
Maybe, but if you cannot say something as simple as "here is something you can use for free, use at your own risk, we are not liable for anything", then that is a clear indication of the bankruptcy of the law, imho.
Since the law is very well developed in the EU, I think the people who wrote the license were just lazy.
notpushkin · 26m ago
I don’t get it. Couldn’t they just write a liability disclaimer clause that covers that, without explicitly calling out particular jurisdictions? E.g. “you are solely responsible for ensuring your use of the model is lawful and agree to indemnify the authors or whatever. If you can’t do that in your jurisdiction, you can’t use the model.”
NitpickLawyer · 4m ago
The problem is that the AI Act covers entities releasing AI software as open source. That has never been the case before, so while they're still figuring it out, better safe than sorry.
nickpsecurity · 13m ago
It's a careful way of running a business with potential users in highly-regulated markets. They don't know those markets' regulations or laws. They don't want to invest labor in complying with them.
So, they reduced their liability by prohibiting usage of the model, to show those jurisdictions' decision makers they were complying. I considered doing the same thing for the EU. Although, I also considered that one might partner with an EU company if they are willing to make models legal in their jurisdiction. Mainly as a gift to Europeans, but maybe also as a profit-sharing agreement.
wkat4242 · 49m ago
I wonder if you can still download and use it here in the EU... I don't care about licensing legalese, but I guess you have to sign up somewhere to get the goods?
The EU has very difficult AI and data regulations; not sure about South Korea.
NullCascade · 1h ago
Maybe private Chinese AI labs consider EU/UK regulators a bigger threat than US anti-China hawks.
londons_explore · 2m ago
> The minimum GPU memory required is 60GB for 540p.
We're about to see next gen games requiring these as minimum system requirements...
stargrazer · 1h ago
It explicitly says using a single picture. Wouldn't the world become even more expressive if multiple pictures could be added, such as in a photogrammetry scenario?
btbuildem · 27m ago
I had the same question!
I will have to try this, I have a super edge use case: incomplete bathymetric depth map (lidar boat could not access some areas), coincidentally the most interesting areas are not in the data. My second piece of data is from flyover video (areas of interest where water is also clear enough to see the bottom). With enough video I can mostly remove the water-borne artifacts (ripples, reflections etc) and enhance the river bottom imagery enough to attempt photogrammetric reconstruction. The bottleneck here is that it takes multiple angles to do that, and the visibility through water is highly dependent on the angle of sunlight vs angle of camera.
Instead of doing multiple flyovers at different times of day to try and get enough angles for a mesh reconstruction, maybe this can do it relatively well from one angle!
iamsaitam · 1h ago
Interesting that they chose the color red in the comparison table to mark the best score for each entry.
FartyMcFarter · 1h ago
Just like the stock market in China. Red means the price is going up, green means it's going down.
jsheard · 1h ago
That's also why the stonks-going-up emoji traditionally has a red line; Japan shares that convention.
By the way, people might think this has to do with communism, but it's cultural and dates from well before the 20th century. Red is associated with happiness and celebration.
https://blog.emojipedia.org/why-does-the-chart-increasing-em...
MengerSponge · 1h ago
Almost like the communists chose what iconography to use!
mananaysiempre · 1h ago
The (blood-)red flag as an anti-monarchist symbol originates in the French Revolution, was adopted by the Bolshevik faction (“the Reds”) in the Russian Civil War, and spread from there.
kridsdale1 · 1h ago
And ironically the news networks in 2000 chose red to show Bush’s electoral votes vs Gore, and thus we retain the notion of Red States and Blue States, even though it’s backwards.
Cthulhu_ · 15m ago
Cultural differences, as others have pointed out; I find it fascinating. And also it doesn't impact my day at all.
idiotsecant · 1h ago
It would be a very uninteresting choice in china. Color is partially a cultural construction. Red doesn't mean the same thing there that it does in the west.
geeunits · 1h ago
You'll notice it in every piece of western propaganda too, from movies to fashion. Red is the China call.
bilsbie · 1h ago
I'm waiting like crazy for one of these to show up in VR.
kridsdale1 · 1h ago
Check out visionOS 26's Immersive Photo mode. Any photo in your iCloud library gets converted by an on-device model to (I assume) a Gaussian splat 3D scene that you can pan and dolly around in. It's the killer feature that justifies the whole cost of Vision Pro. The better the source data, the better it works.
I can literally walk into scenes I shot on my Nikon D70 in 2007 and they, and the people, look real.
jsheard · 1h ago
Please don't hold your breath, they're still pretty far from high-res 120fps with consistent stereo and milliseconds of latency.
jimmySixDOF · 15m ago
While discussing Google Genie v3 and AndroidXR, Bilawal Sidhu said: "to create an even faster, lower latency pipeline to go from like 24 fps to like 100 fps. I could see that being more of an engineering problem than a research one at this point."
https://youtu.be/VslvofY16I0&t=886
Isn't it picture to 3D model?
You'd generate the environment/model ahead of time and then "dive in" to the photo
jsheard · 1h ago
I suppose that's an option yeah, but when people envision turning this kind of thing into a VR holodeck I think they're expecting unbounded exploration and interactivity, which precludes pre-baking everything. Flattening the scene into a static diorama kind of defeats the point.
throwmeaway222 · 1m ago
I actually would rather it be a 3d model so that I don't need to believe they're microwaving a goddamn full size whale for 45 minutes (worth of electricity)
forrestthewoods · 10m ago
Spin the camera 1080 degrees in place you cowards!!
These clips are very short and don't rotate the camera more than like 45 degrees. Genie3 also cheats and only rotates the camera 90 degrees.
It’s always important to pay attention to what models don’t do. And in this case it’s turn the bloody camera around.
I refuse to accept any model as a "world model" if it can't pass a simple "spin in place" test.
Bah humbug.
ambitiousslab · 1h ago
This is not open source. It is weights-available.
Also, there is no training data, which would be the "preferred form" of modification.
From their license: [1]
If, on the Tencent HunyuanWorld-Voyager version release date, the monthly active users of all products or services made available by or for Licensee is greater than 1 million monthly active users in the preceding calendar month, You must request a license from Tencent, which Tencent may grant to You in its sole discretion, and You are not authorized to exercise any of the rights under this Agreement unless or until Tencent otherwise expressly grants You such rights.
You must not use the Tencent HunyuanWorld-Voyager Works or any Output or results of the Tencent HunyuanWorld-Voyager Works to improve any other AI model (other than Tencent HunyuanWorld-Voyager or Model Derivatives thereof).
As well as an acceptable use policy:
Tencent endeavors to promote safe and fair use of its tools and features, including Tencent HunyuanWorld-Voyager. You agree not to use Tencent HunyuanWorld-Voyager or Model Derivatives:
1. Outside the Territory;
2. In any way that violates any applicable national, federal, state, local, international or any other law or regulation;
3. To harm Yourself or others;
4. To repurpose or distribute output from Tencent HunyuanWorld-Voyager or any Model Derivatives to harm Yourself or others;
5. To override or circumvent the safety guardrails and safeguards We have put in place;
6. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
9. To intentionally defame, disparage or otherwise harass others;
10. To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems;
11. To generate or disseminate personal identifiable information with the purpose of harming others;
12. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including –through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated;
13. To impersonate another individual without consent, authorization, or legal right;
14. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance);
15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
16. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism;
17. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics;
18. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
19. For military purposes;
20. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.
The exclusion of EU, UK and South Korea suggests to me they've trained on data those countries would be mad they trained on/would demand money for training on.
[1] https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager/blob...
heod749 · 1h ago
>The exclusion of EU, UK and South Korea suggests to me they've trained on data those countries would be mad they trained on/would demand money for training on.
Or, those countries are trying to regulate AI.
Hard to feel bad for EU/UK. They tried their best to remain relevant, but lost in the end (talent, economy, civil rights).
thrance · 56m ago
Peak American thinking: megacorps and dictatorships stealing data with no respect whatsoever for privacy and not giving anything back is good. Any attempt to defend oneself from that is foolish and should be mocked. I wish you people could realize you're getting fucked over as much as the rest of us.
NitpickLawyer · 1h ago
> This is not open source. It is weights-available.
> Also, there is no training data, which would be the "preferred form" of modification.
This is not open source because the license is not open source. The second line is not correct, though. The "preferred form" of modification is the weights, not the data. Data is how you modify those weights.
stefan_ · 29m ago
That's a very novel (and obviously wrong) interpretation of preferred form. The full sentence is "preferred form of modification", and obviously weights don't allow that.
tbrownaw · 1h ago
> Also, there is no training data, which would be the "preferred form" of modification.
Isn't fine-tuning a heck of a lot cheaper?
Nevermark · 32m ago
Fine-tuning with the original data plus the fine-tuning data has more predictable results.
Just training on new data moves a model away from its previous behavior, to an unpredictable degree.
You can’t even reliably test for the change without the original data.
htrp · 55m ago
Outside of AI2, I'm not sure anyone is truly open-sourcing AI models (training logs, data, etc.).
I think at this point, open source is practically shorthand for weights available
indiantinker · 7m ago
Matrix
NullCascade · 1h ago
What is currently the best model (or multi-model process) to go from text-to-3D-asset?
Ideally based on FOSS models.
neutronicus · 1h ago
Piggybacking ... what about text-to-sprite-sheet? Or even text-and-single-source-image-to-sprite-sheet?
nzach · 19m ago
I've never done this task specifically, but I imagine the new Google model (Gemini 2.5 Flash Image) is what you want. It has really good character consistency, so you should be able to paste a single sprite and ask it to generate the rest.
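A rough sketch of what that could look like with the google-genai Python SDK; the model id and the way image bytes come back are assumptions on my part, so check the current docs:

    # Hypothetical sketch: single sprite -> sprite sheet via Gemini image generation.
    from google import genai
    from PIL import Image

    client = genai.Client()  # reads the API key from the environment
    sprite = Image.open("hero_idle.png")

    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # assumed model id
        contents=[
            sprite,
            "Using this character, generate a 4x4 sprite sheet of a walk cycle, "
            "same pixel-art style, transparent background, consistent proportions.",
        ],
    )

    # Image outputs come back as inline binary parts alongside any text.
    for i, part in enumerate(response.candidates[0].content.parts):
        if part.inline_data is not None:
            with open(f"sprite_sheet_{i}.png", "wb") as f:
                f.write(part.inline_data.data)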
geokon · 1h ago
Seems like the kind of thing Street View data would have been perfect to train on.
I wonder if you could loop back the last frame of each video to extend the generated world further, creating a kind of AI fever dream.
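Roughly something like the loop below, where generate_clip is a made-up stand-in for whatever image-conditioned inference entry point the released code actually exposes (illustrative only, not the project's API):

    # Chain clips by re-conditioning on the previous clip's last frame.
    def generate_clip(conditioning_image, prompt):
        # stand-in for the model's image-to-video call
        raise NotImplementedError("replace with the real inference entry point")

    def extend_world(seed_image, prompt, n_segments=5):
        frames = []
        current = seed_image
        for _ in range(n_segments):
            clip = generate_clip(current, prompt)  # list of frames
            frames.extend(clip)
            current = clip[-1]  # feed the last frame back in as the next seed
        return frames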
kridsdale1 · 1h ago
Why the past tense? Google is holding on to all of that, going back years.
Cthulhu_ · 10m ago
Yeah, they have all the raw data (Google is a self-confessed data hoarder, after all); I'm sure they have research projects where they use AI and the like to stitch Street View images together.
I also wouldn't be surprised if their Street View cars / people record video instead of stills these days. Assuming they started capturing stuff in 2007 (and it was probably a lot earlier), storage technology has improved at least tenfold since then (probably more), and video processing too.
amelius · 1h ago
Can I use this to replace a LiDAR?
incone123 · 35m ago
It's generating a 3d world from a photo or other image, rather than giving you a 3d model of the real world.
amelius · 33m ago
Look at the examples. It can generate a depth map.
ENGNR · 31m ago
Depends how many liberties it takes in imagining the world
Lidar is direct measurement
garbthetill · 50m ago
If it does, then Elon really won the no-lidar bet.
HeWhoLurksLate · 15m ago
There's a huge difference between "feature that mostly works and is kinda neat" and "5000 pound robot relies on this to work all the time or people will probably get hurt at minimum" in how much you should trust a feature.
It doesn't really matter if an imgTo3d script gets a face's depth map inverted; it's kinda problematic if your car doesn't think there's something where there actually is.
Cthulhu_ · 9m ago
I wasn't aware there was a competition or a bet.
krystofee · 50m ago
I think it's a matter of time until we have photorealistic, playable computer games generated by these engines.
netsharc · 3m ago
Yeah, MS Flight Simulator with a world that's "inspired by" ours... The original 2020 version had issues with things like the Sydney Harbour Bridge (did it have the Opera House?). Using AI to generate 3D models of these things based on pictures would be crazy (of course they'd generate once, on first request).
So if you're the first to approach the Opera House, it would ask the engine for 3D models of the area, and it would query its image database, see the fancy opera house, and generate its own interpretation... If there's no data (e.g. a landscape in the middle of Africa), it'd use the satellite image plus typical fauna of the region.
And hopefully AI-powered NPCs to fight against/interact with.
Cthulhu_ · 13m ago
I believe there are games that have that already. My concern is that it's all going to be same-ish slop. Read ten AI-generated stories and you've read them all.
It could work, but they would have to both write unique prompts for each NPC (instead of "generate me 100 NPC personality prompts") and limit the possible interactions and behaviours.
But emergent / generative behaviour would be interesting up to a point. There are plenty of roguelikes / roguelites this could work in, given their generative behaviours.
user_7832 · 1h ago
I see a lot of skeptical folks here... isn't this the first such model? I remember seeing a lot of image to 3d models before, but they'd all produce absurd results in a few moments. This seems to produce really good output in comparison.
explorigin · 1h ago
If you click on the link, they show a comparison chart with other similar models.
neuronic · 1h ago
> isn't this the first such model?
The linked Github page has a comparison with other world models...
SirHackalot · 1h ago
> Minimum: The minimum GPU memory required is 60GB for 540p.
Cool, I guess… If you have tens of thousands of $ to drop on a GPU for output that’s definitely not usable in any 3D project out-of-the-box.
It's more approachable than one might think, as you can currently find two of these for less than 1,000 USD.
esafak · 1h ago
How much performance penalty is there for doubling up? What about 4x?
kittoes · 1h ago
I just found out about these last week and haven't received the hardware yet, so I can't give you real numbers. That said, one can probably expect at least a 10-30% penalty when the cards need to communicate with one another. Other workloads that don't require constant communication between cards can actually expect a performance boost. Your mileage will vary.
HPsquared · 1h ago
I assume it can be split between multiple GPUs, like LLMs can. Or hire an H100 for like $3/hr.
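As a toy illustration of the idea (a naive pipeline split, not Voyager's actual architecture or loading code), you can park different stages on different cards and only ship activations between them:

    import torch
    import torch.nn as nn

    # Pipeline-style split across two GPUs; the real model would need its own
    # split points, this just shows where the memory and the transfers go.
    class TwoGPUModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.stage1 = nn.Sequential(nn.Linear(1024, 4096), nn.GELU()).to("cuda:0")
            self.stage2 = nn.Linear(4096, 1024).to("cuda:1")

        def forward(self, x):
            x = self.stage1(x.to("cuda:0"))
            return self.stage2(x.to("cuda:1"))  # activations hop devices once per step

    model = TwoGPUModel()
    out = model(torch.randn(8, 1024))  # weights stay put; only activations move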
y-curious · 1h ago
I mean, still awesome that it's OSS. Can probably just rent GPU time online for this
mingtianzhang · 3h ago
What's your opinion on modeling the world? Some people think the world is 3D, so we need to model the 3D world. Some people think that since human perception is 2D, we can just model the 2D view rather than the underlying 3D world, since we don't have enough 3D data to capture the world but we have many 2D views.
Fixed question: Thanks a lot for the feedback that human perception is not 2D. Let me rephrase the question: since all the visual data we see on computers can be represented as 2D images (indexed by time, angle, etc.), and we have many such 2D datasets, do we still need to explicitly model the underlying 3D world?
AIPedant · 2h ago
Human perception is not 2D, touch and proprioception[1] are three-dimensional senses.
And of course it really makes more sense to say human perception is 3+1-dimensional since we perceive the passage of time.
[1] https://en.wikipedia.org/wiki/Proprioception
Two of them, giving us stereo vision. We are provided visual cues that encode depth. The ideal world model would at least have this. A world model for a video game on a monitor might be able to get away with no depth information, but a) normal engines do have this information and it would make sense to provide as much data to a general model as possible, and b) the models wouldn't work on AR/VR. Training on stereo captures seems like a win all around.
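For reference, the textbook pinhole-stereo relation (not something from the model card): with focal length f, baseline B between the two views, and measured pixel disparity d, depth falls out as

    Z = f * B / d

which is exactly the depth cue two views provide and a single view has to guess.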
WithinReason · 1h ago
> We are provided visual cues that encode depth. The ideal world model would at least have this.
None of these world models have explicit concepts of depth or 3D structure, and adding it would go against the principle of the Bitter Lesson. Even with 2 stereo captures there is no explicit 3D structure.
reactordev · 1h ago
Incorrect. My sense of touch can be activated in 3 dimensions by placing my hand near a heat source, which radiates in 3 dimensions.
Nevermark · 15m ago
You are still sensing heat across 2 dimensions of skin.
The 3rd dimension gets inferred from that data.
(Unless you have a supernatural sensory aura!)
AIPedant · 7m ago
The point is that knowing where your hand is in space relative to the rest of your body is a distinct sense which is directly three-dimensional. This information is not inferred, it is measured with receptors in your joints and ligaments.
AIPedant · 1h ago
It is simply wrong to describe touch and proprioception receptors as 2D.
a) In a technical sense the actual receptors are 1D, not 2D. Perhaps some of them are two dimensional, but generally mechanical touch is about pressure or tension in a single direction or axis.
b) The rods and cones in your eyes are also 1D receptors but they combine to give a direct 2D image, and then higher-level processing infers depth. But touch and proprioception combine to give a direct 3D image.
Maybe you mean that the surface of the skin is two dimensional and so is touch? But the brain does not separate touch on the hand from its knowledge of where the hand is in space. Intentionally confusing this system is the basis of the "rubber hand illusion" https://en.wikipedia.org/wiki/Body_transfer_illusion
Nevermark · 24m ago
I think you mean 0D for individual receptors.
Point (i.e. single point/element) receptors, each encoding a single magnitude of perception.
The cochlea could be thought of as 1D: magnitude (audio volume) measured across 1D = N frequencies. So a 1D vector.
Vision and (locally) touch/pressure/heat maps would be 2D, together.
AIPedant · 16m ago
No, the sensors measure a continuum of force or displacement along a line or rotational axis, 1D is correct.
Nevermark · 9m ago
That would be a different use of dimension.
The measurement of any one of those is a 0 dimensional tensor, a single number.
But then you are right, what is being measured by that one sensor is 1 dimensional.
But all single sensors measure across a 1 dimensional variable. Whether it’s linear pressure, rotation, light intensity, audio volume at 1 frequency, etc.
2OEH8eoCRo0 · 1h ago
And the brain does sensor fusion to build the 3D model that we perceive. We don't perceive in 2D.
There are other sensors as well. Is the inner ear a 2D sensor?
AIPedant · 1h ago
Inner ear is a great example! I mentioned in another comment that if you want to be reductive the sensors in the inner ear - the hairs themselves - are one dimensional, but the overall sense is directly three dimensional. (In a way it's six dimensional since it includes direct information about angular momentum, but I don't think it actually has six independent degrees of freedom. E.g. it might be hard to tell the difference between spinning right-side-up and upside-down with only the inner ear, you'll need additional sense information.)
echelon · 1h ago
The GPCRs [1] that do most of our sense signalling are each individually complicated machines.
Many of our signals are "on" and are instead suppressed by detection. Ligand binding, suppression, the signalling cascade, all sorts of encoding, ...
In any case, when all of our senses are integrated, we have rich n-dimensional input:
- stereo vision for depth
- monocular vision optics cues (shading, parallax, etc.)
- proprioception
- vestibular sensing
- binaural hearing
- time
I would not say that we sense in three dimensions. It's much more.
[1] https://en.m.wikipedia.org/wiki/G_protein-coupled_receptor
https://theoatmeal.com/comics/mantis_shrimp
It's simple: Those who think that human perception is 2D are wrong.
rubzah · 2h ago
It's 2D if you only have one eye.
__alexs · 2h ago
It's not even 2D with one eye. You can estimate distance purely from your eye's focal point.
yeoyeo42 · 1h ago
With one eye you have temporal parallax, depth cues (ordering of objects in your vision), lighting cues, relative size of objects (things further away are smaller), together with your learned comparison sizes, etc.
You're telling me my depth perception is not creating a 3D model of the world in my brain?
KaiserPro · 2h ago
So a lot of text-to-"world" engines have been basically 2D, in that they create a static background and add sprites in to create the illusion of 3D.
I'm not entirely convinced that this isn't one of those, or if it's not, it sure as shit was trained on one.
imtringued · 2h ago
2D models don't have object persistence, because they store information in the viewport. Back when OpenAI released their Sora teasers, they had some scenes where they did a 360° rotation and it produced a completely different backdrop.