Available to the world except the European Union, the UK, and South Korea
Not sure what led to that choice. I'd have expected either the U.S. & Canada to be in there, or not these.
3. DISTRIBUTION.
[...]
c. You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent HunyuanWorld-Voyager Works; and (ii) mark the products or services developed by using the Tencent HunyuanWorld-Voyager Works to indicate that the product/service is “Powered by Tencent Hunyuan”; [...]
What's that doing in the license? What's the implications of a license-listed "encouragement"?
NitpickLawyer · 10h ago
> Not sure what led to that choice.
It's the EU AI act. I tried their cute little app a week ago, the one designed to let you know if you comply, what you need to report, and so on. I got a "basically yes, but likely no, still have to register to bla-bla and announce yak-yak and do the dooby-doo" after selecting SME - open source - research - no client-facing anything.
It was a mess when they proposed it, it was said to be getting better while they worked on it, and it turns out to be just as unclear and bureaucratic now that it's out.
flanked-evergl · 10h ago
If I were Russia and/or China and I wanted to eliminate the EU as a potential rival economically and militarily, I don't think I could have come up with a better way to do it than EU regulations. If it were not for the largesse of the US, the EU would become a vassal of Russia and/or China. And I think the US is running out of goodwill very rapidly. The EU could, of course, shape up, but it won't.
cedilla · 8h ago
It's hard not to react sarcastically to this. But I will try:
There's nothing special about EU regulations vis-a-vis other laws. China, Russia and the US also have laws, many of which are also perceived as overly bureaucratic.
rafaelmn · 8h ago
Identifying something as a critical competitive industry, then placing a bunch of hurdles in front of its development, and then sitting confused when we get left behind - that's the EU special.
cloudrkt · 4h ago
Left behind on what exactly? Privacy laws? Consumer rights?
cooper_ganglia · 3h ago
Technology, AI, semiconductors, cloud computing, consumer electronics, social media, app ecosystems, e-commerce, military technology, energy independence, venture capital, unicorns and scaling, banking innovation, space exploration, biotech and pharma...
pipes · 2h ago
Growth industries.
OtherShrezzing · 8h ago
I think you have an over-aggrandised opinion of Russia's geopolitical, military, and economic power.
viccis · 8h ago
>then EU would become a vassal of Russia
Russia is currently struggling to make inroads on invading its relatively small neighbor, so I really doubt it would be able to make a bunch of nuclear powers who have a nuclear alliance its "vassal"
I understand that Russia's not fighting just Ukraine but rather Ukraine with massive US and EU assistance but my point still stands.
flanked-evergl · 6h ago
It's not struggling as much as Ukraine. Russia, if it were struggling, would accept a negotiated peace. It's quite clear that the last thing Russia wants is peace.
wkat4242 · 6h ago
It's because lives are meaningless to the Russian government. They'll just throw more literal bodies at the problem. Nobody's going to stop them as they're a dictatorship. And now they're even getting North Koreans as extra cannon fodder.
Ukraine doesn't have that "benefit".
holoduke · 5h ago
Ukraine is used as cannon fodder by western institutions that control western politicians. The west literally doesn't care about the lives of ordinary citizens, especially if they're outside its own country. From supporting cruel regimes, to supporting genocide in Israel, to cheap labour without worker rights, and so on. The west isn't one bit better than anybody else.
wredcoll · 3h ago
'The west' can do bad things while not actually being worse. It's a complicated world.
netsharc · 2h ago
I was going to say "you're nuts!", but... I wouldn't say Ukraine is being used as cannon fodder, but the EU is very interested in Russia not winning in Ukraine, because if Putin wins, the EU will have a big refugee crisis (although "slightly better" refugees, since they're white and share a similar culture, compared to the reception of brown and Muslim refugees).
Also the EU pays for countries like Turkey and Libya to prevent refugee ships from coming to their continent. If that means sinking those ships with people on them, well...
wredcoll · 3h ago
I love how the constant comeback to this is "well they're sorta-kinda winning the war", as if maybe barely defeating Ukraine is some kind of mark of global dominance.
lawlessone · 6h ago
>It's not struggling as much as Ukraine.
OK but Ukraine isn't trying to invade a small country next door and claim a global superpower status.
It's expected they would struggle against a much larger neighbor invading them.
Russia is struggling where nobody expected it to struggle.
Cthulhu_ · 9h ago
I'd rather be free and my data safe than be an economic world leader. False dichotomy, I know, but I don't mind the people before money mindset.
flanked-evergl · 9h ago
This is a false dichotomy, you can have privacy and still be militarily and economically relevant.
But say you were right and you had to choose between privacy and relevance: if you choose privacy, then once Europe is entirely economically dependent on Russia (Europe is still paying more in energy money to Russia than in aid to Ukraine) and China, that is, once Europe is a vassal, it won't be able to make its own laws anymore.
londons_explore · 9h ago
Almost every country or group of countries today is either a world leader, nearly a world leader, or a vassal state of one of the first two.
jimbokun · 7h ago
It's not clear that the inscrutable process described is providing either.
staplers · 8h ago
I'd rather be free and my data safe than be an economic world leader.
You often need the latter to maintain the former.
Y_Y · 6h ago
How often?
ekianjo · 9h ago
With chat control you won't get either.
bee_rider · 8h ago
We’ll see if these LLMs end up having a real use, once the “giving away investor money” business model dries up. They really might! But it seems early to say that the EU has missed out on anything, before we see what the thing is.
In general, it is hard to compare the US and the EU; we got a head start while the rest of the world was rebuilding itself from WW2. That started up some feedback loops. We can mess up and siphon too much off a loop, destroying it, and still be ahead. They can be setting up loops without benefitting from them yet.
jimbokun · 7h ago
They seem to be improving a lot on their defense spending, at least.
Will take them a while to get out from under the US umbrella. But acknowledging the problem is the first step.
flanked-evergl · 1h ago
I'm grateful that Europe is increasing defence spending, but I'm cynical regarding Europe, because so far it has been absolutely no hindrance to Russia's expansionism, and in some ways it has inadvertently provided Russia with material assistance while Russia engages in expansionism.
Spending on defense is not the same as getting results. Norway is spending more on everything all the time and getting worse outcomes all the time. We spend more on police than ever, even per capita, and crime is up; we spend more on the military than ever, and our actual metrics are down. I think with most of Europe the defense spending is the same. I hope I'm wrong, but if you up regulation then you have to spend more to get the same results, and Europe has runaway regulation in addition to people who try to hijack institutions for other purposes.
myhf · 8h ago
In the 1940s, the CIA wrote the Simple Sabotage Field Manual [1] explaining methods to damage their rivals' operations through largely bureaucratic means.
Today, we have fully automated the methods from this manual in the form of LLM Chatbots, which we have for some reason deployed against ourselves.
Comments like yours remind me that while HN is a competent technology forum, it's best to never, ever engage in serious macroeconomic or international-politics discussions here: the average user engaging in those topics is so far off base on common knowledge in these areas that any insider wouldn't find common ground.
Overconfidence bias is real.
Knowing your circle of competence is a gift.
wredcoll · 3h ago
Ah yes, famous industrial/scientific powerhouse... russia???
lawlessone · 6h ago
>If I was Russia and/or China and I wanted to eliminate EU as a potential rival economically and militarily, then I don't think I could have come up with a better way to do it than EU regulations.
Personally I'm not too worried anyone is going to become a global superpower from generative AI slop.
suddenlybananas · 9h ago
The EU is a vassal of the US, that is its entire raison d'être.
t43562 · 6h ago
I think some in the US see it the opposite way - a system for preventing the US from dominating it piecemeal. This explains their support for "free speech" for the various neo-(no we're not nazi!) parties in the EU.
falcor84 · 8h ago
What? How did you arrive at that?
suddenlybananas · 8h ago
To be honest, it's so blatantly obvious (especially with the recent meeting of European leaders and Trump) that I find it difficult to understand your surprise. I mean Christ, Europe is teeming with American bases.
llbbdd · 7h ago
Yeah I don't know how this isn't just the common understanding of the situation. The EU/UK is constantly working around whatever the US wants to do, and the US does whatever it wants.
wredcoll · 3h ago
At the risk of trying for nuance online, there is a rather large difference between America being (the only) superpower and a country being a vassal.
falcor84 · 7h ago
raison d'être means "reason for existence" and none of what you said supports that assertion
lawlessone · 5h ago
>(especially with the recent meeting of European leaders and Trump)
I got the same impression seeing Trump meet Putin. The US is a vassal state of Russia.
llbbdd · 7h ago
I'm amazed that they could pull themselves together enough to publish an app at all.
I don't think it is incorrect to select an architect of this regulation as one of the most influential people on AI.
jimbokun · 7h ago
Doesn't have to be influential in a positive sense.
whimsicalism · 2h ago
exactly
flanked-evergl · 9h ago
The EU is fully invested in virtue signalling over actual tangible results. People keep saying how much stronger the EU's economy is than Russia's, and how Russia is basically a gas station with nukes, but the thing is, even with the EU's "strong" economy Russia has them by the balls. They have to go hat in hand begging the US to step in because they can't do anything themselves, and the US is not going to keep propping up the EU long term, especially not with how hostile Europeans are towards Americans.
I live in Europe, I don't want Europe to become a vassal of China/Russia - but if something drastically does not change it will. Russia is Europe's Carthage, Russia must fall. There is no future with a Russia as it is today and a Europe as it is today in it, not because of Europe, but because of Russia. If Europe does not eliminate Russia, Russia will eliminate Europe. I have no doubts about this.
But as things stand, there just seems to be no way in which we can practically counter Russia at all. If Europe had determination, it would have sent troops into Ukraine and created a no-fly zone — it should do that, but here we are.
switchbak · 5h ago
"If Europe does not eliminate Russia, Russia will eliminate Europe" - this aggressive warmongering is what led to the Russian invasion (NATO expansion), and is actively making the world a much less safe place to be.
flanked-evergl · 2h ago
The west has done everything to make nice with Russia for decades, every new American president since at least Bush 2 thinks that this time he will be the one to charm the Russians, and every time the Russians shit the bed. The Russians don't want to make nice, they want to destroy the west. And I don't want to be destroyed.
suddenlybananas · 8h ago
Russia is not a serious threat to Europe. France has nukes and a competent army, and Russia has been shown to be a relatively weak power in its difficulties invading Ukraine. Even if they win the war ultimately, it took so long that it is difficult to imagine them winning in a war with serious powers like France.
OKRainbowKid · 8h ago
Militarily, I agree. But Russia is actively (and successfully) eroding our democracy and societal coherence. They don't need to win militarily if they can instead promote infighting and help corrupt and Russia-friendly parties rise to power.
mvuijlst · 7h ago
This is obviously more a risk in the US than in Europe.
suddenlybananas · 7h ago
Blaming the rise of the far-right on Russia is a bit absurd. Immigration isn't popular even if you think it should be.
cycomanic · 7h ago
There is plenty of evidence that Putin is propping up far-right actors all over Europe and is running considerable disinformation campaigns, but yeah, it's all those pesky immigrants' fault /s
holoduke · 5h ago
Evidence by who? Der Spiegel, BBC, La Monde? All in the pockets of warmongering western institutions. Do not believe those networks.
suddenlybananas · 7h ago
I sincerely think you need to understand that people can have different beliefs and values than you without that being the result of misinformation.
const_cast · 4h ago
If those beliefs just so happen to perfectly mirror Russian propaganda then it's probably just Russian propaganda.
When it's getting to a point where far-right leaders appear to care more about the prosperity of Russia than their own nation or their allies... yeah it's probably misinformation. At best. At worst, it's targeted propaganda - lots of bots online!
wkat4242 · 6h ago
France has a few nukes, fewer than 300, and mainly sub-based (with only a handful of subs). Nothing compared to Russia, which has 5500. They could be taken out of play.
flanked-evergl · 7h ago
> Russia is not a serious threat to Europe.
Ukraine, with all the backing of Europe, is making no progress. If this were true, Russia would be expelled from Ukraine tomorrow, as it should be. Ukraine is an embarrassment for Europe; it strongly suggests that Europe is basically meaningless on the global stage.
And the most embarrassing of all is, Europe is still buying gas from Russia.
switchbak · 5h ago
"Ukraine with all the backing of Europe is making no progress" - Far from "making no progress", Ukraine is slowly getting eroded. Russia has serious problems in sustaining this conflict, but Ukraine's are far more serious and near-term.
"suggests that Europe is basically meaningless on the global stage" ... it will take many years of deep military investment to provide a proper counter to Russian aggression. As of right now, Europe has been shown to be in a very weak and exposed position. This was obvious years ago, and should not be a surprise today. This is true of most of the NATO member states.
That said, simply because Ukraine is unable to expel Russia does not mean that it is a grand threat to Europe proper. Perhaps some eastern countries face some limited conflict, but I'm not convinced by this "domino theory" that Russia would engage in a WWII style invasion of Poland, Finland, etc.
flanked-evergl · 4h ago
I'm entirely convinced that Putin and others in the Russian leadership will stop at nothing to destroy western civilization. China is mostly indifferent to it; Russia is antagonistic to it. They know our weaknesses, they know how to get under our skin, and we seem to have no defence against this. If Europe ever gets out of this slump it has to take out Russia; if not, Russia will eventually take out Europe.
holoduke · 5h ago
Man that sounds just like a victim of propaganda. Ever looked at your own governments? Are they really serving your needs?
crimsoneer · 8h ago
I mean, neither the UK nor South Korea is in the EU, nor do they have equivalent laws. I suspect it's the ongoing push from the US and China that nobody but them has any right to be involved in AI regulation, plus just general vibes.
jonas21 · 7h ago
South Korea has a number of unusual regulations, including extremely strict restrictions on spatial data [1] and an AI law that, among other things, requires foreign companies to have a representative physically in South Korea to answer to the government [2]. So it's not too surprising to see it on the list.
The UK has their chat thing where if you provide chat (even with bots!) you have to basically be a megacorp to afford the guardrails they think "the kids" need. It's not clear if open source models fall into that, but who's gonna read 300+ pages of insanity to make sure?
mushufasa · 10h ago
The EU and the others listed are actively trying to regulate AI. A permissive OSS license's "one job" is to disclaim liability. It's interesting that they are just prohibiting usage altogether in jurisdictions where the definition of liability is uncertain and worrying to the authors.
ezoe · 2h ago
Geographical copyleft?
amelius · 10h ago
That would be an extremely lazy way of writing a license.
jandrewrogers · 10h ago
Unlikely laziness, since they went to the effort of writing a custom license in the first place.
A more plausible explanation is the requirements and obligations of those markets are ambiguous or open-ended in such a way that they cannot be meaningfully limited by a license, per the lawyers they retain to create things like licenses. Lawyers don’t like vague and uncertain risk, so they advised the company to reduce their risk exposure by opting out of those markets.
amelius · 9h ago
Maybe, but if you cannot say something simple as "here is something you can use for free, use at your own risk, we are not liable for anything", then that is a clear indication of the bankruptcy of the law, imho.
Since the law is very well developed in the EU, I think the people who wrote the license were just lazy.
Miraste · 8h ago
The AI act being "well developed" means it's dense enough that compliance can't be done without the backing of a major corporation's legal team. Tencent is a major corporation, but this is a janky research project that's not part of a product. The researchers don't have legal knowledge of EU regulations, and they probably have limited or zero access to anyone who does. Cutting off EU countries is the safe and responsible choice.
notpushkin · 9h ago
I don’t get it. Couldn’t they just write a liability disclaimer clause that covers that, without explicitly calling out particular jurisdictions? E.g. “you are solely responsible for ensuring your use of the model is lawful and agree to indemnify the authors or whatever. If you can’t do that in your jurisdiction, you can’t use the model.”
NitpickLawyer · 9h ago
The problem is that AI act covers entities releasing AI software as open source. That has never been the case so far, so while they're still figuring it out, better safe than sorry.
nickpsecurity · 9h ago
It's a careful way of running a business with potential users in highly-regulated markets. They don't know their regulations or laws. They don't want to invest labor in complying with them.
So, they reduced their liability by prohibiting usage of the model, to show those jurisdictions' decision makers they were complying. I considered doing the same thing for the EU. Although I also considered that one might partner with an EU company if they were willing to make the models legal in their jurisdiction, just as a gift to Europeans mainly, but maybe also as a profit-sharing agreement.
b3lvedere · 9h ago
"You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent HunyuanWorld-Voyager Works; and (ii) mark the products or services developed by using the Tencent HunyuanWorld-Voyager Works to indicate that the product/service is “Powered by Tencent Hunyuan” "
Is this the new 'please like and subscribe/feed us your info' method?
wkat4242 · 10h ago
I wonder if you can still download and use it here in the EU.. I don't care about licensing legalese, but I guess you have to sign up somewhere to get the goods?
Thanks!! I saw there wasn't all that much on github so I missed that part.
whimsicalism · 10h ago
EU has very difficult AI and data regulations, not sure about South Korea
NullCascade · 10h ago
Maybe private Chinese AI labs consider EU/UK regulators a bigger threat than US anti-China hawks.
BryanLegend · 9h ago
So in this case is North Korea more free than South Korea?
stargrazer · 10h ago
It explicitly says it uses a single picture. Wouldn't the world become even more expressive if multiple pictures could be added, such as in a photogrammetry scenario?
btbuildem · 9h ago
I had the same question!
I will have to try this, I have a super edge use case: incomplete bathymetric depth map (lidar boat could not access some areas), coincidentally the most interesting areas are not in the data. My second piece of data is from flyover video (areas of interest where water is also clear enough to see the bottom). With enough video I can mostly remove the water-borne artifacts (ripples, reflections etc) and enhance the river bottom imagery enough to attempt photogrammetric reconstruction. The bottleneck here is that it takes multiple angles to do that, and the visibility through water is highly dependent on the angle of sunlight vs angle of camera.
Instead of doing multiple flyovers at different times of day to try and get enough angles for a mesh reconstruction, maybe this can do it relatively well from one angle!
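The "remove the water-borne artifacts with enough video" step can be sketched as a per-pixel temporal median over co-registered frames (a toy illustration, not the actual pipeline; frame alignment and the `suppress_transients` name are my assumptions):

```python
import numpy as np

def suppress_transients(frames):
    # Per-pixel median over time: transient artifacts (ripples, specular
    # glints) touch any given pixel in only a minority of frames, so the
    # median recovers the static background (here, the river bottom).
    return np.median(np.asarray(frames, dtype=np.float64), axis=0)

# Five aligned grayscale frames of a static scene, value 0.2 everywhere...
frames = np.full((5, 4, 4), 0.2)
frames[1, 2, 2] = 1.0   # ...except one frame with a glint at pixel (2, 2)
background = suppress_transients(frames)
```

With more frames the median tolerates proportionally more glints per pixel, which is why longer flyovers help.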
loudmax · 8h ago
This does sound interesting, but is generative AI the right tool for this use case? A generative AI model sounds great for making a video game or even exploring historical photos, where introducing invented artifacts is a feature not a bug. In your case, wouldn't hallucinations be a problem?
btbuildem · 6h ago
I agree with you that it would be "made up" content, but I don't know how else to fill in the missing data. The area not scanned by LiDAR is just upstream from and directly beneath a set of whitewater rapids.
I can guesstimate the shape of the bottom by the behaviour of the flow, and hand-model the missing parts of the mesh. I thought outsourcing that to a generative model would be a nice shortcut -- and who knows, likely it'll synthesize it more true-to-nature than I would.
Miraste · 8h ago
That sounds quite interesting. Why are you trying to reconstruct a river bottom?
btbuildem · 6h ago
The shape of the river bottom causes a few standing waves / rapids to form. I am fascinated by it and want to better understand the hows and whys of it.
llbbdd · 8h ago
I'm also very curious. Searching for missing persons? Buried treasure?
ilaksh · 7h ago
There are other models that do that, such as photogrammetry models.
But someone could possibly extend the work so it was a few photos rather than one or many. The way you ask the question makes it sound like you think it was a trivial detail they just forgot about.
iamsaitam · 11h ago
Interesting that they chose the color red in the comparison table to mark the best score of each entry.
FartyMcFarter · 11h ago
Just like the stock market in China. Red means the price is going up, green means it's going down.
jsheard · 11h ago
That's also why the stonks-going-up emoji traditionally has a red line, Japan shares that convention.
By the way, people might think this has to do with communism but it’s cultural and way before the 20th century. Red is associated with happiness and celebration.
MengerSponge · 11h ago
Almost like the communists chose what iconography to use!
mananaysiempre · 10h ago
The (blood-)red flag as an anti-monarchist symbol originates in the French Revolution, was adopted by the Bolshevik faction (“the Reds”) in the Russian Civil War, and spread from there.
kridsdale1 · 10h ago
And ironically the news networks in 2000 chose red to show Bush’s electoral votes vs Gore, and thus we retain the notion of Red States and Blue States, even though it’s backwards.
Cthulhu_ · 9h ago
Cultural differences, as others have pointed out; I find it fascinating. And also it doesn't impact my day at all.
jjcm · 7h ago
As already mentioned, red is a positive color in east Asia. What's actually more surprising to me is that yellow is the 3rd color after green.
It's interesting to me that this breaks convention with the visual spectrum.
i.e.
red ~700 nm
green ~550 nm
yellow ~580 nm
Weird that they aren't in wavelength order.
idiotsecant · 11h ago
It would be a very uninteresting choice in China. Color is partially a cultural construction. Red doesn't mean the same thing there that it does in the west.
geeunits · 11h ago
You'll notice it in every piece of western propaganda too, from movies to fashion. Red is the China call.
forinti · 7h ago
In 1995 I went to a talk on Image Processing by an Indian professor. I asked him if there were any methods for improving low resolution images, just to make them look better (I think this was in the context of TV transmissions). He said you couldn't make up information.
Well, 30 years later, you can generate a video from a photograph.
IanCal · 4h ago
Also you can get a lot more information from images than you think, and even more from video. Superresolution was the term iirc.
You can’t make up information but you can use knowledge of the subject to accurately fill things in and other assumptions to plausibly fill things in.
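The superresolution idea can be sketched with a toy 1-D shift-and-add example (my own illustration; real methods must estimate the sub-sample offsets, which are assumed known here):

```python
import numpy as np

def shift_and_add(lowres_obs, offsets, factor):
    # Each low-res observation samples the same underlying signal at a
    # known sub-sample phase. Placing every observation's samples onto a
    # fine grid recombines information no single frame contains.
    n = len(lowres_obs[0])
    fine = np.zeros(n * factor)
    counts = np.zeros(n * factor)
    for obs, off in zip(lowres_obs, offsets):
        fine[off::factor] += obs
        counts[off::factor] += 1
    counts[counts == 0] = 1   # leave unobserved fine samples at zero
    return fine / counts

# Two half-rate observations of one signal, offset by one fine sample,
# jointly contain every sample of the original.
signal = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
obs_even, obs_odd = signal[0::2], signal[1::2]
recovered = shift_and_add([obs_even, obs_odd], offsets=[0, 1], factor=2)
```

This is the sense in which video adds real information rather than making it up: distinct frames genuinely sample the scene differently.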
Terr_ · 6h ago
While there's been a lot of technological progress, I think that story confuses different meanings of "could" and "information".
From a photo of someone's face and shoulders, a child can add "information" by extending it to a stick-figure body with crayons. However it's not information from the original event that was recorded.
Then there's the difference between strictly capable versus permissible or wise. A researcher "can't" make up data, a journalist "can't" invent quotes, a US President "can't" declare himself dictator, etc.
bilsbie · 11h ago
I’m waiting like crazy for one of these to show up on vr.
kridsdale1 · 10h ago
Check out visionOS 26’s Immersive Photo mode. Any photo in your iCloud library gets converted by an on device model to (I assume) a Gaussian Splat 3D scene that you can pan and dolly around in. It’s the killer feature that justifies the whole cost of Vision Pro. The better the source data the better it works.
I can literally walk in to scenes I shot on my Nikon D70 in 2007 and they, and the people, look real.
bee_rider · 7h ago
That is neat.
Although, I can think of some old family photos where half the people in them are dead by now (nothing catastrophic, just time). I wonder how it would feel to walk around in that sort of photo.
jsheard · 10h ago
Please don't hold your breath, they're still pretty far from high-res 120fps with consistent stereo and milliseconds of latency.
geokon · 10h ago
Isn't it picture to 3D model?
You'd generate the environment/model ahead of time and then "dive in" to the photo
jsheard · 10h ago
I suppose that's an option yeah, but when people envision turning this kind of thing into a VR holodeck I think they're expecting unbounded exploration and interactivity, which precludes pre-baking everything. Flattening the scene into a diorama kind of defeats the point.
throwmeaway222 · 9h ago
I actually would rather it be a 3d model so that I don't need to believe they're microwaving a goddamn full size whale for 45 minutes (worth of electricity)
andoando · 7h ago
You just need to prerender the 3D world. If it's truly exportable as a 3D model, rerendering it in real time based on input is trivial.
jimmySixDOF · 9h ago
While discussing Google Genie v3 and AndroidXR, Bilawal Sidhu said : "to create an even faster, lower latency pipeline to go from like 24 fps to like 100 fps. I could see that being more of an engineering problem than a research one at this point."
Based on just about every Two Minute Papers video, engineering and research attack the latency from both sides. The hardware grants steady improvements, and an occasional paper is published with a new or improved approach that decimates the compute required.
dannersy · 8h ago
That would be the most motion-sickness-inducing thing you could possibly do in its current state. The FOV on these videos is super wonky.
chamomeal · 8h ago
So could it actually turn around, like a full 360, and the image would stay the same? It looks super cool but the videos I saw just pan a little one way or the other
tzumaoli · 4h ago
It could in theory. The model generates a depth image per frame, so each pixel becomes a small 3D point. It also assumes that the 3D scene is static. From this, you can simply register all the frames into a huge 3D point cloud by unprojecting the pixels to 3D, render it any way you like (using a classical 3D renderer), and it will be consistent.
Though, a problem is that if the generated video itself has inconsistent information, e.g., the object changes color between frames, then your point cloud would just be "consistently wrong". In practice this will lead to some blurry artifacts because you blend different inconsistent colors together. So when you turn around you will still see the same thing, but that thing is uglier and blurrier because it blends between inconsistent coloring.
It will also be difficult to put a virtual object into the generated scene, because you don't have the lighting information and the virtual object can't blend its color with the environment well.
Overall cool idea but obviously more interesting problems to be solved!
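The per-frame unprojection step can be sketched with a standard pinhole camera model (a minimal sketch; the intrinsics `fx, fy, cx, cy` are assumed known and are not something the model above necessarily exposes):

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    # Lift a depth image to 3-D points in the camera frame.
    # depth: (H, W) array of metric depths along the optical axis.
    # Returns an (H*W, 3) array of XYZ points; registering frames into a
    # shared world frame would additionally need each camera's pose.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A flat plane at depth 1 with identity-like intrinsics, for illustration.
depth = np.ones((4, 4))
pts = unproject_depth(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The blending artifacts described above then correspond to multiple frames depositing slightly inconsistent colors onto overlapping points of this cloud.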
ambitiousslab · 11h ago
This is not open source. It is weights-available.
Also, there is no training data, which would be the "preferred form" of modification.
From their license: [1]
If, on the Tencent HunyuanWorld-Voyager version release date, the monthly active users of all products or services made available by or for Licensee is greater than 1 million monthly active users in the preceding calendar month, You must request a license from Tencent, which Tencent may grant to You in its sole discretion, and You are not authorized to exercise any of the rights under this Agreement unless or until Tencent otherwise expressly grants You such rights.
You must not use the Tencent HunyuanWorld-Voyager Works or any Output or results of the Tencent HunyuanWorld-Voyager Works to improve any other AI model (other than Tencent HunyuanWorld-Voyager or Model Derivatives thereof).
As well as an acceptable use policy:
Tencent endeavors to promote safe and fair use of its tools and features, including Tencent HunyuanWorld-Voyager. You agree not to use Tencent HunyuanWorld-Voyager or Model Derivatives:
1. Outside the Territory;
2. In any way that violates any applicable national, federal, state, local, international or any other law or regulation;
3. To harm Yourself or others;
4. To repurpose or distribute output from Tencent HunyuanWorld-Voyager or any Model Derivatives to harm Yourself or others;
5. To override or circumvent the safety guardrails and safeguards We have put in place;
6. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
9. To intentionally defame, disparage or otherwise harass others;
10. To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems;
11. To generate or disseminate personal identifiable information with the purpose of harming others;
12. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including –through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated;
13. To impersonate another individual without consent, authorization, or legal right;
14. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance);
15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
16. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism;
17. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics;
18. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
19. For military purposes;
20. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.
The exclusion of EU, UK and South Korea suggests to me they've trained on data those countries would be mad they trained on/would demand money for training on.
heod749 · 10h ago
>The exclusion of EU, UK and South Korea suggests to me they've trained on data those countries would be mad they trained on/would demand money for training on.
Or, those countries are trying to regulate AI.
Hard to feel bad for EU/UK. They tried their best to remain relevant, but lost in the end (talent, economy, civil rights).
wkat4242 · 6h ago
Why do you think regulation is bad?
We didn't regulate adtech and now we're stuck with pervasive tracking that's hurting society and consumer privacy. Better to be more cautious with AI too so we can prevent negative societal effects rather than trying to roll them back when billions of euros are already at play, and thus the corporate lobby and interests in keeping things as they are.
We didn't regulate social media algorithms which started optimising for hate (as it's the best means of "engagement") and it led to polarisation in society, the worst effects of which can be seen in the US itself. The country is tearing itself apart. And we see the effects in Europe too. Again, something we should have nipped in the bud.
And the problem isn't mainly the tech. It's the perverse business models behind it, which don't care about societal disruption. That's pretty hard to predict, hence the caution.
thrance · 10h ago
Peak American thinking: megacorps and dictatorships stealing data with no respect whatsoever for privacy and not giving anything back is good. Any attempt to defend oneself from that is foolish and should be mocked. I wish you people could realize you're getting fucked over as much as the rest of us.
llbbdd · 8h ago
They are giving things back, that's what a company that sells products is. And the EU/UK should learn something from all this before they have to figure out how to translate all their road signs to Russian or Chinese.
onestay42 · 8h ago
"Yes the company did steal all the wood from the forest—illegally—but at least they're selling us furniture!"
llbbdd · 7h ago
I struggle to believe that anybody actually cares in this manner, because of the prevalence of bad faith analogies like this one. The trees are still there, and we get furniture. I am not Harper-Collins, I am not Random House. I didn't have a problem when collecting and presenting data like this was called a "search engine" and I don't know why I should believe it's worse now that it can also talk to me.
onestay42 · 21m ago
Those are very good points. I suppose it depends on one's view of IP. I think your comment has actually changed my mind—at least a little bit. Thank you.
NitpickLawyer · 10h ago
> This is not open source. It is weights-available.
> Also, there is no training data, which would be the "preferred form" of modification.
This is not open source because the license is not open source. The second line is not correct, tho. The "preferred form" of modification is the weights, not the data. Data is how you modify those weights.
stefan_ · 9h ago
That's a very novel (and obviously wrong) interpretation of preferred form. The full sentence is "preferred form of modification", and weights obviously don't allow that.
tbrownaw · 10h ago
> Also, there is no training data, which would be the "preferred form" of modification.
Isn't fine-tuning a heck of a lot cheaper?
Nevermark · 9h ago
Fine tuning with original data plus fine tuning data has more predictable results.
Just training on new data moves a model away from its previous behavior, to an unpredictable degree.
You can’t even reliably test for the change without the original data.
htrp · 10h ago
outside of ai2, not sure anyone truly open-sources AI models (training logs, data etc).
I think at this point, open source is practically shorthand for weights available
imiric · 8h ago
> 7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
> 8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
"Do as I say, not as I do."
> 15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
This, and other clauses, effectively prohibit the use of this system within any jurisdiction.
What a ridiculous policy.
londons_explore · 9h ago
> The minimum GPU memory required is 60GB for 540p.
We're about to see next gen games requiring these as minimum system requirements...
NullCascade · 10h ago
What is currently the best model (or multi-model process) to go from text-to-3D-asset?
Ideally based on FOSS models.
neutronicus · 10h ago
Piggybacking ... what about text-to-sprite-sheet? Or even text-and-single-source-image-to-sprite-sheet?
nzach · 9h ago
I've never done this task specifically, but I imagine the new google model (Gemini 2.5 Flash Image) is what you want. It has really good character consistency, so you should be able to paste a single sprite and ask it to generate the rest.
SXX · 8h ago
This is possible, but mostly for generating assets in roughly the same style you already have. The problem is that AI models, 2.5 Flash Image included, are not good at tracking the state of multiple entities in one image.
If you actually want something consistent you should really generate images one by one and provide an extensive description of what you expect to see in each frame.
And if you want to make something like animation, it's only really possible if you basically generate thousands of "garbage" images and then edit together what fits.
maelito · 7h ago
What I'm interested in is taking Panoramax pictures (a free StreetView alternative) and recreating navigable 3D scenes from them.
geokon · 10h ago
Seems the kind of thing StreetView data would have been perfect to train on.
I wonder if you could loop back the last frame of each video to extend the generated world further. Creating a kind of AI fever dream
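That loop-back idea is just an autoregressive chain over clips; a minimal sketch, where `generate_clip` is a stand-in stub and not Voyager's actual API:

```python
import numpy as np

def generate_clip(seed_frame, num_frames=8):
    """Stand-in for a video model: returns frames that drift away from
    the seed frame. Not the real Voyager API."""
    rng = np.random.default_rng(0)
    frames, frame = [], seed_frame
    for _ in range(num_frames):
        frame = np.clip(frame + rng.normal(0, 1, frame.shape), 0, 255)
        frames.append(frame)
    return frames

def extend_world(first_frame, num_clips=4):
    """Chain clips by feeding each clip's last frame back in as the seed."""
    video, seed = [], first_frame
    for _ in range(num_clips):
        clip = generate_clip(seed)
        video.extend(clip)
        seed = clip[-1]   # the "fever dream" loop-back
    return video

video = extend_world(np.zeros((64, 64, 3)), num_clips=4)
print(len(video))  # 4 clips x 8 frames each = 32 frames
```

Error accumulates at every loop-back, which is presumably why the generated world drifts into fever-dream territory the further you extend it.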
kridsdale1 · 10h ago
Why the past tense? Google is holding on to all of that, going back years.
Cthulhu_ · 9h ago
Yeah, they have all the raw data (Google is a self-confessed data hoarder, after all), I'm sure they have research projects where they use AI and similar to stitch street view images together.
I also wouldn't be surprised if their Street View cars / people record video instead of stills these days. Assuming they started capturing stuff in 2007 (and it was probably a lot earlier), storage technology has improved at least tenfold since then (probably more), and video processing too.
forrestthewoods · 9h ago
Spin the camera 1080 degrees in place you cowards!!
These clips are very short and don’t rotate the camera more than like 45 degrees. Genie3 also cheats and only rotates the camera 90 degrees.
It’s always important to pay attention to what models don’t do. And in this case it’s turn the bloody camera around.
I refuse to accept any model as a “world model” if it can’t pass a simple “spin in place” test.
Bah hum bug.
amelius · 10h ago
Can I use this to replace a LiDAR?
ENGNR · 9h ago
Depends how many liberties it takes in imagining the world
Lidar is direct measurement
incone123 · 9h ago
It's generating a 3d world from a photo or other image, rather than giving you a 3d model of the real world.
amelius · 9h ago
Look at the examples. It can generate a depth map.
incone123 · 7h ago
Yes, which may be fine or not depending on the end goal of the person I replied to. Some applications need the certainty of LiDAR and others can tolerate it if the model makes some mistakes.
gs17 · 7h ago
Yes, but that's not a novel part of this. We've been able to do that for a while (a long while if you count binocular or time-of-flight vision systems).
garbthetill · 10h ago
if it does, then elon really won the bet of no lidar
odie5533 · 9h ago
All he had to do was remove the LIDAR and wait 15-20 years for the tech to catch up. I'm sure Tesla owners don't mind waiting. They're used to it by now.
HeWhoLurksLate · 9h ago
there's a huge difference between "feature that mostly works and is kinda neat" and "5000 pound robot relies on this to work all the time or people will probably get hurt at minimum" in how much you should trust a feature.
Doesn't really matter if an imgTo3d script gets a face's depth map inverted, kinda problematic if your car doesn't think there's something where there is.
Cthulhu_ · 9h ago
I wasn't aware there was a competition or a bet.
forrestthewoods · 7h ago
… absolutely not no this sentence doesn’t even make sense.
Good grief orange site sometimes I swear.
user_7832 · 10h ago
I see a lot of skeptical folks here... isn't this the first such model? I remember seeing a lot of image to 3d models before, but they'd all produce absurd results in a few moments. This seems to produce really good output in comparison.
explorigin · 10h ago
If you click on the link, they show a comparison chart with other similar models.
neuronic · 10h ago
> isn't this the first such model?
The linked Github page has a comparison with other world models...
krystofee · 10h ago
I think it's only a matter of time until we have photorealistic playable computer games generated by these engines.
netsharc · 9h ago
Yeah, MS Flight Simulator with a world that's "inspired by" ours... The original 2020 version had issues with things like the Sydney Harbour Bridge (did it have the Opera House?), using AI to generate 3D models of these things based on pictures would be crazy (of course they'd generate once, on 1st request).
So if you're the first to approach the Opera House, it would ask the engine for 3D models of the area, and it would query its image database, see the fancy opera house, and generate its own interpretation.. if there's no data (e.g. a landscape in the middle of Africa), it'd use the satellite image plus typical fauna of the region..
And hopefully AI-powered NPCs to fight against/interact with.
Cthulhu_ · 9h ago
I believe there's games that have that already. My concern is that it's all going to be sameish slop. Read ten AI generated stories and you've read them all.
It could work, but they would have to both write unique prompts for each NPC (instead of "generate me 100 NPC personality prompts") and limit the possible interactions and behaviours.
But, emergent / generative behaviour would be interesting to a point. There's plenty of roguelikes / roguelites where this could work in, given their generative behaviours.
gadders · 7h ago
I guess for combat, you would want ones that could sensibly work together and adapt, possibly with different levels of aggression, stealth etc. Even as good as FEAR would be something.
indiantinker · 9h ago
Matrix
pbd · 8h ago
This is genuinely exciting.
bglazer · 8h ago
Please don’t post chatgpt output
SirHackalot · 11h ago
> Minimum: The minimum GPU memory required is 60GB for 540p.
Cool, I guess… If you have tens of thousands of $ to drop on a GPU for output that’s definitely not usable in any 3D project out-of-the-box.
It's more approachable than one might think, as you can currently find two of these for less than 1,000 USD.
esafak · 10h ago
How much performance penalty is there for doubling up? What about 4x?
kittoes · 10h ago
I just found out about these last week and haven't received the hardware yet, so I can't give you real numbers. That said, one can probably expect at least a 10-30% penalty when the cards need to communicate with one another. Other workloads that don't require constant communication between cards can actually expect a performance boost. Your mileage will vary.
HPsquared · 11h ago
I assume it can be split between multiple GPUs, like LLMs can. Or hire an H100 for like $3/hr.
y-curious · 11h ago
I mean, still awesome that it's OSS. Can probably just rent GPU time online for this
mingtianzhang · 12h ago
What's your opinion on modeling the world? Some people think the world is 3D, so we need to model the 3D world. Some people think that since human perception is 2D, we can just model the 2D view rather than the underlying 3D world, since we don't have enough 3D data to capture the world but we have many 2D views.
Fixed question: Thanks a lot for the feedback that human perception is not 2D. Let me rephrase the question: since all the visual data we see on computers can be represented as 2D images (indexed by time, angle, etc.), and we have many such 2D datasets, do we still need to explicitly model the underlying 3D world?
AIPedant · 12h ago
Human perception is not 2D, touch and proprioception[1] are three-dimensional senses.
And of course it really makes more sense to say human perception is 3+1-dimensional since we perceive the passage of time.
Two of them [eyes], giving us stereo vision. We are provided visual cues that encode depth. The ideal world model would at least have this. A world model for a video game on a monitor might be able to get away with no depth information, but a) normal engines do have this information and it would make sense to provide as much data to a general model as possible, and b) the models wouldn't work on AR/VR. Training on stereo captures seems like a win all around.
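As a toy illustration of how a stereo pair encodes recoverable depth: shift a random texture by a known disparity, recover that disparity with brute-force block matching, then convert it to depth via depth = focal × baseline / disparity. The camera numbers here are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
left = rng.random((32, 64))                 # random texture as the left view
true_disp = 5
right = np.roll(left, -true_disp, axis=1)   # right view: left shifted 5 px

def match_disparity(left, right, x, y, patch=4, max_disp=10):
    """Brute-force SAD block matching for a single pixel."""
    ref = left[y:y + patch, x:x + patch]
    errors = [np.abs(ref - right[y:y + patch, x - d:x - d + patch]).sum()
              for d in range(max_disp)]
    return int(np.argmin(errors))

est = match_disparity(left, right, x=20, y=10)
focal_px, baseline_m = 500.0, 0.1           # made-up camera intrinsics
depth_m = focal_px * baseline_m / est
print(est, depth_m)                         # recovers disparity 5 -> 10.0 m
```

Real stereo pipelines (e.g. OpenCV's StereoBM/StereoSGBM) do this densely with rectification and sub-pixel refinement, but the principle is the same: the disparity between the two views is the depth signal.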
WithinReason · 10h ago
> We are provided visual cues that encode depth. The ideal world model would at least have this.
None of these world models have explicit concepts of depth or 3D structure, and adding it would go against the principle of the Bitter Lesson. Even with 2 stereo captures there is no explicit 3D structure.
soulofmischief · 9h ago
Increasing the fidelity and richness of training data does not go against the bitter lesson.
The model can learn 3D representation on its own from stereo captures, but there is still richer, more connected data to learn from with stereo captures vs monocular captures. This is unarguable.
You're needlessly making things harder by forcing the model to also learn to estimate depth from monocular images, and robbing it of a channel for error-correction in the case of faulty real-world data.
WithinReason · 9h ago
Stereo images have no explicit 3D information and are just 2D sensor data. But even if you wanted to use stereo data, you would restrict yourself to stereo datasets and wouldn't be able to use 99.9% of video data out there to train on which wasn't captured in stereo, that's the part that's against the Bitter Lesson.
soulofmischief · 5h ago
You don't have to restrict yourself to that, you can create synthetic data or just train on both kinds of data.
I still don't understand what the bitter lesson has to do with this. First of all, it's only a piece of writing, not dogma, and second of all it concerns itself with algorithms and model structure itself, increasing the amount of data available to train on does not conflict with it.
reactordev · 11h ago
Incorrect. My sense of touch can be activated in 3 dimensions by placing my hand near a heat source. Which radiates in 3 dimensions.
Nevermark · 9h ago
You are still sensing heat across 2 dimensions of skin.
The 3rd dimension gets inferred from that data.
(Unless you have a supernatural sensory aura!)
AIPedant · 9h ago
The point is that knowing where your hand is in space relative to the rest of your body is a distinct sense which is directly three-dimensional. This information is not inferred, it is measured with receptors in your joints and ligaments.
Nevermark · 7h ago
No it is inferred.
You are inferring 3D positions based on many sensory signals combined.
From mechanoreceptors and proprioceptors located in our skin, joints, and muscles.
We don’t have 3-element position sensors, nor do we have 3-d sensor volumes, in terms of how information is transferred to the brain. Which is primarily in 1D (audio) or 2D (sensory surface) layouts.
From that we learn a sense of how our body is arranged very early in life.
EDIT: I was wrong about one thing. Muscle nerve endings are distributed throughout the muscle volume. So 3D positioning is not sensed, but we do have sensor locations distributed in rough and malleable 3D topologies.
Those don’t give us any direct 3D positioning. In fact, we are notoriously bad at knowing which individual muscles we are using. Much less which feelings correspond to which 3D coordinate within each specific muscle. But we do learn to identify anatomical locations and then infer positioning from all that information.
reactordev · 2h ago
Your analysis is incorrect again. Having sensors spread out across a volume is, by definition, measuring 3D space. It’s a volume. Not a surface. Humans are actually really good at knowing which muscles we are using. It’s called body sculpting. Lifting. Body building. And all of that. So nice try.
AIPedant · 11h ago
It is simply wrong to describe touch and proprioception receptors as 2D.
a) In a technical sense the actual receptors are 1D, not 2D. Perhaps some of them are two dimensional, but generally mechanical touch is about pressure or tension in a single direction or axis.
b) The rods and cones in your eyes are also 1D receptors but they combine to give a direct 2D image, and then higher-level processing infers depth. But touch and proprioception combine to give a direct 3D image.
Maybe you mean that the surface of the skin is two dimensional and so is touch? But the brain does not separate touch on the hand from its knowledge of where the hand is in space. Intentionally confusing this system is the basis of the "rubber hand illusion" https://en.wikipedia.org/wiki/Body_transfer_illusion
Nevermark · 9h ago
I think you mean 0D for individual receptors.
Point (I.e. single point/element) receptors, that encode a single magnitude of perception, each.
The cochlea could be thought of as 1D: magnitude (audio volume) measured across N frequencies. So a 1D vector.
Vision and (locally) touch/pressure/heat maps would be 2D, together.
AIPedant · 9h ago
No, the sensors measure a continuum of force or displacement along a line or rotational axis, 1D is correct.
Nevermark · 9h ago
That would be a different use of dimension.
The measurement of any one of those is a 0-dimensional tensor, a single number.
But then you are right: what is being measured by that one sensor is 1-dimensional.
But all single sensors measure across a 1 dimensional variable. Whether it’s linear pressure, rotation, light intensity, audio volume at 1 frequency, etc.
2OEH8eoCRo0 · 11h ago
And the brain does sensor fusion to build a 3d model that we perceive. We don't perceive in 2d
There are other sensors as well. Is the inner ear a 2d sensor?
AIPedant · 10h ago
Inner ear is a great example! I mentioned in another comment that if you want to be reductive the sensors in the inner ear - the hairs themselves - are one dimensional, but the overall sense is directly three dimensional. (In a way it's six dimensional since it includes direct information about angular momentum, but I don't think it actually has six independent degrees of freedom. E.g. it might be hard to tell the difference between spinning right-side-up and upside-down with only the inner ear, you'll need additional sense information.)
echelon · 11h ago
The GPCRs [1] that do most of our sense signalling are each individually complicated machines.
Many of our signals are "on" and are instead suppressed by detection. Ligand binding, suppression, the signalling cascade, all sorts of encoding, ...
In any case, when all of our senses are integrated, we have rich n-dimensional input.
It's simple: Those who think that human perception is 2D are wrong.
rubzah · 11h ago
It's 2D if you only have one eye.
__alexs · 11h ago
It's not even 2D with one eye. You can estimate distance purely from your eye's focal point.
yeoyeo42 · 11h ago
with one eye you have temporal parallax, depth cues (ordering of objects in your vision), lighting cues, relative size of objects (things further away are smaller) together with your learned comparison size etc.
you're telling me my depth perception is not creating a 3D model of the world in my brain?
No comments yet
KaiserPro · 11h ago
So a lot of text to "world" engines have been basically 2d, in that they create a static background and add sprites in to create the illusion of 3D.
I'm not entirely convinced that this isn't one of those, or if it's not, it sure as shit was trained on one.
imtringued · 11h ago
2D models don't have object persistence, because they store information in the viewport. Back when OpenAI released their Sora teasers, they had some scenes where they did a 360° rotation and it produced a completely different backdrop.
Russia is currently struggling to make inroads on invading its relatively small neighbor, so I really doubt it would be able to make a bunch of nuclear powers who have a nuclear alliance its "vassal"
I understand that Russia's not fighting just Ukraine but rather Ukraine with massive US and EU assistance but my point still stands.
Ukraine doesn't have that "benefit".
Also the EU pays for countries like Turkey and Libya to prevent refugee ships from coming to their continent. If that means sinking those ships with people on them, well...
OK but Ukraine isn't trying to invade a small country next door and claim a global superpower status.
It's expected they would struggle against a much larger neighbor invading them.
Russia is struggling where nobody expected it to struggle.
But say you were right, and you have to choose between privacy and relevance: if you choose privacy, then once you are entirely economically dependent on Russia (Europe is still paying more in energy money to Russia than in aid to Ukraine) and China — when Europe is a vassal — it won't be able to make its own laws anymore.
In general, it is hard to compare the US and the EU; we got a head start while the rest of the world was rebuilding itself from WW2. That started up some feedback loops. We can mess up and siphon too much off a loop, destroying it, and still be ahead. They can be setting up loops without benefitting from them yet.
Will take them a while to get out from under the US umbrella. But acknowledging the problem is the first step.
Spending on defense is not the same as getting defense. Norway is spending more on everything all the time and getting worse outcomes all the time. We spend more on police than ever, even per capita, and crime is up; we spend more on the military than ever, and our actual metrics are down. I think with most of Europe the defense spending is the same. I hope I'm wrong, but if you up regulation then you have to spend more to get the same results, and Europe has runaway regulation in addition to people who try to hijack institutions for other purposes.
Today, we have fully automated the methods from this manual in the form of LLM Chatbots, which we have for some reason deployed against ourselves.
[1] https://en.wikipedia.org/wiki/Simple_Sabotage_Field_Manual
Overconfidence bias is real.
Knowing your circle of competence is a gift.
Personally I'm not too worried anyone is going to become a global superpower from generative AI slop.
I got the same impression seeing Trump meet Putin. The US is a vassal state of Russia.
Start on the right, and click through the options. At the end you'll get a sort of assessment of what you need to do.
I live in Europe; I don't want Europe to become a vassal of China/Russia, but if something does not change drastically, it will. Russia is Europe's Carthage; Russia must fall. There is no future with a Russia as it is today and a Europe as it is today in it, not because of Europe, but because of Russia. If Europe does not eliminate Russia, Russia will eliminate Europe. I have no doubts about this.
But as things stand, there just seems to be no way we can practically counter Russia at all. If Europe had determination, it would have sent troops into Ukraine and created a no-fly zone — it should do that, but here we are.
When it's getting to a point where far-right leaders appear to care more about the prosperity of Russia than their own nation or their allies... yeah it's probably misinformation. At best. At worst, it's targeted propaganda - lots of bots online!
Ukraine, with all the backing of Europe, is making no progress. If this were true, Russia would be expelled from Ukraine tomorrow, as it should be. Ukraine is an embarrassment for Europe; it strongly suggests that Europe is basically meaningless on the global stage.
And the most embarrassing of all is, Europe is still buying gas from Russia.
"suggests that Europe is basically meaningless on the global stage" ... it will take many years of deep military investment to provide a proper counter to Russian aggression. As of right now, Europe has been shown to be in a very weak and exposed position. This was obvious years ago, and should not be a surprise today. This is true of most of the NATO member states.
That said, simply because Ukraine is unable to expel Russia does not mean that it is a grand threat to Europe proper. Perhaps some eastern countries face some limited conflict, but I'm not convinced by this "domino theory" that Russia would engage in a WWII style invasion of Poland, Finland, etc.
[1] https://en.wikipedia.org/wiki/Restrictions_on_geographic_dat...
[2] https://cset.georgetown.edu/publication/south-korea-ai-law-2...
The UK has their chat thing where if you provide chat (even with bots!) you have to basically be a megacorp to afford the guardrails they think "the kids" need. It's not clear if open source models fall into that, but who's gonna read 300+ pages of insanity to make sure?
A more plausible explanation is the requirements and obligations of those markets are ambiguous or open-ended in such a way that they cannot be meaningfully limited by a license, per the lawyers they retain to create things like licenses. Lawyers don’t like vague and uncertain risk, so they advised the company to reduce their risk exposure by opting out of those markets.
Since the law is very well developed in the EU, I think the people who wrote the license were just lazy.
So, they reduced their liability by prohibiting usage of the model to show those jurisdictions' decision makers they were complying. I considered doing the same thing for the EU. Although, I also considered one might partner with an EU company if they are willing to make models legal in their jurisdiction. Just as a gift to Europeans mainly, but maybe also a profit-sharing agreement.
Is this the new 'please like and subscribe/feed us your info' method?
I will have to try this. I have a super edge use case: an incomplete bathymetric depth map (the lidar boat could not access some areas); coincidentally, the most interesting areas are not in the data. My second piece of data is flyover video (of areas of interest where the water is also clear enough to see the bottom). With enough video I can mostly remove the water-borne artifacts (ripples, reflections etc.) and enhance the river-bottom imagery enough to attempt photogrammetric reconstruction. The bottleneck is that it takes multiple angles to do that, and visibility through water is highly dependent on the angle of sunlight vs the angle of the camera.
Instead of doing multiple flyovers at different times of day to try and get enough angles for a mesh reconstruction, maybe this can do it relatively well from one angle!
I can guesstimate the shape of the bottom by the behaviour of the flow, and hand-model the missing parts of the mesh. I thought outsourcing that to a generative model would be a nice shortcut -- and who knows, likely it'll synthesize it more true-to-nature than I would.
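For the ripple-removal step, the usual trick over registered frames is a per-pixel temporal median: transient highlights are outliers in time and drop out. A minimal numpy sketch, assuming the frames are already aligned to each other:

```python
import numpy as np

rng = np.random.default_rng(0)
bottom = rng.random((48, 48))                 # "true" river bottom
frames = np.repeat(bottom[None], 15, axis=0).copy()

# Corrupt each pixel with a specular "ripple" highlight in exactly one
# of the 15 frames (a stand-in for sun glints that move between frames).
flat = frames.reshape(15, -1)
for i in range(15):
    flat[i, i::15] = 1.0

recovered = np.median(frames, axis=0)         # per-pixel temporal median
print(np.array_equal(recovered, bottom))      # True: the glints vanish
```

The median survives as long as any given pixel is clean in a majority of frames, which is why more flyover footage directly buys you a cleaner bottom image.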
But someone could possibly extend the work so it was a few photos rather than one or many. The way you ask the question makes it sound like you think it was a trivial detail they just forgot about.
https://blog.emojipedia.org/why-does-the-chart-increasing-em...
It's interesting to me that this breaks convention with the visual spectrum.
i.e.:
red ~700nm
green ~550nm
yellow ~580nm
Weird that they aren't in order.
Well, 30 years later, you can generate a video from a photograph.
You can’t make up information, but you can use knowledge of the subject to accurately fill things in, and other assumptions to plausibly fill things in.
From a photo of someone's face and shoulders, a child can add "information" by extending it to a stick-figure body with crayons. However it's not information from the original event that was recorded.
Then there's the difference between strictly capable versus permissible or wise. A researcher "can't" make up data, a journalist "can't" invent quotes, a US President "can't" declare himself dictator, etc.
I can literally walk in to scenes I shot on my Nikon D70 in 2007 and they, and the people, look real.
Although, I can think of some old family photos where half the people in them are dead by now (nothing catastrophic, just time). I wonder how it would feel to walk around in that sort of photo.
https://youtu.be/VslvofY16I0&t=886
Though, a problem is that if the generated video itself has inconsistent information, e.g., the object changes color between frames, then your point cloud would just be "consistently wrong". In practice this will lead to some blurry artifacts because you blend different inconsistent colors together. So when you turn around you will still see the same thing, but that thing is uglier and blurrier because it blends between inconsistent coloring.
It will also be difficult to put a virtual object into the generated scene, because you don't have the lighting information and the virtual object can't blend its color with the environment well.
Overall cool idea but obviously more interesting problems to be solved!
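That inconsistent-blending blur is easy to see in miniature: averaging divergent color observations of one back-projected point gives a muddy in-between, and the per-point spread doubles as a flag for unreliable regions. A toy sketch:

```python
import numpy as np

# Three frames' color observations of the same back-projected 3D point,
# where the generator changed its mind about the color between frames.
observations = np.array([[1.0, 0.0, 0.0],    # red
                         [0.0, 0.0, 1.0],    # blue
                         [1.0, 0.0, 0.0]])   # red again

blended = observations.mean(axis=0)             # what naive blending stores
inconsistency = observations.std(axis=0).sum()  # flag for "blurry" points

print(blended)               # ~[0.667, 0, 0.333]: neither red nor blue, mud
print(inconsistency > 0.5)   # True: this point is unreliable
```

Thresholding that per-point spread is one cheap way to discard or down-weight the regions where the generated video contradicts itself, instead of baking the contradiction into the point cloud.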
Also, there is no training data, which would be the "preferred form" of modification.
From their license: [1]
As well as an acceptable use policy: [1] https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager/blob...Or, those countries are trying to regulate AI.
Hard to feel bad for EU/UK. They tried their best to remain relevant, but lost in the end (talent, economy, civil rights).
We didn't regulate adtech and now we're stuck with pervasive tracking that's hurting society and consumer privacy. Better to be more cautious with AI too so we can prevent negative societal effects rather than trying to roll them back when billions of euros are already at play, and thus the corporate lobby and interests in keeping things as they are.
We didn't regulate social media algorithms which started optimising for hate (as it's the best means of "engagement") and it led to polarisation in society, the worst effects of which can be seen in the US itself. The country is tearing itself apart. And we see the effects in Europe too. Again, something we should have nipped in the bud.
And the problem isn't mainly the tech. It's the perverse business models behind it, which don't care about societal disruption. That's pretty hard to predict, hence the caution.
> Also, there is no training data, which would be the "preferred form" of modification.
This is not open source because the license is not open source. The second line is not correct, though: the "preferred form" of modification is the weights, not the data. Data is how you modify those weights.
Isn't fine-tuning a heck of a lot cheaper?
Just training on new data moves a model away from its previous behavior, to an unpredictable degree.
You can’t even reliably test for the change without the original data.
I think at this point, open source is practically shorthand for weights available
> 8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
"Do as I say, not as I do."
> 15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
This, and other clauses, effectively prohibit the use of this system within any jurisdiction.
What a ridiculous policy.
We're about to see next gen games requiring these as minimum system requirements...
Ideally based on FOSS models.
If you actually want something consistent you should really generate images one by one and provide extensive description of what you expect to see on each frame
And if you want to make something like animation, it's only really possible if you generate thousands of "garbage" images and then edit together what fits.
I wonder if you could loop back the last frame of each video to extend the generated world further. Creating a kind of AI fever dream
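That loop is easy to sketch. Assuming some clip-generation call (`generate_clip` here is a hypothetical stand-in for whatever the model's inference API actually is), you condition each new clip on the last frame of the previous one:

```python
# Hypothetical sketch of chaining clips into a longer "world":
# feed the last frame of each generated clip back in as the
# conditioning image for the next clip. `generate_clip` is a
# stand-in, not the real Voyager API.

def extend_world(first_image, generate_clip, n_clips=4):
    """Autoregressively chain short clips into one long sequence."""
    clips = []
    cond = first_image
    for _ in range(n_clips):
        clip = generate_clip(cond)  # returns a list of frames
        clips.append(clip)
        cond = clip[-1]             # loop the last frame back in
    return clips
```

The "fever dream" effect falls out of this naturally: any error in a clip becomes the ground truth for the next one, so drift compounds with every iteration.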
I also wouldn't be surprised if their Street View cars / people record video instead of stills these days. Assuming they started capturing stuff in 2007 (and it was probably a lot earlier), storage technology has improved at least tenfold since (probably more), and video processing too.
These clips are very short and don’t rotate the camera more than about 45 degrees. Genie3 also cheats and only rotates the camera 90 degrees.
It’s always important to pay attention to what models don’t do. And in this case it’s turn the bloody camera around.
I refuse to accept any model to be a “world model” if it can’t pass a simple “spin in place” test.
Bah hum bug.
Lidar is direct measurement
Doesn't really matter if an imgTo3d script gets a face's depth map inverted, kinda problematic if your car doesn't think there's something where there is.
Good grief orange site sometimes I swear.
The linked Github page has a comparison with other world models...
So if you're the first to approach the Opera House, it would ask the engine for 3D models of the area, and it would query its image database, see the fancy opera house, and generate its own interpretation.. if there's no data (e.g. a landscape in the middle of Africa), it'd use the satellite image plus typical fauna of the region..
It could work, but they would have to both write unique prompts for each NPC (instead of "generate me 100 NPC personality prompts") and limit the possible interactions and behaviours.
But, emergent / generative behaviour would be interesting to a point. There's plenty of roguelikes / roguelites where this could work in, given their generative behaviours.
Cool, I guess… If you have tens of thousands of $ to drop on a GPU for output that’s definitely not usable in any 3D project out-of-the-box.
It's more approachable than one might think: you can currently find two of these for less than 1,000 USD.
Fixed question: Thanks a lot for the feedback that human perception is not 2D. Let me rephrase the question: since all the visual data we see on computers can be represented as 2D images (indexed by time, angle, etc.), and we have many such 2D datasets, do we still need to explicitly model the underlying 3D world?
And of course it really makes more sense to say human perception is 3+1-dimensional since we perceive the passage of time.
[1] https://en.wikipedia.org/wiki/Proprioception
None of these world models have explicit concepts of depth or 3D structure, and adding it would go against the principle of the Bitter Lesson. Even with 2 stereo captures there is no explicit 3D structure.
The model can learn 3D representation on its own from stereo captures, but there is still richer, more connected data to learn from with stereo captures vs monocular captures. This is unarguable.
You're needlessly making things harder by forcing the model to also learn to estimate depth from monocular images, and robbing it of a channel for error-correction in the case of faulty real-world data.
I still don't understand what the bitter lesson has to do with this. First of all, it's only a piece of writing, not dogma, and second of all it concerns itself with algorithms and model structure itself, increasing the amount of data available to train on does not conflict with it.
The 3rd dimension gets inferred from that data.
(Unless you have a supernatural sensory aura!)
You are inferring 3D positions based on many sensory signals combined.
From mechanoreceptors and proprioceptors located in our skin, joints, and muscles.
We don’t have 3-element position sensors, nor do we have 3-d sensor volumes, in terms of how information is transferred to the brain, which is primarily in 1D (audio) or 2D (sensory surface) layouts.
From that we learn a sense of how our body is arranged very early in life.
EDIT: I was wrong about one thing. Muscle nerve endings are distributed throughout the muscle volume. So 3D positioning is not sensed, but we do have sensor locations distributed in rough and malleable 3D topologies.
Those don’t give us any direct 3D positioning. In fact, we are notoriously bad at knowing which individual muscles we are using, much less which feelings correspond to which 3D coordinate within each specific muscle, generally. But we do learn to identify anatomical locations and then infer positioning from all that information.
a) In a technical sense the actual receptors are 1D, not 2D. Perhaps some of them are two dimensional, but generally mechanical touch is about pressure or tension in a single direction or axis.
b) The rods and cones in your eyes are also 1D receptors but they combine to give a direct 2D image, and then higher-level processing infers depth. But touch and proprioception combine to give a direct 3D image.
Maybe you mean that the surface of the skin is two dimensional and so is touch? But the brain does not separate touch on the hand from its knowledge of where the hand is in space. Intentionally confusing this system is the basis of the "rubber hand illusion" https://en.wikipedia.org/wiki/Body_transfer_illusion
Point (I.e. single point/element) receptors, that encode a single magnitude of perception, each.
The cochlea could be thought of as 1D: magnitude (audio volume) measured across 1D = N frequencies. So a 1D vector.
Vision and (locally) touch/pressure/heat maps would be 2D, together.
The measurement of any one of those is a 0 dimensional tensor, a single number.
But then you are right: what is being measured by that one sensor is 1-dimensional.
But all single sensors measure across a 1 dimensional variable. Whether it’s linear pressure, rotation, light intensity, audio volume at 1 frequency, etc.
There are other sensors as well. Is the inner ear a 2d sensor?
Many of our signals are "on" and are instead suppressed by detection. Ligand binding, suppression, the signalling cascade, all sorts of encoding, ...
In any case, when all of our senses are integrated, we have rich n-dimensional input.
- stereo vision for depth
- monocular vision optics cues (shading, parallax, etc.)
- proprioception
- vestibular sensing
- binaural hearing
- time
I would not say that we sense in three dimensions. It's much more.
[1] https://en.m.wikipedia.org/wiki/G_protein-coupled_receptor
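The stereo-vision item in that list reduces to simple triangulation: with a rectified camera (or eye) pair, depth is focal length times baseline divided by disparity. A toy sketch, with made-up example values for focal length and baseline (not parameters from the model under discussion):

```python
# Toy triangulation for the stereo-depth cue: with a rectified
# stereo pair, depth = f * B / disparity. The focal length (pixels)
# and baseline (meters, roughly human eye spacing) are illustrative.

def stereo_depth(disparity_px, f_px=800.0, baseline_m=0.065):
    """Depth in meters; smaller disparity means farther away."""
    return f_px * baseline_m / disparity_px

print(stereo_depth(52.0))  # a nearby point: large disparity
print(stereo_depth(5.2))   # ten times farther: tenth the disparity
```

This is also why stereo depth degrades with distance: disparity shrinks toward zero, so a fixed pixel-measurement error translates into a huge depth error, and the monocular cues in the list take over.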
https://theoatmeal.com/comics/mantis_shrimp
I'm not entirely convinced that this isn't one of those, or if it's not, it sure as shit was trained on one.