Rendering a game in real time with AI

101 points by jschomay | 92 comments | 8/28/2025, 12:10:14 PM | blog.jeffschomay.com

Comments (92)

gavmor · 4h ago
This is awesome! I can definitely see it delivering value, especially with procedurally generated terrain that has 100s or 1000s of different terrain types and combinations—particularly if user-defined properties get involved, i.e. what artists can't predict or prepare for, producing materials via something like https://infinite-craft.gg/
panki27 · 9h ago
I'm pretty sure the generation could easily run locally on a low-to-mid tier graphics card.

While it might take a bit longer to generate, you're still saving network and authentication latency.

stego-tech · 9h ago
…but then you just have a graphics card, built to render graphics, that you could tap instead through traditional tooling that’s already widely known and which produces consistent output via local assets.

While the results of the experiment here are interesting from an academic standpoint, it’s the same issue as remote game streaming: the amount of time you have to process input from the player, render visuals and sound, and transmit it back to the player precludes remote rendering for all but the most latency-insensitive games and framerates. It’s publishers and IP owners trying to solve the problem of ownership (in that they don’t want anyone to own anything, ever) rather than tackling any actually important issues (such as inefficient rendering pipelines, improving asset compression and delivery methods, improving the sandboxing of game code, etc).

Trying to make AI render real-time visuals is the wrongest use of the technology.

tliltocatl · 1h ago
Nah, vibe coding is the wrongest use of the technology; this is the way to go. Why? Because good rendering isn't necessarily the most physically accurate one. You might actually want non-realistic rendering, and (depending on the specific style) it might be hard or impossible to get the right look with a traditional pipeline. E.g. take a "cartoonish" look - toon shading is, frankly, total crap, because artists rely on explicitly non-physical geometry to provide visual cues. This is definitely the future: render on a normal pipeline (maybe with no lighting model at all), then put it through a style transfer network.
johnisgood · 8h ago
I do not see how this trades off against network and authentication latency, especially in the case of a single-player game, in which neither is necessary.
echelon · 9h ago
This was built in September 2022, and it's still pretty mind-blowing today:

https://madebyoll.in/posts/game_emulation_via_dnn/demo/

https://madebyoll.in/posts/game_emulation_via_dnn/

Hook world state up to a server and you have multiplayer.

2025 update:

https://madebyoll.in/posts/world_emulation_via_dnn/

https://madebyoll.in/posts/world_emulation_via_dnn/demo/

jschomay · 6h ago
That's really cool, thanks for sharing!
steveruizok · 9h ago
We did a similar thing at tldraw with Draw Fast (https://drawfast.tldraw.com/) and it was very fun. It inspired a few knock-offs too. We had to shut it down because it was getting popular on Russian Reddit. A related project, Lens (https://lens.tldraw.com), also used the same technique, but in a collaborative drawing app.

At the peak, when we were streaming back video from Fal and getting <100ms of lag, the setup produced one of the most original creative experiences I’d ever had. I wish these sorts of ultra-fast image generators received more attention and research because they do open up some crazy UX.

jschomay · 6h ago
OP here. I remember both the Draw Fast and Lens demos! I'm pretty sure those were in the back of my subconscious, inspiring me to explore my take on real-time rendering. Thanks for sharing your similar experience. I agree, this was a lot of fun to work on, and like one of the other commenters pointed out, experiencing it viscerally is a whole new kind of feeling, even with the consistency issues. I'd also like to see more experiments on what new kinds of UX could be possible with this tech.
hiatus · 9h ago
Is there any chance you'd open up the source for those projects so others can play with them?
Topfi · 8h ago
They've already shared it under a non-commercial use license: https://github.com/tldraw/draw-fast

The tldraw team, from what I have seen, is really open about sharing their experiments under their own license. Not strictly FOSS, but I feel their licensing approach is fair considering how they fund development; I know it's a contentious topic for some, though.

xienze · 7h ago
According to that link, you're running the frontend locally but all the work is happening on fal.ai. So the interesting part is not open source.
echelon · 9h ago
LCM is what Krea used to gain massive momentum and raise their first $30M.

The tactile reaction to playing with this tech is that it feels utterly sci-fi. It's so freaking cool. Watching videos does not do it justice.

Not enough companies or teams are trying this stuff. This is really cool tech, and I doubt we've seen the peak of what real time rendering can do.

The rendering artifacts and quality make the utility for production use cases a little questionable, but it can certainly do "art therapy" and dreaming / ideation.

sovietmudkipz · 6h ago
Sci-fi readers who’ve read Ender’s Game will recognize this style of software as similar in concept to the Mind Game Ender Wiggin plays. In the book, the Mind Game renders a game world based on the subject’s mind (conscious, subconscious) in a way mechanically similar to how dreams work for us IRL.

I’m excited for AI rendered games.

theknarf · 5h ago
This is like playing a boardgame while exclusively looking at the board through a snapchat filter. Who wants this?
Martin_Silenus · 8h ago
This kind of lazy lamer behaviour is so unlike Vulkan and hackers...

Wait a minute. Where am I?

bob1029 · 8h ago
The power consumption of modern gaming is getting a bit out of hand. This AI stuff is taking it to the next level.

Ray tracing and other forms of real time global illumination are extremely resource intensive approaches to lighting a scene. Every client machine has to figure out how to light everything every single frame. Contrast this with baked global illumination where the cost is incurred exactly once and is amortized across potentially millions of machines.

We need more things like baked GI in gaming. This class of techniques makes development iterations slower and more deliberate, but it also produces a far more efficient and refined product experience. I'd be very interested in quantifying the carbon impact of realtime vs baked lighting in gaming. It is likely a non-trivial figure at scale. Also bear in mind that baked GI is why games like the Batman series still look so good in 2025, even when running on period hardware. You cannot replace your art team by consuming more electricity.

nh23423fefe · 2h ago
It doesn't seem like moralizing about resources matters for velocity or capability.

> You cannot replace your art team by consuming more electricity

This isn't true.

sjsdaiuasgdia · 10h ago
The "real-time" version looks awful with constantly shifting colors, inconsistently sized objects, and changing interpretations of the underlying data, resulting in what I would consider an unplayable game vs the original ASCII rendering.

The "better" version renders at a whopping 4 seconds per frame (not frames per second) and still doesn't consistently represent the underlying data, with shifting interpretations of what each color / region represents.

harph · 9h ago
It seems it's because OP is generating the whole screen every frame / every move. Of course that will give inconsistent results.

I wonder if this approach would work better:

1. generate the whole screen once

2. on update, create a mask for all changed elements of the underlying data

3. do an inpainting pass with this mask, with regional prompting to specify which parts have changed how

4. when moving the camera, do outpainting

This might not be possible with cloud-based solutions, but I can see it being possible locally (a rough sketch of steps 1-3 is below).
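A minimal sketch of steps 1-3 under those assumptions, using Hugging Face diffusers with a Stable Diffusion inpainting checkpoint; the model id, tile size, and update_frame helper are my own illustrations, not anything from the post:

    # Sketch: re-render only the tiles of an already-generated frame whose
    # underlying game data changed. Everything here is illustrative.
    import torch
    from PIL import Image, ImageDraw
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    TILE = 64  # pixels per game tile (arbitrary)

    def update_frame(frame: Image.Image, changed_tiles, prompt: str) -> Image.Image:
        """Inpaint only the regions whose underlying data changed (steps 2-3)."""
        mask = Image.new("L", frame.size, 0)   # black = keep as-is
        draw = ImageDraw.Draw(mask)
        for tx, ty in changed_tiles:           # white = regenerate
            draw.rectangle(
                [tx * TILE, ty * TILE, (tx + 1) * TILE - 1, (ty + 1) * TILE - 1],
                fill=255,
            )
        return pipe(prompt=prompt, image=frame, mask_image=mask).images[0]

    # Step 1: generate the whole screen once, then on each move something like:
    # frame = update_frame(frame, [(3, 5), (3, 6)], "goblin moves one tile north, pixel art map")

Outpainting for camera movement (step 4) would work the same way: shift the frame and mask the newly exposed border.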

actuallyalys · 10h ago
Yeah, as interesting as the concept is, the lack of frame to frame consistency is a real problem. It also seems like the computing requirements would be immense—the article mentions burning through $10 in seconds.
elpocko · 9h ago
You can do this at home on your own computer with a 40x0 consumer GPU at 1-2 fps. You have to choose a suitable diffusion model, there are models that provide sub-second generation of 1024x1024 images. The computing requirements and electricity costs are the same as when running a modern game.
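For reference, a minimal sketch of the kind of local, few-step generation being described, using diffusers with SDXL-Turbo; the model choice and prompt are my own example, not necessarily what elpocko has in mind:

    # Sketch: single-step local text-to-image generation on a consumer GPU.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    # Distilled "turbo" models need only one denoising step and no CFG guidance,
    # which is what makes ~1-2 fps plausible on a 40x0-class card.
    image = pipe(
        prompt="top-down dungeon map, stone floor, torchlight, game art",
        num_inference_steps=1,
        guidance_scale=0.0,
    ).images[0]
    image.save("frame.png")
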
jschomay · 5h ago
OP here. Thanks for the feedback. I agree that frame-to-frame consistency is quite bad currently. I did address that in the post, hinting at some of the techniques others have mentioned here, like in/out-painting and masking previous frames. For me, the exciting parts of this experiment were finding the opportunities and limits of realtime generation, and exploring ways of grounding generated content in a solid yet player-controlled world layer.
ozmodiar · 9h ago
I like the idea behind https://oasis-ai.org/ where you can actually try to take advantage of the 'dream logic' inconsistency of each frame being procedurally generated based on the last one. For example, instead of building a house, build the corner of a house, look at that, then look back up and check if it hallucinated the rest of your ephemeral house for you. Of course that uses AI as the entire gameplay loop and not just a graphics filter. It's also... not great, but an interesting concept that I could see producing a fun dream logic game in the future.
johnfn · 9h ago
> The "real-time" version looks awful, etc

Dang man it's just a guy showing off a neat thing he did for fun. This reaction seems excessive.

faeyanpiraat · 10h ago
Yeah, but I find this fascinating regardless.

This is heading in the direction of a kind of simulation where stuff is not determined by code but by a kind of "real" physics.

roxolotl · 10h ago
Why does using a language/vision model feel more “real” to you than using equations which directly describe our understandings of physics?
corysama · 8h ago
Because our ability to simulate/render a realistic world in real time using direct equations is still very limited. We’re accustomed to these limitations and often feel “graphics are good enough”. But, we’ll always be decades behind “ILM in real time”.

The AI route has a good chance of moving us from decades behind ILM to merely “years behind ILM”.

curl-up · 10h ago
Not OP, but I have long thought of this type of approach (underlying "hard coded" object tracking + fuzzy AI rendering) to be the next step, so I'll respond.

The problem with using equations is that they seem to have plateaued. Hardware requirements for games today keep growing, and yet every character still has that awful "plastic skin", among all the other issues, and for a lot of people (me included) this creates heavy uncanny-valley effects that make modern games unplayable.

On the other hand, images created by image models today look fully realistic. If we assume (and I fully agree that this is a strong and optimistic assumption) that it will soon be possible to run such models in real time, and that techniques for object permanence will improve (as they keep improving at an incredible pace right now), then this might finally bring us to the next level of realism.

Even if realism is not what you're aiming for, I think it's easy to imagine how this might change the game.

jsheard · 9h ago
You're comparing apples to oranges, holding up today's practical real-time rendering techniques against a hypothetical future neural method that runs many orders of magnitude faster than anything available today, and solves the issues of temporal stability, directability, and overall robustness. If we grant "equation based" methods the same liberty then we should be looking ahead to real-time pathtracing research, which is much closer to practicality than these pure ML experiments.

That's not to say ML doesn't have a place in the pipeline - pathtracers can pair very well with ML-driven heuristics for things like denoising, but in that case the underlying signal is still grounded in physics and the ML part is just papering over the gaps.

curl-up · 9h ago
The question was "why does it feel more real", and I answered that - because the best AI generated images today feel more real than the best 3D renders, even when they take all the compute in the world to finish. So I can imagine that trend going forward into real-time rendering as well.

I did not claim that AI-based rendering will overcome traditional methods, and have even explicitly said that this is a heavy assumption, but explained why I see it as exciting.

jsheard · 8h ago
I think we'll have to agree to disagree about well done 3D renders not feeling real. Movie studios still regularly underplay how much CGI they use for marketing purposes, and get away with it, because the CGI looks so utterly real that nobody even notices it until much later when the VFX vendors are allowed to give a peek behind the curtain.

e.g. Top Gun: Maverick's much-lauded "practical" jet shots, which were filmed on real planes, but then the pilots were isolated and composited into 100% CGI planes, with the backdrops also being CGI in many cases, and huge swathes of viewers and press bought the marketing line that what they saw was all practical.

sjsdaiuasgdia · 9h ago
I find it odd that you're that bothered by uncanny valley effects from game rendering but apparently not by the same in image model outputs. They get little things wrong all the time and it puts me off the image almost instantly.
cbm-vic-20 · 9h ago
...where "ASCII" means an image made up of a grid of elements from a limited set of glyphs.
tantalor · 9h ago
And those glyphs are not ASCII

This is ASCII: https://commons.wikimedia.org/wiki/File:ASCII-Table-wide.svg

mason_mpls · 9h ago
If we’re talking about Dwarf Fortress, it uses an old IBM charset; I'm assuming this is some branch off that.
babush · 3h ago
Beautiful
VagabundoP · 8h ago
How far are we from Ender's Game?
mason_mpls · 9h ago
Now Dwarf Fortress can eat your CPU, memory, and GPU. Exciting news.
devinprater · 8h ago
Cool, if the game is promptable, one may even be able to make the game accessible.
g105b · 10h ago
I've been trying to achieve the opposite of this project: render scenes in ASCII/ANSI in the style of old BBS terminal games. I've had little success so far. All the AI models I've tried only understand the concept of "pixel art" and not ASCII/ANSI graphics such as what can be seen on https://www.bbsing.com/ , https://16colo.rs , or on Reddit's r/ANSIart/ .

If anyone has any tips for how I could achieve this, I would love to hear your ideas.

ticulatedspline · 3m ago
Like this: https://asciiart.club/ ? You don't really need AI for that.

If you have to use AI, diffusion models are a poor choice for this target. It might be easier to train an LLM to output actual text art.
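For the non-AI route, the classic trick is just mapping pixel brightness (plus ANSI colors, if you want them) onto a glyph ramp. A minimal sketch with Pillow; the ramp and sizing are arbitrary choices of mine, not anything from the sites linked above:

    # Sketch: brightness-to-glyph conversion of an existing image, no AI involved.
    from PIL import Image

    RAMP = " .:-=+*#%@"  # darkest to brightest (arbitrary 10-step ramp)

    def to_ascii(path: str, cols: int = 80) -> str:
        img = Image.open(path).convert("L")  # grayscale
        # Terminal glyphs are roughly twice as tall as they are wide,
        # so halve the row count to keep the aspect ratio.
        rows = max(1, int(cols * img.height / img.width * 0.5))
        px = img.resize((cols, rows)).load()
        lines = []
        for y in range(rows):
            lines.append(
                "".join(RAMP[px[x, y] * (len(RAMP) - 1) // 255] for x in range(cols))
            )
        return "\n".join(lines)

    print(to_ascii("scene.png"))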

elpocko · 10h ago
Do you mean you want to use AI to generate new scenes in ANSI-art style, or do you mean you want to use AI to render pre-existing scenes as ANSI art?


myflash13 · 9h ago
This is a pre-cursor to a dystopian future where reality will be a game generated in realtime at 60 FPS and streamed to your brain over Neuralink.
lm28469 · 9h ago
Some people on HN will surely cheer for it, it's the peak of efficiency, you don't even have to move anymore!
jebarker · 9h ago
In some ways that’s a lot like how consciousness works isn’t it?
mason_mpls · 9h ago
I think that’ll be one of the few good parts of it imho
coolKid721 · 9h ago
I do not get the point of this at all. Why not just generate game assets and run them in an engine? With this format there would be no guarantee that the thing you saw before will look the same (and that is not a fixable problem).

Figuring out and improving AI approaches for generating consistent, decent-quality game assets is something that will actually be useful. This, I have no idea what the point is beyond a tech demo (and for some reason all the "AI game" people take this approach).

sho_hn · 9h ago
> I do not get the point of this at all

Dunno, this seems like an avenue definitely worth exploring.

Plenty of game applications today already have a render path of input -> pass through AI model -> final image. That's what the AI-based scaling and frame interpolation features like DLSS and FSR are.

In those cases, you have a very high-fidelity input doing most of the heavy lifting, and the AI pass filling in gaps.

Experiments like the OP's are about moving that boundary and "prompting" with a lower-fidelity input and having the model do more.

Depending on how well you can tune and steer the model, and where you place the boundary line, this might well be a compelling and efficient compute path for some applications, especially as HW acceleration for model workloads improves.

No doubt we will see games do variations of this theme, just like games have thoroughly explored other technology to generate assets from lower-fidelity seeds, e.g. classical proc gen. This is all super in the wheelhouse of game development.
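As a concrete (and purely illustrative) version of that boundary-moving idea: a low-fidelity engine frame pushed through an img2img pass, where the denoising strength decides how much of the work the model does. A minimal sketch with diffusers; the model id, file names, and prompt are my own assumptions, not how DLSS/FSR or the OP's setup actually work:

    # Sketch: take a crude engine render and let a diffusion pass add detail.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    crude_frame = Image.open("engine_render.png").convert("RGB")  # flat-shaded proxy render

    # strength ~0.3: the engine frame does most of the heavy lifting, the model fills gaps.
    # strength ~0.7: the model does more, at the cost of fidelity to the input.
    styled = pipe(
        prompt="hand-painted fantasy village, soft afternoon light",
        image=crude_frame,
        strength=0.35,
        num_inference_steps=20,
    ).images[0]
    styled.save("styled_frame.png")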

Some kind of AI-first demoscene would be a pretty cool thing too. What's a trained model if not another fancy compressor?

coolKid721 · 7h ago
Compared to just coming up with solid systems for generating game assets? Actually having decent-quality, style-consistent 3D models, texture work, animation, sound effects, etc. (especially if it were built into, say, a game engine) would genuinely revolutionize indie game dev. Games are fundamentally artistic works, so yes, anything decent will require tailoring and crafting; AI set up to serve those people makes sense, is totally technically feasible, and involves way easier problems to solve.

And no, if you heavily visually modify something with AI models to the extent that it significantly alters the appearance, it simply has no way of being consistent unless you include the previously generated thing somehow, which then has the huge problem of how you maintain that over an 80-hour game. How do you inform the AI which visual elements (say text, interactive elements) are significant and can be modified and which can't? (You can't.)

Actually using AI to generate assets, having a person go in to make sure they look good together and match, then just saving them as textures so they function like normal game assets makes 10000x more sense than generating a user image, trying to extract "hey, what did the wrapper of this candy bar look like" from one AI-generated image, and then figuring out how to keep that consistent across that type of candy bar in the world and maintained throughout the entire game, instead of just, you know, generating a texture of a candy bar.

BizarroLand · 7h ago
I think you're making a lot of good points for the current SOTA.

That being said, it took us a few hundred years all in all just to work out paint, so if people keep working with this tech eventually a game designer could, in theory, lay out the skeleton of a game and tell an AI to do the rest, give it pointers and occasional original art to work into the system, and ship a completed playable game in days.

Whether it will be worth playing those games is an entirely different enchilada to microwave.

thwarted · 7h ago
> lay out the skeleton of a game and tell an AI to do the rest, give it pointers and occasional original art to work into the system, and ship a completed playable game in days

"But, think of the indie game designer!" is getting to be quite the take.

We have a machine that produces slop and the selling point is how fast it produces it? And how more people should be using it to spend less time on creative aspects? Would the world be a better place if GRRM "finished" his most well-known work sooner rather than never?

Something about the phrase "tell an AI to do the rest, give it pointers" reminds me of "The Sorcerer's Apprentice" from Fantasia. Not in the surface level dire-warning about laziness, automation, and losing control that story is telling but in that Mickey didn't spend any time thinking about what he was doing and the Wizard's disappointment at the end.

coolKid721 · 7h ago
This whole attitude is just the attitude of tech demos. Nothing good or worthwhile will be a "completed playable game in a couple of days"; actually making anything good takes a huge amount of time and effort and thought. Empowering a small indie studio or solo indie dev so they could make something of AA or AAA quality should be the actual goal. If you have 4 people able to make a Skyrim-level game in a year or two, that's an insane feat, and that should be the goal. Not someone who doesn't give a shit throwing out some prompt and making a slop game that is exactly like 500 other slop games people generate with one prompt.

Like, with that tech, what kind of games would random solo developers plugging at it and refining it be able to make in 4 years? That is the extremely compelling stuff. One person being able to make some auteur AAA-quality game on their own, even if it takes a long time, might actually be good. If there are AI games, those are the ones I'd want to play.

troupo · 8h ago
*Edit: misread the post I replied to, disregard comment contents*

> Plenty of game applications today already have a render path of input -> pass through AI model -> final image.

Where "plenty" equals zero?

> In those cases, you have a very high-fidelity input doing most of the heavy lifting, and the AI pass filling in gaps.

That is, in these cases you already have high-fidelity input in the form of an actual game, and "AI" contributions to the output are dubious at best.

Do you really believe it's DLSS that's doing all the heavy lifting in a game like Expedition 33 or Cyberpunk 2077?

corysama · 8h ago
Here’s a recent demo made by researchers at Nvidia trying to render the most detailed, realistic scene possible, using AI tech to a small degree (mostly as asset compression, but also in material modeling beyond what direct approaches can currently handle in real time): https://m.youtube.com/watch?v=0_eGq38V1hk

Here’s a video from a rando on Reddit conveniently posted today after playing around for an afternoon: https://www.reddit.com/r/aivideo/comments/1n1piz4/enjoy_nano...

The Nvidia team carefully selected a scene to be feasible for their tech. The rando happened to select a similar-ish scene that is largely well suited for game tech.

Which scene looks more realistic? The rando’s, by far. And adding stylization is something AIs are very well established as being excellent at.

Yes, there are still lots of issues to overcome. But, I find the negativity in so many comments in here to be highly imbalanced.

coolKid721 · 7h ago
That rando's one looks like fucking garbage - like a bunch of shitty B-roll footage from an AI-generated advertisement for some kind of pharmaceutical. The space and layout of the world is clearly constantly shifting and makes no sense. Just generate a fucking map, we already know how to do this; tracking geometry is an EASY problem for computers, don't force some AI shit to try to do it for some reason.
sensanaty · 7h ago
There's something so unsettling about that 2nd link you posted, and I don't mean from an "AI is impressive" POV (I think it looks like absolute garbage, but it will probably continue improving bit by bit). There's something even in the still frames that scratches my brain in a very unpleasant way; it's hard to describe the sensation. The closest thing is "disgust"; maybe it's an uncanny-valley type of effect.

Also, how are these 2 comparable at all? Obviously the video looks more "realistic", the first one is obviously a game demo of some kind and is stylized, whereas the latter looks like a terrible travel agency ad.

corysama · 6h ago
> how are these 2 comparable at all. the latter looks like a terrible travel agency ad.

Don't focus on the emotional context of the scenes. Look at the physical content. They both contain stonework, plants, and a human. They are both large, detailed scenes lit by strong sunlight and a bright sky. As far as rendering technology requirements go, they are very similar.

> scratches my brain in a very unpleasant way, it's hard to describe the sensation. The closest thing is "disgust"

There is a strong discontinuous-motion effect because the video tech is based on sequences of "first frame, last frame" inputs spliced together. There are a couple of seconds where her behavior gets uncanny valley, particularly her hands on the door. That wouldn't be a concern in an unrealistic video game, but it would be in a game this realistic, regardless of the tech. And there is a very slight warble in the fine details. But you have to really look for those to distinguish them from MPEG artifacts.

I expect what a lot of people here are feeling when they watch the video is disgust based 90% or more just on the fact that the video was AI generated :/

dagi3d · 9h ago
hack, learn and have fun, that's it.
abbycurtis33 · 9h ago
The tech will improve to far exceed the capabilities of a game engine. Real time improvisation and infinite choices, scope, etc.

It makes no sense when people say AI can't do this or that. It will do it next week.

GPerson · 9h ago
I’m looking forward to the day when magical thinking such as this gets grounded again. That is when the real work will start anew.
corysama · 8h ago
Having spent 20 years making game engines and the past 4 years playing with AI image gen, I believe you are right.

There have been musings for a while now that 3D rendering is going to switch from “lay down the scene’s albedo & specular parameters then do the lighting pass” to “lay down the scene’s latent parameters and then do the diffusion pass”.

Recently, the advances in “real time AI world models” have been coming ridiculously fast.

Put these together and it’s no stretch at all imagining a game built by having artists go nuts doing whatever they want with whatever Maya can handle as long as they also make proxy geometry of trivial complexity that can be conceptually associated with the final renders. Train the AI on the association. Render the proxy geometry the old fashioned way. AI that up in real time to the associated Maya-final-render approximation.
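One way to read that pipeline in today's terms is ControlNet-style conditioning: render the trivial proxy geometry the old-fashioned way (say, a depth pass), then let a model trained on the proxy-to-final association produce the approximate "Maya final render". A minimal sketch with diffusers; the checkpoints, file names, and prompt are illustrative, and this is an offline stand-in, not the hypothetical real-time system described above:

    # Sketch: condition a diffusion model on a depth render of the proxy geometry.
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # Depth pass rendered conventionally from trivially simple proxy geometry.
    proxy_depth = Image.open("proxy_depth.png").convert("RGB")

    final = pipe(
        prompt="dense jungle ruins, volumetric light, film still",
        image=proxy_depth,  # the proxy render steers layout and geometry
        num_inference_steps=20,
    ).images[0]
    final.save("approx_final_render.png")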

It’s not going to happen this week. But, in 5 years? Somebody’s gonna pull it off.

lukan · 9h ago
"It makes no sense when people say AI can't do this or that. It will do it next week."

So full self driving vehicles will finally be ready next week then? Great to hear, though to be honest, I remain sceptical.

kayamon · 9h ago
Waymos are driving themselves around several cities right now.
pololeono · 8h ago
Yes, go to Rome. I will be impressed when we have self-driving cars in Rome.
AnotherGoodName · 7h ago
They are in San Francisco today so it's not like they are doing this on easy mode.
lukan · 6h ago
But are they doing it without human intervention?
suddenlybananas · 8h ago
Wait til it snows.
johnisgood · 8h ago
The discussion around self-driving cars often feels like shifting goalposts: each time one feature is achieved, a new requirement is added, perpetually delaying the "final" answer.

Self-driving cars are "here"... until someone adds another requirement.

sensanaty · 7h ago
I mean, by your logic, self-driving cars were invented back when we put a steam engine on some tracks in the 1800s. Of course the goalposts shift when the hypesters are trying to sell you on an idea like "AI will be able to do literally everything next week".

Yes, Waymo can today drive around extremely dense car-friendly cities that are scanned and mapped in great detail weekly... They also still have to have remote human intervention all the time, and are freaked out by traffic cones being placed on the hood. I grew up in Indonesia and that's where I learned to drive, and trust me, if Waymo is ever able to navigate 100 meters on any road in Jakarta I'll happily concede and consider self-driving to be a solved problem.

johnisgood · 7h ago
No, that is not my logic. It completely misrepresents my logic. My comment was not equating hype with reality, it was about the constantly moving goalposts in discussions about _autonomous vehicles_.

In fact, I am not arguing that self-driving cars are perfect or global. I am pointing out how people keep changing the definition of "solved" which makes it look like the finish line keeps moving.

We do have what the parent said; it is a reality. It is also the reality that it is not perfect, but somehow the latter is used to minimize or completely dismiss the former.

YeGoblynQueenne · 57m ago
There are no moving goalposts, though there is a certain lack of precision in the discussion, on both ends. There is a commonly accepted scale of self-driving car autonomy. By some accounts Waymo can be considered to be level 4, by others they are only level 3. By no account are they level 5, nor is anyone else.

Those are the goalposts and they've been in the same spot for quite a while now.

recursive · 8h ago
For me it's always been cars without steering wheels built on a factory line.
suddenlybananas · 7h ago
Operating in the snow is not a niche requirement.
johnisgood · 7h ago
No one said that it is, nor that self-driving cars are fully solved. My issue is with the snarky remarks of shifting goalposts. See my other reply.

I just do not think that your response is a valid response to a fact ("Waymos are driving themselves around several cities right now").

Janicc · 9h ago
You really need to update your language model because self driving cars have been driving around on their own for at least a year now
SCdF · 8h ago
So have they stopped having the >1 average remote drivers for each self driving vehicle as well?

The problem with these statements is language has so much context implicit in it. "driving around on their own" to me means with zero active oversight. "driving around" to me means not just in a small set of city streets, but as a replacement for human driving (eg anywhere a vehicle can physically fit). Obviously to you it means other things, but it's what makes these conversations and statements of fact challenging.

Workaccount2 · 8h ago
That >1 spec is from Cruise, who went defunct in 2023.

Teslas have >1, but they are not really self-driving; more like "100% human-supervised self-driving."

jdiff · 9h ago
"Full self driving" was the term used and I believe the distinction is relevant to the point being made.
sho_hn · 9h ago
I understand the point you're making, but I think it's not a good one.

The failure mode for a self-driving car is grave. The failure mode for rendering game graphics imperfectly is to require a bit of suspension of disbelief (it's not a linear spectrum given the famous uncanny valley, etc., I'm aware). Games already have plenty of abstract graphics, invisible walls, and other kludges that require buy-in from users. It's a lot easier to scale that wall.

jdiff · 8h ago
Not my point, but I agree with it, so.

The statement was one of capability. There are some things that the tech is flatly not capable of, and it will take time to develop those capabilities. Even if there were no safety concerns at all and we lived in a cotton candy bubble world, self-driving cars would still have hard failure modes. The tech is not capable, and it will not develop the capability next week, either.

The point being made is that the tech is moving fast, at least according to the marketing, but a revolution is not happening every week. "This is the worst it'll ever be" is an increasingly tired refrain when things seem to be stagnating more than ever. The mentioned capability will take a good amount of time; it's silly to wait around for it when it's not unlikely it may never come.

127 · 9h ago
An interactive feedback loop that handles the various edge cases of AI, rendering, asset loading and display, keeping track of global data, user input, etc. -- is still a game engine.
fzeroracer · 8h ago
Why would I want any of that? Games are interesting because of the deliberate choices and limitations made by a developer. When you have a game that tries to do everything, you have a game that actually does nothing.
Workaccount2 · 8h ago
> (and that is not a fixable problem)

Genie 3 is incredible (relative to previous models) in this regard. Not a solve, but it is doing what it is not supposed to be able to do.[1]

[1]https://deepmind.google/discover/blog/genie-3-a-new-frontier...

127 · 9h ago
Visually speaking, there are always visual issues in tying disparate assets together in a seamless fashion. I can see how AI could easily be used to "hide the seams", so to speak. I think a hybrid approach would definitely be an improvement.
shortrounddev2 · 8h ago
AI can't even hide its own seams. The seams are kind of the defining characteristic of AI.
elestor · 5h ago
> I do not get the point of this at all

I think the point of this is just because it's cool. So, as you said, it only serves as a tech demo, but why not? Many things have no point. It's unreasonable, but it's cool.

turnsout · 9h ago
It's an interesting tech demo—I think one interesting use case for AI rendering is changing the style on the fly. For example, a certain power-up could change the look to a hyper-saturated comic book style. Definitely achievable with traditional methods, but because AI is prompt-based, you could combine or extend styles dynamically.
doawoo · 7h ago
[flagged]
mediaman · 7h ago
It's interesting to see software engineers realize that AI can be useful in the hands of competent engineers, but that LLMs tend to produce a mess in the hands of those with little software engineering knowledge. Then they're confused why generative AI asset creation doesn't look that good in the hands of people who have no art training.

I know someone creating a game, and she is using AI in asset creation. But she's also a highly skilled environment technical artist from the industry, and so the use of AI looks totally different than all these demos from non-artists: different types of narrow AI get injected into different little parts of the asset creation pipeline, and the result is a mix of traditional tools (Substance Designer + Painter, Blender, Maya) with AI support in moodboarding, geometry creation and some parts of texture creation. The result is a 2-5x speedup, but instead of looking like slop it looks like a stylistically distinctive, cohesive world with consistent art direction.

The common pattern is that people think AI will automate "other people," because they see its shortcomings in their own field. But because they don't understand the technical skill required in other fields, they assume AI will just "do it." Instead, it seems like AI can be a force multiplier for technically skilled people, but that it begins showing its weakness when asked to take over entire pipelines normally produced by technically skilled people, whether they be engineers or artists.

doawoo · 6h ago
Really interesting perspective, honestly. I legitimately have no issue with machine learning being used as a tool; like all technology, it has benefits!

It’s when it’s used as a replacement for creativity that really gets to me.

aenvoker · 5h ago
Having spent a lot of time talking to engineers working on AI tools, I find the idea that AI is intended to replace creativity comes entirely from internet comment sections and not from the engineers working on AI.