Rendering a game in real time with AI
100 points by jschomay on 8/28/2025, 12:10:14 PM | 92 comments | blog.jeffschomay.com ↗
While it might take a bit longer to generate, you're still saving network and authentication latency.
While the results of the experiment here are interesting from an academic standpoint, it’s the same issue as remote game streaming: the amount of time you have to process input from the player, render visuals and sound, and transmit it back to the player precludes remote rendering for all but the most latency-insensitive games and framerates. It’s publishers and IP owners trying to solve the problem of ownership (in that they don’t want anyone to own anything, ever) rather than tackling any actually important issues (such as inefficient rendering pipelines, improving asset compression and delivery methods, improving the sandboxing of game code, etc).
Trying to make AI render real-time visuals is the wrongest use of the technology.
https://madebyoll.in/posts/game_emulation_via_dnn/demo/
https://madebyoll.in/posts/game_emulation_via_dnn/
Hook world state up to a server and you have multiplayer.
2025 update:
https://madebyoll.in/posts/world_emulation_via_dnn/
https://madebyoll.in/posts/world_emulation_via_dnn/demo/
At the peak, when we were streaming back video from Fal and getting <100ms of lag, the setup produced one of the most original creative experiences I’d ever had. I wish these sorts of ultra-fast image generators received more attention and research because they do open up some crazy UX.
The TLDraw team, from what I have seen, is really open about sharing their experiments under their own license. Not strictly FOSS, but I feel their licensing approach is fair considering how they fund development, though I know it's a contentious topic for some.
The tactile reaction to playing with this tech is that it feels utterly sci-fi. It's so freaking cool. Watching videos does not do it justice.
Not enough companies or teams are trying this stuff. This is really cool tech, and I doubt we've seen the peak of what real time rendering can do.
The rendering artifacts and quality make the utility for production use cases a little questionable, but it can certainly do "art therapy" and dreaming / ideation.
I’m excited for AI rendered games.
Wait a minute. Where am I?
Ray tracing and other forms of real time global illumination are extremely resource intensive approaches to lighting a scene. Every client machine has to figure out how to light everything every single frame. Contrast this with baked global illumination where the cost is incurred exactly once and is amortized across potentially millions of machines.
We need more things like baked GI in gaming. This class of techniques makes development iterations slower and more deliberate, but it also produces a far more efficient and refined product experience. I'd be very interested in quantifying the carbon impact of real-time vs. baked lighting in gaming. It is likely a non-trivial figure at scale. Also bear in mind that baked GI is why games like the Batman series still look so good in 2025, even when running on period hardware. You cannot replace your art team by consuming more electricity.
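A back-of-the-envelope sketch of the amortization argument (every number below is a made-up placeholder, just to show the shape of the comparison):

    # Rough amortization math for baked vs. real-time GI.
    # Every number here is a made-up placeholder.
    players        = 5_000_000   # copies of the game in the wild
    hours_played   = 40          # average hours per player
    bake_gpu_hours = 2_000       # one-time lighting bake on the build farm

    # Baked GI: pay the lighting cost once; sampling a lightmap is ~free per frame.
    baked_total_gpu_hours = bake_gpu_hours

    # Real-time GI: every client re-solves lighting every frame.
    # Say the GI pass eats 25% of each rendered hour's GPU time.
    realtime_total_gpu_hours = players * hours_played * 0.25

    print(f"baked:     {baked_total_gpu_hours:>13,.0f} GPU-hours, once")
    print(f"real-time: {realtime_total_gpu_hours:>13,.0f} GPU-hours, spread across clients")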
> You cannot replace your art team by consuming more electricity
This isn't true.
The "better" version renders at a whopping 4 seconds per frame (not frames per second) and still doesn't consistently represent the underlying data, with shifting interpretations of what each color / region represents.
I wonder if this approach would work better:
1. generate the whole screen once
2. on update, create a mask for all changed elements of the underlying data
3. do an inpainting pass with this mask, with regional prompting to specify which parts have changed how
4. when moving the camera, do outpainting
This might not be possible with cloud-based solutions, but I can see it being possible locally; a rough sketch of the loop is below.
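Very roughly, and assuming a local diffusers setup (the checkpoint name, box coordinates, and prompts below are all placeholders):

    # Sketch of "generate once, then only inpaint what changed".
    # Checkpoint, regions, and prompts are placeholders, not a working game loop.
    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image, ImageDraw

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",   # any inpainting checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # 1. Generate (or load) the full frame once.
    frame = Image.open("frame_0.png").convert("RGB").resize((512, 512))

    def update_frame(frame, changed_boxes, change_prompts):
        """2./3. Mask only the regions whose underlying data changed,
        then inpaint them with a prompt describing the new state."""
        mask = Image.new("L", frame.size, 0)        # black = keep
        draw = ImageDraw.Draw(mask)
        for box in changed_boxes:                   # e.g. [(x0, y0, x1, y1), ...]
            draw.rectangle(box, fill=255)           # white = regenerate
        prompt = ", ".join(change_prompts)          # crude stand-in for regional prompting
        return pipe(prompt=prompt, image=frame, mask_image=mask).images[0]

    # 4. Camera movement would instead shift the frame and outpaint the exposed border.
    frame = update_frame(frame, [(200, 180, 320, 300)], ["the door is now open"])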
Dang man it's just a guy showing off a neat thing he did for fun. This reaction seems excessive.
This is heading in the direction of a kind of simulation where things are determined not by code but by a kind of "real" physics.
The AI route has a good chance of moving us from decades behind ILM to merely “years behind ILM”.
The problem with using equations is that they seem to have plateaued. Hardware requirements for games keep growing, and yet every character still has that awful "plastic skin", among all the other issues, and for a lot of people (me included) this creates heavy uncanny-valley effects that make modern games unplayable.
On the other hand, images created by image models today look fully realistic. If we assume (and I fully agree that this is a strong and optimistic assumption) that it will soon be possible to run such models in real time, and that techniques for object permanence will improve (as they keep improving at an incredible pace right now), then this might finally bring us to the next level of realism.
Even if realism is not what you're aiming for, I think it's easy to imagine how this might change the game.
That's not to say ML doesn't have a place in the pipeline - pathtracers can pair very well with ML-driven heuristics for things like denoising, but in that case the underlying signal is still grounded in physics and the ML part is just papering over the gaps.
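For a sense of what that looks like in practice: the denoiser is typically a smallish network that takes the noisy radiance plus auxiliary buffers (albedo, normals) and predicts a clean frame. A toy PyTorch-shaped sketch, not any particular production denoiser:

    import torch
    import torch.nn as nn

    class TinyDenoiser(nn.Module):
        """Toy stand-in for an ML denoiser: noisy radiance + albedo + normals in,
        denoised radiance out. Real denoisers (OIDN, OptiX) are far more involved."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),   # 3 buffers x 3 channels
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, radiance, albedo, normals):
            x = torch.cat([radiance, albedo, normals], dim=1)
            # Predict a correction on top of the noisy but physically grounded signal.
            return radiance + self.net(x)

    # The buffers come straight out of the path tracer each frame.
    denoiser = TinyDenoiser()
    noisy, albedo, normals = (torch.rand(1, 3, 256, 256) for _ in range(3))
    clean = denoiser(noisy, albedo, normals)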
I did not claim that AI-based rendering will overcome traditional methods, and have even explicitly said that this is a heavy assumption, but explained why I see it as exciting.
e.g. Top Gun: Maverick's much-lauded "practical" jet shots, which were filmed on real planes, but then the pilots were isolated and composited into 100% CGI planes, with the backdrops also being CGI in many cases, and huge swathes of viewers and press bought the marketing line that what they saw was all practical.
This is ASCII: https://commons.wikimedia.org/wiki/File:ASCII-Table-wide.svg
If anyone has any tips for how I could achieve this, I would love to hear your ideas.
If you have to use AI, diffusion models are a poor choice for the target. It might be easier to train an LLM to output actual text art.
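If someone did go the LLM route, the training data side is just pairs of descriptions and text art. Purely as an illustration of the shape of such a dataset (every example below is made up):

    # Toy illustration of fine-tuning data for "describe a scene -> get text art".
    # The pairs and the JSONL layout are hypothetical, not a real dataset or API format.
    import json

    examples = [
        {"prompt": "a small cat, text art, max 20 columns",
         "completion": " /\\_/\\\n( o.o )\n > ^ <"},
        {"prompt": "a simple house, text art, max 20 columns",
         "completion": "  /\\\n /  \\\n|[]  |\n|____|"},
    ]

    with open("text_art_finetune.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")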
Figuring out and improving AI approaches for generating consistent, decent-quality game assets is something that will actually be useful; I have no idea what the point of this is past a tech demo (and for some reason all the "AI game" people take this approach).
Dunno, this seems like an avenue definitely worth exploring.
Plenty of game applications today already have a render path of input -> pass through AI model -> final image. That's what the AI-based scaling and frame interpolation features like DLSS and FSR are.
In those cases, you have a very high-fidelity input doing most of the heavy lifting, and the AI pass filling in gaps.
Experiments like the OP's are about moving that boundary and "prompting" with a lower-fidelity input and having the model do more.
Depending on how well you can tune and steer the model, and where you place the boundary line, this might well be a compelling and efficient compute path for some applications, especially as HW acceleration for model workloads improves.
No doubt we will see games do variations of this theme, just like games have thoroughly explored other technology to generate assets from lower-fidelity seeds, e.g. classical proc gen. This is all super in the wheelhouse of game development.
Some kind of AI-first demoscene would be a pretty cool thing too. What's a trained model if not another fancy compressor?
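To make the "high-fidelity input doing the heavy lifting, AI filling gaps" shape concrete: DLSS/FSR-style upscaling is roughly "render at low resolution, cheaply upscale, then have a model predict the missing detail". A toy PyTorch sketch of that shape (the real pipelines are far more involved and also use motion vectors and temporal history):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyUpscaler(nn.Module):
        """Toy 2x super-resolution pass: the low-res render carries the signal,
        the network only predicts residual detail on top of a bilinear upscale."""
        def __init__(self):
            super().__init__()
            self.refine = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, low_res_frame):
            upscaled = F.interpolate(low_res_frame, scale_factor=2,
                                     mode="bilinear", align_corners=False)
            return upscaled + self.refine(upscaled)

    # "Moving the boundary" = rendering at ever lower fidelity and asking the
    # model to fill in more of the final image.
    frame_540p  = torch.rand(1, 3, 540, 960)
    frame_1080p = ToyUpscaler()(frame_540p)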
And no, if you heavily modify something visually with AI models, to the extent that it significantly alters the appearance, it simply has no way of being consistent unless you somehow feed the previously generated thing back in, which then has the huge problem of how you maintain that over an 80-hour game. How do you inform the AI which visual elements (say, text or interactive elements) are significant and can be modified and which can't? (You can't.)
Actually using AI to generate assets, having a person go in to make sure they look good together and match, and then just saving them as textures so they function like normal game assets makes 10,000x more sense than generating an image for the user and then trying to extract "hey, what did the wrapper of this candy bar look like" from one AI-generated image, figuring out how to make that consistent across every candy bar of that type in the world, and maintaining that consistency throughout the entire game, instead of, you know, just generating a texture of a candy bar.
That being said, it took us a few hundred years all in all just to work out paint, so if people keep working with this tech eventually a game designer could, in theory, lay out the skeleton of a game and tell an AI to do the rest, give it pointers and occasional original art to work into the system, and ship a completed playable game in days.
Whether it will be worth playing those games is an entirely different enchilada to microwave.
"But, think of the indie game designer!" is getting to be quite the take.
We have a machine that produces slop and the selling point is how fast it produces it? And how more people should be using it to spend less time on creative aspects? Would the world be a better place if GRRM "finished" his most well-known work sooner rather than never?
Something about the phrase "tell an AI to do the rest, give it pointers" reminds me of "The Sorcerer's Apprentice" from Fantasia. Not in the surface level dire-warning about laziness, automation, and losing control that story is telling but in that Mickey didn't spend any time thinking about what he was doing and the Wizard's disappointment at the end.
Like, with that tech, what kind of games would, say, random solo developers plugging away at it and refining it be able to make in 4 years? That is the extremely compelling stuff. One person being able to make some auteur AAA-quality game on their own, even if it takes a long time, might actually be good. If there are AI games, those are the ones I'd want to play.
> Plenty of game applications today already have a render path of input -> pass through AI model -> final image.
Where "plenty" equals zero?
> In those cases, you have a very high-fidelity input doing most of the heavy lifting, and the AI pass filling in gaps.
That is, in these cases you already have high-fidelity input in the form of an actual game, and "AI" contributions to the output are dubious at best.
Do you really believe it's DLSS that's doing all the heavy lifting in a game like Expedition 33 or Cyberpunk 2077?
Here’s a video from a rando on Reddit conveniently posted today after playing around for an afternoon: https://www.reddit.com/r/aivideo/comments/1n1piz4/enjoy_nano...
The Nvidia team carefully selected a scene to be feasible for their tech. The rando happened to select a similar-ish scene that is largely well suited for game tech.
Which scene looks more realistic? The rando's, by far. And adding stylization is something AIs are very well established as being excellent at.
Yes, there are still lots of issues to overcome. But, I find the negativity in so many comments in here to be highly imbalanced.
Also, how are these 2 comparable at all? Obviously the video looks more "realistic", the first one is obviously a game demo of some kind and is stylized, whereas the latter looks like a terrible travel agency ad.
Don't focus on the emotional context of the scenes. Look at the physical content. They both contain stonework, plants, and a human. They are both large, detailed scenes lit by strong sunlight and a bright sky. As far as rendering technology requirements go, they are very similar.
> scratches my brain in a very unpleasant way, it's hard to describe the sensation. The closest thing is "disgust"
There is a strong discontinuous-motion effect because the video tech is based on a sequence of "first frame, last frame" inputs spliced together. There are a couple of seconds where her behavior gets uncanny valley, particularly her hands on the door. That wouldn't be a concern in an unrealistic video game, but it would be in a game this realistic regardless of the tech. And there is a very slight warble in the fine details. But you have to really look for those to distinguish them from MPEG artifacts.
I expect what a lot of people here are feeling when they watch the video is disgust based 90% or more just on the fact that the video was AI generated :/
It makes no sense when people say AI can't do this or that. It will do it next week.
There have been musings for a while now that 3D rendering is going to switch from “lay down the scene’s albedo & specular parameters then do the lighting pass” to “lay down the scene’s latent parameters and then do the diffusion pass”.
Recently, the advances in “real time AI world models” have been coming ridiculously fast.
Put these together and it’s no stretch at all imagining a game built by having artists go nuts doing whatever they want with whatever Maya can handle as long as they also make proxy geometry of trivial complexity that can be conceptually associated with the final renders. Train the AI on the association. Render the proxy geometry the old fashioned way. AI that up in real time to the associated Maya-final-render approximation.
It’s not going to happen this week. But, in 5 years? Somebody’s gonna pull it off.
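A sketch of what the "train the AI on the association" half might look like: paired (proxy render, final render) frames driving an image-to-image model. The data, model, and loss below are all placeholders, not anyone's actual pipeline:

    # Hypothetical training loop for a proxy-render -> final-render association.
    import torch
    import torch.nn as nn
    from torch.utils.data import Dataset, DataLoader

    class PairedFrames(Dataset):
        """Yields (proxy_render, final_render) tensors for the same camera/frame."""
        def __init__(self, pairs):
            self.pairs = pairs            # list of (proxy_tensor, final_tensor)
        def __len__(self):
            return len(self.pairs)
        def __getitem__(self, i):
            return self.pairs[i]

    # Stand-in image-to-image model; in practice a U-Net or diffusion model.
    model = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Fake data standing in for (trivial proxy geometry, final Maya render) pairs.
    pairs = [(torch.rand(3, 128, 128), torch.rand(3, 128, 128)) for _ in range(8)]
    loader = DataLoader(PairedFrames(pairs), batch_size=4, shuffle=True)

    for epoch in range(2):                    # tiny loop, just to show the shape
        for proxy, final in loader:
            pred = model(proxy)               # "AI that up" toward the final look
            loss = nn.functional.l1_loss(pred, final)
            opt.zero_grad(); loss.backward(); opt.step()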
So full self-driving vehicles will finally be ready next week then? Great to hear, though to be honest, I remain sceptical.
Self-driving cars are "here"... until someone adds another requirement.
Yes, Waymo can today drive around extremely dense car-friendly cities that are scanned and mapped in great detail weekly... They also still have to have remote human intervention all the time, and are freaked out by traffic cones being placed on the hood. I grew up in Indonesia and that's where I learned to drive, and trust me, if Waymo is ever able to navigate 100 meters on any road in Jakarta I'll happily concede and consider self-driving to be a solved problem.
In fact, I am not arguing that self-driving cars are perfect or global. I am pointing out how people keep changing the definition of "solved" which makes it look like the finish line keeps moving.
We do have what the parent said; it is a reality. It is also the reality that it is not perfect, but somehow the latter is used to minimize or completely dismiss the former.
Those are the goalposts and they've been in the same spot for quite a while now.
I just do not think that your response is a valid response to a fact ("Waymos are driving themselves around several cities right now").
The problem with these statements is language has so much context implicit in it. "driving around on their own" to me means with zero active oversight. "driving around" to me means not just in a small set of city streets, but as a replacement for human driving (eg anywhere a vehicle can physically fit). Obviously to you it means other things, but it's what makes these conversations and statements of fact challenging.
Teslas have >1, but they are not really self-driving, more "100% human-supervised self-driving."
The failure mode for getting a self-driving car wrong is grave. The failure mode for rendering game graphics imperfectly is to require a bit of suspension of disbelief (it's not a linear spectrum given the famous uncanny valley, etc., I'm aware). Games already have plenty of abstract graphics, invisible walls, and other kludges that require buy-in from users. It's a lot easier to scale that wall.
The statement was one of capability. There are some things that the tech is flatly not capable of, and that it will take time to develop the capability of. Even if there were no safety concerns at all and we lived in a cotton candy bubble world, self driving cars still have hard failure modes. The tech is not capable, and will not develop the capability next week, either.
The point being made is that the tech is moving fast, at least according to the marketing, but a revolution is not happening every week. "This is the worst it'll ever be" is an increasingly tired refrain when things seem to be stagnating more than ever. The mentioned capability will take a good amount of time longer; it's silly to wait around for it when it's not unlikely that it may never come.
Genie 3 is incredible (relative to previous models) in this regard. Not a solve, but it is doing what it is not supposed to be able to do.[1]
[1]https://deepmind.google/discover/blog/genie-3-a-new-frontier...
I think the point of this is just because it's cool. So, as you said, it only serves as a tech demo, but why not? Many things have no point. It's unreasonable, but it's cool.
I know someone creating a game, and she is using AI in asset creation. But she's also a highly skilled environment technical artist from the industry, and so the use of AI looks totally different than all these demos from non-artists: different types of narrow AI get injected into different little parts of the asset creation pipeline, and the result is a mix of traditional tools (Substance Designer + Painter, Blender, Maya) with AI support in moodboarding, geometry creation and some parts of texture creation. The result is a 2-5x speedup, but instead of looking like slop it looks like a stylistically distinctive, cohesive world with consistent art direction.
The common pattern is that people think AI will automate "other people," because they see its shortcomings in their own field. But because they don't understand the technical skill required in other fields, they assume AI will just "do it." Instead, it seems like AI can be a force multiplier for technically skilled people, but that it begins showing its weakness when asked to take over entire pipelines normally produced by technically skilled people, whether they be engineers or artists.
It’s when it’s used as a replacement for creativity that really gets to me.