Sony's Mark Cerny Has Worked on "Big Chunks of RDNA 5" with AMD

60 points by ZenithExtreme | 7/2/2025, 4:10:46 PM | overclock3d.net

Comments (68)

phkahler · 6h ago
Why not link to the original article here:

https://www.tomsguide.com/gaming/playstation/sonys-mark-cern...

DiabloD3 · 8h ago
There isn't an RDNA5 on the roadmap, though. It's been confirmed 4 is the last (and was really meant to be 3.5, but grew into what is assumed to be the PS5/XSX mid-gen refresh architecture).

Next is UDNA1, a converged architecture with its older sibling, CDNA (formerly GCN).

Like, the article actually states this, but runs an RDNA 5 headline anyway.

blasphemers · 56m ago
Maybe read the article before commenting on it; it's not that long.

"Big chunks of RDNA 5, or whatever AMD ends up calling it, are coming out of engineering I am doing on the project"

greenknight · 2h ago
AMD does do semi-custom work.

What's to stop Sony from saying "we don't want UDNA 1, we want an iteration of RDNA 4"?

For all we know, it IS RDNA 5... it just won't be available to the public.

Moto7451 · 1h ago
And their half-step/semi-custom work can find its way back to APUs. RDNA 3.5 (the version marketed as such) is in the Zen 5 APUs with mobile-oriented improvements. It wouldn't surprise me if a future APU gets RDNA 5. GCN had this sort of APU/console relationship as well.
cubefox · 7h ago
It's just a name. I'm sure this is all pretty iterative work.
dragontamer · 5h ago
UDNA isn't just a name; it's a big shift in strategy.

CDNA was for HPC/supercomputers and the data center. GCN was always a better architecture than RDNA for that.

RDNA itself was trying to be more NVIDIA-like: fewer FLOPs but better latency.

Someone is getting the axe. Only one of these architectures will win out in the long run, and the teams will also converge, allowing AMD to consolidate engineers on improving the same architecture.

We don't yet know what the consolidated team will release. But it's a big organizational shift that will surely affect AMD's architectural decisions.

timschmidt · 1h ago
My understanding was that CDNA and RDNA shared much if not most of their underlying architecture, and that the fundamental differences had more to do with CDNA supporting a greater variety of numeric representations to aid in scientific computing, whereas RDNA really only needed fp32 for games.
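A toy sketch of why that split matters (my own illustration, not anything from AMD's docs): naive single-precision accumulation drifts badly in exactly the kind of long-running sums HPC code performs, while a game rendering one frame at a time rarely notices:

```python
import numpy as np

# Classic demo: accumulate 0.1 one million times. The exact answer is
# 100000, but naive fp32 accumulation drifts badly; fp64 stays close.
x32 = np.float32(0.0)
x64 = 0.0
for _ in range(1_000_000):
    x32 += np.float32(0.1)
    x64 += 0.1

print(f"fp32 sum: {x32}")  # roughly 100958 -- visible drift
print(f"fp64 sum: {x64}")  # ~100000.000001 -- negligible drift
```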
whatever1 · 7h ago
The PS5 was almost twice as fast as the PS4 Pro, yet we did not see the generational leap we saw with previous major releases.

It seems that we are at the stage where incremental improvements in graphics require exponentially more computing capability.

Or the game engines have become super bloated.

Edit: I stand corrected; in previous cycles we had orders-of-magnitude improvements in FLOPS.

pjmlp · 7h ago
One reason was backwards compatibility: studios were already putting lots of money into the PS4 and Xbox One, so the PS5 and Xbox Series X|S (two additional SKUs) were already too much.

Don't forget that one reason studios tend to favour consoles has been fixed, uniform hardware, and that is no longer the case.

When middleware becomes the default option, it is relatively hard to have game features that are hardware-specific.

cosmic_cheese · 7h ago
Less effort going into optimization is also a factor. On average, games are a lot less optimized than they used to be. The expectation seems to be that hardware advances will fix deficiencies in performance.

This doesn't affect me too much, since my backlog is long and by the time I play games they're old enough that current hardware trivializes them, but it's disappointing nonetheless. It almost makes me wish for a good decade or so of performance stagnation to curb this behavior. Graphical fidelity is well past the point of diminishing returns anyway.

martinald · 5h ago
We have had a decade of performance stagnation.

Compare PS1 with PS3 (just over 10 years apart).

PS1: ~0.03 GFLOPS (approx, given it didn't really do FLOPS per se)
PS3: 230 GFLOPS

Nearly 1000x faster.

Now compare PS4 with PS5 pro (also just over 10 years apart):

PS4: ~2 TFLOPS
PS5 Pro: ~33.5 TFLOPS

A bit over 10x faster. So the speed of improvement has fallen dramatically.

Arguably you could say the real drop in optimization happened in that PS1 -> PS3 era: everything went from hand-optimized assembly code to (generally) higher-level languages and abstracted graphics frameworks like DirectX and OpenGL. No one noticed because we had 1000x the compute to make up for it :)

Consoles/games got hit hard first by crypto and now by AI needing GPUs. I suspect if it weren't for that we'd have vastly cheaper and vastly faster gaming GPUs, but when vendors were making boatloads of cash off crypto miners and then AI, the rate of progress fell dramatically for gaming at least (most of the innovation, I suspect, went into high VRAM/memory controllers and datacentre-scale interconnects).

cosmic_cheese · 5h ago
Yeah, there's been a drop-off for sure. Clearly it hasn't been steep enough to stop game studios from leaning on hardware advances anyway, though.

One potential forcing factor may be the rise of iGPUs, which have become powerful enough to play many titles well while remaining dramatically more affordable than their discrete counterparts (and sometimes not carrying crippling VRAM limits to boot), as well as the growing sector of PC handhelds like the Steam Deck. It’s not difficult to imagine that iGPUs will come to dominate the PC gaming sphere, and if that happens it’ll be financial suicide to not make sure your game plays reasonably well on such hardware.

martinald · 5h ago
I get the (perhaps mistaken) impression that the biggest problem game developers have is making and managing absolutely enormous amounts of art assets at high resolution (textures, models, etc). Each time you increase resolution, from 576p to 720p to 1080p and now 4K+, you need a huge step up in the visual fidelity of all your assets, otherwise it looks poor.

And given that most of these assets are human-made (well, until very recently), this requires more and more artists. So I wonder if game studios are now more like art studios with a bit of programming bolted on; before, with lower-res graphics, you maybe had one artist for ten programmers, and now it's flipped the other way. I feel that at some point over the past ~decade we hit an "organisational" wall with this, and very, very few studios can successfully manage teams of hundreds (thousands?) of artists effectively.

cosmic_cheese · 5h ago
That depends a lot on art direction and stylization. Highly stylized games scale up to high resolutions shockingly well even with less detailed, lower-resolution models and textures. Breath of the Wild is one good example that looks great by modern standards at high resolutions, and there are many others that manage to look a lot less dated than they are with similarly cartoony styles.

If “realistic” graphics are the objective though, then yes, better displays pose serious problems. Personally I think it’s probably better to avoid art styles that age like milk, though, or to go for a pseudo-realistic direction that is reasonably true to life while mixing in just enough stylization to scale well and not look dated at record speeds. Japanese studios seem pretty good at this.

SlowTao · 3h ago
It is not just GPU performance; it is that visually things are already very refined. A ten-times leap in performance doesn't show as ten times the visual spectacle like it used to.

Like all this path tracing/ray tracing stuff: yes, it is very cool and can add to a scene, but most people can barely tell it is there unless you show it side by side. And it takes a lot of compute to do.

We are polishing an already very polished rock.

martinald · 2h ago
Yes, but in the PS1 era we were seeing 1000x compute performance gains per decade.

I agree that 10x doesn't move much, but that's sort of my point - what could be done with 1000x?

jayd16 · 4h ago
By what metric can you say this with any confidence when game scope and fidelity have ballooned?
cosmic_cheese · 4h ago
Because optimized games aren't completely extinct, and there are titles with similar levels of size, fidelity, and feature utilization but dramatically different performance profiles.
cwbriscoe · 7h ago
A lot of the difference went into FPS rather than improved graphics.
adamwk · 5h ago
And loading times. I think people have already forgotten how long you had to wait on loading screens, or how much faked loading (moving through a bush while the next area loads) there was on the PS4.
SlowTao · 3h ago
The PS4 wasn't too terrible, but jumping back to the PS3... wow, I completely forgot how memory-starved that machine was. Working on it, we knew at the time, but in retrospect it was just horrible.

A small RAM pool with a hard CPU/GPU split (so no reallocation), feeding off a slow HDD which is in turn fed by an even slower Blu-ray disc: you are sitting around for a while.

ryao · 3h ago
Did you forget that on the N64, load times were near instantaneous?
derrasterpunkt · 3h ago
The N64 was cartridge-based.
bentt · 6h ago
This is correct. Also, it speaks to what players actually value.
ThatMedicIsASpy · 5h ago
I have played through CP2077 at 40, 30, and 25 fps. A child doesn't care if Zelda runs at low FPS.

The only thing I value is a consistent stream of frames on a console.

adamwk · 5h ago
When given a choice, most users prefer performance over higher fidelity
teamonkey · 5h ago
I would like to see the stats for that.
jayd16 · 4h ago
> "When asked to decide on a mode, players typically choose performance mode about three-quarters of the time,

From PS5 Pro reveal https://youtu.be/X24BzyzQQ-8?t=172

jayd16 · 4h ago
Children eat dirt. I'm not sure "children don't care" is a good benchmark.
LikesPwsh · 6h ago
Also, higher FPS just requires throwing more compute at it.

Excessively high-detail models require extra artist time too.

kridsdale1 · 6h ago
Yes, the PS5 can output 120 Hz over HDMI. A perfectly linear output to direct your extra compute at.
CoolGuySteve · 4h ago
The current generation has a massive leap in storage speed, but games need to be architected to stream that much data into RAM.

Cyberpunk is a good example of a game that straddled the in-between; many of its performance problems on the PS4 were due to constrained serialization speed.

Nanite and games like FF16 and Death Stranding 2 do a good job of drawing complex geometry and textures that wouldn't be possible on the previous generation.
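For anyone curious what "architected to stream" means in practice, here's a minimal, hypothetical sketch of the pattern (real engines use dedicated I/O APIs, hardware decompression, and priority queues, but the shape is the same): the game loop requests assets ahead of need and never blocks on the disk.

```python
import queue
import threading

# Hypothetical asset streamer: a worker thread pulls load requests and
# reads bytes off (fast NVMe) storage while the game loop keeps rendering
# with whatever is already resident. All names here are made up.
load_requests = queue.Queue()
resident_assets = {}
lock = threading.Lock()

def io_worker():
    while True:
        path = load_requests.get()
        if path is None:  # shutdown sentinel
            break
        with open(path, "rb") as f:
            data = f.read()
        with lock:
            resident_assets[path] = data
        load_requests.task_done()

threading.Thread(target=io_worker, daemon=True).start()

def request_asset(path):
    """Called ahead of time (e.g. when the player nears a new area)."""
    load_requests.put(path)

def get_asset(path):
    """Never blocks the frame; returns None until the asset streams in."""
    with lock:
        return resident_assets.get(path)
```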

Vilian · 7m ago
Nanite is actively hurting performance.
ryao · 3h ago
This is the result of an industry wide problem where technology just is not moving forward as quickly as it used to move. Dennard scaling is dead. Moore’s law is also dead for SRAM and IO logic. It is barely clinging to life for compute logic, but the costs are skyrocketing as each die shrink happens. The result is that we are getting anemic improvements. This issue is visible in Nvidia’s graphics offerings too. They are not improving from generation to generation like they did in the past, despite Nvidia turning as many knobs as they could to higher values to keep the party going (e.g. power, die area, price, etcetera).
vrighter · 7h ago
Twice as fast, but asked to render 4x the pixels. Do the math.
SlowTao · 3h ago
Well, you see... I got nothing.

The path nowadays is to use all kinds of upscaling and temporal-detail junk that is actively recreating late-90s LCD blur. Cool. :(

teamonkey · 6h ago
This article shows how great a leap there was between previous console generations.

https://www.gamespot.com/gallery/console-gpu-power-compared-...

treyd · 7h ago
> Or the game engines have become super bloated.

"Bloated" might be the wrong word to describe it, but there's some reason to believe that the dominance of Unreal is holding performance back. I've seen several discussions about Unreal's default rendering pipeline being optimized for dynamic realtime photorealistic-ish lighting with complex moving scenes, since that's much of what Epic needs for Fortnite. But most games are not that and don't make remotely effective use of the compute available to them because Unreal hasn't been designed around those goals.

TAA (temporal anti-aliasing) is an example of the kind of postprocessing effect that gamedevs are relying on to recover performance lost in unoptimized rendering pipelines, at the cost of introducing ghosting and loss of visual fidelity.
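For the unfamiliar, the core of TAA is just a running average over jittered frames. A heavily simplified toy sketch (real TAA reprojects the history buffer with motion vectors and clamps it, which is exactly where the ghosting tradeoffs live):

```python
import numpy as np

ALPHA = 0.1  # blend factor; real implementations use roughly 0.05-0.2

def taa_resolve(history, current, alpha=ALPHA):
    # Exponential moving average over frames. The low alpha smooths
    # aliasing over time -- and causes ghosting when history goes stale.
    return (1.0 - alpha) * history + alpha * current

# Simulate a static edge at pixel 4, sampled with sub-pixel jitter.
rng = np.random.default_rng(0)
history = np.zeros(8)
for frame in range(64):
    edge_pos = 4.0 + rng.uniform(-0.5, 0.5)
    current = (np.arange(8) < edge_pos).astype(float)
    history = taa_resolve(history, current)

print(np.round(history, 2))  # the edge pixel settles near 0.5: antialiased
```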

gmueckl · 6h ago
This is a very one-sided perspective on things. Any precomputed solution to lighting comes with enormous drawbacks across the board. The game needs to ship the precomputed data when storage is usually already tight. The iteration cycle for artists and level designers sucks when lighting is precomputed: they almost never see accurate graphics for their work while they are iterating, because rebaking takes time away from their work. Game design becomes restricted by those limitations, too. You can't even think of having the player randomly rearrange big things in a level (e.g. building or tearing down a house), because the engine can't do it. Who knows what clever game mechanics are never thought of because of these types of limitations?

Fully dynamic interactive environments are liberating. Pursuing them is the right thing to do.

andrekandre · 1h ago

> Fully dynamic interactive environments are liberating. Pursuing them is the right thing to do.

Great video from Digital Foundry that goes into that (for Doom: The Dark Ages):

https://www.youtube.com/watch?v=Ed4vNNQwCDU

mikepurvis · 7h ago
In principle, Epic's priorities for Unreal should be aligned with a lot of what we've seen in the PS3/4/5 generations as far as over-the-shoulder third-person action-adventure games go.

I mean, look at Uncharted, Tomb Raider, Spider-Man, God of War, TLOU, HZD, Ghost of Tsushima, Control, Assassin's Creed, Jedi: Fallen Order / Survivor. Many of those games were not made in Unreal, but they're all stylistically well suited to what Unreal is doing.

kridsdale1 · 6h ago
I agree. UE3 was made for Gears of War (pretty much), and as a result the components were there to make Mass Effect.
babypuncher · 6h ago
TAA isn't a crutch being used to prop up poor performance; it's an optimization to give games anti-aliasing that doesn't suck.

Your other options for AA are:

* Supersampling. Rendering the game at a higher resolution than the display and downscaling it. This is incredibly expensive.

* MSAA. This samples ~~vertices~~ surfaces more than once per pixel, smoothing over jaggies. This worked really well back before we started covering every surface with pixel shaders. Nowadays it just makes pushing triangles more expensive with very little visual benefit, because the pixel shaders still run at 1x scale and thus still alias.

* Post-process AA (FXAA, SMAA, etc.). These are post-process shaders applied to the whole screen after the scene has been fully rendered. They often just use a cheap edge-detection algorithm and try to blur the edges it finds. I've never seen one that was actually effective at producing a clean image, as they rarely catch all the edges and do almost nothing to alleviate shimmering.

I've seen a lot of "tech" YouTubers try to claim TAA is a product of lazy developers, but not one of them has been able to demonstrate a viable alternative antialiasing solution that solves the same problem set with the same or better performance. Meanwhile TAA and its various derivatives like DLAA have only gotten better in the last 5 years, alleviating many of the problems TAA became notorious for in the latter '10s.
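For concreteness, a toy in the spirit of that post-process family (my own sketch, nowhere near real FXAA, which does careful sub-pixel and edge-direction work, but it shows the failure mode: anything the edge detector misses still shimmers):

```python
import numpy as np

def postprocess_aa(img, threshold=0.1):
    """Cheap edge detection, then blur only the detected edges."""
    img = img.astype(float)
    # Gradient magnitude from simple neighbor differences.
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    edges = (gx + gy) > threshold
    # 3x3 box blur, applied only where an edge was detected.
    blurred = sum(np.roll(np.roll(img, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return np.where(edges, blurred, img)

# A hard diagonal edge gets its jaggies softened.
y, x = np.mgrid[0:8, 0:8]
print(np.round(postprocess_aa((x > y).astype(float)), 2))
```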

flohofwoe · 6h ago
Erm, your description of MSAA isn't quite correct; it has nothing to do with vertices and doesn't increase vertex-processing cost.

It's more similar to supersampling, but without the higher pixel shader cost (the pixel shader still only runs once per "display pixel", not once per "sample" like in supersampling).

A pixel shader's output is written to multiple (typically 2, 4 or 8) samples, with a coverage mask deciding which samples are written (this coverage mask is all 1s inside a triangle and a combo of 1s and 0s along triangle edges). After rendering to the MSAA render target is complete, an MSAA resolve operation is performed which merges samples into pixels (and this gives you the smoothed triangle edges).
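A toy version of that resolve step, following the description above (a 4x sample buffer for one edge pixel; illustrative only):

```python
import numpy as np

# One edge pixel with 4 MSAA samples. The pixel shader ran ONCE for the
# triangle; its color was written only to the samples the coverage mask
# marks as inside the triangle. Resolve = average the samples.
background = np.array([0.0, 0.0, 0.0])
tri_color = np.array([1.0, 0.5, 0.2])  # single pixel-shader invocation

samples = np.tile(background, (4, 1))
coverage = np.array([True, True, False, False])  # triangle covers 2 of 4
samples[coverage] = tri_color

print(samples.mean(axis=0))  # [0.5 0.25 0.1] -- a smoothed edge pixel
```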

wtallis · 6h ago
> solves the same problem set with the same or better performance

The games industry has spent the last decade adopting techniques that misleadingly inflate the simple, easily-quantified metrics of FPS and resolution, by sacrificing quality in ways that are harder to quantify. Until you have good metrics for quantifying the motion artifacts and blurring introduced by post-processing AA, upscaling, and temporal AA or frame generation, it's dishonest to claim that those techniques solve the same problem with better performance. They're giving you a worse image, and pointing to the FPS numbers as evidence that they're adequate is focusing on entirely the wrong side of the problem.

That's not to say those techniques aren't sometimes the best available tradeoff, but it's wrong to straight-up ignore the downsides because they're hard to measure.

cubefox · 6h ago
Yeah. The only problem is that overly aggressive TAA implementations blur the whole frame during camera rotation. Even better than standard TAA is a combination of TAA and temporal upscaling, called TSR in Unreal. Better still is the same system performed by an ML model, e.g. DLSS, though this requires special inference hardware inside the GPU.

In the past, MSAA worked reasonably well, but it was relatively expensive, didn't apply to all forms of high-frequency aliasing, and it doesn't work with the modern rendering paradigm anyway.

silisili · 7h ago
AFAIK, this generation has been widely slammed as a failure due to the lack of new blockbuster games. Most things that came out were either also for the PS4, or remasters of PS4 games.

There have been a few decent-sized games, but nothing at grand scale that I can think of, until GTA6 next year.

jayd16 · 6h ago
There were the little details of a global pandemic and interest rates tearing through timelines and budgets.
ErneX · 6h ago
GTA VI is going to be a showcase on these consoles.
LorenDB · 8h ago
If the PlayStation contributions are good enough, maybe the RDNA4 -> RDNA5 jump will be just as big as RDNA3 -> RDNA4. As long as they get the pricing right, anyway.
monster_truck · 6h ago
We've known this for a while; it's an extension of the upscaling and frame-generation work AMD has already done in conjunction with Sony for FSR 3 and, to a much greater extent, FSR 4. Previous articles have also highlighted their shared focus on BVH optimizations.
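For readers who haven't met the term: a BVH (bounding volume hierarchy) is the tree structure ray tracers use to skip geometry a ray can't hit, which makes it a natural target for this kind of optimization work. A minimal illustrative sketch of the idea (real GPU BVHs are flattened, wide, and traversed in hardware):

```python
import numpy as np

class Node:
    def __init__(self, lo, hi, children=(), prims=()):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        self.children, self.prims = children, prims

def ray_hits_box(origin, inv_dir, lo, hi):
    # Standard slab test against an axis-aligned bounding box.
    t1, t2 = (lo - origin) * inv_dir, (hi - origin) * inv_dir
    return np.maximum(t1, t2).min() >= max(np.minimum(t1, t2).max(), 0.0)

def traverse(node, origin, inv_dir, hits):
    if not ray_hits_box(origin, inv_dir, node.lo, node.hi):
        return                   # prune this whole subtree
    hits.extend(node.prims)      # leaf: collect candidate primitives
    for child in node.children:
        traverse(child, origin, inv_dir, hits)

# Two leaves under one root; a ray along +x only reaches leaf A.
leaf_a = Node([0, 0, 0], [1, 1, 1], prims=["triangle A"])
leaf_b = Node([0, 5, 0], [1, 6, 1], prims=["triangle B"])
root = Node([0, 0, 0], [1, 6, 1], children=(leaf_a, leaf_b))

hits = []
direction = np.array([1.0, 1e-9, 1e-9])  # epsilon avoids divide-by-zero
traverse(root, np.array([-1.0, 0.5, 0.5]), 1.0 / direction, hits)
print(hits)  # ['triangle A'] -- leaf B's triangles are never tested
```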
erulabs · 7h ago
Excited to see how the software support for UDNA1 works out. Very hopeful we'll see some real competition to Nvidia soon in the datacenter. Unfortunately I think the risk is quite high: if AMD burns developers again with poor drivers and poor support, it's hard to see how they'll be able to shake the current stigma.
martinald · 5h ago
Take this with a pinch of salt, but the most recent ROCm release installed out of the box on my WSL2 machine and worked first time with llama.cpp. I even compiled llama.cpp from source with zero issues. That has never happened in my 5+ years of having AMD GPUs. Every other time I've tried this it has either failed and required arcane workarounds, or just not worked at all (including running on 'real' Linux).

I feel like finally they are turning the corner on software and drivers.

ZenithExtreme · 8h ago
AMD’s next-gen GPUs may have some PlayStation tech inside.
smcl · 6h ago
It looks like all of your comments are low-effort summaries like this. What’s going on here? Or is this a bot…
diggan · 5h ago
They're summarizing the submissions they're making. All of the summary comments are on their own submissions.
stanac · 7h ago
I don't think he is employed by Sony; he works as a consultant for them. So both Sony's PS4/PS5 and AMD's GPUs have his tech inside.
mikepurvis · 7h ago
So you're right, though I would never have guessed: in the PS5 hype cycle he gave that deep-dive architecture presentation looking for all the world like a Sony spokesperson.
brcmthrowaway · 4h ago
Who is a better computer architect, Mark Cerny or Anand Shimpi?
wmf · 3h ago
Did we ever hear what Anand does at Apple?
lofaszvanitt · 5h ago
Yes, but what will make use of it when there are so few games on the platform in the current PS generation?
shmerl · 5h ago
When will Sony support Vulkan on PS?
departure4885 · 4h ago
Why would they? They already have two proprietary graphics APIs of their own: GNM and GNMX.
shmerl · 4h ago
I'd ask why wouldn't they. I'm not a fan of NIH or of reinventing the wheel.
departure4885 · 2h ago
Because if they write their own, they get to own the political/bureaucratic portion of the problem. For better or worse, they don't have to deal with the Khronos Group, and they get to optimize their APIs directly against their research with AMD.
shmerl · 2h ago
That still doesn't make NIH a better approach. NIH is really a dinosaur idea when it comes to technology like this.