NB: process feature size does not equal transistor size. Process feature size doesn't even equal process feature size.
dist-epoch · 1h ago
You also need space for wires, ..., etc, right? It's not just transistors.
CalChris · 1h ago
The wires didn't fit on the back of the envelope.
a_wild_dandan · 58m ago
I love this retort and I'm stealing it.
amelius · 44m ago
The wires run over the transistors.
ksec · 3h ago
I heard there is still trouble buying consumer-grade Nvidia GPUs. At this point I am wondering if it is gaming market demand, AI, or simply a supply issue.
On another note, I am waiting for Nvidia's entry into CPUs. At some point down the line I expect the CPU will be less important (relatively speaking), and Nvidia could afford to throw a CPU into the system as a bonus, especially when we are expecting the ARM X930 to rival Apple's M4 in terms of IPC. CPU design has become somewhat of a commodity.
Incipient · 3h ago
My understanding is that it's the AI demand, and the willingness to pay crazy money per wafer, that makes consumer GPUs a significantly less attractive product to produce.
I don't have really solid evidence, just semi-anecdotal/semi-reliable internet posts:
Eg. https://www.tomshardware.com/tech-industry/more-than-251-mil...
Nvidia as a whole has been fairly anti-consumer recently with pricing, so I wouldn't be banking on them for a great CPU option. Weirdly, Intel is in the position where they have to prove themselves, so hopefully they'll give us some great products in the next 2-5 years, if they survive (think the old lead-up-to-Ryzen era for AMD).
jonas21 · 2h ago
> I am waiting for Nvidia's entry to CPU.
Haven't they already started doing this with Grace and GB10?
- https://www.nvidia.com/en-us/data-center/grace-cpu/
- https://nvidianews.nvidia.com/news/nvidia-puts-grace-blackwe...
Their Grace datacenter CPU is basically a chip where they put down all the LPDDR5 memory controllers (albeit curiously slow ones), NVLink, and PCIe I/Os they needed around the perimeter, and then filled in the interior with boring off-the-shelf Arm cores. It's basically an IO and memory expander that happens to run Linux.
GB10, when it ships, might be more interesting, since it'll go into systems that need to support use cases other than merely feeding ML workloads to a big GPU. But it sounds like the CPU chiplet, at least, was more or less outsourced to MediaTek.
xl-brain · 1h ago
The Micro Center in my neighborhood has hundreds of 5090s in stock. I'm not sure it's as hard as it used to be.
dist-epoch · 1h ago
Why doesn't Nvidia also build something like Google's TPU, a systolic array processor? Less programmable, but more throughput/power efficiency?
It seems there is a huge market for inference.
AlotOfReading · 1h ago
Nvidia tensor cores are small systolic arrays. They'd have to throw out a lot of their ecosystem investments and backwards compatibility to make effective use of them as the main GPU compute, and there's really no need given how competitive their chips are right now.
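For readers unfamiliar with the term, the systolic-array idea behind TPUs (and, at small scale, tensor cores) can be sketched as a toy simulation. This is my own illustrative model, not Nvidia's or Google's design: each processing element (PE) holds one output value, A operands flow rightward, B operands flow downward, and inputs are time-skewed so matching operands meet at each PE.

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy output-stationary systolic array for square matrices.

    PE (i, j) accumulates C[i, j]; A values flow right, B values flow
    down, and inputs are skewed in time so the operands for the same
    reduction index k arrive at each PE on the same cycle.
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    a_reg = np.zeros((n, n))  # A value currently held in each PE
    b_reg = np.zeros((n, n))  # B value currently held in each PE
    for t in range(3 * n - 2):  # cycles for all skewed operands to drain
        # One cycle: every value advances one PE right / one PE down.
        # (np.roll wraps around, but edge slots are overwritten below.)
        a_reg = np.roll(a_reg, 1, axis=1)
        b_reg = np.roll(b_reg, 1, axis=0)
        # Inject skewed operands: row i of A and column j of B are
        # delayed by i and j cycles respectively; pad with zeros.
        for i in range(n):
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
        for j in range(n):
            k = t - j
            b_reg[0, j] = B[k, j] if 0 <= k < n else 0.0
        # Every PE performs one multiply-accumulate per cycle.
        C += a_reg * b_reg
    return C
```

The key property is that an n-by-n array finishes an n-by-n matmul in O(n) cycles with purely local, nearest-neighbor data movement, which is where the power-efficiency argument comes from.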
aurareturn · 1h ago
> Less programmable, but more throughput/power efficiency?

I also wonder the same. It'd make sense to sell two categories of chips:
Traditional GPUs like Blackwell that can do anything and have backwards compatibility.
Less programmable, more ASIC-like inference chips like Google's TPUs. The inference market is going to be multiple times bigger than training soon.
https://resources.nvidia.com/en-us-blackwell-architecture
Blackwell uses the TSMC 4NP process. It has two dies. A very back-of-the-envelope estimate:
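One way to run that kind of estimate (my figures, not necessarily the parent's: Nvidia publicly quotes roughly 208 billion transistors for B200 across two dies, each near the ~800 mm² reticle limit):

```python
# Back-of-envelope transistor density for Blackwell (B200).
# Assumed figures: ~208e9 transistors across two ~800 mm^2 dies.
transistors = 208e9
die_area_mm2 = 2 * 800  # two dies near the reticle limit

density = transistors / die_area_mm2  # transistors per mm^2
print(f"{density / 1e6:.0f} MTr/mm^2")  # ~130 million per mm^2

# Naive packing of (4 nm)^2 squares, for contrast with the node name:
naive = (1e-3) ** 2 / (4e-9) ** 2  # (1 mm)^2 / (4 nm)^2
print(f"naive '4 nm' packing: {naive:.1e} per mm^2")
```

The two results differ by a factor of a few hundred, which is the sibling comment's point: the "4 nm" marketing name is nowhere near the actual transistor pitch.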