Nvidia is dominating the S&P 500 more than any company in at least 44 years

Terretta · 8/9/2025, 6:36:16 PM · sherwood.news

Comments (6)

tfwnopmt · 3h ago
Why is AT&T's P/E ratio 0 in that chart?
bigyabai · 3h ago
I'm shocked to see almost zero discussion of CUDA and the Nvidia GPU architecture's role in this. There's been no about-face from the industry in response to Nvidia's success, just reinvestment in the same broken raster architectures that are fast becoming second-class citizens.

It's doubly funny, because Nvidia never expressly tried to stop competitors either. Working with Khronos, Nvidia proved they weren't afraid to build the next CUDA-killer, even in collaboration with the industry. But we'd sooner get working SPIR-V, because gaming is a much more important market than democratized compute.

This will all be very confusing and hard to explain to people in the future. Why was Nvidia rich? Because nobody else wanted the money, I guess.

SilverElfin · 3h ago
I know little about how this works, but why don’t others just make a CUDA equivalent? Like Intel or AMD?
bigyabai · 3h ago
The short answer is, it takes a lot of long-term investment in the software and hardware. For many of Nvidia's competitors, that money was better spent on marketing or direct-to-consumer products; the real "value" of CUDA was dubious prior to crypto and ML.

The long answer is... well, I'm not particularly qualified to explain that either. Nvidia has been working on CUDA for nearly two decades now, and that head start brings advantages beyond just platform maturity. Nvidia has shipped CUDA-capable hardware in most of its GPUs since the late 2000s, which means almost every Nvidia GPU (even secondhand ones) supports some level of compute capability. That compute is orchestrated via CUDA at the software level, which is also largely backwards/forwards compatible for most operations, in addition to being highly scalable. For many operations, you could take the same code you run on a server and run it on the Tegra chip of a Nintendo Switch, or on a Jetson developer board.
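As a rough sketch of that portability (a hypothetical example; the array size and launch config here are made up), the same source compiles unchanged for a datacenter card or a Jetson-class Tegra, with only nvcc's -arch flag differing:

  // Minimal CUDA sketch: identical source targets an A100 (-arch=sm_80)
  // or a Jetson-class Tegra (-arch=sm_53) purely via the nvcc flag.
  #include <cstdio>
  #include <cuda_runtime.h>

  __global__ void add(const float *a, const float *b, float *c, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
      if (i < n) c[i] = a[i] + b[i];                  // bounds guard
  }

  int main() {
      const int n = 1 << 20;                     // 1M floats, arbitrary size
      float *a, *b, *c;
      cudaMallocManaged(&a, n * sizeof(float));  // unified memory works on
      cudaMallocManaged(&b, n * sizeof(float));  // both discrete GPUs and
      cudaMallocManaged(&c, n * sizeof(float));  // Tegra-style shared DRAM
      for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }
      add<<<(n + 255) / 256, 256>>>(a, b, c, n); // 256 threads per block
      cudaDeviceSynchronize();                   // wait for the kernel
      printf("c[0] = %.1f\n", c[0]);             // expect 3.0
      cudaFree(a); cudaFree(b); cudaFree(c);
  }

Unified memory is part of why the same code maps to both worlds: the runtime handles placement whether the GPU has its own VRAM or shares DRAM with the CPU, as on Tegra.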

AMD, Intel, or Apple could feasibly chase this golden goose, but it's a lot of long-term investment that still sacrifices their consumer appeal. AMD has the most pressure on them, so they're pushing hard on ROCm as a simplified compute layer for certain (mostly ML) acceleration to tide users over. Intel has bigger fish to fry, so they're generally not interested in burning $X billion on a market they can't compete in. Apple is too committed to the consumer market for it to be worthwhile; additionally, they lack the hardware and software interconnect technology to compete with Nvidia's datacenter products. Really it's only AMD in the running, though things could change.

SilverElfin · 2h ago
I’m not enough of a tech person to understand this, to be honest. I just see investment articles talk about CUDA and keep wondering why others don’t make something compatible but cheaper. Maybe that is naive? It sounds like you are saying AMD could do this in theory, even though Intel or Apple can’t?
nikonyrh · 25m ago
AMD already has Composable Kernels[1], and supports, for example, Triton[2]. There is also HIP[3], along with tools to automatically convert CUDA code to HIP. But since CUDA is the de facto standard, there is always friction in using something else (unless you also need to support the AMD stack).

Making something merely CUDA-compatible is non-trivial, and since Nvidia decides CUDA's direction and new features, the alternatives will always lag behind. There are also major hardware differences between Nvidia and AMD today, which can make highly optimized CUDA code inefficient or even buggy, as the sketch below illustrates.

  [1] https://github.com/ROCm/composable_kernel?tab=readme-ov-file#composable-kernel
  [2] https://github.com/triton-lang/triton?tab=readme-ov-file#triton
  [3] https://github.com/ROCm/HIP?tab=readme-ov-file#what-is-this-repository-for
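
To make that concrete, here is a hypothetical sketch (not from any real codebase, and the exact intrinsic mapping varies by ROCm version): the conversion tools mostly do mechanical renames, but code that bakes in Nvidia's 32-lane warp can translate cleanly and still misbehave on AMD's 64-lane wavefronts.

  // What the mechanical CUDA-to-HIP translation looks like (illustrative):
  //   cudaMalloc(&p, bytes);    ->  hipMalloc(&p, bytes);
  //   cudaMemcpy(d, h, n, ...)  ->  hipMemcpy(d, h, n, ...);
  //   kernel<<<grid, block>>>   ->  unchanged (HIP keeps the syntax)
  //
  // Where it goes wrong: a warp reduction written against Nvidia's
  // 32-lane warps. Assume it's launched with exactly one warp/wavefront.
  __global__ void warp_sum(const float *in, float *out) {
      float v = in[threadIdx.x];
      // Hardcoded for 32 lanes: offsets 16,8,4,2,1 fold lane i+offset into
      // lane i, leaving the full sum in lane 0. On a 64-lane AMD wavefront,
      // lanes 32..63 are never folded into lane 0, so *out silently holds
      // only half the sum.
      for (int offset = 16; offset > 0; offset >>= 1)
          v += __shfl_down_sync(0xffffffffu, v, offset);
      if (threadIdx.x == 0) *out = v;
  }
  // The portable fix is to derive the loop bound from the built-in
  // warpSize rather than hardcoding 16, but plenty of hand-tuned CUDA
  // bakes in the 32 for speed.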