> Another possibility that has long been on my personal list of “future articles to write” is that the future of computing may look more like used cars. If there is little meaningful difference between a chip manufactured in 2035 and a chip from 2065, then buying a still-functional 30-year-old computer may be a much better deal than it is today. If there is less of a need to buy a new computer every few years, then investing a larger amount upfront may make sense – buying a $10,000 computer rather than a $1,000 computer, and just keeping it for much longer or reselling it later for an upgraded model.
This seems improbable.
50-year-old technology works because 50 years ago, transistors were micron-scale.
Nanometer-scale nodes wear out much more quickly. Modern GPUs have a rated lifespan in the 3-7 year range, depending on usage.
One of my concerns is that we're reaching a point where the loss of a fab due to a crisis -- war, natural disaster, etc. -- may cause systemic collapse. You can plot the lifespan of chips against the time to bring a new fab online. Those lines are just about at the crossing point; modern electronics would start to fail before we could produce more.
skissane · 20m ago
> Nanometer-scale nodes wear out much more quickly. Modern GPUs have a rated lifespan in the 3-7 year range, depending on usage.
I recently bought a new MacBook, my previous one having lasted me for over 10 years. The big thing that pushed me to finally upgrade wasn’t hardware (which as far as I could tell had no major issues), it was the fact that it couldn’t run latest macOS, and software support for the old version it could run was increasingly going away.
The battery and keyboard had been replaced, but (AFAIK) the logic board was still the original.
sapiogram · 12m ago
> Modern GPUs have a rated lifespan in the 3-7 year range, depending on usage.
That statement absolutely needs a source. Is "usage" 100% load 24/7? What is the failure rate after 7 years? Are the failures unrepairable, i.e. not just a broken fan?
Symmetry · 1h ago
> In a transistor, the voltage of the gate lying on top of the channel controls the conductivity of the channel beneath it, either creating an insulator or “depletion region”, or leaving the silicon naturally conductive.
That's... not how this works at all. As the gate voltage rises, the depletion region (where the positive or negative charge carriers of p- or n-doped silicon are pushed out) extends far enough that, at the threshold voltage, inversion happens: the opposite sort of charge carrier starts to accumulate along the oxide and allows conduction. By surrounding the channel there's less room for a depletion region, so inversion happens at lower voltages, leading to higher performance. Same as people used to do with silicon-on-insulator.
The Wikipedia article has nice diagrams: https://en.wikipedia.org/wiki/MOSFET
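To make the threshold/inversion behavior concrete, here is a minimal sketch of the textbook long-channel square-law model (the Vth and k values are made-up illustrative numbers, not a real process):

```python
def drain_current(vgs, vds, vth=0.7, k=2e-4):
    """Idealized long-channel MOSFET square-law model.

    Below threshold the channel is not inverted and (ignoring
    subthreshold leakage) no current flows; above threshold the
    inversion layer along the oxide conducts.
    """
    if vgs <= vth:
        return 0.0  # no inversion layer, channel is off
    vov = vgs - vth  # overdrive voltage
    if vds < vov:
        # triode (linear) region
        return k * (vov * vds - vds**2 / 2)
    # saturation region
    return k * vov**2 / 2

# A lower threshold voltage (as with gate-surrounds-channel geometries)
# means more overdrive, hence more current, at the same supply voltage:
print(drain_current(1.0, 1.0, vth=0.7))  # ~9e-06 A
print(drain_current(1.0, 1.0, vth=0.4))  # ~3.6e-05 A
```

This is only the first-order picture; real short-channel devices need far more elaborate models.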
More parallelism. Lower clocks. More L1 cache per CPU and fewer disk stalls. Are there plenty of tricks left in the sea when clock speed goes flat?
Dual-ported TCAM memory isn't getting faster, the internet routing table has passed 1,000,000 prefixes, and IPv6 addresses are 4 times bigger. Memory speed is a real issue.
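A toy sketch of why prefix count and address width hurt: a binary-trie longest-prefix match touches one node per address bit, and each step can be a cache miss -- up to 128 steps for IPv6 versus 32 for IPv4 (illustrative Python, nothing like real router hardware):

```python
class TrieNode:
    __slots__ = ("children", "route")

    def __init__(self):
        self.children = [None, None]  # one child per bit value
        self.route = None             # next hop, if a prefix ends here

def insert(root, prefix_bits, route):
    """prefix_bits: a string like '1011' (network prefix, MSB first)."""
    node = root
    for b in prefix_bits:
        i = int(b)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.route = route

def longest_prefix_match(root, addr_bits):
    """Walk one node per address bit, remembering the last route seen."""
    node, best = root, None
    for b in addr_bits:
        node = node.children[int(b)]
        if node is None:
            break
        if node.route is not None:
            best = node.route  # a longer matching prefix wins
    return best

root = TrieNode()
insert(root, "10", "coarse route")
insert(root, "1011", "more specific")
print(longest_prefix_match(root, "10110000"))  # more specific
print(longest_prefix_match(root, "10000000"))  # coarse route
```

TCAM sidesteps the walk by matching all prefixes in parallel, which is exactly why its capacity and speed limits bite.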
b0a04gl · 1h ago
The law delivered enough headroom that systems moved on. Once compute got cheap to rent and scale, there was less pressure to push frequency or density every cycle. So focus shifted. The gains kept coming, just not in the same shape.
orefalo · 26m ago
factor in power usage reduction, and it still works
kristianp · 1h ago
> Roughly every five years, the cost to build a factory for making such chips doubles, and the number of companies that can do it halves.
So we may have Apple and NVidia as the only ones that can afford to build a fab. Edit, correction, Microsoft is the current number 2 company by market cap.
mepian · 24m ago
They can't afford to tank their margins like that, investors would be rather unhappy.
mettamage · 2h ago
Is there also a law for how much more difficult it becomes to sustain Moore's law?
Ultimately, there's a cap. As far as I know, the universe is finite.
Symmetry · 1h ago
Landauer's principle governs how efficient computation can be, but we might have to transition to something other than transistors to hit that limit.
https://en.wikipedia.org/wiki/Landauer%27s_principle
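For scale, the bound at room temperature is tiny; a quick back-of-the-envelope (pure arithmetic, the 1e18 bits/s erasure rate is a made-up, loosely datacenter-scale number):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0           # room temperature, K

# Landauer's bound: minimum energy to erase one bit of information
e_bit = K_B * T * math.log(2)
print(f"{e_bit:.3e} J per bit")       # 2.871e-21 J per bit

# Erasing 1e18 bits every second at the Landauer limit dissipates only:
print(f"{e_bit * 1e18:.3e} W")        # ~2.9e-03 W -- far below real chips
```

Real logic today dissipates many orders of magnitude more per operation than this floor, which is why the limit is about eventual headroom rather than a near-term wall.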
I don't think we know that. We don't even know how big the universe really is - we can only see so far. All we have is a best guess.
There may also be a multiverse out there (or right beside us).
And, creating universes might be a thing.
... I don't expect Moore's law to hold for ever either, but I don't believe in creating unnecessary caps.
matthewdgreen · 2h ago
I think you could very easily give a cap that hinges on our current understanding of basic physical limitations, and it would arrive surprisingly soon.
mandmandam · 2h ago
That's the thing about Moore's law - it has assumed from the beginning that our 'current understanding of basic physical limitations' is incomplete, and been proven correct on that front many times over.
adrianN · 1h ago
Our understanding of basic physical limits seems reasonably good and hasn't changed for a couple of generations. Our understanding of engineering limitations on the other hand is not so good and subject to frequent change.
Teever · 1h ago
I'm not sure I follow. can you elaborate on that?
As I understand it, Moore's Law doesn't address any fundamental physical limitations, other than perhaps an absolute limit on how small an object can be; it's just an observation of the doubling of transistor density over a consistent period of time.
It seems more like an economical or social observation than a physical one to me.
jama211 · 3h ago
Moore’s law has been unsustainable for 20 years; I remember Pentium 4s at 4 GHz. But that hasn’t seemed to matter in terms of real day-to-day performance improvements. This article makes some great points about the scaling cost and the reduced market opportunity for more than 2 or 3 makers in the market, but that’s a trend we’ve seen in every market in the world; to be honest, I’m surprised it took this long to get here.
As interesting as this breakdown of the current state of things is, it doesn’t tell us much we didn’t know or predict much about the future, and that’s the thing I most wanted to hear from an expert article on the subject, even if we can take it with a large pinch of salt.
CalChris · 3h ago
> Roughly every two years, the density of transistors that can be fit onto a silicon chip doubles.
No. Moore's law is not about density. It's just about the number of transistors on a chip. Yes, density increases, but so does die size. Anyways, in Moore's own words:
> The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.
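Either formulation compounds the same way; a quick sketch of the arithmetic, starting from the Intel 4004's roughly 2,300 transistors in 1971 (illustrative, not a fit to real data):

```python
def transistor_count(years, start_count=2300, doubling_period=2.0):
    """Project transistor count under a fixed doubling period.

    start_count=2300 is the Intel 4004 (1971); doubling_period is
    2 years for the popular form of the law, 1 year for Moore's
    original 1965 formulation.
    """
    return start_count * 2 ** (years / doubling_period)

# 1971 -> 2021 at a 2-year doubling: ~77 billion transistors,
# in the same ballpark as the largest chips of that era.
print(f"{transistor_count(50):.3g}")  # 7.72e+10
```

The striking part is how sensitive the result is to the period: at a 1-year doubling the same 50 years would give 2^50, about a million times more.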
> Gordon Moore always emphasized that his “law” was fundamentally rooted in economics, not physics.
Effectively, it was always more of a "marketing law" than an engineering one. Semiconductor chips only had 18-36 months to reap big profits, so Intel tried to stay ahead of that curve.
Symmetry · 1h ago
That's a common misconception. Moore's 1965 paper was about economics, but when the phrase "Moore's Law" was coined in 1975 it was referring to Dennard scaling as a whole.
There are a lot of people working on all the mentioned problems - and on many many more.
Re garage invention: lithography is probably too big an issue for that. It's important to keep in mind that we're currently producing a lot of transistors with today's tech. Any alternative would have to match that (e.g. stamping technologies).
(I work on lithography optics)
kryptiskt · 2h ago
An alternative doesn't have to match all capabilities of the current tech. It "only" has to be competitive in one niche, a la The Innovator's Dilemma. Then it can improve and scale from that beachhead, like when CMOS went from low-power applications to world domination.
noelwelsh · 2h ago
Current GPUs have a comparable number of transistors (92.2 billion in the current NVidia Blackwell according to https://chipsandcheese.com/p/blackwell-nvidias-massive-gpu) to the number of neurons in human brains (about 90 billion according to Wikipedia). Brains consume less energy and do more, though transistors beat them on density. This suggests there are alternative pathways to performing computation that will scale better.
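A very rough per-element energy comparison, using the figures above plus an assumed ~1 kW board power for the GPU (an illustrative guess; real parts vary):

```python
brain_units = 90e9        # neurons, the figure quoted above
brain_power_w = 20.0      # watts, the commonly cited human-brain budget

gpu_transistors = 92.2e9  # Blackwell count, per the linked article
gpu_power_w = 1000.0      # watts; assumed board power for illustration

w_per_neuron = brain_power_w / brain_units
w_per_transistor = gpu_power_w / gpu_transistors

print(f"brain: {w_per_neuron:.1e} W per neuron")        # 2.2e-10
print(f"GPU:   {w_per_transistor:.1e} W per transistor") # 1.1e-08

# Even per element the brain draws less, and a neuron does far more
# per event than a transistor -- though transistors switch many
# orders of magnitude faster. The comparison is loose by design.
```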
jojobas · 2h ago
It takes many transistors to replicate a single neuron, and they work very differently in terms of speed; there is no direct comparison.
noelwelsh · 1h ago
Let me restate. The article is musing about medium-term difficulties on the current pathway for producing computation. I'm musing that perhaps we are on the wrong pathway for producing computation.
BriggyDwiggs42 · 1h ago
This only applies to functions of the brain which we wish to replicate on computers, not to those that computers already outperform us on.