I'm curious about their methodology here. Just from a materials standpoint, an M.2 SSD has much less physical volume than a 3.5" HDD and fewer disparate materials (no moving parts = no coils, bearings, motors...)
Perhaps the embodied carbon of memory chips is higher than I expect. To be cynical, there's probably a lot of corporate spin - Seagate is a HDD company after all.
bmenrigh · 4h ago
Semiconductor manufacturing, especially using EUV, is a staggeringly power-hungry process.
The EUV light production requires more than a megawatt by itself because of all the losses involved.
No doubt this is why they only report on embodied carbon since it’s the one metric they can win on.
PaulKeeble · 3h ago
The thing is that anything electricity-intensive is going to produce less CO2 every year as energy systems transition. It's the use of iron and other necessary fuel consumption that will become the CO2-intense items over the next 5 to 10 years. The energy usage won't be the issue, and the CO2e for production of any item is going to drop quite drastically.
im3w1l · 2h ago
Iron and steel can be made without fossil fuels. It's just more expensive.
wtallis · 2h ago
Is anyone using EUV for SSDs? I don't think it's economical for NAND manufacturing. I'm not sure if SSD controllers are using EUV yet; they usually aren't on leading-edge process nodes.
estebank · 3h ago
I think the interaction of "thing is cheap" and "thing is tiny" breaks all of our heuristics for "thing is difficult and energy intensive to produce".
jillesvangurp · 3h ago
Not all energy is carbon intensive. Using a lot of power is fine as long as you get the power in a sustainable way. That isn't necessarily true for a lot of current production. But that's changing rapidly. Especially in the parts of the world where this stuff is produced.
jonathaneunice · 44m ago
Agree if that power is truly sustainable.
OTOH, if you have solar panels and drive power to the grid, you can sell "credits" to large power users who will use them to "offset" their less-green power sources to appear to get their power in sustainable ways / be carbon-neutral. Kind of an "I'm using dirtier power, but I teamed up with someone who's using fully sustainable power, so it all comes out in the wash!"
PaulKeeble · 2h ago
It will certainly become more fine so long as the grid is transitioning to renewable energy sources, which almost all countries are doing. In the next decade the CO2e intensity of electrical energy usage should be considerably lower than it is now, and CO2e from necessary fuel usage will begin to dominate in industrial processes. Iron, I think, will be a big contributor to this, since at the moment producing it electrically is not viable.
xbmcuser · 2h ago
China, the largest producer of iron and steel, has started restricting coal-based furnaces in favour of electric arc furnaces for steel making, so carbon use from iron and coal will probably come down as well.
https://energyandcleanair.org/publication/turning-point-chin...
It's just going to take a bit longer for iron and steel production to transition; they likely won't bother until we see the full reduction in electricity prices that comes with renewables. They do seem to have a process, it just needs some refinement. The main grid transition is happening first and faster.
This is part of the issue with Seagate's assessment: it's changing rapidly right now. In 5 years' time Taiwan will be on mostly renewable energy and suddenly all of Seagate's iron usage will make them more expensive; then in 10 years' time they'll be using electrically produced iron and it won't be an issue anymore.
eru · 3h ago
If you are part of a large grid, power is fairly fungible (within that grid).
rob_c · 2h ago
Nope.
Again, look at Spain: not all power sources are equal, and just because something is connected to a network doesn't magically mean the load is then spread across all sinks.
If your high school science teacher told you so, they were wrong. It's time to talk about proper grown up electronics in this discussion.
theoreticalmal · 1h ago
The “time to talk about grown up…” is a bit much.
eru · 1h ago
I was including the weasel word 'fairly' in front of 'fungible' exactly because I foresaw such a discussion.
To be slightly more precise and wordy in what I wanted to express:
The original comment said:
> Not all energy is carbon intensive. Using a lot of power is fine as long as you get the power in a sustainable way.
And that's true. You can buy 'green electricity' from the grid. But that mostly just means that the people who don't care get allocated a larger fraction of 'non-green' electricity.
(That works until you lose enough of that buffer of people who don't care where their electricity is coming from that you have more demand for green electricity than is available on the grid. Then you actually need to spin up more green electricity, or raise prices for that 'colour'.)
Yes, there's different demand levels over time (throughout the day, and throughout the year etc), and different demanders need different reliability levels. And all the power sources have different profiles for when they are available, and how easy (or hard or impossible) it is to turn them on or off on short notice.
> If your high school science teacher told you so, they were wrong. It's time to talk about proper grown up electronics in this discussion.
Haha, right.
15123125 · 3h ago
The "not all" argument is very lazy of you. All these production plants are on the grid and they use carbon energy. How much is "not all" ?
rob_c · 2h ago
Absolutely not.
The poster has highlighted a major issue that is often glossed over in terms of actual carbon footprint. Otherwise it's like saying a Chinese coal plant is the same as a Scottish hydro dam, which is very, very wrong.
Yes, nothing is perfect, but running a dam for many, many years in an area with little silt has a very different carbon footprint from a coal plant run at 80% efficiency on strip-mined coal.
amelius · 4h ago
But a spinning harddrive contains electronics too ...
PaulKeeble · 2h ago
A HDD has one controller chip, and that controller is relatively simple compared to the controller chip on an SSD; it's certainly a lot smaller and on a less advanced process because it has to deal with ~200MB/s, not 15,000MB/s. On top of the controller chip, a 2280 NVMe SSD will have 4 or 8 memory chips as well, which are on a recent process and have many, many layers stacked together. Then the SSD will also often have a DRAM chip. An SSD uses substantially more silicon than a HDD does.
reginald78 · 2h ago
Not disputing your broader point necessarily, but don't hard drives also include memory for cache, while DRAM cache has become less and less common on SSDs? (To the point where it is now uncommon on new SSD devices.)
PaulKeeble · 2h ago
Yes, they do. A consumer HDD from Seagate seems to have about 512MB, whereas an SSD from Samsung seems to use 2GB.
cheschire · 3h ago
HDDs use much simpler electronics than SSDs. The controller in a hard drive mostly just moves the read/write head and spins the motor. It doesn’t need to do much processing. These chips are built on older manufacturing processes that are cheaper and less energy intensive.
SSDs though need a fast processor to manage the flash memory for wear leveling, error correction, and keeping track of where everything is written etc. This requires more advanced chips, built on newer process nodes, which take a lot more energy and resources to make.
And SSDs also need extra chips like DRAM and obviously the flash memory itself.
dezgeg · 37m ago
Is the controller really going to be that much simpler on a HDD? On a HDD you still have things like command queuing, caching, checksumming, and bad block remapping, just as with an SSD. Sure, you don't have the flash translation layer, but I'd expect handling the analog signal going to/from the heads to be a processing headache too. I wouldn't be surprised if they have to do temperature compensation and factory and/or runtime calibration of the analog path. Of course much of that will be done in fixed-function custom hardware blocks, but almost certainly so would an SSD for e.g. ECC calculations.
https://spritesmods.com/?art=hddhack&page=3 has some data points on a 10+ year old HDD - 3 ARM9 feroceon cores + Cortex-M3 core. Doesn't sound too simple.
A HDD controller also calculates and checks ECC and en/decodes the user's 0/1-bits into the 0/1-bits actually written on the platter, but at much lower speed (0.3 GB/s vs a 14 GB/s SSD max). A HDD has an extra servo controller, plus an extra 512 MB - 1 GB of DRAM.
WD HDDs of 20 TB+ have some flash inside for metadata offload.
whatevaa · 2h ago
DRAM is optional on NVMe SSDs, which can use host memory instead. DRAM-less SATA SSDs sucked.
diggan · 3h ago
> and fewer disparate materials (no moving parts = no coils,bearings,motors...)
But even so, isn't the lifetime of an SSD less than a HDD's, given the same amount of writes? Especially when you try to have the same amount of storage available.
So say I want 16TB, I can either get 1 16TB SATA drive, or something like 4 4TB SSDs (or 8 2TB) to have the same storage, is the physical volume still less in that case? And, if I write 16TB of data each month to each of these alternatives, which one breaks first and has to be replaced?
PaulKeeble · 2h ago
A typical consumer hard drive is going to last about 5 years, maybe more, but it will wear just from existing. If you wrote 16GB to it a month that won't be a challenge for it.
The SSD, however, will be specified with a life of how many writes it can do. For example, a 990 Pro from Samsung will have a specified life of 1200TB. That is about 76,800 months of operation at 16GB a month; not only is this not a challenging workload for an SSD, but you have 8 of them, so 8 × 76,800 months of operation on average.
I have SSDs that are quite old. My original Intel 80GB G1 still works and still has 80% of its life left, but it's utterly obsolete as it's too small to be useful and 500MB/s on SATA is a bit dated. All the OCZ SSDs died long ago, but the Crucial M4 is still trucking along, and at 512GB it's still useful enough, as is the Western Digital 1TB. I have barely scratched the surface of their usage life: despite my writing 35TB to them, they still have 95% life after 42,000 hours of operation.
So at 16GB of writes monthly they are going to last as long as the silicon and PCB last, which if made well could be decades. They will more than likely become obsolete in speed and size rather than dying.
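As a back-of-the-envelope check of that endurance math (a sketch using the 1200TB rated endurance and the 16GB/month workload quoted above):

```python
# Rough SSD endurance estimate: rated write endurance divided by
# monthly write volume gives months of life per drive.
rated_tbw = 1200                # specified life of the 990 Pro, in TB written
monthly_writes_tb = 16 / 1024   # 16 GB per month, expressed in TB

months_per_drive = rated_tbw / monthly_writes_tb
print(round(months_per_drive))  # 76800 -- millennia at this write rate
```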
diggan · 2h ago
> I have barely scratched the surface of their usage life despite writing 35TB on them they still have 95% life over 42,000 hours of operation.
What are you using these drives for, if I may ask? They seem to barely be used for anything, or I'm an outlier here. Here's an example of the main drive in my desktop/work computer:
Model: Samsung SSD 980 PRO 2TB
Power On Hours: 6,286
Data Written: 117,916,388 [60.3 TB]
I'm just an average developer (I think), although I do bounce around a lot between languages/projects/manage VMs and whatnot, and some languages (Rust) tend to write a lot of data to disk when compiling.
But the ratio difference seems absurd: you have ~0.0008 TB written per hour, while I have ~0.0096, which is a huge difference.
Here's what I ran to get the data, in case others wanna check theirs: https://gist.github.com/victorb/f120f5b9bcc1c04a4c3d0107f633...
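For the curious, those rates fall straight out of the SMART numbers quoted above (a quick sketch; the 35TB/42,000h figures are from the parent comment):

```python
# TB written per power-on hour for the two workloads being compared.
parent_rate = 35 / 42000    # ~35 TB over ~42,000 hours (parent's old drives)
my_rate = 60.3 / 6286       # 60.3 TB over 6,286 hours (the 980 PRO above)

print(f"{parent_rate:.4f} vs {my_rate:.4f} TB/hour, "
      f"roughly a {my_rate / parent_rate:.0f}x difference")
```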
Both of those are game drives, so they tend to sit with the same contents for longer. My boot SSD is at 31TB in 6,183 hours, so quite a bit faster usage than those old drives, but about half your rate. It's very workload-dependent how long SSDs last. 16GB a month is extremely slow, much slower than my usage at ~120GB a day. You could write an SSD's entire contents every day and it would still be working 5 years later, but probably not 10.
ndriscoll · 14m ago
My main OS disk (256 GB SATA) reports 40k power on hours and 27.5 TBW. I've also got a 4 TB nvme drive that's at 16.5k power on hours and 5.7 TBW and an 8 TB SATA drive that's at 18.5k hours and 25 TBW.
fuzzy2 · 1h ago
A server machine with multiple busy databases has 24.6 TiB written in 1349 hours. That’s on ZFS. As such, I’d say your usage patterns appear rather non-average.
Teongot · 4h ago
> To be cynical, there's probably a lot of corporate spin - Seagate is a HDD company after all.
applause
everdrive · 4h ago
Until we start optimizing code, websites, etc., this is a meaningless argument. Computers could use much less carbon than they do, but everyone needs them to be really fast, really powerful, use ray tracing, etc. But what most people do on computers could easily be done on a computer from 20+ years ago: email, chatting, watching videos, word processing. In an alternate reality, the computing advances over time would have all been about efficiency, and code, operating systems, and websites would be written lean to work on much slower devices. We'll never live in that world, so Seagate's argument could potentially be true in a technical sense, but ultimately doesn't matter.
eru · 3h ago
A lot of the computing advances of the last two decades have been about power efficiency: remember, phones are also computers. Laptops and tablets, too.
'Thanks' to limited battery capacity, consumers have been very interested in power efficiency.
PaulKeeble · 2h ago
I still have an old 3930K, which is 6 cores at about 3.6GHz with 16GB of RAM; it was used as a game server for years but it's not on now. It consumes about 110W at idle; there is no GPU of note (a GT 710 or something like that), so it's mostly the CPU and its lack of power control. A newer desktop with 32GB of DDR5 and a 9800X3D, however, will idle at 40W with a lot more drives, a modern GPU, etc. New machines use considerably less power at idle, and when you go back as far as a Pentium 4, those things used 100W all the time just for the CPU, whether in use or not.
Anything since about 9th-gen Core behaves fairly well, as do all the modern-era Ryzen processors. There were definitely some CPUs in the middle that had a bunch of issues with power management and performance issues ramping clock speed up, which was felt on the desktop as latency. For me, the significantly improved power management behaviour has been a major advance of the past 10 generations of CPUs.
nottorp · 3h ago
I don't think the desktop CPU and video chip manufacturers got that memo...
But as the top of this thread said, the most unjustified carbon footprint comes from javascript.
maccard · 2h ago
No, Intel didn't get it. AMD certainly did. An i7-14700K can draw 253W to do 5.4GHz; a 9800X3D can boost to 5.2GHz at 160W. That's pretty close to the top end of desktop CPUs (for home use). As you go down the chain, you'll see huge drops in power usage.
Intel in particular are guilty of taking their mid and low range CPUs and replacing them with neutered low-power cores to try and claw back the laptop market.
nottorp · 2h ago
160 W is less than Intel, but still a lot.
And I bet that even with AMD you get 85% of the performance with 50% of the power consumption on any current silicon...
And 85% of the current performance is a lot.
I have an AMD box that I temperature limited from the BIOS (because I was too lazy to look where the power limits were). It never uses more than 100W and it's fast enough.
eru · 1h ago
Have you heard of 'race-to-idle' or 'race-to-sleep'?
Temporarily allowing performance (and thus power) spikes might make your whole system consume less energy, because it can idle more.
nottorp · 1h ago
You can't race to idle when two out of three websites go amok with your CPU :)
That's like saying communism works in theory, it's just applied wrong.
ZYbCRq22HbJ2y7 · 2h ago
> But as the top of this thread said, the most unjustified carbon footprint comes from javascript.
You think that a programming language has a carbon footprint just from existing? Maybe you could reword your argument.
I'd treat it with a two-orders-of-magnitude (at least) grain of salt.
Timshel · 3h ago
Thanks, there is a graph of embodied cost at the start of page 3 and the figures vary wildly.
The biggest drive at the time was 3840GB, and its production emitted ~425kg.
But if we look at the more prevalent models, the emissions vary wildly: for 512GB, between ~25 and 225kg; for 1024GB, from ~60 to 275kg.
And while the paper starts with the assertion that manufacturing a gigabyte of flash emits 0.16kg of CO2, I expect emissions to be highly dependent on the number of flash modules and not so much on the capacity itself.
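To make that concrete: if emissions really scaled purely with capacity at the paper's 0.16kg/GB, the totals would look like this (a sketch; the per-GB figure is the paper's, the capacities are the models quoted above):

```python
# What embodied carbon "should" be if it scaled linearly with capacity.
per_gb = 0.16  # kg CO2 per GB of flash, the paper's opening figure
for capacity_gb in (512, 1024, 3840):
    print(f"{capacity_gb}GB -> {per_gb * capacity_gb:.0f}kg CO2")
```

Linear scaling would put the 3840GB drive at ~614kg, well above the ~425kg actually reported, which fits the idea that module count matters more than raw capacity.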
GuestFAUniverse · 3h ago
AFAIK a gram of purified silicon has used the equivalent of 75kg of charcoal -- I wish I still had the sources.
So, yeah. Possibly.
Then again: the number of hard drives I need to get a RAID that performs anything close to even the slowest SSD (RAID) is unfathomable.
-> hard drives need more resources and a lot more power relative to performance
PaulKeeble · 2h ago
I suspect that in CO2e terms, if performance is the limitation of what you are doing, then an SSD is going to emit less CO2e because you can do it with fewer devices. However, if you are just looking for space and performance isn't an issue, then the HDD will be the lower-CO2e solution for now.
I think this will change, however, as more of the grid uses renewable energy; there is nothing intrinsic in the silicon process that must emit CO2. It's all electricity-based, and while it uses a lot of energy it doesn't require burning fuel like making iron does (at the moment).
GuestFAUniverse · 1h ago
Agree. These were pretty old numbers and are getting irrelevant by the day.
And as I tried to argue: for performance SSD are a no-brainer in all dimensions.
Space vs. cost: you don't have to tell me. We look after more than a PB in-house. Guess what? Still mostly on HDDs.
bob1029 · 4h ago
I think the numbers might be broken?
If it costs you 5000 kg of CO2 to manufacture the SSD, you will never recoup in operational terms no matter how it's sliced.
A modern NGCC power plant emits 400-500kg of CO2 per megawatt-hour of energy produced. It would take well over 100 years of operation to begin approaching this at a consumption level of ~10W.
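The arithmetic behind that, sketched out (450kg/MWh is just the midpoint of the NGCC range; the 5000kg and ~10W figures are the ones being questioned above):

```python
# Years of continuous ~10 W draw needed to emit 5000 kg of CO2,
# assuming every watt-hour comes from a modern NGCC plant.
embodied_kg = 5000                    # manufacturing carbon being questioned
ngcc_kg_per_mwh = 450                 # midpoint of the 400-500 kg/MWh range
mwh_per_year = 10 * 24 * 365 / 1e6    # 10 W continuous, in MWh per year

kg_per_year = mwh_per_year * ngcc_kg_per_mwh  # ~39 kg CO2 per year
print(round(embodied_kg / kg_per_year))       # ~127 years
```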
gnfargbl · 4h ago
I thought so too at first, but after a little bit of research, those really do appear to be the correct numbers. For instance, a detailed report [1] into one of Seagate's 1.92TB SSDs suggests that it consumed >200kg CO2 per TB. Eye-opening, really.
[1] https://www.seagate.com/files/www-content/global-citizenship...
One of the estimates my solar system gives me is the amount of CO2 saved by using the system. I save about 6 tonnes a year with a 5.5kWp system. That is about 2.2 tonnes of coal not burnt, but it's also only about 3 trees' worth!
It's quite staggering how much CO2 our energy use is actually producing. Kilograms of CO2 for an item's manufacture is pretty normal.
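A quick check of the coal equivalence (the conversion factor is my own assumption, not from the comment: coal is roughly 75% carbon, and carbon burns to CO2 at a 44/12 mass ratio):

```python
# Tonnes of coal whose combustion releases ~6 t of CO2.
co2_saved_t = 6.0
co2_per_t_coal = 0.75 * 44 / 12   # ~2.75 t CO2 per t of coal burnt

print(f"~{co2_saved_t / co2_per_t_coal:.1f} t of coal")  # ~2.2 t
```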
voidUpdate · 4h ago
What load are these drives under? Is that just idle consumption, or constantly actively reading/writing?
bmenrigh · 4h ago
They only estimated “embodied carbon” which is all the carbon used to make the thing.
They conveniently ignore the "operational carbon" of using them, which is going to be way higher than for an SSD/NVMe drive. They're way slower, which keeps the rest of the computer waiting and using power for much longer.
user32489318 · 4h ago
They also failed to adjust for expected lifetime of a product.
bmenrigh · 4h ago
Yeah “manufacturing carbon per TB” has got to be one of the most creative units I’ve seen used in corporate marketing bullshit.
voidUpdate · 4h ago
The third column of the table is CO2/TB/Year, doesn't that mean how much they use per year, which would depend on how they're being used?
bmenrigh · 4h ago
No, they just assume drives last 5 years so they divide the carbon by 5.
jbverschoor · 4h ago
Curious to see what WD / SanDisk has to say about this
nialse · 3h ago
This is per TB, right? Not per IOPS?
lupusreal · 4h ago
I find it hard to imagine a situation in which carbon footprint could be a tie breaker in a storage purchasing decision.
userbinator · 4h ago
Because they don't wear out with each write and need to be replaced regularly like current SSDs do? Oh wait, we're talking about Seagate disks...
Zekio · 4h ago
Well, unlike SSDs, you need to regularly replace HDDs no matter the workload :D
rapestinians · 3h ago
Now even storage vendors play the stupid game of climate BS.
ChrisNorstrom · 2h ago
I can't wait to see their reaction one day when they realize "Climate Change" is actually caused by the sun's increased output and upcoming scheduled micronova. I saw an article floating around that admitted there's no man made climate change, it's actually the sun, which was pretty brave.