GigaByte CXL memory expansion card with up to 512GB DRAM

100 points by tanelpoder | 60 comments | 9/6/2025, 6:17:06 PM | gigabyte.com ↗

Comments (60)

Twirrim · 1d ago
CXL is going to be really interesting.

On the positive side, you can scale out memory quite a lot, fill up PCI slots, even have memory external to your chassis. Memory tiering has a lot of potential.

On the negative side, you've got latency costs to swallow. You don't get distance from the CPU for free (there's a reason the memory on your motherboard sits as close to the CPU as practical): https://www.nextplatform.com/2022/12/05/just-how-bad-is-cxl-.... The CXL 2.0 spec adds roughly 200ns of latency to every access to memory sitting behind it, so you've got to think carefully about how you use it, or you'll cripple yourself.
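
A rough way to get a feel for what an extra ~200ns per dependent load means is a pointer-chase microbenchmark, something like this sketch (plain C, purely illustrative; for a real comparison the buffer would have to be placed in CXL-backed memory, e.g. bound to the expander's NUMA node):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* 64 MiB of pointers, comfortably bigger than L3 so most loads miss. */
    #define N (64UL * 1024 * 1024 / sizeof(void *))

    int main(void) {
        void **buf = malloc(N * sizeof(void *));
        size_t *idx = malloc(N * sizeof(size_t));
        if (!buf || !idx) return 1;

        /* Build a random cyclic permutation so every load depends on the
           previous one and the hardware prefetcher can't hide the latency. */
        for (size_t i = 0; i < N; i++) idx[i] = i;
        for (size_t i = N - 1; i > 0; i--) {          /* Fisher-Yates shuffle */
            size_t j = (size_t)rand() % (i + 1);
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }
        for (size_t i = 0; i < N; i++)
            buf[idx[i]] = &buf[idx[(i + 1) % N]];

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        void **p = &buf[idx[0]];
        for (size_t i = 0; i < N; i++)                /* N dependent loads */
            p = (void **)*p;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per dependent load (%p)\n", ns / N, (void *)p);
        return 0;
    }

Run it once with the buffer on the local node and once bound to the CXL node; the per-load difference is the latency tax you pay on every cache miss.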

There's been work on the OS side around data locality, but CXL stuff hasn't been widely available, so there's an element of "Well, we'll have to see".

Azure has some interesting whitepapers out as they've been investigating ways to use CXL with VMs, https://www.microsoft.com/en-us/research/wp-content/uploads/....

tanelpoder · 1d ago
Yup, for best results you wouldn't just dump your existing pointer-chasing and linked-list data structures onto CXL (like Optane's transparent mode did, whatever it was called).

But CXL-backed memory can use your CPU caches as usual, and the PCIe 5.0 lane throughput is still good, assuming the CXL controller/DRAM side doesn't become a bottleneck. So you could design your engines and data structures to account for these tradeoffs: fetching/scanning columnar data structures, prefetching to hide latency, etc. You probably don't want global shared locks and frequent atomic operations on CXL-backed shared memory (once that becomes possible, in theory, with CXL 3.0).
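
To make the prefetching point concrete, here's a minimal sketch (hypothetical plain C) of a selection-vector gather over a column that could live in CXL-backed memory; the prefetch distance is a made-up number that would need tuning against the actual latency:

    #include <stddef.h>
    #include <stdint.h>

    /* Sum column values selected by a row-id list (selection vector), issuing
       software prefetches PF_DIST entries ahead so the long CXL load latency
       overlaps with useful work instead of stalling every iteration. */
    #define PF_DIST 16   /* made-up distance; tune against the real latency */

    uint64_t sum_selected(const uint64_t *col, const uint32_t *sel, size_t n)
    {
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++) {
            if (i + PF_DIST < n)
                __builtin_prefetch(&col[sel[i + PF_DIST]], 0, 0); /* read, no temporal locality */
            sum += col[sel[i]];
        }
        return sum;
    }

For a plain sequential scan the hardware prefetcher already does most of this; it's the indexed/gather-style access where issuing prefetches ahead of time pays off.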

Edit: I'll plug my own article here - if you've wondered whether there were actual large-scale commercial products that used Intel's Optane as intended then Oracle database took good advantage of it (both the Exadata and plain database engines). One use was to have low latency durable (local) commits on Optane:

https://tanelpoder.com/posts/testing-oracles-use-of-optane-p...

VMware supports it as well, but using it as a simpler layer for tiered memory.

packetlost · 21h ago
> You probably don't want to have global shared locks and frequent atomic operations on CXL-backed shared memory (once that becomes possible in theory with CXL3.0).

I'd bet contended locks spend more time in cache than most other lines of memory, so in practice a global lock might not be too bad.

tanelpoder · 21h ago
Yep agreed, for single-host with CXL scenarios. I wrote this comment thinking about a hypothetical future CXL3.x+ scenario with multi-host fabric coherence where one could in theory put locks and control structures that protect shared access to CXL memory pools into the same shared CXL memory (so, no need for coordination over regular network at least).
samus · 15h ago
DBMSs have been managing storage with different access times for decades and it should be pretty easy to adapt an existing engine. Or you could use it as a gigantic swap space. No clue whether additional kernel patches would be required for that.
GordonS · 23h ago
Huh, 200ns is less than I imagined; even if it is still almost 100x slower than regular RAM, it's still around 100x faster than NVMe storage.
Dylan16807 · 23h ago
Regular RAM is 50-100ns.
jauntywundrkind · 23h ago
Most cross-socket traffic is >100ns.
temp0826 · 15h ago
I have never had to go deep into NUMA configuration personally but couldn't it be leveraged here?
wmf · 15h ago
Yes, if you want your app to be aware of CXL you can configure it as a separate NUMA node.
tanelpoder · 15h ago
Optane memory modules also present themselves as separate (memory-only) NUMA nodes. They’ve given me a chance to play with Linux tiered memory without having to emulate the hardware for a VM.
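
For anyone curious what NUMA-aware use looks like from application code, here's a minimal sketch using libnuma. The node number is an assumption (a memory-only CXL expander might show up as, say, node 2 on a two-socket box); numactl --hardware would tell you the real topology:

    #include <numa.h>      /* libnuma; link with -lnuma */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this kernel/box\n");
            return 1;
        }
        int cxl_node = 2;              /* assumption: CPU-less expander is node 2 */
        size_t sz = 1UL << 30;         /* 1 GiB per tier, just for illustration */

        void *hot  = numa_alloc_local(sz);            /* latency-sensitive data near the CPU */
        void *cold = numa_alloc_onnode(sz, cxl_node); /* capacity tier on the CXL node */
        if (!hot || !cold) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
        memset(hot, 1, sz);            /* touch so pages are actually populated */
        memset(cold, 1, sz);

        numa_free(hot, sz);
        numa_free(cold, sz);
        return 0;
    }

The same placement can be done without code changes via numactl --membind or --preferred when launching the process.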
immibis · 21h ago
What kind of motherboard, CPU, cables, switches, and end devices would I need to buy to have a CXL network?
afr0ck · 20h ago
CXL uses the PCIe physical layer, so you just need to buy hardware that understands the protocol, namely the CPU and the expansion boards. AMD Genoa (e.g. EPYC 9004) supports CXL 1.1, as do Intel Sapphire Rapids and all subsequent models. For CXL memory expansion boards, you can buy from Samsung or Marvell. I got a 128 GB model from Samsung with 25 GB/s read throughput.
wmf · 21h ago
CXL networking is still in the R&D stage.
imtringued · 13h ago
The latency concern is completely overblown because CXL has cache coherence. The moment you do a second request to the same page it will be a cache hit.

I would be more worried about memory bandwidth. You can now add so much memory to your servers that it might take minutes to do a full in-memory table scan.

justincormack · 11h ago
Cache lines are 64 bytes, not page size.
mdaniel · 1d ago
> Buy From One of the Regions Below
> Egypt

:-/

But, because I'm a good sport, I actually chased a couple of those links figuring that I could convert Egyptian Pound into USD but <https://www.sigma-computer.com/en/search?q=CXL%20R5X4> is "No results", and similar for the other ones that I could get to even load

tanelpoder · 1d ago
Yeah, I saw the same. I've been keeping an eye on the CXL world for ~5 years and so far it's 99% announcements, unveilings and great predictions. The only CXL cards a consumer/small business can actually buy today are some experimental-ish 64GB/128GB ones. Haven't seen any of my larger clients use it either. Both the Intel Optane and DSSD storage efforts got discontinued after years of fanfare; from a technical point of view, I hope the same doesn't happen to CXL.
afr0ck · 20h ago
I think Meta has already rolled out some CXL hardware for memory tiering. Marvell, Samsung, Xconn and many others have built various memory chips and switching hardware up to CXL 3.0. All recent Intel and AMD CPUs support CXL.
sheepscreek · 23h ago
That is pretty hilarious. I wonder what’s the reason behind this. Maybe they wanted plausible deniability in case someone tried to buy it (“oh the phone lines were down, you’ll have to go there to buy one”).
eqvinox · 15h ago
I think someone just forgot to delete an option somewhere and it "crept in", and it really isn't supposed to have a "buy" link at all at this point.
antonvs · 15h ago
Ok, I rented a camel and went to the specified location, but there was nothing there but some scorpions and an asp. What gives?
bri3d · 1d ago
CXL is a standard for compute and I/O extension over PCIe signaling; it has been around for a few years, with a couple of RAM boards available (from SMART and others).

I think the main bridge chipsets come from Microchip (this one) and Montage.

This Gigabyte product is interesting since it’s a little lower end than most CXL solutions - so far CXL memory expansion has mostly appeared in esoteric racked designs like the particularly wild https://www.servethehome.com/cxl-paradigm-shift-asus-rs520qa... .

bobmcnamara · 21h ago
CXL seems so much cleaner than the old AMD way of plumbing an FPGA through the second CPU socket.
eqvinox · 15h ago
The "AI" marketing on this is positively silly (and a good reflection of how weird everything has gotten in this industry.)

Do like the card though, was waiting for someone to make an affordable version (or rather: this looks affordable, I hope it will be both that and actually obtainable. CXL was kinda locked away so far…)

pella · 20h ago
I’m really looking forward to GPU-CXL integration.

"CXL-GPU: Pushing GPU Memory Boundaries with the Integration of CXL Technologies" https://arxiv.org/abs/2506.15601

trebligdivad · 1d ago
My god - a CXL product! It's really surprising anything got that far. I'd been expecting external CXL boxes, not internal stuff.
nmstoker · 10h ago
Assuming you have the requisite CPU and motherboard with this card, does the memory just appear as normal under Linux/Windows/whatever OS is installed? Or do you need to get special drivers or other particular software to make use of it?
alberth · 18h ago
As someone not well versed in GPU and CXL, would someone mind explaining the significance of this?
wmf · 15h ago
This looks like the first CXL card you could actually buy. It's been coming soon for years. It also confirms that both Intel and AMD workstation CPUs support CXL.
roscas · 1d ago
That is amazing. Most consumer boards will only have 32 or 64 GB. To have 512 GB is great!
justincormack · 23h ago
You haven't seen the price of 128GB DDR5 RDIMMs; they are maybe $1300 each.

A lot of the initial use cases of CXL seem to be to use up lots of older DDR4 RDIMMs in newer systems to expand memory, eg cloud providers have a lot.

kvemkon · 23h ago
Micron DDR5-5600 for 900 Euro (without VAT, business).
tanelpoder · 1d ago
... and if you have the money, you can use 3 out of 4 PCIe5 slots for CXL expansion. So that could be 2TB DRAM + 1.5TB DRAM-over-CXL, all cache coherent thanks to CXL.mem.

I guess there are some use cases for this for local users, but I think the biggest wins could come from CXL shared memory arrays in smaller clusters. So you could, for example, cache the entire build side of a big hash join in the shared CXL memory and let all other nodes performing the join see the single shared dataset (see the sketch below). Or build a "coherent global buffer cache" using CPU+PCI+CXL hardware, like Oracle Real Application Clusters has been doing with software+NICs for the last 30 years.
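
A purely hypothetical sketch of how a probe-side process could see a build-side hash table that another host wrote into a shared CXL pool. Everything here is an assumption: the device name, the 4 GiB size, the header layout; and the hard part (coherence, locking, invalidation across hosts) is exactly what the CXL 3.x fabric discussion above is about:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical layout of a build-side hash table living in a shared CXL
       memory pool. Offsets instead of pointers, so every host can map the
       region at a different virtual address. */
    struct ht_header {
        uint64_t magic;
        uint64_t nbuckets;
        uint64_t bucket_off;   /* offset of the bucket array from the header */
    };

    int main(void) {
        /* Assume the fabric-attached pool shows up on this host as /dev/dax1.0
           (device name is made up; it could equally be a file on a DAX filesystem). */
        int fd = open("/dev/dax1.0", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        size_t sz = 1UL << 32;   /* 4 GiB region, agreed on out of band */
        const void *base = mmap(NULL, sz, PROT_READ, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        const struct ht_header *hdr = base;
        const uint64_t *buckets =
            (const uint64_t *)((const char *)base + hdr->bucket_off);

        printf("probing %llu buckets built by another node\n",
               (unsigned long long)hdr->nbuckets);
        /* ... the probe side would hash its keys and read buckets[] here ... */

        (void)buckets;
        munmap((void *)base, sz);
        close(fd);
        return 0;
    }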

Edit: One example of the CXL shared memory pool devices is Samsung CMM-B. Still just an announcement, haven't seen it in the wild. So, CXL arrays might become something like the SAN arrays in the future - with direct loading to CPU cache (with cache coherence) and being byte-addressable.

https://semiconductor.samsung.com/news-events/tech-blog/cxl-...

cjensen · 1d ago
Both of the supported motherboards support installation of 2TB of DRAM.
reilly3000 · 22h ago
Presumably this is about adding more memory channels via pcie lanes. I’m very curious to know what kind of bandwidth one could expect with such a setup, as that is the primary bottleneck for inference speed.
Dylan16807 · 22h ago
The raw speed of PCIe 5.0 x16 is 63 billion bytes per second each way (32 GT/s per lane x 16 lanes with 128b/130b encoding works out to about 63 GB/s). Assuming we transfer several cache lines at a time, the overhead should be pretty small, so expect 50-60 GB/s, which is on par with a single high-clocked channel of DRAM.
jonhohle · 23h ago
Why did something like this take so long to exist? I’ve always wanted swap or tmpfs available on old RAM I have lying around.
gertrunde · 23h ago
Such things have existed for quite a long time...

For example:

https://en.wikipedia.org/wiki/I-RAM

(Not a unique thing, merely the first one I found).

And then there are the more exotic options, like the stuff that these folk used to make: https://en.wikipedia.org/wiki/Texas_Memory_Systems - iirc - Eve Online used the RamSan product line (apparently starting in 2005: https://www.eveonline.com/news/view/a-history-of-eve-databas... )

numpad0 · 10h ago
Yeah. I can't count how many times I've seen descriptions of northbridge links that smell like the author knows it's PCIe under the hood. I've also seen someone explain that it can't be done on most CPUs unless all caching is turned off, because the (IO?)MMU doesn't allow caching of MMIO addresses outside the DRAM range.

The technical explanations for why you (boolean) can't have extra DRAM controllers on PCIe increasingly sound more like market segmentation than purely technical reasons. x86 is a memory-mapped I/O platform; why can't we just have RAM sticks on RAM addresses?

The reverse of this works, btw. NVMe drives can use Host Memory Buffer to cache reads and writes in system RAM - the feature that jammed and caught fire in the recently rumored bad ntfs.sys incident in Windows 11.

kvemkon · 23h ago
I'd rather ask why we had single-core (or early dual-core) CPUs with dual-channel memory controllers, and now we have 16-core CPUs but still only dual-channel RAM.
justincormack · 11h ago
AMD EPYC has 12 channels, 24 on a dual socket. AMD sells machines with 2 (consumer), 4 (Threadripper), 6 (dense edge), 8 (Threadripper Pro) and 12 memory channels (high-end EPYC). Next-generation EPYC will have 16 channels. Roughly, if you look at the AMD options, they give you 2 memory channels per 16 cores. CPUs tend to be somewhat limited in what bandwidth they can use; e.g. on Apple Silicon you can't actually consume all the memory bandwidth on the wider options just with the CPUs, it's mainly useful for the GPU. DDR5 was double the speed of DDR4, and speeds have been ramping up too, so there have been improvements there.
Dylan16807 · 22h ago
DDR1 and DDR2 were clocked 20x and 10x slower than DDR5. The CPU cores we have now are faster but not that much faster, and with the typical user having 8 or fewer performance cores 128 bits of memory width has stayed a good balance.

If you need a lot of memory bandwidth, workstation boards have DDR5 at 256-512 bits wide. Apple Silicon supports that range on Pro and Max, and Ultra is 1024.

(I'm using bits instead of channels because channels/subchannels can be 16 or 32 or 64 bits wide.)

bobmcnamara · 21h ago
Intel and AMD I'd reckon. Apple went wide with their busses.
to11mtm · 20h ago
Well, each channel needs a lot of pins. I don't think all 288/262 pins need to go to the CPU, but a large number of them do, I'd wager; the old LGA 1366 (tri-channel) and LGA 1151 (dual-channel) are probably as close as we can get to a simple reference point [0].

Apple FBOW, based on a quick and sloppy count of a reballing jig [1], has something on the order of 2500-2700 balls on an M2 CPU.

I think AMD's FP11 'socket' (it's really just a standard ball grid array) pinout is something on the order of 2000-2100 balls and that gets you four 64 Bit DDR channels (I think Apple works a bit different and uses 16 bit channels, thus the 'channel count' for an M2 is higher.)

Which is a roundabout way of saying: AMD and Intel probably could match the bandwidth, but to do so would likely require moving to soldered CPUs, which would be a huge paradigm shift for all the existing board makers, etc.

[0] - They do have other tradeoffs; namely, 1151 has built-in PCIe, but on the other hand the link to the PCH is AFAIR a good bit thinner than the QPI link on the 1366.

[1] - https://www.masterliuonline.com/products/a2179-a1932-cpu-reb... . I counted ~55 rows along the top and ~48 rows on the side...

bobmcnamara · 4h ago
Completely agree, and this is a bit of a ramble...

I think part of it might be that Apple recognized that integrated GPUs require a lot of bulk memory bandwidth. I noticed this with their tablet-derivative cores, whose memory bandwidth tended to scale with screen size, while Samsung and Qualcomm didn't bother for ages. And it sucked doing high-speed vision systems on their chips because of it.

For years Intel had been slowly beefing up the L2/L3/L4.

The M1 Max is somewhere between an Nvidia 1080 and 1080 Ti in bulk bandwidth. The lowest-end M chips aren't competitive, but nearly everything above that overlaps even the current-gen Nvidia 4050+ offerings.

christkv · 22h ago
Check out the Strix Halo 395+; it's got 8 memory channels, up to 128 GB, and 16 cores.
Dylan16807 · 22h ago
That's a true but misleading number. It's the equivalent of "quad channel" in normal terms.
aidenn0 · 23h ago
(S)ATA or PCI to DRAM adapters were widely available until NAND became cheaper per bit than DRAM, at which point the use for them kind of went away.

IIRC Intel even made a DRAM card that was drum-memory compatible.

Dylan16807 · 23h ago
RAM controllers are expensive enough that it's rarely worth pairing them with old RAM lying around.
nottorp · 14h ago
For every gold rush, make and sell shovels.
JonChesterfield · 22h ago
I don't get it. The point of (DDRn) memory is latency. If it's on the far side of PCIe, latency is much worse than system memory. In what sense is this better than an SSD on the far side of PCIe?
wmf · 22h ago
It's only ~2x worse latency than main memory but 100x lower than SSD.
JonChesterfield · 21h ago
I'm finding ~50ns best case for PCIe, ~10ns for system, which is a lot closer than I expected.
adgjlsfhk1 · 18h ago
No system RAM is 10ns; that's closer to L2 cache.
Cr8 · 16h ago
PCIe devices can also do direct transfers to each other - if you have one of these and a GPU, it's relatively quick to move data between them without bouncing through main RAM.
amirhirsch · 23h ago
The i in that logo seems like it’s hurting the A
jauntywundrkind · 21h ago
I wonder whose controller they are using.

For a memory controller, that thing looks hot!

marcopolis · 21h ago
From the manual, it looks like a Microchip PM8712 [1].

[1] PDF Data sheet: https://ww1.microchip.com/downloads/aemDocuments/documents/D...

fithisux · 13h ago
The Amiga approach resurrected.