That article is really low on details and mixes up a lot of things. It compares microleds to traditional WDM fiber transmission systems with edge emitting DFB lasers and ECLs, but in datacentre interconnects there's plenty of optical links already and they use VCSELs (vertical cavity surface emitting lasers), which are much cheaper to manufacture. People also have been putting these into arrays and coupling to multi-core fiber. The difficulty here is almost always packaging, i.e. coupling the laser. I'm not sure why microleds would be better.
Also, transmitting 10 Gb/s with an LED seems challenging. The spectral bandwidth of an incoherent LED is large, so are they doing significant DSP (which costs money and energy and introduces latency), or are they restricting themselves to very short (10s of m) links?
BardiaPezeshki · 1d ago
In datacenters, people use optics for longer distances (10 m to 2 km). Within a rack it is almost always copper. The reason is that for short distances lasers are too expensive, unreliable, and consume too much power. We think microLED-based links might replace copper at short distances (sub-10 m). MicroLEDs coupled into relatively thick fiber cores (50 um) are much easier to package than standard single-mode laser-based optics.
On the distance - exactly right. The real bottleneck now in AI clusters is the interconnect within a rack, or sub-10 m. So that is the market we are addressing.
On your second point - exactly! Normally people think LEDs are slow and suck. That is the real innovation. At Avicena, we've figured out how to make LEDs blink on and off at 10Gb/s. This is really surprising and amazing! So with simple on-off modulation, there is no DSP or excess energy use. The article says TSMC is developing arrays of detectors, based on their camera process, that also receive signals at 10Gb/s. Turns out this is pretty easy for a camera with a small number of pixels (~1000). We use blue light, which is easily absorbed in silicon. BTW, feel free to reach out to Avicena, and happy to answer questions.
mmmBacon · 18h ago
It’s not correct to say that lasers are unreliable. Last year more than 20M transceivers shipped. Your statement is not at all supported by real field failure data.
The reliability of micro LEDs, and specifically GaN-based micro LEDs, is however an open question.
In the absence of any dislocation failure mechanisms, it will depend on the current density and thermal dissipation. And just like any other material, it will have to survive in a non-hermetic environment and in the presence of corrosive gases (an issue in data centers).
To get the 10G, it’s probably kind of like a VCSEL without the grating and so current density is probably high. How well you’re able to heat sink it is going to determine how reliable it will be.
Overall I like the idea. It looks like the beachfront could work. I’d spend more time talking about how the electrical connection works and what kind of interface to a chip would be needed.
I’d also be careful before throwing shade on laser reliability because it could backfire on you (for all reasons above).
LargoLasskhyfv · 19h ago
So does Gb/s have a new meaning now? Giga-blinks per second?
throwaway48476 · 1d ago
So, toslink?
rajnathani · 1d ago
The article is about chip interconnects. Think like replacing PCIe, NVLink, or HBM/DDR RAM buses, with optical communication.
qwezxcrty · 1d ago
I guess they are doing directly modulated IMDD for each link, so the DSP burden is not related to the coherence of the diodes? Also, indeed, very short reach in the article.
cycomanic · 1d ago
The problem with both LEDs and imaging fibres is that modal dispersion is massive and completely destroys your signal after only a few meters of propagation. So unless you do MMSE equalisation (which I assume would be cost prohibitive), you really can only go a few meters. IMDD doesn't really make a difference here.
adgjlsfhk1 · 1d ago
I think this is intended for short distances (e.g. a few cm). CPU to GPU and network card to network card will still be lasers; the question is whether you can do core to core or CPU to RAM with optics.
cycomanic · 1d ago
But why are they talking about multicore fibres then? I would have expected ribbons. You might be right though.
topspin · 1d ago
> I would have expected ribbons.
The cable is just a 2D parallel optical bus. With a bundle like this, you can wrap it with a nice, thick PVC (or whatever) jacket and employ a small, square connector that matches the physical scheme of the 2D planar microLED array.
It's a brute-force, simple-minded approach enabled by high-speed, low-cost microLED arrays. Pretty cool, I think.
The ribbon concept could be applicable to PCBs though.
cycomanic · 1d ago
You might be right and they are talking about fibre bundles, but that's something different from a multicore fibre (and much larger as well, which could pose significant problems, especially if we are talking cm links). What isn't addressed is that LEDs are quite spatially incoherent and beam divergence is strong, so the fibres they must use are pretty large; coupling via just a connector might not be easy, especially if we want to avoid crosstalk.
What I'm getting at is that I don't see any advantage over VCSEL arrays. I'm not convinced that the price point is that different.
topspin · 1d ago
> You might be right and they are talking about fibre bundles
The caption of the image of the cable and connector reads: "CMOS ASIC with microLEDs sending data with blue light into a fiberbundle." So yes, fibre bundles.
> I don't see any advantage over vcsel arrays
They claim the following advantages:
1. Low energy use
2. Low "computational overhead"
3. Scalability
All of these at least pass the smell test. LEDs are indeed quite efficient relative to lasers. They cite about an order of magnitude "pJ/bit" advantage for the system over laser-based optics, and I presume that comparison includes VCSELs. When you're trying to wheedle nuclear reactor restarts to run your enormous AI clusters, saving power is nice. The system has a parallel "conductor" design that likely employs high-speed parallel CMOS latches, so the "computational overhead" claim could make sense: all you're doing is latching bits to/from PCB traces or IC pins, so all the SerDes and multiplexing cost is gone. They claim that it can easily be scaled to more pixels/lines. Sure, I guess: low power makes that easier.
There you are. All pretty simple.
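To put rough numbers on the power point, here's a minimal back-of-envelope sketch in Python. The pJ/bit values are my own assumptions (the article only gives the ~10x ratio), and the 300-lane count comes from the fiberbundle caption:

    # Illustrative only: assumed per-bit energies, not Avicena's published specs.
    lanes = 300                  # fibers per bundle, per the article's caption
    rate_per_lane = 10e9         # 10 Gb/s per microLED/fiber
    led_pj_per_bit = 1.0         # assumed microLED link energy
    laser_pj_per_bit = 10.0      # assumed laser-based link energy (~10x worse)

    aggregate_bps = lanes * rate_per_lane                   # 3 Tb/s per cable
    led_w = aggregate_bps * led_pj_per_bit * 1e-12          # ~3 W
    laser_w = aggregate_bps * laser_pj_per_bit * 1e-12      # ~30 W
    print(f"{aggregate_bps/1e12:.0f} Tb/s, LED ~{led_w:.0f} W vs laser ~{laser_w:.0f} W")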
I think there is a use case for this outside data centers. We're at the point where copper transmission lines are a real problem for consumers. Fiber can solve the signal integrity problem for such use cases, however--despite several famous runs at it (Thunderbolt, FireWire)--the cost has always precluded widespread adoption outside niche, professional, or high-end applications. Maybe LED-based optics can make fiber cost-competitive with copper for such applications: one imagines a very small, very low-power microLED-based transceiver costing only slightly more than a USB connector on each end of such a cable, with maybe 4-8 parallel fibers. Just spit-balling here.
nsteel · 1d ago
Aren't they also claiming this is more reliable? I'm told laser reliability is a hurdle for CPO.
And given the talk about this as a CPO alternative, I was assuming this was for back plane and connections of a few metres, not components on the same PCB.
topspin · 1d ago
> Aren't they also claiming this is more reliable?
Indeed they do. I overlooked that.
I know little about microLED arrays and their reliability, so I won't guess about how credible this is: LED reliability has a lot of factors. The cables involved will probably be less reliable than conventional laser fiber optics due to the much larger number of fibers that have to be precision assembled. Likely to be more fragile as well.
On-site fabricating or repairing such cables likely isn't feasible.
nsteel · 1d ago
I understand that CPO reliability concerns are specifically with the laser drivers. It's very expensive to replace your whole chip when one fails. Even if the cables are a concern (I've no idea), having more reliable drivers would still be preferable to less reliable cables, given how much cheaper/easier replacing cables would be (up to a point, of course).
topspin · 1d ago
> I understand that CPO reliability concerns are specifically with the laser drivers.
Yes. I've replaced my share of dead transceivers, and I suspect the laser drivers were the failure mode of most of them.
That doesn't fill in the blank for me though: how reliable are high speed, dense microLEDs?
nsteel · 1d ago
And are they going to work out any better than Linear Drive Optics, the more obvious alternative?
topspin · 1d ago
LDO is just integration. It certainly has value: integration almost always does. So it's clearly the obvious optimization of conventional serial optical communication.
This new TSMC work with parallel incoherent optics is altogether distinct. No DSP. No SerDes. Apples and oranges.
nsteel · 1d ago
Ok, but I'm just after solutions to problems I have talking to other chips. I don't mind what's novel and what's optimisation. Whatever is adopted, in either case it's a step-change from the past 20 years of essentially just copper and regular serdes in this space.
And I'm not sure how much of this is actually TSMC's work, the title is misleading.
Edit: actually, they are working on the detector side.
BardiaPezeshki · 1d ago
We use borosilicate fibers that are used for illumination applications. You might have seen a bundle in a microscope light, for example. And they are incredibly robust compared to single-mode fibers. Note the very tight bend in the picture - that's a 3 mm bend radius. Imagine doing that with a single-mode fiber!
topspin · 1d ago
> And they are incredibly robust
See my other comment about non-datacenter applications. There is a serious opportunity here for fixing signal integrity problems with contemporary high bandwidth peripherals. Copper USB et al. are no good and in desperate need of a better medium.
BardiaPezeshki · 1d ago
The fiber cables we use are basically 2D arrays of 50 um thick fibers that match the LED and detector arrays. We've made connectors and demonstrated very low crosstalk between the fibers. The advantage over VCSELs is much lower power consumption overall, much lower cost (LEDs are dirt cheap and extremely high yield), easier detector arrays (because we use blue light, they can be modified camera technology), and most importantly, much better reliability. VCSELs are notorious for bad reliability.
morphle · 13h ago
This might be the breakthrough we also have been working on [1] for over 20 years. It would be even better if Avicena didn't drive the LED array and detector array with high-power 10 Gbps SerDes, and better still if you align a blue LED array with lenses to a detector array on a second chip: free-space optics [2].
I would love to join you at Avicena and work on your breakthrough instead of just acquiring the IP from you in a few years.
[1] https://youtu.be/wDhnjEQyuDk?t=1569
[2] see schematic on page 373 of https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=780...
The article mentions lengths of up to 10 m, so this technology is restricted to links inside a cabinet or between closely located cabinets.
The claimed advantage is a very high aggregate throughput and much less energy per bit than with either copper links or traditional laser-based optical links.
For greater distances, lasers cannot be replaced by anything else.
tehjoker · 1d ago
short links it’s in the article
cycomanic · 1d ago
Ah, I missed the 10 m reference there. I'm not sure it makes more sense though. Typical intra-datacenter connections are 10s-100s of meters and use VCSELs, so why introduce microLEDs just for the very short links instead of just parallelising the VCSEL connections (which is being done already)? If they could actually replace the VCSELs I would sort of see the point.
jauntywundrkind · 1d ago
There's been a constant drum-beat that even intra-rack connections are trying to make their way to optical as fast as they can, that copper is more and more complex and expensive to scale faster. If we have a relatively affordable short-range optical system that doesn't require heavy computational work, that sounds like a godsend, like a way to increase bits per joule while reducing expensive cabling cost.
Sure, yes, optical might mean expensive longer-range optics today! But using that framing to assess new technologies and what help they could be may be folly.
sunray2 · 1d ago
Somewhat related: there's a relatively big push for optical interconnects and integrated optics in quantum computing. Maybe this article yields insight into what may happen in the future.
With quantum computing, one is forced to use lasers. Basically, we can't transmit quantum information with the classical light from LEDs (handwaving-ly: LEDs emit a distribution of possible photon numbers, not single photons, so you lose control at the quantum level). Moreover, we often also need the narrow linewidth of lasers, so that we can interact with atoms in the way we want them to. That is, not to excite unwanted atomic energy levels. So you see in trapped ion quantum computing people tripping over themselves to realise integration of laser optics, through fancy engineering that i don't fully understand like diffraction gratings within the chip that diffract light onto the ions. It's an absolutely crucial challenge to overcome if you want to make trapped ion quantum computers with more than several tens of ions.
Networking multiple computers via said optical interconnects is an alternative, and also similarly difficult.
What insight do I glean from this IEEE article, then? I believe if this approach with the LEDs works out for this use case, then I'd see it as a partial admission of failure for laser-integrated optics at scale. It is, after all, the claim in the article that integrating lasers is too difficult. And then I'd expect to see quantum computing struggle severely to overcome this problem. It's still research at this stage, so let's see if Nature's cards fall fortuitously.
dgfl · 1d ago
Trapped-ion and neutral-atom QC require lasers because the light signal needs to be coherent. That's the main feature of a classical laser, really. The explanation with the number of photons doesn't really cut it, because even a perfect laser does not have a definite photon number: coherent states are inherently uncertain in both photon number and phase. But LEDs are even worse, because the light signal is truly incoherent. It's not even a good quantum state, it's a classical superposition of incoherent photons that you can't really use for any quantum control.
But even more than that, this seems to me like a purely on-chip solution. For trapped ions and neutral atoms you really need to translate to free-space optics at some point.
sunray2 · 1d ago
Indeed, it is nuanced, as you point out. For example, you can't just attenuate a laser and use that as a single-photon source (instead you'd get a weak coherent state). To realise a true single-photon source you need an additional (quantum) process, like controlled stimulated emission from single atoms, or driving some nonlinear crystal to generate photon pairs (that's spontaneous parametric down-conversion, I think). And that's where the coherence properties of the laser are essential.
As for fully integrated optics, it's where quantum computers eventually want to be, and there are no physical limitations currently. But perhaps it's too early to say whether we would absolutely require free-space optics because it's impossible to do some optics thing another way.
mmmBacon · 1d ago
Quantum computing is still a technology of the future. When we are still talking about 12 qubits as a breakthrough, there’s a long way to go. Optical interconnects are the least of quantum computing’s problems.
However, it’s not correct to say lasers are unreliable. It’s fundamentally false and it’s not supported by field data from today’s pluggable modules. 10’s of millions of lasers are deployed in data centers today in pluggable modules.
It’s also useful to remember that an LED is essentially the gain region of a laser without the reflectors. When lasers fail in the field, they fail for the same reasons an LED will fail; moisture or contamination penetration of the semiconductor material.
An LED is not useful for quantum computing. To create a Bell pair (2 qubits) you need a coherent light source to create correlated photons. The photons produced by an incoherent light source like an LED are fundamentally uncorrelated.
ziofill · 1d ago
Actually optical interconnects are the biggest of (photonic) quantum computing problems. If we had good enough optical interconnects (i.e. with low enough optical loss) we would already have a fault-tolerant quantum computer. See https://www.nature.com/articles/s41586-024-08406-9
(also note that Aurora produces 12 physical qubit modes at each clock cycle)
avsteele · 1d ago
TSMC's approach here sounds sensible but I don't think it speaks much to QC. It is a pretty different problem domain. The trapped-ion QCs can use much more expensive / less practical lasers and optics and still be useful.
Liftyee · 2d ago
As I understand it (from designing high-speed electronics), the major limitations to data/clock rates in copper are signal integrity issues. Unwanted electromagnetic interactions all degrade your signal. Optics is definitely a way around this, but I wonder if/when it will ever hit similar limits.
lo0dot0 · 2d ago
Optics also have signal integrity issues. In practice OSNR and SNR limit optics. Cutting the fiber still breaks it. Small vibrations also affect the signal's phase.
cycomanic · 1d ago
Phase variations will not introduce any issues here; they most certainly are talking about intensity modulation. You can't really (easily) do coherent modulation using incoherent light sources like LEDs.
SNR is obviously an issue for any communication system, however fiber attenuation is orders of magnitude lower than coax.
The bigger issue in this case would be mode dispersion, considering that they are going through "imaging" fibres, i.e. different spatial components of the light walking off relative to each other, causing temporal spreading of the pulses until they overlap and you can't distinguish 1s and 0s.
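As a rough sense of scale, here's a worst-case step-index estimate (the NA and index are my assumptions, so this is just a sketch):

    # Modal dispersion estimate for a step-index multimode fiber (assumed parameters;
    # a graded-index profile would do considerably better).
    c = 3e8                      # m/s
    n1 = 1.46                    # core refractive index (typical glass)
    na = 0.2                     # assumed numerical aperture
    delta = na**2 / (2 * n1**2)  # fractional index difference
    spread_per_m = n1 * delta / c          # worst-case pulse spread per metre
    bit_period = 1 / 10e9                  # 100 ps at 10 Gb/s
    print(f"~{spread_per_m*1e12:.0f} ps/m spread vs {bit_period*1e12:.0f} ps bit period")
    # roughly 45 ps per metre, so the eye closes after only a few metres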
abdullahkhalids · 1d ago
Mode dispersion is frequency dependent phase changes.
cycomanic · 1d ago
That's chromatic dispersion; mode dispersion is spatial, "path"-dependent phase change. Vibration is actually somewhat more relevant, because if it wasn't for that we could theoretically undo mode dispersion (we would need phase information though).
That said, all of that is irrelevant to what the previous poster said, i.e. vibration-induced phase variation as an impairment. That's just not an issue; vibrations are way too slow to impair optical comms signals.
mycall · 1d ago
How do gravitational-wave observatories solve the vibration issues in their optical paths? Couldn't TSMC do something similar?
BardiaPezeshki · 1d ago
aha! that is true with lasers that are coherent. But not with LEDs. We don't care about modes, polarization, or phase. Also, no worry about feedback into the lasers, so no isolators. LEDs are way easier!
EA-3167 · 2d ago
The energy densities required for photon-photon interactions are so far beyond anything we need to worry about that it's a non-issue. Photons also aren't going to just ignore local potential barriers and tunnel at the energy levels and scales involved in foreseeable chip designs either.
wyager · 1d ago
P-p interactions are not an issue but we do have high enough field intensities in high bandwidth fibers to run into p-f nonlinearity issues.
notepad0x90 · 2d ago
isn't attenuation also an issue with copper? maybe with small electronics it is negligible given the right amps? in other words, with no interference, electrons will face impedance and start losing information.
to11mtm · 2d ago
Attenuation is going to be an issue for any signal, but in my experience fiber can go for many miles without a repeater, whereas with something like coax you're going one to two orders of magnitude less. [0]
[0] - Mind you, some of that for Coax is due to other issues around CTB and/or the challenge that in Coax, you've got many frequencies running through alongside each frequency having different attenuation per 100 foot...
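As a ballpark illustration of that gap (typical figures I'm assuming, not from the article):

    # Illustrative reach comparison for the same assumed 30 dB budget.
    budget_db = 30
    fiber_db_per_km = 0.35        # typical single-mode fiber near 1310 nm
    coax_db_per_100ft = 6.0       # RG-6-class coax around 1 GHz
    fiber_reach_km = budget_db / fiber_db_per_km
    coax_reach_m = budget_db / coax_db_per_100ft * 100 * 0.3048
    print(f"fiber ~{fiber_reach_km:.0f} km vs coax ~{coax_reach_m:.0f} m")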
spwa4 · 1d ago
> Coax is due to other issues around CTB and/or the challenge that in Coax, you've got many frequencies running through alongside each frequency having different attenuation per 100 foot
Actually this is true for fibers as well. In DWDM (all internet links are DWDM, including fiber-to-the-home in most places) you have many frequencies running alongside each other, and each frequency has differences in attenuation (though generally measured per kilometer, not per 100 feet).
Optical light is made of standing electromagnetic waves, which means they don't disrupt each other. Electrical signals aren't standing waves; they affect each other.
The difference can be put like this: how many X (electrical waves, but essentially everything, protons, ...) fit on the tip of a needle? (or in a cable)
1) electrical waves? Some finite number. Can be large of course, but ...
2) photons (i.e. fiber signals)? ALL OF THEM. Literally every photon that exists in the entire universe would happily join every other photon on the tip of a needle; nothing would interfere with anything else.
bgnn · 2d ago
This is the main mechanism of interference anyhow, called inter-symbol-interference.
moelf · 2d ago
luckily photons are bosons (if we ever push things to this level of extreme)
Taek · 2d ago
This comment appears insightful but I have no idea what it means. Can someone elaborate?
scheme271 · 2d ago
Electrons are fermions which means that two electrons can't occupy the same quantum state (Pauli exclusion principle). Bosons don't have the limit so I believe that implies that you can have stronger signals at the low end since you can have multiple photons conveying or storing the same information.
xeonmc · 1d ago
Also less chance for external interference.
cycomanic · 1d ago
What the previous poster is implying is that electrons interact much more strongly than photons. Hence electrons are very good for processing (e.g. building a transistor), while photons are very good for information transfer. This is also a reason why much of the traditional "optical computer" research was fundamentally flawed: just from first principles one could estimate that the power requirements are prohibitive.
nullc · 1d ago
> This is also a reason why much of the traditional "optical computer" research was fundamentally flawed
presumably also because photons at wavelengths we can work with are BIG
xeonmc · 1d ago
Fermions can “hit each other” whereas bosons “pass through each other”.
(Strong emphasis on the looseness of the scare quotes.)
wyager · 1d ago
We already regularly run into optical nonlinearity issues in submarine cables. The instantaneous EM fields generated in high bandwidth fiber are sufficiently strong to cause nonlinear interactions with the fiber medium that we have to correct for.
Salgat · 1d ago
I don't believe this is a factor for the distances that inter-chip transmission has. From what I can find, this is at most an issue for communication spanning tens to hundreds of meters in a datacenter.
speedbird · 2d ago
Headline seems misleading. They’re building detectors for someone, not ‘betting’ on it.
walthamstow · 1d ago
The 'bet' is investing time and money into something that may not yield results. It's pretty common business language.
ggm · 1d ago
The word is mostly used for finance investment promotion. I'd guess that it's written to attract those bloggers/casters, and make money for somebody.
Minor: It would be nice if the company TSMC is collaborating with on this, Avicena, is mentioned in the HN title.
stingraycharles · 1d ago
It’s against HN policy to editorialize the titles
rajnathani · 1d ago
But it is also against HN rules to keep sensationalistic titles, dang (moderator 1) routinely modifies titles wherever necessary.
albertzeyer · 1d ago
There is also optical neuromorphic computing, as an alternative to electronic neuromorphic computing like memristors. It's a fascinating field, where you use optical signals to perform analog computing. For example:
https://www.nature.com/articles/s41566-020-00754-y
https://www.nature.com/articles/s44172-022-00024-5
As far as I understood, you can only compute quite small neural networks until the noise signal gets too large, and also only a very limited set of computations works well in photonics.
cycomanic · 1d ago
The issue with optical neuromorphic computing is that the field has been doing the easy part, i.e. the matrix multiplication. We have known for decades that imaging/interference networks can do matrix operations in a massively parallel fashion. The problem is the nonlinear activation function between your layers. People have largely been ignoring this, or just convert back to electrical (and then you are limited again by the cost/bandwidth of the electronics).
seventytwo · 1d ago
Seems hard to imagine there’s not some non-linear optical property they could take advantage of
cycomanic · 1d ago
The problem is intensity/power; as discussed previously, photon-photon interactions are weak, so you need very high intensities to get a reasonable nonlinear response. The issue is that optical matrix operations work by spreading the light out over many parallel paths, i.e. reducing the intensity in each path. There might be some clever ways to overcome this, but so far everyone has avoided the problem. They say they did "optical deep learning"; what they really did was an optical matrix multiplication, but saying that would not have resulted in a Nature publication.
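A toy sketch of that point: two "optical" layers without a nonlinearity collapse into a single matrix, so no depth is actually gained (numpy, purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(64, 128)), rng.normal(size=(10, 64))
    x = rng.normal(size=128)

    linear_only = W2 @ (W1 @ x)                     # equivalent to one matrix (W2 @ W1) @ x
    with_activation = W2 @ np.maximum(W1 @ x, 0)    # the ReLU step is what's done in electronics today

    assert np.allclose(linear_only, (W2 @ W1) @ x)  # the all-linear "network" is just one matrix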
programjames · 1d ago
There is, and people have trained purely optical neural networks:
https://arxiv.org/abs/2208.01623
The real issue is trying to backpropagate those nonlinear optics. You need a second nonlinear optical component that matches the derivative of the first nonlinear optical component. In the paper above, they approximate the derivative by slightly changing the parameters, but that means the training time scales linearly with the number of parameters in each layer.
Note: the authors claim it takes O(sqrt N) time, but they're forgetting that the learning rate mu = o(1/sqrt N) if you want to converge to a minimum:
Loss(theta + dtheta) = Loss(theta) + dtheta * dLoss(theta) + O(dtheta^2)
= Loss(theta) + mu * sqrtN * C (assuming Lipschitz continuous)
==> min(Loss) = mu * sqrtN * C/2
amelius · 2d ago
> The transmitter acts like a miniature display screen and the detector like a camera.
So if I'm streaming a movie, it could be that the video is actually literally visible inside the datacenter?
tails4e · 2d ago
No, this is just an analogy. The reality is that the data is heavily modulated, and the video is also encoded, so at no point would something that visually looks like an image be visible in the fibre.
topspin · 1d ago
The story states there are 300 optical lines in the "fiberbundle." Let's assume this is arranged as 20x15 and the wavelength of the LED is visible and bright enough to perceive. So if your unencoded, monochrome 20x15 movie was aligned on every frame and rendered at 10E9 FPS, then yes, your movie would be visible at the end of one of these cables through a magnifying glass.
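Here's the arithmetic behind that, as a sketch (the 20x15 layout is my assumption, not from the article):

    fibers = 300                   # optical lines in the bundle, per the story
    rate_per_fiber = 10e9          # 10 Gb/s on-off keying per fiber
    pixels_per_frame = 20 * 15     # one bit per pixel per "frame"
    frames_per_second = rate_per_fiber          # each pixel flips once per bit
    aggregate = fibers * rate_per_fiber
    print(f"{aggregate/1e12:.0f} Tb/s total; a {pixels_per_frame}-pixel movie at {frames_per_second:.0e} frames/s")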
lo0dot0 · 2d ago
Obviously this is not how video compression and packets work but for the sake of the argument consider the following. The article speaks of a 300 fiber cable. A one bit per pixel square image with approx. 300 pixels is 17x17 in size. Not your typical video resolution.
fecal_henge · 2d ago
Not your typical frame rate either.
tehjoker · 2d ago
maybe a low res bit packed binary motion picture that's uncompressed
mystified5016 · 1d ago
Not any more than if you blinked an LED each time a bit comes across your network connection.
qwezxcrty · 2d ago
Not an expert in communications.
Would the SerDes be the new bottleneck in this approach? I imagine there is a reason for serial interfaces dominating over parallel ones, maybe timing skew between lanes; how can this be addressed in this massively parallel optical interface?
to11mtm · 2d ago
> timing skew between lanes
That's a big part of it. I remember in the early Pentium 4 days starting to see a lot more visible 'squiggles' on PCB traces on motherboards; the squiggles essentially being a case of 'these lines need more length to be about as long as the other lines and not skew timing'.
In the case of what the article is describing, I'm imagining a sort of 'harness cable' that has a connector on each end for all the fibers; if the fibers in the cable itself are all the same length, there wouldn't be a skew timing issue. (Instead, you worry about bend radius limitations.)
> Would the SerDes be the new bottleneck in the approach
I'd think yes, but at the same time in my head I can't really decide whether it's a harder problem than normal mux/demux.
fecal_henge · 2d ago
SerDes is already frequently parallelised. The difference is you never expect the edges, or even the entire bits, to arrive at the same time. You design your systems to recover timing per link so the skew doesn't become the constraint on the line rate.
bgnn · 1d ago
One can implement SerDes at any point of the electro-optical boundary. For example, if we have 1 Tbps incoming NRZ data from the fiber, and the CMOS technology at hand only allows a 10 GHz clock speed for the slicers, one can have 100x receivers (photodiode, TIA, slicer), or 1x photodiode and 100x TIA + slicer, or 1x photodiode + TIA and 100x slicers. The most common is the last one, and it spits out 100x parallel data.
Things get interesting if the losses are high and there needs to be a DFE. This limits speed a lot, but then copper solutions moved to sending multi-bit symbols (PAM 3, 4,5,6,8,16.. ) which can also be done in optical domain. One can even send multiple wavelengths in optical domain, so there are ways to boost the baud rate without requiring high clock frequencies.
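A small sketch of that arithmetic (the 1 Tbps / 10 GHz numbers are just the example above, not a specific product):

    import math

    line_rate = 1e12               # 1 Tb/s incoming NRZ
    slicer_clock = 10e9            # 10 GHz CMOS slicer clock
    parallel_lanes = int(line_rate / slicer_clock)   # -> 100 parallel slicers

    def bits_per_symbol(levels: int) -> float:
        return math.log2(levels)   # PAM-4 -> 2 bits/symbol, halving the required baud rate

    print(parallel_lanes, bits_per_symbol(4))        # 100, 2.0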
waterheater · 2d ago
>serial interfaces dominating over the parallel ones
Semi-accurate. For example, PCIe remains dominant in computing. PCIe is technically a serial protocol, as new versions of PCIe (7.0 is releasing soon) increase the serial transmission rate. However, PCIe is also parallel-wise scalable based on performance needs through "lanes", where one lane is a total of four wires, arranged as two differential pairs, with one pair for receiving (RX) and one for transmitting (TX).
PCIe scales up to 16 lanes, so a PCIe x16 interface will have 64 wires forming 32 differential pairs. When routing PCIe traces, the length of all differential pairs must be within <100 mils of each other (I believe; it's been about 10 years since I last read the spec). That's to address the "timing skew between lanes" you mention, and DRCs in the PCB design software will ensure the trace length skew requirement is respected.
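For a rough sense of what 100 mils of mismatch means in time (assumed FR-4 propagation delay; the real budget is in the PCIe spec):

    delay_ps_per_inch = 170        # typical stripline on FR-4, roughly 165-180 ps/in
    mismatch_inches = 0.100        # 100 mils
    skew_ps = delay_ps_per_inch * mismatch_inches
    print(f"~{skew_ps:.0f} ps of lane-to-lane skew")   # ~17 ps, small versus one UI at Gen3/4 rates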
>how can this be addressed in this massive parallel optical parallel interface?
From a hardware perspective, reserve a few "pixels" of the story's MicroLED transmitter array for link control, not for data transfer. Examples might be a clock or a data frame synchronization signal. From the software side, design a communication protocol which negotiates a stable connection between the endpoints and incorporates checksums.
Abstractly, the serial vs. parallel dynamic shifts as technology advances. Raising clock rates to shove more data down the line faster (serial improvement) works to a point, but you'll eventually hit the limits of your current technology. Still need more bandwidth? Just add more lines to meet your needs (parallel improvement). Eventually the technology improves, and the dynamic continues. A perfect example of that is PCIe.
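A purely hypothetical sketch of the link-control idea above (nothing like this is described in the article; the pixel counts and frame layout are made up):

    import zlib

    DATA_PIXELS = 296              # hypothetical payload lanes
    CTRL_PIXELS = 4                # hypothetical clock/sync/status lanes

    def build_frame(payload: bytes) -> dict:
        return {
            "sync": 0b1010,                    # fixed pattern driven on the control pixels
            "payload": payload,                # bits fanned out across the data pixels
            "crc32": zlib.crc32(payload),      # checksum checked by the protocol layer
        }

    frame = build_frame(b"hello")
    assert zlib.crc32(frame["payload"]) == frame["crc32"]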
cycomanic · 1d ago
They are doing 10 Gb/s over each fibre; to get to 10 Gb/s you have already undergone a parallel -> serial conversion in electronics (clock rates of your ASICs/FPGAs are much lower), so increasing the serial rate is in fact the bottleneck. Where the actual optimum serial rate lies depends highly on the cost of each transceiver, e.g. long-haul optical links operate at up to 1 Tb/s serial rates while datacenter interconnects are 10-25G serial AFAIK.
smj-edison · 1d ago
With this design, how do they route enough pins from the chip to the optical transceiver? Would it take chiplets to get enough lanes?
m3kw9 · 2d ago
If each cable is 10 Gb/s and uses 1 pixel to convert into electrical signals, would that mean they need a 10-gigaframe-per-second sensor?
cpldcpu · 2d ago
I think that's just a simplifying example. They would most likely not use an image sensor, but a photodetector with a broadband amplifier.
lo0dot0 · 2d ago
No, not necessarily. If you can distinguish different amplitude levels you can do better. For example, four-level amplitude modulation (4AM) carries two bits per symbol. There is also the option to use coherent optics, which can detect phase and carry additional information in the phase.
ls612 · 2d ago
Forgive the noob question but what stops us from making optical transistors?
qwezxcrty · 2d ago
I think the most fundamental reason is that there is no efficient enough nonlinearity at optical frequencies. Two beams (or frequencies, in some implementations) tend not to affect each other in common materials unless you have a very strong source (>1 W), so the current demonstrations of all-optical switching mostly use pulsed sources.
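A rough Kerr-effect estimate of the scale of the problem (standard silica-fiber nonlinearity over an assumed 1 cm path; engineered materials and resonators do far better, but it shows why chip-scale all-optical switching is so hard):

    import math

    gamma = 1.3e-3       # nonlinear coefficient of standard silica fiber, 1/(W*m)
    length = 0.01        # a 1 cm chip-scale device
    power_for_pi = math.pi / (gamma * length)    # power needed for a pi nonlinear phase shift
    print(f"~{power_for_pi/1e3:.0f} kW of optical power")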
perlgeek · 1d ago
As somebody who tried to do a PhD in optical communications, this is 100% correct.
I wonder if metamaterials might provide such nonlinearities in the future.
ls612 · 2d ago
I wonder if considerably more engineering and research effort will be applied here when we reach the limit of what silicon and electrons can do.
cycomanic · 1d ago
No, this is not an engineering issue; it's a problem of fundamental physics. Photons don't interact easily. That doesn't mean there are no specialised applications where optical processing can make sense, e.g. a matrix multiplication is really just a more complex lens, so it's become very popular to make ML accelerators based on this.
momoschili · 1d ago
Contrary to the prior commenter, there is definitely significant engineering going toward this, but it's not clear or likely that photonic computing will supplant electronic computing (at least not anytime soon), but rather most seem to think of it as an accelerator for highly parallel tasks. Two major ways people are thinking of achieving this are using lithium niobate devices which mediate nonlinear optical effects via light-matter interaction, and silicon photonic devices with electrically tunable elements. In the past there was a lot of work with III-V semiconductors (GaAs/InAs/GaN/AlN etc) but that seems to have leveled off in favor of lithium niobate.
Photonics has definitely proved itself in communications and linear computing, but still has a way to go in terms of general (nonlinear) compute.
ls612 · 1d ago
Yeah, that was sort of what I was thinking: is there a clever way to exploit light interacting with matter to turn a gate on/off?
elorant · 2d ago
Photons are way more difficult to control than electrons. They don't interact with each other.
dartharva · 1d ago
Electric signals can be made to switch currents on demand in CMOS transistors to make logic gates and eventually CPUs. Light signals, however, behave linearly - photons just pass through each other and don't interact.
cubefox · 1d ago
This article is misleading. TSMC doesn't "bet" on the tech by Avicena (the startup in question). Instead, Avicena appears to simply pay TSMC to help them with manufacturing. Here is the linked press release by Avicena:
https://www.businesswire.com/news/home/20250422988144/en/Avi...
Noting also that there have been multiple articles on IEEE Spectrum about this startup in the past, I really hope the journalists don't own the stock or are otherwise biased.
BardiaPezeshki · 1d ago
TSMC is developing custom detectors for Avicena on their own dime. They almost never do anything like this for start-ups. That is why it is a bet on Avicena. They are nurturing this technology because they think it has real potential. See the TSMC quotes in the article.
dartharva · 1d ago
I wonder if I will ever see a photonic CPU in my lifetime. Probably not; you'd have to invent a completely new material, never seen before, that somehow enables nonlinear interactions with light signals. It'd be nothing short of magic.