Minor: It would be nice if Avicena, the company TSMC is collaborating with on this, were mentioned in the HN title.
cycomanic · 13h ago
That article is really low on details and mixes up a lot of things. It compares microLEDs to traditional WDM fiber transmission systems with edge-emitting DFB lasers and ECLs, but in datacentre interconnects there are plenty of optical links already, and they use VCSELs (vertical cavity surface emitting lasers), which are much cheaper to manufacture. People have also been putting these into arrays and coupling to multi-core fiber. The difficulty here is almost always packaging, i.e. coupling the laser. I'm not sure why microLEDs would be better.
Also transmitting 10 Gb/s with a led seems challenging. The bandwidth of an incoherent led is large, so are they doing significant DSP (which costs money and energy and introduces latency) or are they restricting themselves to very short (10s of m) links?
rajnathani · 15m ago
The article is about chip interconnects. Think of it as replacing PCIe, NVLink, or HBM/DDR RAM buses with optical communication.
qwezxcrty · 13h ago
I guess they are doing directly modulated IM-DD for each link, so the DSP burden is not related to the coherence of the diodes? Also, indeed very short reach in the article.
cycomanic · 13h ago
The problem with both leds and imaging fibres is that modal dispersion is massive and completely destroys your signal after only a few meters of propagation. So unless you do MMSE (which I assume would be cost prohibitive), you really can only go a few meters. IMDD doesn't really make a difference here.
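To put numbers on that (my own back-of-envelope, assuming a plain step-index fibre with core index ~1.46 and NA ~0.2; these are illustrative values, not figures from the article):

  # Modal-dispersion-limited reach of a step-index multimode fibre at 10 Gb/s.
  # All parameters are illustrative assumptions, not values from the article.
  c = 3e8      # speed of light, m/s
  n1 = 1.46    # assumed core refractive index
  NA = 0.2     # assumed numerical aperture

  # Classic step-index estimate: delay spread between the axial ray and the
  # steepest guided ray, per metre of fibre.
  spread_per_m = NA**2 / (2 * n1 * c)     # seconds per metre (~46 ps/m here)

  bit_period = 1 / 10e9                   # 100 ps at 10 Gb/s
  # Rule of thumb: keep the spread under about a quarter of a bit period.
  reach = 0.25 * bit_period / spread_per_m
  print(f"{spread_per_m*1e12:.0f} ps/m of spread, usable reach ~{reach:.2f} m")

With those (assumed) numbers you get roughly half a metre before the eye closes, which is why you need graded-index fibre, equalisation, or very short hops.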
adgjlsfhk1 · 10h ago
I think this is intended for short distances (e.g. a few cm). CPU to GPU and network card to network card will still be lasers; the question is whether you can do core to core or CPU to RAM with optics.
cycomanic · 10h ago
But why are they talking about multicore fibres then? I would have expected ribbons. You might be right though.
topspin · 7h ago
> I would have expected ribbons.
The cable is just a 2D parallel optical bus. With a bundle like this, you can wrap it with a nice, thick PVC (or whatever) jacket and employ a small, square connector that matches the physical scheme of the 2D planar microLED array.
It's a brute force, simple minded approach enabled by high speed, low cost microled arrays. Pretty cool I think.
The ribbon concept could be applicable to PCBs though.
cycomanic · 3h ago
You might be right that they are talking about fibre bundles, but that's something different to a multicore fibre (and much larger as well, which could pose significant problems, especially if we are talking cm links). What isn't addressed is that LEDs are quite spatially incoherent and beam divergence is strong, so the fibres they must use are pretty large; coupling via just a connector might not be easy, especially if we want to avoid crosstalk.
What I'm getting at is that I don't see any advantage over VCSEL arrays. I'm not convinced that the price point is that different.
topspin · 3h ago
> You might be right and they are talking about fibre bundles
The caption of the image of the cable and connector reads: "CMOS ASIC with microLEDs sending data with blue light into a fiberbundle." So yes, fibre bundles.
> I don't see any advantage over vcsel arrays
They claim the following advantages:
1. Low energy use
2. Low "computational overhead"
3. Scalability
All of these at least pass the smell test. LEDs are indeed quite efficient relative to lasers. They cite about an order of magnitude "pJ/bit" advantage for the system over laser based optics, and I presume that comparison includes VCSELs. When you're trying to wheedle nuclear reactor restarts to run your enormous AI clusters, saving power is nice. The system has a parallel "conductor" design that likely employs high speed parallel CMOS latches, so the "computational overhead" claim could make sense: all you're doing is latching bits to/from PCB traces or IC pins, so all the SerDes and multiplexing cost is gone. They claim that it can easily be scaled to more pixels/lines. Sure, I guess: low power makes that easier.
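To put the pJ/bit claim in perspective, a quick sketch at a made-up but plausible scale (the link rate, link count, and absolute pJ/bit figures are my assumptions, only the order-of-magnitude gap comes from the article):

  # Order-of-magnitude power comparison for optical I/O at different pJ/bit.
  # Link rate and link count are assumptions for illustration only.
  link_rate_bps = 1e12      # 1 Tb/s aggregate per link (assumed)
  num_links = 100_000       # links in a large cluster (assumed)

  for label, pj_per_bit in [("~1 pJ/bit (assumed microLED figure)", 1.0),
                            ("~10 pJ/bit (assumed laser figure)", 10.0)]:
      watts_per_link = pj_per_bit * 1e-12 * link_rate_bps
      total_mw = watts_per_link * num_links / 1e6
      print(f"{label}: {watts_per_link:.0f} W/link, {total_mw:.1f} MW cluster-wide")
  # An order of magnitude in pJ/bit is the difference between ~0.1 MW and
  # ~1 MW of interconnect power at this (assumed) scale.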
There you are. All pretty simple.
I think there is a use case for this outside data centers. We're at the point where copper transmission lines are a real problem for consumers. Fiber can solve the signal integrity problem for such use cases, however--despite several famous runs at it (Thunderbolt, FireWire)--the cost has always precluded widespread adoption outside niche, professional, or high-end applications. Maybe LED based optics can make fiber cost competitive with copper for such applications: one imagines a very small, very low power microLED based transceiver costing only slightly more than a USB connector on each end of such a cable with maybe 4-8 parallel fibers. Just spit-balling here.
nsteel · 1h ago
Aren't they also claiming this is more reliable? I'm told laser reliability is a hurdle for CPO.
And given the talk about this as a CPO alternative, I was assuming this was for back plane and connections of a few metres, not components on the same PCB.
topspin · 44m ago
> Aren't they also claiming this is more reliable?
Indeed they do. I overlooked that.
I know little about microLED arrays and their reliability, so I won't guess about how credible this is: LED reliability has a lot of factors. The cables involved will probably be less reliable than conventional laser fiber optics due to the much larger number of fibers that have to be precision assembled. Likely to be more fragile as well.
On-site fabricating or repairing such cables likely isn't feasible.
tehjoker · 13h ago
short links it’s in the article
cycomanic · 13h ago
Ah I missed the 10m reference there. I'm not sure it makes more sense though. Typical intra-datacenter connections are 10s-100s of meters and use VCSELs, so introducing microleds just for the very short links instead of just parallelising the VCSEL connections (which is being done already)? If they could actually replace the VCSEL I would sort of see the point.
jauntywundrkind · 7h ago
There's been a constant drum-beat that even intra-rack is trying to make its way to optical as fast as it can, that copper is more and more complex and expensive to scale faster. If we have a relatively affordable short range optical system that doesn't require heavy computational work to do, that sounds like a godsend, like a way to increase bits per joule while reducing expensive cabling cost.
Sure, yes, optical might use expensive longer-range optics today! But using that framing to assess new technologies and what help they could be may be folly.
As I understand it (from designing high-speed electronics), the major limitations to data/clock rates in copper are signal integrity issues. Unwanted electromagnetic interactions all degrade your signal. Optics is definitely a way around this, but I wonder if/when it will ever hit similar limits.
lo0dot0 · 15h ago
Optics also have signal integrity issues. In practice OSNR and SNR limit optics. Cutting the fiber still breaks it. Small vibrations also affect the signal's phase.
cycomanic · 13h ago
Phase variations will not introduce any issues here, they most certainly are talking about intensity modulation. You can't really (easily) do coherent modulation using incoherent light sources like leds.
SNR is obviously an issue for any communication system, however fiber attenuation is orders of magnitude lower than coax.
The bigger issue in this case would be mode dispersion, considering that they are going through "imaging" fibres, i.e. different spatial components of the light walking off from each other, causing temporal spread of the pulses until they overlap and you can't distinguish 1's and 0's.
abdullahkhalids · 11h ago
Mode dispersion is frequency dependent phase changes.
cycomanic · 10h ago
That's chromatic dispersion, mode dispersion is spatial "path" dependent phase changes. Vibration is actually somewhat more relevant because if it wasn't for that we could theoretically undo mode dispersion (we would need phase information though).
That said, all of that is irrelevant to what the previous poster said about vibration-induced phase variation as an impairment. That's just not an issue; vibrations are way too slow to impair optical comms signals.
mycall · 7h ago
How do the gravity wave optical paths solve the vibration issues? Couldn't TSMC do something similar?
notepad0x90 · 15h ago
isn't attenuation also an issue with copper? maybe with small electronics it is negligible given the right amps? in other words, with no interference, electrons will still face impedance and start losing information.
to11mtm · 14h ago
Attenuation is going to be an issue for any signal, but in my experience fiber can go for many miles without a repeater, whereas with something like coax you're going one to two orders of magnitude less. [0]
[0] - Mind you, some of that for Coax is due to other issues around CTB and/or the challenge that in Coax, you've got many frequencies running through alongside each frequency having different attenuation per 100 foot...
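A rough illustration of that gap with ballpark figures (my assumptions, not measured values):

  # Reach comparison for a fixed loss budget. Attenuation figures are rough
  # assumptions: multimode fibre at 850 nm vs. RG-6 coax around 1 GHz.
  budget_db = 30.0                  # assumed allowable end-to-end loss

  fiber_db_per_km = 3.0             # ~3 dB/km, multimode at 850 nm (assumed)
  coax_db_per_100ft = 6.5           # ~6.5 dB/100 ft, RG-6 at 1 GHz (assumed)
  coax_db_per_km = coax_db_per_100ft * (1000 / 30.48)

  print(f"fibre reach: {budget_db / fiber_db_per_km:.0f} km")
  print(f"coax reach:  {budget_db / coax_db_per_km * 1000:.0f} m")
  # -> ~10 km vs ~140 m, before even counting coax's frequency-dependent
  #    tilt and the CTB/intermod issues mentioned above.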
bgnn · 14h ago
This is the main mechanism of interference anyhow, called inter-symbol-interference.
moelf · 16h ago
luckily photons are bosons (if we ever push things to this level of extreme)
Taek · 15h ago
This comment appears insightful but I have no idea what it means. Can someone elaborate?
scheme271 · 14h ago
Electrons are fermions which means that two electrons can't occupy the same quantum state (Pauli exclusion principle). Bosons don't have the limit so I believe that implies that you can have stronger signals at the low end since you can have multiple photons conveying or storing the same information.
xeonmc · 11h ago
Also less chance for external interference.
cycomanic · 13h ago
What the previous poster is implying is that electrons interact much more strongly than photons. Hence electrons are very good for processing (e.g. building a transistor), while photons are very good for information transfer. This is also a reason why much of the traditional "optical computer" research was fundamentally flawed, just from first principles one could estimate that power requirements are prohibitive.
nullc · 11h ago
> This is also a reason why much of the traditional "optical computer" research was fundamentally flawed
presumably also because photons at wavelengths we can work with are BIG
xeonmc · 11h ago
Fermions can “hit each other” whereas bosons “pass through each other”.
(Strong emphasis on the looseness of the scare quotes.)
EA-3167 · 15h ago
The energy densities required for photon-photon interactions are so far beyond anything we need to worry about that it's a non-issue. Photons also aren't going to just ignore local potential barriers and tunnel at the energy levels and scales involved in foreseeable chip designs either.
wyager · 11h ago
P-p interactions are not an issue but we do have high enough field intensities in high bandwidth fibers to run into p-f nonlinearity issues.
wyager · 11h ago
We already regularly run into optical nonlinearity issues in submarine cables. The instantaneous EM fields generated in high bandwidth fiber are sufficiently strong to cause nonlinear interactions with the fiber medium that we have to correct for.
speedbird · 14h ago
Headline seems misleading. They're building detectors for someone, not 'betting' on it.
ggm · 9h ago
The word is mostly used for finance investment promotion. I'd guess that it's written to attract those bloggers/casters, and make money for somebody.
albertzeyer · 13h ago
There is also optical neuromorphic computing, as an alternative to electronic neuromorphic computing like memristors. It's a fascinating field, where you use optical signals to perform analog computing. For example:
https://www.nature.com/articles/s41566-020-00754-y
https://www.nature.com/articles/s44172-022-00024-5
As far as I understood, you can only compute quite small neural networks until the noise signal gets too large, and also only a very limited set of computations works well in photonics.
cycomanic · 13h ago
The issue with optical neuromorphic computing is that the field has been doing the easy part, i.e. the matrix multiplication. We have known for decades that imaging/interference networks can do matrix operations in a massively parallel fashion. The problem is the nonlinear activation function between your layers. People have largely been ignoring this, or just converted back to electrical (now you are limited again by the cost/bandwidth of the electronics).
seventytwo · 13h ago
Seems hard to imagine there’s not some non-linear optical property they could take advantage of
cycomanic · 12h ago
The problem is intensity/power; as discussed previously, photon-photon interactions are weak, so you need very high intensities to get a reasonable nonlinear response. The issue is that optical matrix operations work by spreading out the light over many parallel paths, i.e. reducing the intensity in each path. There might be some clever ways to overcome this, but so far everyone has avoided that problem. They say they did "optical deep learning"; what they really did was an optical matrix multiplication, but saying that would not have resulted in a Nature publication.
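A toy illustration of that scaling problem, assuming a Kerr-type (self-phase-modulation) activation and silica-like constants; all values are placeholders:

  # Fan a fixed optical power out over N parallel paths, as an optical matrix
  # multiply does, and see what happens to a Kerr-type nonlinear phase shift
  # phi = gamma * P_path * L. Constants are rough silica-like assumptions.
  gamma = 1.3e-3    # assumed nonlinear coefficient, 1/(W*m)
  L = 0.01          # assumed interaction length, 1 cm
  P_total = 0.1     # assumed total input power, 100 mW

  for N in [1, 10, 100, 1000]:
      phi = gamma * (P_total / N) * L
      print(f"N={N:5d}: nonlinear phase per path = {phi:.1e} rad")
  # Even undivided the phase is ~1e-6 rad with these numbers; every extra
  # factor of parallelism divides it further, which is why the activation
  # usually ends up back in electronics.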
programjames · 10h ago
There is, and people have trained purely optical neural networks:
https://arxiv.org/abs/2208.01623
The real issue is trying to backpropagate those nonlinear optics. You need a second nonlinear optical component that matches the derivative of the first nonlinear optical component. In the paper above, they approximate the derivative by slightly changing the parameters, but that means the training time scales linearly with the number of parameters in each layer.
Note: the authors claim it takes O(sqrt N) time, but they're forgetting that the learning rate mu = o(1/sqrt N) if you want to converge to a minimum:
Loss(theta + dtheta) = Loss(theta) + dtheta * dLoss(theta) + O(dtheta^2)
= Loss(theta) + mu * sqrtN * C (assuming Lipschitz continuous)
==> min(Loss) = mu * sqrtN * C/2
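For what it's worth, here is a minimal sketch of the generic perturbative (finite-difference) gradient estimate that "slightly changing the parameters" amounts to; it is not the paper's exact procedure, but it shows the O(N) extra forward passes per update:

  # Minimal sketch of gradient estimation by parameter perturbation: one
  # extra forward pass per parameter per update -> O(N) cost per layer.
  import numpy as np

  def forward(theta, x):
      # Stand-in for an optical forward pass we can evaluate but not backprop.
      return np.tanh(theta @ x)

  def loss(theta, x, target):
      return np.mean((forward(theta, x) - target) ** 2)

  def perturbative_grad(theta, x, target, eps=1e-4):
      grad = np.zeros_like(theta)
      base = loss(theta, x, target)
      for i in range(theta.size):              # N forward passes
          bumped = theta.copy()
          bumped.flat[i] += eps
          grad.flat[i] = (loss(bumped, x, target) - base) / eps
      return grad

  rng = np.random.default_rng(0)
  theta = rng.normal(size=(4, 8))
  x, target = rng.normal(size=8), rng.normal(size=4)
  print(perturbative_grad(theta, x, target).shape)   # (4, 8): 32 extra passes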
amelius · 15h ago
> The transmitter acts like a miniature display screen and the detector like a camera.
So if I'm streaming a movie, it could be that the video is actually literally visible inside the datacenter?
tails4e · 15h ago
No, this is just an analogy. The reality is the data is heavily modulated, and also the video is encoded, so at no point would something that visually looks like an image be visible in the fibre.
topspin · 7h ago
The story states there are 300 optical lines in the "fiberbundle." Let's assume this is arranged as 20x15 and the wavelength of the LED is visible and bright enough to perceive. So if your unencoded, monochrome 20x15 movie was aligned on every frame and rendered at 10E9 FPS, then yes, your movie would be visible at the end of one of these cables through a magnifying glass.
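Checking the arithmetic (the 20x15 layout and 10 Gb/s per fibre are assumptions carried over from the thread, not article figures):

  # Sanity check on the 300-fibre "movie" framing.
  fibres = 300
  per_fibre_bps = 10e9             # assumed per-fibre line rate
  fps = per_fibre_bps              # one bit (pixel) per fibre per frame
  aggregate_tbps = fibres * per_fibre_bps / 1e12
  print(f"{fps:.0e} frames/s of {fibres}-pixel monochrome frames, "
        f"{aggregate_tbps:.0f} Tb/s aggregate")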
lo0dot0 · 14h ago
Obviously this is not how video compression and packets work but for the sake of the argument consider the following. The article speaks of a 300 fiber cable. A one bit per pixel square image with approx. 300 pixels is 17x17 in size. Not your typical video resolution.
fecal_henge · 14h ago
Not your typical frame rate either.
mystified5016 · 10h ago
Not any more than if you blinked an LED each time a bit comes across your network connection.
tehjoker · 15h ago
maybe a low res bit packed binary motion picture thats uncompressed
smj-edison · 8h ago
With this design, how do they route enough pins from the chip to the optical transceiver? Would it take chiplets to get enough lanes?
qwezxcrty · 14h ago
Not an expert in communications.
Would the SerDes be the new bottleneck in this approach? I imagine there is a reason for serial interfaces dominating over parallel ones, maybe timing skew between lanes; how can this be addressed in this massively parallel optical interface?
to11mtm · 14h ago
> timing skew between lanes
That's a big part of it. I remember in the Early Pentium 4 days, starting to see a lot more visible 'squiggles' on PCB traces on motherboards; the squiggles essentially being a case of 'these lines need more length to be about as long as the other lines and not skew timing'
In the case of what the article is describing, I'm imagining a sort of 'harness cable' that has a connector on each end for all the fibers; if the fibers in the cable itself are all the same length, there wouldn't be a skew timing issue. (Instead, you worry about bend radius limitations.)
> Would the SerDes be the new bottleneck in the approach
I'd think yes, but at the same time in my head I can't really decide whether it's a harder problem than normal mux/demux.
fecal_henge · 14h ago
SerDes is already frequently parallelised. The difference is you never expect the edges, or even the entire bits, to arrive at the same time. You design your systems to recover timing per link so the skew doesn't become the constraint on the line rate.
bgnn · 13h ago
one can implement SerDes at any point of the electro-optical boundary. For example, if we have 1 Tbps incoming NRZ data from the fiber, and the CMOS technology at hand only allows 10 GHz clock speed for the slicers, one can have 100x receivers (photodiode, TIA, slicer), or 1x photodiode, 100x TIA + slicer, or 1x photodiode + TIA and 100x slicers. The most common is the last one, and it spits out 100x parallel data.
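A behavioural sketch of that last arrangement; real slicers interleave clock phases rather than shuffling lists, so this only shows the 1:100 fan-out, nothing circuit-level:

  # 1 Tb/s serial stream round-robined onto 100 lanes, each of which then
  # only needs to be sampled at 10 GHz. Purely behavioural illustration.
  def deserialise(bits, lanes=100):
      out = [[] for _ in range(lanes)]
      for i, b in enumerate(bits):
          out[i % lanes].append(b)   # lane k gets bits k, k+100, k+200, ...
      return out

  serial = [i & 1 for i in range(1000)]      # stand-in bitstream
  parallel = deserialise(serial)
  print(len(parallel), len(parallel[0]))     # 100 lanes x 10 bits each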
Things get interesting if the losses are high and there needs to be a DFE. This limits speed a lot, but then copper solutions moved to sending multi-bit symbols (PAM-3, 4, 5, 6, 8, 16...) which can also be done in the optical domain. One can even send multiple wavelengths in the optical domain, so there are ways to boost the baud rate without requiring high clock frequencies.
waterheater · 14h ago
>serial interfaces dominating over the parallel ones
Semi-accurate. For example, PCIe remains dominant in computing. PCIe is technically a serial protocol, as new versions of PCIe (7.0 is releasing soon) increase the serial transmission rate. However, PCIe is also parallel-wise scalable based on performance needs through "lanes", where one lane is a total of four wires, arranged as two differential pairs, with one pair for receiving (RX) and one for transmitting (TX).
PCIe scales up to 16 lanes, so a PCIe x16 interface will have 64 wires forming 32 differential pairs. When routing PCIe traces, the length of all differential pairs must be within <100 mils of each other (I believe; it's been about 10 years since I last read the spec). That's to address the "timing skew between lanes" you mention, and DRCs in the PCB design software will ensure the trace length skew requirement is respected.
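To see why the tolerance is that tight, here is a rough conversion of a 100 mil mismatch into time; the propagation delay and the Gen5 rate are my assumptions, not the spec's numbers:

  # Convert a trace-length mismatch into skew and compare it to a unit
  # interval. ~170 ps/inch FR-4 stripline delay and the 100 mil figure are
  # ballpark assumptions.
  prop_delay_ps_per_inch = 170
  mismatch_inch = 0.100                 # 100 mils
  skew_ps = prop_delay_ps_per_inch * mismatch_inch

  ui_ps = 1e12 / 32e9                   # PCIe 5.0: 32 GT/s -> 31.25 ps UI
  print(f"skew ~{skew_ps:.0f} ps, about {skew_ps/ui_ps:.0%} of a Gen5 UI")
  # -> ~17 ps, already a large fraction of a unit interval, which is why
  #    the DRCs police the matching so aggressively.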
>how can this be addressed in this massive parallel optical parallel interface?
From a hardware perspective, reserve a few "pixels" of the story's MicroLED transmitter array for link control, not for data transfer. Examples might be a clock or a data frame synchronization signal. From the software side, design a communication protocol which negotiates a stable connection between the endpoints and incorporates checksums.
Abstractly, the serial vs. parallel dynamic shifts as technology advances. Raising clock rates to shove more data down the line faster (serial improvement) works to a point, but you'll eventually hit the limits of your current technology. Still need more bandwidth? Just add more lines to meet your needs (parallel improvement). Eventually the technology improves, and the dynamic continues. A perfect example of that is PCIe.
cycomanic · 13h ago
They are doing 10 Gb/s over each fibre; to get to 10 Gb/s you have already undergone a parallel -> serial conversion in electronics (clock rates of your ASICs/FPGAs are much lower), so increasing the serial rate is in fact the bottleneck. Where the actual optimum serial rate lies depends highly on the cost of each transceiver, e.g. long-haul optical links operate at up to 1 Tb/s serial rates while datacenter interconnects are 10-25G serial AFAIK.
ls612 · 14h ago
Forgive the noob question but what stops us from making optical transistors?
qwezxcrty · 14h ago
I think the most fundamental reason is that there is no efficient enough nonlinearity at optical frequencies. So two beams(or frequencies in some implementation) tends not to affect each other in common materials, unless you have a very strong source (>1 W) so the current demonstrations for all-optical switching are mostly using pulsed sources.
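Rough numbers on how weak that nonlinearity is in plain silica (n2 and the mode geometry are textbook-ish assumptions; engineered materials and resonators do better, but the order of magnitude is the point):

  # Order-of-magnitude power needed for a pi Kerr phase shift over a
  # chip-scale length in silica. All parameters are rough assumptions.
  import math

  n2 = 2.6e-20          # silica Kerr index, m^2/W (assumed)
  wavelength = 1.55e-6  # m (assumed)
  A_eff = 80e-12        # effective mode area, 80 um^2 (assumed)
  L = 0.01              # interaction length, 1 cm (assumed)

  gamma = 2 * math.pi * n2 / (wavelength * A_eff)   # 1/(W*m)
  P_for_pi = math.pi / (gamma * L)
  print(f"P for a pi shift over 1 cm: {P_for_pi/1e3:.0f} kW")
  # -> a couple of hundred kW of optical power, hence pulsed sources or
  #    resonant enhancement in all-optical switching demos.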
perlgeek · 3h ago
As somebody who tried to do a PhD in optical communications, this is 100% correct.
I wonder if meta material might provide such nonlinearities in the future.
ls612 · 14h ago
I wonder if considerably more engineering and research effort will be applied here when we reach the limit of what silicon and electrons can do.
momoschili · 11h ago
Contrary to the prior commenter, there is definitely significant engineering going toward this, but it's not clear or likely that photonic computing will supplant electronic computing (at least not anytime soon), but rather most seem to think of it as an accelerator for highly parallel tasks. Two major ways people are thinking of achieving this are using lithium niobate devices which mediate nonlinear optical effects via light-matter interaction, and silicon photonic devices with electrically tunable elements. In the past there was a lot of work with III-V semiconductors (GaAs/InAs/GaN/AlN etc) but that seems to have leveled off in favor of lithium niobate.
Photonics has definitely proved itself in communications and linear computing, but still has a way to go in terms of general (nonlinear) compute.
ls612 · 10h ago
Yeah, that was sort of what I was thinking: is there a clever way to exploit light interacting with matter to turn the gate on/off?
cycomanic · 13h ago
No this is not an engineering issue, it's a problem of fundamental physics. Photons don't interact easily. That doesn't mean there are not specialised applications where optical processing can make sense, e.g. a matrix multiplication is really just a more complex lens so it's become very popular to make ML accelerators based on this.
elorant · 14h ago
Photons are way more difficult to control than electrons. They don't interact with each other.
m3kw9 · 16h ago
If each cable is 10 Gb/s and uses 1 pixel to convert into electrical signals, would that mean they need a 10 giga-frame-per-second sensor?
cpldcpu · 15h ago
I think that's just a simplifying example. They would most likely not use an image sensor, but a photodetector with a broadband amplifier.
lo0dot0 · 15h ago
No, not necessarily. If you can distinguish different amplitude levels you can do better. For example, four-level amplitude modulation (PAM-4) carries two bits per symbol. There is also the option to use coherent optics, which can detect phase and carry additional information in the phase.
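For concreteness, the saving in symbol rate is just log2 of the number of levels (generic modulation arithmetic; the article itself only describes simple on-off signalling per fibre):

  # Required symbol rate for a 10 Gb/s lane at different level counts.
  import math

  bit_rate = 10e9
  for levels in [2, 4, 8]:              # OOK, PAM-4, PAM-8
      bits_per_symbol = math.log2(levels)
      baud = bit_rate / bits_per_symbol
      print(f"{levels}-level: {bits_per_symbol:.0f} bit/symbol -> {baud/1e9:.2f} GBd")
  # Four levels halve the needed symbol rate, at the cost of a tighter
  # SNR requirement per symbol.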