Negotiating PoE+ Power in the Pre‑Boot Environment

212 points by pietrushnic on 5/27/2025 (roderickkhan.com)

Comments (61)

minetest2048 · 21h ago
A related problem is single-board computers that rely on USB-PD for power. USB-PD sources require the sink to complete power-delivery negotiation within about 5 seconds, or they will cut power or do funny things. Because USB-PD negotiation is handled in Linux, by the time Linux boots it's already too late: the power supply cuts power and the board gets stuck in a boot loop: https://www.spinics.net/lists/linux-usb/msg239175.html

The way they're trying to solve it is very similar to this article: by doing the USB-PD negotiation during the U-Boot bootloader stage (a rough sketch of the flow follows the links):

- https://gitlab.collabora.com/hardware-enablement/rockchip-35...

- https://lore.kernel.org/u-boot/20241015152719.88678-1-sebast...
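To make the timing constraint concrete, here is a rough sketch of the sink-side flow that has to finish before the source gives up. The helper functions are hypothetical (not a real U-Boot or kernel API); they only show the shape of the negotiation that needs to happen before the OS driver exists.

```c
/* Hypothetical sketch of sink-side USB-PD negotiation done in the bootloader,
 * before any OS driver exists.  None of these helpers are a real U-Boot or
 * Linux API; they only illustrate the flow that has to complete before the
 * source gives up (~5 s in the report above). */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t pdo[7];   /* Source_Capabilities: up to 7 power data objects */
    int      count;
} src_caps_t;

/* Hypothetical helpers, e.g. backed by a FUSB302-class PD PHY. */
bool pd_wait_source_caps(src_caps_t *caps, unsigned timeout_ms);
int  pd_pick_pdo(const src_caps_t *caps, unsigned need_mw);  /* index or -1 */
bool pd_send_request(int pdo_index);
bool pd_wait_accept_and_ps_rdy(unsigned timeout_ms);

/* Negotiate a contract covering `need_mw` milliwatts; true on success. */
bool preboot_pd_negotiate(unsigned need_mw)
{
    src_caps_t caps;

    /* 1. The source advertises its capabilities shortly after attach. */
    if (!pd_wait_source_caps(&caps, 2000))
        return false;                /* stay on 5 V default USB power */

    /* 2. Pick a PDO that covers the budget and request it. */
    int idx = pd_pick_pdo(&caps, need_mw);
    if (idx < 0 || !pd_send_request(idx))
        return false;

    /* 3. Wait for Accept + PS_RDY; only then is the higher-power contract live. */
    return pd_wait_accept_and_ps_rdy(1000);
}
```

Doing this in the bootloader means the contract is already in place by the time control is handed to the OS.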

rickdeckard · 20h ago
Interesting, thanks for sharing. I missed that evolution of "thin" USB-C controllers which delegate the PD handshake elsewhere.

I don't know yet how I feel about the fact that a driver in the OS is supposed to take this role and tell the power supply how much power to deliver. Not necessarily a novel security concern, but a potential nightmare from a plain B2C customer-service perspective (e.g. a faulty driver causing the system to shut down during boot, fry the motherboard, ...)

gorkish · 12h ago
It's not, per se, a driver doing the PD negotiation in software; it's more that the USB chipset isn't initialized and configured for PD negotiation (or anything else, for that matter) until the CPU twiddles its PCI configuration space.

I would have imagined that USB controller chipsets would offer some nonvolatile means to set the PD configuration (like jumpers or EEPROM) precisely because of this issue. It's surprising to me that such a feature isn't common.

hypercube33 · 16h ago
It kinda blows my mind that the Ethernet or USB PHY doesn't have this stored in some tiny NVRAM and handle all of the negotiation itself. What if I have a battery to charge while the device is off, such as a laptop? How does Android deal with this when it's not booted? Does the BIOS handle this stuff?
wolrah · 16h ago
> What if I have a battery to charge while the device is off, such as a laptop? How does Android deal with this when it's not booted? Does the BIOS handle this stuff?

In my experience, if the device doesn't have enough power to actually boot it will simply slow charge at the default USB rate.

This can be problematic with devices that immediately try to boot when powered on.

RulerOf · 13h ago
> This can be problematic with devices that immediately try to boot when powered on.

I had an iPad 3 stuck in a low-battery reboot loop like this for hours once upon a time. I eventually got the idea to force it into DFU mode and was finally able to let it charge long enough to complete its boot process.

kevin_thibedeau · 14h ago
There are standalone PD controllers that can be configured with the desired power profile(s) in flash.
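Those stored profiles are essentially PDOs. As a minimal sketch (bit layout from my reading of the USB-PD spec: voltage in 50 mV units in bits 19:10, current in 10 mA units in bits 9:0; verify against your controller's datasheet before using), a fixed-supply PDO can be packed like this:

```c
/* Sketch: packing a fixed-supply PDO the way standalone PD controllers store
 * power profiles in NVM.  Bit layout per my reading of the USB-PD spec
 * (voltage in 50 mV units in bits 19:10, current in 10 mA units in bits 9:0);
 * verify against the spec and your controller's datasheet before relying on it. */
#include <stdint.h>
#include <stdio.h>

static uint32_t fixed_pdo(unsigned millivolts, unsigned milliamps)
{
    uint32_t v = millivolts / 50;   /* 50 mV units */
    uint32_t i = milliamps / 10;    /* 10 mA units */
    return ((v & 0x3FF) << 10) | (i & 0x3FF);   /* bits 31:30 = 00 -> fixed supply */
}

int main(void)
{
    /* e.g. a 20 V / 2.25 A (45 W) profile */
    printf("PDO = 0x%08X\n", fixed_pdo(20000, 2250));
    return 0;
}
```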


rjsw · 19h ago
Reading the thread, the behaviour seems to depend on the power supply. I have powered a Pinebook Pro via USB-C with a PinePower PSU, and didn't even have a FUSB302 driver in the OS at the time (I am currently adding one).

Other boards don't do USB-PD at all and just rely on you using a PSU with a USB-C connector that defaults to 5V, e.g. RPi and Orange Pi 5 (RK3588).

rickdeckard · 17h ago
For 5V output (as used on the Pinebook) you don't need to negotiate anything over USB-PD; that's the default a USB-C PSU provides to ensure legacy USB compatibility. Support for higher current at 5V can then be "unlocked" with resistors on the CC lines (much like a USB-A charger advertises current with resistors on its data lines).

Everything beyond 5V requires a handshake between device and PSU, which ensures that the connected device can actually handle the higher power output.

nfriedly · 13h ago
It's arguably not "negotiation", but if the connector is USB-C on both ends, then even 5V requires a couple of resistors to determine which side is the source and which side is the sink.

It's pretty common for really cheap electronics to skip these resistors, and then they can only be powered with a USB-A to USB-C cable, not C-to-C. (Because USB-A ports are always a source and never a sink.) Adafruit even makes a $4.50 adapter to fix the issue.

But you're right that everything higher than 5V & 3A gets significantly more complex.
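For reference, a minimal sketch of what those resistors buy you: the sink pulls each CC pin down with Rd (5.1 kΩ), the source pulls it up with Rp, and the resulting CC voltage tells the sink how much current it may draw at 5 V. The thresholds below are approximate (roughly the comparator levels used by common PD PHYs); treat them as an assumption to check against the Type-C spec.

```c
/* Sketch: how a USB-C sink interprets the CC-line voltage set up by the
 * source's Rp pull-up against the sink's 5.1 kOhm Rd pull-down.
 * Thresholds are approximate; verify against the current Type-C spec
 * revision before using them in a design. */
#include <stdio.h>

typedef enum { CC_OPEN, CC_DEFAULT_USB, CC_1A5, CC_3A0 } cc_advert_t;

static cc_advert_t classify_cc(double cc_volts)
{
    if (cc_volts < 0.20)  return CC_OPEN;        /* no source attached        */
    if (cc_volts < 0.66)  return CC_DEFAULT_USB; /* default USB (500/900 mA)  */
    if (cc_volts < 1.23)  return CC_1A5;         /* 1.5 A at 5 V              */
    return CC_3A0;                               /* 3.0 A at 5 V              */
}

int main(void)
{
    printf("%d\n", classify_cc(1.7));   /* prints 3 -> CC_3A0 */
    return 0;
}
```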

varjag · 18h ago
Apple solves this by doing all PD negotiation in hardware.
nyrikki · 18h ago
Unless you are very price sensitive, using USB Power Delivery ICs is the norm now for most devices, but PD is different from PoE. PD is just loose-tolerance resistors on USB.
p12tic · 18h ago
Incorrect. https://en.wikipedia.org/wiki/USB_hardware#USB_Power_Deliver... is a good starting point on the subject: "PD-aware devices implement a flexible power management scheme by interfacing with the power source through a bidirectional data channel and requesting a certain level of electrical power <...>".
nyrikki · 17h ago
You can tell I have been in the 5v world too much, thanks for the correction.
floating-io · 18h ago
No, it's not. You can do very basic selection with resistors, but you can't get above 5V (or more than a couple of amps IIRC) without using the actual PD communication protocol.
throw0101d · 1d ago
> PoE Standards Overview (IEEE 802.3)

For the record, 802.3bt was released in 2022:

* https://en.wikipedia.org/wiki/Power_over_Ethernet

It allows for up to 71W at the far end of the connection.
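For orientation, here is the class table as I remember it (minimum PSE output at the port vs. power assured at the PD after cable loss); worth verifying against the standards before relying on it:

```c
/* PoE class quick reference as I remember it: minimum PSE output at the port
 * vs. power assured at the PD after cable loss.  Values are from memory of
 * 802.3af/at/bt; verify against the standards before relying on them. */
struct poe_class {
    int         class_num;
    const char *standard;
    double      pse_watts;   /* at the switch port              */
    double      pd_watts;    /* available to the powered device */
};

const struct poe_class poe_classes[] = {
    { 0, "802.3af (PoE)",   15.4, 12.95 },
    { 1, "802.3af (PoE)",    4.0,  3.84 },
    { 2, "802.3af (PoE)",    7.0,  6.49 },
    { 3, "802.3af (PoE)",   15.4, 12.95 },
    { 4, "802.3at (PoE+)",  30.0, 25.50 },
    { 5, "802.3bt (PoE++)", 45.0, 40.00 },
    { 6, "802.3bt (PoE++)", 60.0, 51.00 },
    { 7, "802.3bt (PoE++)", 75.0, 62.00 },
    { 8, "802.3bt (PoE++)", 90.0, 71.30 },  /* the ~71 W figure above */
};
```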

londons_explore · 1d ago
The UART standards didn't specify bit rates, which allowed the same standard to scale all the way from 300 bps in the 1960s up to 10+ Mbps in the 90s.

Why can't PoE standards do the same?

Simply don't set voltage or current limits in the standard, and instead let endpoint devices advertise what they're capable of.

Aurornis · 1d ago
> Simply don't set voltage or current limits in the standard,

There are thermal and safety limits to how much current and voltage you can send down standard cabling. The top PoE standards are basically at those limits.

> and instead let endpoint devices advertise what they're capable of.

There are LLDP provisions to negotiate power in 0.1W increments.

The standards are still very useful for having a known target to hit. It's much easier to say a device is compatible with one of the standards than to have to check the voltage and current limits for everything.
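That 0.1 W granularity comes from the IEEE 802.3 "Power via MDI" organizationally specific TLV, which is also the mechanism the article sends from UEFI. Here is a sketch of the encoding, with the field layout from memory (OUI 00-12-0F, subtype 2, requested/allocated power in 0.1 W units, flag bytes are example values); check IEEE 802.1AB / 802.3 before depending on exact offsets:

```c
/* Sketch of the IEEE 802.3 "Power via MDI" organizationally specific LLDP TLV,
 * which carries PD-requested / PSE-allocated power in 0.1 W units.  Field
 * layout is from memory (OUI 00-12-0F, subtype 2, 802.3at extension fields);
 * check IEEE 802.1AB / 802.3 before relying on exact offsets or flag values. */
#include <stddef.h>
#include <stdint.h>

/* Builds the TLV into `out` (needs at least 14 bytes); returns bytes written.
 * Power arguments are in 0.1 W units, e.g. 255 = 25.5 W. */
size_t build_power_via_mdi_tlv(uint8_t *out, unsigned requested, unsigned allocated)
{
    const uint16_t info_len = 12;                /* OUI through allocated power */
    const uint16_t hdr = (127u << 9) | info_len; /* type 127, 9-bit length      */
    uint8_t *p = out;

    *p++ = hdr >> 8;  *p++ = hdr & 0xFF;
    *p++ = 0x00; *p++ = 0x12; *p++ = 0x0F;       /* IEEE 802.3 OUI               */
    *p++ = 0x02;                                 /* subtype: Power via MDI       */
    *p++ = 0x0F;                                 /* MDI power support (example)  */
    *p++ = 0x01;                                 /* PSE power pair (example)     */
    *p++ = 0x04;                                 /* power class (example)        */
    *p++ = 0x00;                                 /* type/source/priority (example)*/
    *p++ = requested >> 8;  *p++ = requested & 0xFF;   /* PD requested power     */
    *p++ = allocated >> 8;  *p++ = allocated & 0xFF;   /* PSE allocated power    */
    return (size_t)(p - out);
}
```

A 25.5 W request, for example, would put 255 (0x00FF) in the PD-requested-power field.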

esseph · 1d ago
That would require them to know the standard of cable they're connected with.

Unless you like home and warehouse fires.

Or you could add per-port fuses. That sounds incredibly expensive.

brirec · 1d ago
The standard is, well, a standard, and that's why PoE is safe in the first place. Adding per-port fuses won't stop a bad cable from burning, because the fuse would have to be sized for the rating of the PoE switch.

This is why you don’t want “fake” Cat6 etc. cable. I’ve seen copper-clad aluminum sold as cat6 cable before, but that shit will break 100% of the time and a broken cable will absolutely catch fire from a standard 802.at switch.

esseph · 1d ago
There are also distance limits based on the type of cable used and the power drawn by the end device. The more you push that, the more heat you build. Shielding reduces that heat factor.

https://en.wikipedia.org/wiki/Power_over_Ethernet#Power_capa...

For my day job I power a lot of very expensive not-even-on-the-market-yet radios and other equipment via multiple PoE standards, mixed vendors, 2-pair, 4-pair, etc., and we have run into all kinds of PoE problems over the years.

PoE fires do happen. Sometimes it's the cable or the connector; sometimes something happened to the cable run. Sometimes the gear melts.

https://www.powerelectronictips.com/halt-and-catch-fire-the-...

throw0101d · 20h ago
> There are also distance limits

It should be noted that there are two standards (of course) for Ethernet cabling: one (TIA) officially hardcodes distances (e.g., 100m), but the other (ISO) simply specifies that the signal-to-noise ratio has to be within certain limits, which could allow for longer distances (>100m):

* https://www.youtube.com/watch?v=kNa_IdfivKs

A specific product that lets you go longer than 100m:

* https://www.youtube.com/watch?v=ZY48KUAZKhM

esseph · 1d ago
As for your note about PoE standards btw, I remember an old joke, something along the lines of "The best thing about standards is that there are so many to choose from!"

---

> Non-standard implementations: There are more than ten proprietary implementations. The more common ones are discussed below [in the linked article].

https://en.wikipedia.org/wiki/Power_over_Ethernet#Non-standa...

RF_Savage · 1d ago
Proper PoE sources have active per-port current monitoring and will disable PoE power in case of an overcurrent event.
esseph · 1d ago
You can be over the thermal capacity of the cable without having too much draw on the port.
izacus · 23h ago
So the situation would be: you create a setup, buy devices, and then they randomly shut down as they pull too much current? How is that better than having a well-defined standard that ensures compatibility?
crote · 20h ago
The device first negotiates a certain current capability. If a device explicitly asks for 15W and then goes on to draw 60W, you can hardly call it a "random" shutdown: it is clearly misbehaving, so it is best to shut it down to prevent further damage.
izacus · 16h ago
Again, how's that better than having a clear standard which ensures that the devices you buy are conformant from the get go?
lazide · 12h ago
There is no device standard which can ensure the existing networking cabling isn’t damaged or subtly out of spec.
wmf · 1d ago
Does POE+++++ measure the cable? If not, there's nothing in the protocol stopping you from overloading the cable.
esseph · 1d ago
Have you ever run a DC voltage-drop calculation for a Cat5/6/7 cable?

It can be substantial (a back-of-the-envelope example below). But yes, there are cable spec requirements for PoE depending on the demands of the device!
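Assuming roughly 0.084 Ω/m per 24 AWG conductor, two conductors in parallel per leg, 54 V at the PSE, and a constant-power load at the PD (all ballpark assumptions, not spec values), the numbers look like this:

```c
/* Back-of-the-envelope DC voltage drop for two-pair PoE over 24 AWG Cat5e.
 * Assumed figures: ~0.084 ohm/m per conductor (24 AWG copper), two conductors
 * in parallel per leg, 54 V at the PSE, constant-power load at the PD. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double ohm_per_m = 0.084;   /* per conductor, approximate            */
    double length_m  = 100.0;
    double v_pse     = 54.0;    /* typical PSE output                    */
    double p_pd      = 25.5;    /* 802.3at Type 2 power at the PD, watts */

    /* Each leg is one pair (two conductors in parallel), out and back. */
    double r_loop = ohm_per_m * length_m;   /* = (R/2 out) + (R/2 back) */

    /* Constant-power load: P = I * (V_pse - I * R)  ->  R*I^2 - V*I + P = 0 */
    double i = (v_pse - sqrt(v_pse * v_pse - 4.0 * r_loop * p_pd)) / (2.0 * r_loop);

    printf("loop resistance : %5.2f ohm\n", r_loop);
    printf("current         : %5.2f A\n", i);
    printf("drop over cable : %5.2f V\n", i * r_loop);
    printf("lost in cable   : %5.2f W (%.3f W/m)\n",
           i * i * r_loop, i * i * ohm_per_m);
    return 0;
}
```

For a 25.5 W load over 100 m this works out to roughly a 4 V drop and about 2 W dissipated in the cable, on the order of a couple of hundredths of a watt per metre.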

The NEC as of 2017 has new standards and a whole section for PoE devices above 60W, specifically covering safety and best practices. It DOES have cable requirements that impact the cable standard chosen.

More info on that here: https://www.panduit.com/content/dam/panduit/en/landing-pages...

From https://reolink.com/blog/poe-distance-limit/?srsltid=AfmBOop... :

PoE Distance Limit (802.3af)

The original 802.3af PoE standard ratified in 2003 provides up to 15.4W of power to devices. It has a maximum distance limit of 100 meters, like all PoE standards. However, because of voltage drop along Ethernet cables, the usable PoE distance for 15.4W devices is often only 50-60 meters in practice using common Cat5e cabling.

In addition, this note from Wikipedia (https://en.wikipedia.org/wiki/Power_over_Ethernet#Power_capa...):

The ISO/IEC TR 29125 and Cenelec EN 50174-99-1 draft standards outline the cable bundle temperature rise that can be expected from the use of 4PPoE. A distinction is made between two scenarios:

- bundles heating up from the inside to the outside, and
- bundles heating up from the outside to match the ambient temperature

The second scenario largely depends on the environment and installation, whereas the first is solely influenced by the cable construction. In a standard unshielded cable, the PoE-related temperature rise increases by a factor of 5. In a shielded cable, this value drops to between 2.5 and 3, depending on the design.

PoE+ Distance Limit (802.3at)

An update to PoE in 2009 called PoE+ increased the available power to 30W per port. The formal 100-meter distance limit remains unchanged from previous standards. However, the higher power budget of 30W devices leads to increased voltage drops during transmission over long distances.

PoE++ Distance Limit (802.3bt)

The latest 2018 PoE++ standard increased available power further to as much as 60W. As you can expect, with higher power outputs, usable distances for PoE++ are even lower than previous PoE versions. Real-world PoE++ distances are often only 15-25 meters for equipment needing the full 60W.

wmf · 1d ago
I understand all that but... let's imagine I run 60W 802.3bt over 100m of cat5. The voltage drop will be bad. So what actually happens? Does the device detect voltage droop and shut off? Or does the cable just catch on fire?
leoedin · 1d ago
A longer cable won't just catch fire, because the power dissipation per unit of length is the same regardless of overall length. Imagine the most extreme case: a cable so long the voltage is 0V at the far end. It's basically just a very long resistor dissipating 60W. But each meter of cable will be dissipating the same power as in any other PoE setup.

Looping the cable or putting it in a confined space could cause issues. The cable could then catch fire even though it appeared to be operating normally to the PoE controller.

esseph · 1d ago
In my understanding, it depends on the device and the cable installation. Let's say it's a 59W device so it doesn't fall under NEC regulations as of 2017 for PoE devices over 60W.

The device needs a certain amount of power to keep itself alive. Depending on how the device is designed, and whether it actually adheres to the standards, it should simply not have enough power to start at, say, 80m. Or let's say they pushed the install from the get-go (happens all the time) and it's actually 110m on poor, under-spec'd cable.

And let's say the device has enough power to start, but you're using indoor Cat5 that's been outdoors for 7 years, and you don't know it but it's CCA. If it's in a bundle with other similar devices drawing high power and there is enough heat concentrated at a bend, then yes, the cable could catch fire without the device having a problem. As long as the device has enough power, it's going to keep doing its thing until the cable has degraded enough to cause signal drop and, assuming it's using one of the more modern 4-pair PoE standards, it would just shut off. But that could be after the drapes or that Amazon box in the corner of the room caught fire.

We're just lucky in the residential space that PoE hasn't been as "mass market" as an iPhone, and we've been slowly working up to higher power delivery as demands have increased.

IMO? It's all silly though. We should just go optical and direct-DC whenever possible ;)

throwaway67743 · 21h ago
Depending on switch vendor and quality, they can actually increase the voltage output. The spec IIRC allows for up to 57V at the PSE, which an intelligent switch can modulate to overcome limited voltage drop. Cheaper switches (desktop etc.) just supply all ports 54V (or less, but it should be 54V at the source) from the same rail without any modulation.
Aurornis · 17h ago
The specs have a voltage range that the source can put out and a corresponding range of voltages that the device must accept at the end of the wire, after voltage drop.

Longer wires don’t increase the overheating risk because the additional heat is divided over the additional length.

jorvi · 19h ago
I mean, once you have to start buying specialized cables anyway, you might as well have specified in the PoE++[0] standard that only specialized PoE++ cables are accepted, verified by device handshake. And then you can engineer the whole thing to basically be "Powerline but with higher data rates", making the cable power-first instead of data-first.

[0] What horrible naming. PoE 2, 3, 4, etc. would have been much better.

esseph · 16h ago
Well, as of 2017 we have -LP-marked cables now.

https://www.electricallicenserenewal.com/Electrical-Continui...

userbinator · 1d ago
The standards basically specify the minimum power the source is supposed to be able to supply, and the maximum power the other end can sink.
yencabulator · 12h ago
802.3bt changes how the wires are used physically. Power can now be negotiated to be delivered over previously data-only lines.
mrheosuper · 1d ago
Because power delivery depends on a lot of other things. The most important one I can think of is the cable: the Ethernet cable is a dumb one, with no way to tell its capability. USB-C solved this problem with the E-marker chip, which basically transforms a dumb cable into a smart one.

Even so, the PD protocol limits how much power can be transferred.

varjag · 18h ago
It was finalized in 2018, and by 2020 there were commercial offerings from major vendors. I know this as we developed a 802.3bt product in 2018.
userbinator · 1d ago
> running Intel Atom processors [...] these were full-fledged x86 computers that required more power than what the standard PoE (802.3af) could deliver

Those must've been the server Atoms or the later models that aren't actually all that low-power, as the ones I'm familiar with are well under 10W.

bigfatkitten · 1d ago
You might have a 10W TDP CPU, but the rest of the system requires power too.
SoftTalker · 1d ago
What I don't understand, though, is that these were for "digital signage systems" according to TFA. You're not running Windows 10 Pro on an Atom board and a large illuminated digital sign with 25W PoE. Maybe the signs were smaller than I'm thinking? Like tablet-sized? But if you need to run power for a display anyway, why not just power the whole system from that?
p_l · 23h ago
Lots of digital signage with small touchscreen LCDs that people interact with.
transpute · 1d ago
Wouldn't each display have dedicated PoE?
ranger207 · 2h ago
I know USB PD has trigger chips that will request the power levels that require active negotiation for you; are there not equivalents for PoE?
oakwhiz · 1d ago
Wouldn't it be funny to probe peripherals to decide whether extra power is demanded or not, then request it all inside UEFI?
BobbyTables2 · 3h ago
Blade servers basically do this before they even turn on.
amelius · 22h ago
What if the demand is variable?
pawanjswal · 1d ago
Solving PoE+ power negotiation before the OS boots is next-level. This is a good and clever workaround.
tremon · 19h ago
I would have thought it the other way around: performing PoE+ negotiation in the network hardware is first-level; delegating it to the OS is next-level for me.
mrheosuper · 1d ago
Interesting. If it were me, I would try to boot the OS at a lower CPU clock and maybe I could get away with it. That approach would be less ideal than the author's, though.
willis936 · 18h ago
>the switch is configured to require LLDP for Data Link Layer Classification for devices requiring more than 15.4W

This really feels like a switch configuration problem. A compliant PoE PD circuit indicates its power class and shouldn't need to bootstrap power delivery. If the PD is compliant and its components are selected correctly, then the PSE is either non-compliant or configured incorrectly.
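For context, the physical-layer classification being referred to: the PSE applies a classification voltage and reads a signature current that maps to a power class. The current windows below are approximate values from memory; the exact thresholds (and the 802.3at two-event handshake) are in the standard.

```c
/* Sketch of 802.3af/at physical-layer classification from the PSE side: apply
 * a classification voltage (roughly 15-20 V) and map the PD's signature
 * current to a power class.  Current windows are approximate, from memory;
 * the exact thresholds and the 802.3at two-event handshake are in the spec. */
typedef enum {
    POE_CLASS_INVALID = -1,
    POE_CLASS_0, POE_CLASS_1, POE_CLASS_2, POE_CLASS_3, POE_CLASS_4
} poe_class_t;

poe_class_t classify_pd(double signature_ma)
{
    if (signature_ma <  5.0)                          return POE_CLASS_0; /* default, up to 12.95 W */
    if (signature_ma >=  8.0 && signature_ma <= 13.0) return POE_CLASS_1; /* ~3.84 W  */
    if (signature_ma >= 16.0 && signature_ma <= 21.0) return POE_CLASS_2; /* ~6.49 W  */
    if (signature_ma >= 25.0 && signature_ma <= 31.0) return POE_CLASS_3; /* ~12.95 W */
    if (signature_ma >= 35.0 && signature_ma <= 45.0) return POE_CLASS_4; /* ~25.5 W (PoE+) */
    return POE_CLASS_INVALID;                         /* outside any window */
}
```

802.3at also permits a PSE to require Data Link Layer (LLDP) classification before granting the full Class 4 budget, which appears to be the policy the switch in the article enforces.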

theandrewbailey · 20h ago
> I dug deeper and came across the concept of UEFI applications.

TIL.

yencabulator · 12h ago
My rule of thumb: UEFI is like a cleaned-up MS-DOS.
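For anyone else for whom this was new: a UEFI application is just a PE/COFF binary that the firmware loads and runs with boot services available. A minimal EDK II-style skeleton looks roughly like the sketch below (build scaffolding such as the .inf omitted); the article's tool presumably does its LLDP transmit from inside something of this shape, using the firmware's network protocol stack.

```c
/* Minimal EDK II-style UEFI application, roughly the environment the article's
 * pre-boot LLDP tool runs in.  Build scaffolding (.inf/.dsc) omitted; this
 * just prints a line and returns to the boot manager. */
#include <Uefi.h>
#include <Library/UefiLib.h>

EFI_STATUS
EFIAPI
UefiMain (
  IN EFI_HANDLE        ImageHandle,
  IN EFI_SYSTEM_TABLE  *SystemTable
  )
{
  Print (L"Hello from the pre-boot environment.\r\n");
  /* A real tool would locate EFI_SIMPLE_NETWORK_PROTOCOL here and use its
     Transmit() service to send the LLDP power-negotiation frame. */
  return EFI_SUCCESS;
}
```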
protocolture · 1d ago
This is awesome. I probably would have used passive PoE, which is the de facto workaround everyone seems to use. Good to see someone actually tackle the issue instead of working around it.
xyst · 18h ago
> Back in 2015, I was working on a project to build PoE-powered embedded x86 computers and digital signage systems.

> Our device required about 23W when fully operational, which pushed us into 802.3at (PoE+) territory

The problem the author solved is quite interesting. But I can't help but think how wasteful it is to load up a full copy of Windows just to serve up dumb advertisements.

The attack surface of a full copy of Windows 10 is an attacker's wet dream.

Hope most of these installations are put out to pasture and replaced with very low power solutions.

wildzzz · 17h ago
It all depends on the environment. If Windows is already running on every PC in the building, it may make more sense to have these signs run Windows too. You can put copies of the security software you're already paying for on them and allow approved AD users to log in and manage the signs. Windows is a great attack vector, but there's less risk if you're in control of it versus some vendor solution that you can't audit as easily. The fact that you don't have a user going to malicious websites or plugging in malware-laden flash drives probably reduces the risk too. If you already have 1000 Windows machines on the enterprise network, what's a few more?
stackskipton · 15h ago
Having worked with these systems before: most of them are appliances, meaning no, we did not AD-join them or install our security software.

Windows was running because Linux was too hard for vendors.