Posted by jnord 18 hours ago
Then I started routing ethernet with PoE throughout my house and observed that, other than a few large appliances, the majority of powered devices in a typical home in 2026 could be supplied with DC over PoE as well! Lighting, laptops, small/medium televisions. The current PoE spec allows up to 100 W, which covers like 80% of the powered devices in most homes. I think it would make more sense to have fewer AC outlets around the modern house and many more PoE terminals instead (maybe with a more robust connector than RJ45). I wonder what sort of energy efficiency improvements this would yield. No more power bricks all over the place either.
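A rough sanity check of that "most devices fit" claim, with ballpark wattages I've assumed (not measurements) and the 802.3bt Type 4 budget of ~71 W available at the powered device:

```python
# Which typical household loads fit a single PoE port? Wattages are
# ballpark assumptions; the budget is 802.3bt Type 4 power available
# at the powered device (~71 W after 100 m of cable losses).
POE_BT_TYPE4_W = 71.3

devices = {
    "LED ceiling light": 12,
    "laptop": 65,
    "small TV": 45,
    "desktop gaming PC": 450,
    "electric kettle": 1500,
}

for name, watts in devices.items():
    verdict = "fits" if watts <= POE_BT_TYPE4_W else "needs AC"
    print(f"{name:>18}: {watts:>4} W -> {verdict}")
```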
We installed 120 LED ceiling lights in our home circa 2020, all of which were run with high voltage (romex) and accompanied by 120 little transformer boxes that mount inside the ceiling next to them.
Later ...
We installed outdoor lighting with low-voltage, outdoor-rated wiring powered by a 12V transformer[1], and I felt the same way you did: why did we use a mile of romex and install all of those little mini transformers when we could have powered the same lights with 12V and low-voltage wire?
I then learned that the energy draw of running the low-volt transformer all the time - especially one large enough to supply an entire house of lighting - would more than cancel out energy savings from powering lower voltage fixtures.
You don't have this problem with outdoor lighting because the entire transformer is on a switch leg and is off most of the time.
So ... I like the idea of removing a lot of unnecessary high voltage wire but it's not as simple as "just put all of your lights behind a transformer".
[1] https://residential.vistapro.com/lex-cms/product/262396-es-s...
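The always-on-transformer point above can be put in rough numbers. A sketch with assumed figures (the 25 W no-load draw and 4 h/day of use are guesses, not measurements):

```python
HOURS_PER_YEAR = 24 * 365          # 8760
light_on_hours = 4 * 365           # assume lights on ~4 h/day

central_idle_w = 25.0              # assumed no-load draw of one big transformer
# Per-fixture drivers on romex draw nothing when the wall switch is off,
# so their standby waste is ~0.

# kWh/year the always-on central transformer burns while the lights are off
central_standby_kwh = central_idle_w * (HOURS_PER_YEAR - light_on_hours) / 1000
print(f"central transformer standby waste: {central_standby_kwh:.0f} kWh/yr")
```

That standby waste alone can easily exceed the small conversion losses of per-fixture drivers, which matches the outdoor-lighting observation: the outdoor transformer only wins because it's switched off most of the time.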
That's not a constraint of physics, you can absolutely build a DC power supply that is efficient in a wide load range. (Worst case it might involve paralleling and switching between multiple PSUs that target different load ranges.) But of course something like that is more expensive...
With double-conversion, generally yes.
I recently ran across the (patented?) concept of a delta conversion/transformer UPS that seems to eliminate/reduce the inefficiencies:
* https://dc.mynetworkinsights.com/what-are-the-different-type...
* a bit technical: https://www.youtube.com/watch?v=nn_ydJemqCk
* Figures 6 to 8 [pdf]: https://www.totalpowersolutions.ie/wp-content/uploads/WP1-Di...
The double-conversion only occurs when there's a 'hiccup' from utility power, otherwise if power is clean the double-conversion is not done at all so the inefficiencies don't kick in.
I think it's highly unlikely we'll see mass scale retrofits, but if enough momentum builds up, I can see it as a great bonus feature for new builds.
I got lucky with my house and every room has a dedicated phone line meeting at a distribution panel (a couple of 2x4s with screw terminals) built in the 50s. I'm in the process of converting it to light duty DC power. The wiring is only good for an amp or two, but at 48v that's still significant power transmission.
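Back-of-envelope for that "an amp or two at 48v" figure, assuming 24 AWG phone wire (~0.084 ohm/m per conductor) and a hypothetical 20 m run:

```python
# What can 1950s phone wiring deliver at 48 V? Assumes 24 AWG copper
# (~0.084 ohm/m per conductor) and a hypothetical 20 m one-way run.
OHMS_PER_M = 0.084
run_m = 20.0
current_a = 1.5        # "an amp or two"
supply_v = 48.0

loop_r = 2 * run_m * OHMS_PER_M            # out and back
v_drop = current_a * loop_r
delivered_w = (supply_v - v_drop) * current_a
wasted_w = current_a ** 2 * loop_r
print(f"drop {v_drop:.2f} V, delivered {delivered_w:.1f} W, lost {wasted_w:.1f} W")
```

So tens of watts per pair is realistic, with modest resistive losses, which is exactly why 48 V beats 12 V for this kind of repurposed wiring.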
GE has a paper about the power conversion design, but it doesn't mention the unit to rack electrical and mechanical interface. Liteon is working on that, but the animation is rather vague.[2] They hint at hot plugging but hand-wave how the disconnects work. Delta offers a few more hints.[3] There's a complex hot-plugging control unit to avoid inrush currents on plug-in and arcing on disconnect. This requires active management of the switching silicon carbide MOSFETs.
There ought to be a mechanical disconnect behind this, so that when someone pulls out a rackmount unit, a shutter drops behind it to protect people from 800V. All these papers are kind of hand-wavey about how the electrical safety works.
Plus, all this is liquid-cooled, and that has to hot-plug, too.
[1] https://library.grid.gevernova.com/white-papers-case-studies...
[2] https://www.youtube.com/watch?v=CQOreYMhe-M&
[3] https://filecenter.deltaww.com/Products/download/2510/202510...
With that sort of voltage you should be able to use a capacitive or inductive sensor to activate a relay.
> When it is detected that the PDB starts to detach from the interface, the hot-swap controller quickly turns off the MOSFET to block the discharge path from Cin to the system. After the main power path is completely disconnected, the interface is physically detached, and no current flows at this time
> For insertion, long pins (typically for ground and control signals) make contact first to establish a stable reference and enable pre-insertion checks, while short pins (for power or sensitive signals) connect later once conditions are safe; during removal, the sequence is reversed, with short pins disconnecting first to minimize interference.
Somehow this seems the wrong approach to AI.
Data center workers are gonna need those big yoink sticks and those thick arc-fault bibs that furnace operators wear.
It's not that bad. It's just ordinary industrial protective gear.
[1] https://www.mcmaster.com/products/arc-flash-protection-face-...
[2] https://www.mcmaster.com/products/electrical-protection-glov...
Look at NTT Data or SoftBank.
See e.g. https://www.dell.com/support/kbdoc/en-us/000221234/wiring-in...
I will say that this is a surprisingly deep and complex domain. The amount of flexibility, variety and scalability you see in DC architectures is mind-boggling. They can span from a 3kW system that fits in 2U all the way to multiples of 100kW that span entire buildings, and can be powered through any combination of grid, solar and/or gas.
Honestly, that was pretty surprising to me when I had to work with some telco equipment a couple of decades ago. To this day, I don't think I've encountered anything else that requires negative voltage relative to ground.
What's horrific converter performance in numbers?
An isolated flyback (to 12V) should be able to hit >92% and doesn't care if it's fed -48V or +48V or ±24V. TI webench gives me 95% though I'd only believe that if I'd built and measured it. What's the performance of your -48V → +48V?
[with the caveat that these frequently require custom transformers... not an issue with large runs, but finding something that can be done with an existing part for smaller runs is... meh]
Horrific performance by my definition would be 48v to say 1v. We only realistically use buck topologies for POL supplies. Such a ratio is really bad for current transients, not to mention issues like minimum on times for the controller.
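The minimum on-time problem is easy to see with the ideal buck equations (illustrative numbers only):

```python
# A 48 V -> 1 V single-stage buck needs a ~2% duty cycle, so the
# switch on-time shrinks toward typical controller minimum on-times
# (often tens of ns) as switching frequency rises.
v_in, v_out = 48.0, 1.0
duty = v_out / v_in                      # ideal buck: D = Vout / Vin
for f_sw_hz in (500e3, 1e6, 2e6):
    t_on_ns = duty / f_sw_hz * 1e9
    print(f"{f_sw_hz / 1e6:.1f} MHz -> on-time {t_on_ns:.1f} ns")
```

At 2 MHz the required on-time is already near the floor of many controllers, which is one reason such ratios get split into intermediate-bus plus point-of-load stages.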
(Thanks for the info!)
Automotive collectors can probably still relate to cars from the 1920s-50s having a "positive ground."
[1] https://www.analogisnotdead.com/article26/what-is-going-on-w...
The crucial difference is the direction in which the current is flowing: is it going "into" or "out of" a hot wire? This becomes rather important when those wires leave the building and are buried underground for miles, where they will inevitably develop minor faults.
With +48V corrosion will attack all those individual telephone wires, which will rapidly become a huge maintenance nightmare as you have to chase the precise location of each, dig it up, and patch it.
With -48V corrosion will attack the grounding rod at your exchange. Still not ideal, but monitoring it isn't too bad and replacing a corroded grounding rod isn't that difficult. Telephone wires will still develop minor faults, but it'll just cause some additional load rather than inevitably corroding away.
Does that mean that when you have electronics using multiple DC-DC converters, all the inputs and outputs share the same ground, rather than each pair of wires just having its own values?
And if I want to use a telephone on an incorrectly wired 48V DC circuit, I could switch the positive and negative wires, as long as the circuit in the telephone is isolated and never touches ground?
Thanks. Somehow I got in my head that all circuits were just about the delta from neutral and therefore nothing outside them mattered.
I think a circuit should mostly care about the deltas, but when you’re talking about things like phone lines, the earth becomes part of your circuit. You can’t influence its potential (it’s almost exactly neutral because any charge imbalance gets removed by interaction with the interplanetary medium) so everything else is going to end up being determined by what you need for their relative potential to that.
With DC systems you generally think about these issues, which is why modern cars are negative ground. However, other than cars, most people never encounter power systems of any size. Inside a computer the voltages and distances are usually small enough that it doesn't matter what ground is. Not to mention most computers don't even have a chassis ground plane (there are circuit board ground planes, but they are conceptually different), and with non-conductive (plastic) cases ground doesn't even make sense.
With AC it's about where the ground is attached along the length of the transformer secondary. In the EU they ground one of the ends of the secondary, in the US we ground the center point.
I don't get to say this very often ... but the US way is objectively safer with no downside: 99% of human shocks are via ground, and it halves the voltage to ground (120V vs 240V). A neutral isn't required if there aren't 120V loads.
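A tiny illustration of the split-phase arithmetic (this just restates the center-tap geometry):

```python
# US split-phase 240 V secondary with a grounded center tap:
# each hot leg is 120 V from ground, but 240 V leg-to-leg.
leg_a, leg_b = 120.0, -120.0    # volts relative to the grounded center tap
print("worst-case shock voltage to ground:", max(abs(leg_a), abs(leg_b)), "V")
print("available leg-to-leg for large loads:", abs(leg_a - leg_b), "V")
# An end-grounded 240 V secondary (EU style) would put the single hot
# leg at the full 240 V relative to ground.
```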
- uninsulated metal pins make contact with supply while partially exposed
- much smaller distance between metal pins and the edge of the plug
But there's no inherent power tradeoff: you can have 240V outlets in the US, with the two prongs both 120V to ground. They're just really uncommon in residences.
edit: found it https://www.cnet.com/tech/tech-industry/google-uncloaks-once...
So the grid was always charging up the lead acid batteries, and the phone lines were always draining them? Or was there some kind of power switching going on where when the grid was available the batteries would just get "topped off" occasionally and were only drained when the power went out?
Actually, there was one. Even earlier phones had their own power. A dry-cell battery in each phone, and every 6 months, the phone company would come around with a cart and replace everyone's battery. Central battery was found to be more convenient, since phone company employees didn't have to go around to everyone's site. Central offices could economize scale and have actual generators feeding rechargeable batteries.
I was wiring in a phone extension for my grandma once as a boy and grabbed the live cable instead of the extension and stripped the wire with my teeth (as you do). I've been shocked a great number of times by mains AC, but getting hit by that juicy DC was the best one yet. Jumped me 6ft across the room :D
The batteries are floated at the line voltage; nothing was really charging or discharging, and there was no switchover.
This is similar to your car's 12V DC power system: when the car is running, the alternator provides DC power and the battery floats, doing nothing except buffering large fluctuations and stabilizing the voltage.
Another thing we lost in the age of VoIP landlines, but then again mobile towers also have batteries. Just don't be unlucky and have a power outage with 3% battery on your phone...
Much of the world's mains-voltage electronics run at 240V (historical) and have PFC circuits (which are essentially just boost converters) that run at ~400V DC link voltages. 650V gives you enough headroom to tolerate overshoots and still have an 80% safety margin with a single level topology.
This voltage also coincidentally is a convenient crossover point where silicon MOSFETs start to become inefficient and GaN FETs have recently become feasible and mass-produced.
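The headroom arithmetic, for concreteness (the 400 V link target is a typical value I'm assuming, not a universal spec):

```python
import math

# 240 V RMS mains peaks near 340 V; a boost PFC regulates the DC link
# a bit above that (~400 V is common), and a 650 V-rated FET leaves
# room for overshoot on top of the link voltage.
v_rms = 240.0
v_peak = v_rms * math.sqrt(2)    # instantaneous mains peak
v_link = 400.0                   # assumed boost-PFC DC link target
v_fet = 650.0                    # common Si superjunction / GaN rating
print(f"mains peak: {v_peak:.0f} V")
print(f"headroom above the link: {v_fet - v_link:.0f} V")
```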
But what about availability? If you ask most of our users whether they’d prefer 4 9s of availability or 10% more money to spend on CPUs, they choose the CPUs. We asked them.
There are a lot of availability-insensitive workloads in the commercial world, as well, like AI training. What matters in those cases is how much computing you get done by the end of the month, and for a fixed budget a UPS reduces this number.
And then every machine has a switching power supply to convert this to low-voltage DC, and then probably random point-of-load converters in various places (DC -> AC -> DC again) for stuff like the CPU / GPU core, RAM, etc. Each of these stages may be ~95% efficient with optimal load, but the losses add up, and get a lot worse outside a narrow envelope.
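How those stages compound, as a quick sketch (the per-stage efficiencies are assumptions for illustration, not measured values):

```python
from functools import reduce

# Multiply per-stage efficiencies to get the end-to-end figure.
stages = {
    "UPS double conversion": 0.95,
    "server PSU (AC -> 12 V)": 0.94,
    "intermediate bus converter": 0.96,
    "point-of-load VRM": 0.90,
}
overall = reduce(lambda a, b: a * b, stages.values(), 1.0)
print(f"end-to-end efficiency: {overall:.1%}")  # each stage "good", total not
```

Every stage you can delete from the chain (the argument for distributing high-voltage DC directly) claws back a few percent end to end.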
Yes, of course both of those things are true, and yes, some data centers do engage in those processes for their unique advantages. The issue is that aside from specialty kit designed for that use (like the AWS Outposts with their DC conversion), the rank-and-file kit is still predominantly AC-driven, and that doesn't seem to be changing just yet.
While I'd love to see more DC-flavored kit accessible to the mainstream, it's a chicken-and-egg problem that neither the power vendors (APC, Eaton, etc) or the kit makers (Dell, Cisco, HP, Supermicro, etc) seem to want to take the plunge on first. Until then, this remains a niche-feature for niche-users deal, I wager.
https://developer.nvidia.com/blog/nvidia-800-v-hvdc-architec...
https://blogs.nvidia.com/blog/gigawatt-ai-factories-ocp-vera...
almost everybody in the industry is embracing 800V DC mostly because of Vera Rubin and the increased electricity requirements.
DC doesn't have such a killer. There are a decent bunch of benefits, and the main drawback is gear availability. However, the chicken-and-egg problem is being solved by hyperscalers. Like it or not, the rank-and-file of small & medium businesses is dying, and massive deployments like AWS/GCP/Azure/Meta are becoming the norm. Those four already account for 44% of data center capacity! If they switch to DC, can you still call it "specialty kit", or would it perhaps be more accurate to call it "industry norm"?
It is becoming increasingly obvious that the rest of the industry is essentially getting Big Tech's leftovers. I wouldn't be surprised if DC became the norm for colocation over the next few decades.
[0]: https://thecoolingreport.com/intel/pfas-two-phase-immersion-...
Fucks sake.
https://www.nokia.com/bell-labs/publications-and-media/publi...
Every single DC I’ve worked in, from two racks to hundreds, has been AC-driven. It’s just cheaper to go after inefficiencies in consumption first with standard kit than to optimize for AC-DC conversion loss. I’m not saying DC isn’t the future so much as I’ve been hearing it’s the future for about as long as Elmo’s promised FSD is coming “next year”.
It's much cheaper, quicker and easier to use cooling blocks with leak-proof quick connectors to do liquid cooling. It means you can use normal equipment, and you don't need to reinforce the floor.
A lot of "edge" stuff has 12/48v screw terminals, which I suspect is because they are designed to be telco compatible.
For megawatt racks though, I'm still not really sure.
Looking at the manual for the first server line that came to mind: you can buy a Dell PowerEdge R730 today with a first-party supported DC power supply.
We have some old ceiling and exhaust fans, but I know those can be replaced. Our refrigerator is AC, but extended family with an off-grid home has a DC refrigerator that cycles way less, probably due to multiple design factors but I’m sure the lack of transformer heat is part of it. I’m not as sure about laundry machine or oven/cooktop options but I believe those are also running on DC in the off-grid home without inverters.
Most of these AC appliances also have transformers in them anyway for the control boards. It seems kind of insane to me that we are still doing things this way.
AC motors use way more power than the piddly control boards in most home appliances. So you lose a little efficiency on conversion, but being 80% efficient doesn't matter much when the board is 1-5% of the device's energy budget. You generally gain way more than that from similarly priced AC motors being more efficient.
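The weighting argument, sketched with assumed numbers (the budget shares and efficiencies are illustrative):

```python
# Weight each subsystem's conversion loss by its share of the
# appliance's energy budget: the motor dominates, so its efficiency
# matters far more than the control board's supply does.
loads = [
    # (name, share of useful output energy, supply/conversion efficiency)
    ("motor/compressor", 0.95, 0.90),
    ("control board",    0.05, 0.80),
]
total_in = sum(share / eff for _, share, eff in loads)
board_waste = 0.05 / 0.80 - 0.05   # extra input caused by the board's supply
print(f"input energy per unit of output: {total_in:.3f}")
print(f"control-board conversion waste:  {board_waste:.4f} "
      f"({board_waste / total_in:.1%} of input)")
```

Even a mediocre 80% supply on the control board costs only about a percent of the whole device's input, while a few points of motor efficiency swing far more.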
I know that a long time ago DC-to-DC voltage converters were very large in size, which meant AC would win on space efficiency. But unless I’m mistaken, that’s no longer the case. Wouldn’t a DC refrigerator with equivalent insulation and interior volume have nearly identical exterior dimensions as an AC refrigerator?
Sure, but it's important to separate what could be built from what is being built based on consumer preferences and buying habits. The average refrigerator could be significantly quieter, but how often do people actually listen to what they are buying? People buying Teslas didn't test drive the actual car they were buying, so the company deprioritized panel gaps. And so forth; companies optimize in ways that maximize their profits, not arbitrary metrics.
A DC household would have to choose a trade-off between multiple lines with different voltages or fewer voltages that need to be adapted to the appliances. And we're right back at the AC situation, but worse since DC voltages are more difficult to change.
But consumers like datacenters can very well plan ahead and standardize on a single DC voltage. They already need beefy equipment to deal with interruptions, power surges, non-sinusoidal components, and brownouts, which already involves transformers, capacitors, and DC conversion for battery storage. Therefore almost no additional equipment is required.
The trade-off between, say, one (relatively) high voltage DC bus throughout the home vs many branches with lower discrete voltages is indeed a problem. With AC, we took the bus approach, running 120v everywhere (in the U.S., higher elsewhere). I’m inclined to say we should keep doing that for flexibility and predictability. But it’s a trade off, like you said. It would obviously help if regulatory and standards bodies came out with official recommendations.
Everything else I can think of in a typical household is basically a mere heater that in principle works equally well with AC and DC of the correct voltage. Even computers can be said to mostly care about the correct voltage since AC->DC conversion is vastly easier than voltage conversion.
Hard as a rock!
Well, it's harder than a rock!

Many datacenters I'd been to at that point were already DC.
Didn't think this was that new of a trend in 2026, but also acknowledge I did not visit more than a handful of datacenters since 2007.
It just seemed like an undeniably logical thing to do.