Posted by calcifer 6 hours ago
I have absolutely no idea what anyone means when they say USB 3.2 gen 2x2. I used to know what USB 3.2 meant but it's certainly not that.
The lack of clarity is in keeping with the USB C connector itself, which may supply or accept power at various rates or not at all, may be fast or slow, may provide or accept video or not, and may even provide an interpretation of PCI Express but probably doesn't.
The port looks the same no matter what, and the cable you pick probably won't be very forthcoming about its capabilities either.
(Be sure to drink your Ovaltine.)
https://www.usb.org/sites/default/files/usb_type-c_cable_log...
Isn't the whole point of the USB standard to make it so you don't have to be a super nerd to plug stuff together? People just want to transfer data from their phone or camera to a laptop without navigating spec sheets.
When it doesn't work, it can take hours or days to figure out why, and if it comes down to a cable incompatibility, I'll have already made the mistake of not knowing what I was buying.
As for allowing a switch to fiber, that seems orthogonal again to what these USB NICs are for, not to mention an SFP+ module is probably more expensive than the NIC shown here...
The other side of the link will then also need a low-power NIC (fiber and DAC over SFP+ are the less power-hungry options). What this article doesn't mention is that there are also a lot of low-power PCIe NICs on the market (RTL8127), as well as the RTL8261C for switches/routers.
I've seen low power RTL NICs with SFP+ on it, too (example: [1]). With SFP+, you'll have a lot more versatility. DAC and SFP+ fiber are very cheap, btw. Especially second hand they go for virtually nothing. I have 10 SFP+ fiber lying around here doing nothing which I got for a few EUR each.
For me as a European with high energy prices, and with net metering for solar getting the boot next year (in NL), this is all very interesting.
There are a couple of good reasons to opt for fiber in the home. You keep the different electrical groups galvanically separated, which can help. I also find fiber very easy to get through walls, allowing me to run multiple fiber connections between rooms (currently I use 1x fiber + 1x Ethernet, the latter for PoE possibilities from the fuse box).
With all above being said, AQC100S is low power and does not get very hot. You can get these with SFP+ and PCIe/TB. They've been available for a while.
[1] https://nl.aliexpress.com/item/1005011733192115.html (no vouching for, just first hit on search)
"Card supports 10Gbit/s and 10/100/1000/2500/5000/10000Mbit/s Ethernet"
Nice to see; some NICs are shedding 10/100 support. Apparently dropping it isn't necessary, even in a low-cost device.
100BASE-TX uses just two pairs (lanes), one for sending and one for receiving. 1000BASE-T uses all four pairs, for both sending and receiving. Therefore, a 100BASE-TX interface that's only receiving needs to power up one pair. A 1000BASE-T interface needs to power all four pairs all the time.
I recall reading about some extensions that allow switching off some of the pairs some of the time ("Green Ethernet"), but I think that they require support on both sides of the link, and I'm not sure if they are widely deployed.
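As a rough sketch of the pair-count argument (the pair counts are from the standards; the receive-only simplification is the one described above, not taken from any PHY datasheet):

```python
# Wire-pair usage per BASE-T standard (pair counts per IEEE 802.3).
PAIRS_USED = {
    "10BASE-T": 2,      # one pair each direction
    "100BASE-TX": 2,    # one pair each direction
    "1000BASE-T": 4,    # all four pairs, both directions simultaneously
    "10GBASE-T": 4,     # all four pairs, much higher symbol rate
}

def active_pairs(standard: str, receiving_only: bool = False) -> int:
    """Pairs the PHY must drive; 100BASE-TX can idle its transmit pair
    when only receiving, while 1000BASE-T always runs all four."""
    if receiving_only and standard in ("10BASE-T", "100BASE-TX"):
        return 1  # only the receive pair needs to be powered
    return PAIRS_USED[standard]

print(active_pairs("100BASE-TX", receiving_only=True))  # 1
print(active_pairs("1000BASE-T", receiving_only=True))  # 4
```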
For regular Ethernet, the switch keeps a table of which MAC addresses are on which port, and each port negotiates its own link speed, so the switch can forward frames to each NIC at whatever rate that NIC supports without degrading the service of other NICs.
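The learning behavior can be sketched roughly like this (port names and addresses are invented for illustration; real switches do this in hardware with CAM/TCAM tables):

```python
# Minimal sketch of a learning switch building its forwarding table
# (source MAC address -> port it was last seen on).
class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the sender
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # forward out one port
        # unknown destination: flood to every other port
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(["p1", "p2", "p3"])
print(sw.receive("p1", "aa:aa", "bb:bb"))  # ['p2', 'p3'] (flooded)
print(sw.receive("p2", "bb:bb", "aa:aa"))  # ['p1'] (learned)
```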
100M is fine. 10M is fine too, but I can't think of anything that negotiates down to 10M other than maybe WOL (I don't use it enough to be sure from memory).
If I did have something esoteric, it would be on a specialised VLAN anyway.
It's not, cf. sibling posts. The GP probably learned networking in the '80s/'90s when it was true, but those times are long gone.
(unless you're talking wifi.)
If anyone's aware of something better, I'd be interested too :)
(Then again I wouldn't voluntarily use 5Gb-T or 10Gb-T anyway, and ≈50W is enough for most use cases.)
[ed.: https://www.aliexpress.us/item/3256807960919319.html ("2.5GPD2CBT-20V" variant) - actually 2.5G not 1G as I wrote initially]
A lot of laptops won't accept less than 60W.
My work laptop won't accept less than 90W (a modern HP, i7 155H with a random low-end GPU).
At first everyone at the office just assumed that the USB-C port wasn't able to charge the PC.
Some devices expect USB-A on the charger side instead of C
USB-A ports put out 5V at up to 1A (5W) regardless of what's connected, then negotiate higher power if available.
A C-to-C connection supplies no power at all if the receiving device can't negotiate it.
A 20w charger will definitely charge the MacBook, just slowly.
I can’t recall which cable I used though. The cable might have been garbage, but I’m pretty sure I threw out all the older USB cables so they wouldn’t get mixed in with more capable modern ones.
The laptop charges fine from regular 5V as well.
When plugged into 100W chargers while powered on, it takes ten minutes to gain a single percentage point. Idle in power save may let me charge the thing in a few hours. If I start playing video, the battery slowly drains.
If your laptop is part space heater, like most laptops with Nvidia GPUs in them seem to be, using a low power adapter like that is pretty useless.
Also, 100W chargers are what, 25 euros these days? An OEM charger costs about 120 so the USB-C plan still works out.
Other manufacturers do similar things. Apple accepts lower-wattage chargers (because that's what they sell themselves), but it ignores two of the power negotiation standards and only supports the very latest, which isn't in many affordable chargers, limiting the fast-charge capability with third parties.
* ≤15W charger: must have 5V
* ≤27W charger: must have 5V & 9V
* ≤45W charger: must have 5V & 9V & 15V
* (OT but worth noting: >60W: requires "chipped" cable.)
* ≤100W charger: must have 5V & 9V & 15V & 20V
(Levels above this are starting to become relevant for the new 240W stuff.)
(36W/12V doesn't exist anymore in PD 3.0. There seems to be a pattern with 140W @ 28V now, and then 240W at 48V, I haven't checked what's actually in the specs now for those, vs. what's just "herd agreement".)
Some devices are built to only charge from 20V, which means you need to buy a 45.000001W (scnr) charger to be sure it'll charge. If I remember correctly, requiring a minimum wattage to charge is permitted by the standard, so if the device requires a 46W charger it can assume it'll get 15V. Not sure about what exactly the spec says there, though.
(Of course the chargers may support higher voltages at lower power, but that'd cost money to build so they pretty much don't.)
NB: the lower voltages are all mandatory to support for higher powered chargers to be spec compliant. Some that don't do that exist — they're not spec compliant.
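The wattage-to-voltage rules above can be sketched in a few lines (thresholds transcribed from the list; the EPR levels like 28V and 48V mentioned earlier are deliberately left out):

```python
# Sketch of the PD 3.0 SPR "power rules": which fixed voltages a
# spec-compliant charger of a given rating must offer.
def required_voltages(charger_watts: float) -> list[int]:
    """Mandatory fixed supply voltages for a charger of this rating,
    per the wattage thresholds listed above."""
    volts = [5]                 # 5V is always required
    if charger_watts > 15:
        volts.append(9)
    if charger_watts > 27:
        volts.append(15)
    if charger_watts > 45:
        volts.append(20)
    return volts

print(required_voltages(20))   # [5, 9]
print(required_voltages(65))   # [5, 9, 15, 20]
```

This also illustrates the "45.000001W" joke: anything strictly above 45W must offer 20V.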
$ upower -i $(upower -e | grep BAT)
[...]
voltage-min-design: 11.58 V
And I can charge it via USB-C using a 22.5W power bank @ 12V (HP EliteBook 845 G10). I guess that would be out of spec then?
edit: nvm I didn't see the qualifier 'minimum'
voltage-min-design: 11.58 V
This has nothing to do with USB-C; this is the minimum design voltage of your lithium-ion battery pack. In this case, you have a 4-cell pack, and if the cells drop below 2.895V that means they're physically f*cked and HP would like to sell you a new battery. (Sometimes that can be fixed by trickle charging, depending on how badly f*cked the battery is.)

If your laptop's USB-C circuitry were built for it, you could charge it from 5V. (Slowly, of course.) It's not even that much of a stretch, given laptops are built with "NVDC"¹ power systems and any charger input goes into a buck-boost voltage regulator anyway.
¹ google "NVDC power", e.g. https://www.monolithicpower.com/en/learning/resources/batter... (scroll down to it)
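A quick sanity check on the per-cell arithmetic (pack voltage from the upower output; the 4-cells-in-series assumption is from the comment above):

```python
# Per-cell minimum derived from the pack's design minimum voltage.
pack_min_v = 11.58       # voltage-min-design reported by upower
cells_in_series = 4      # typical "4s" laptop pack (assumption)
per_cell_min = pack_min_v / cells_in_series
print(per_cell_min)      # 2.895 V per cell, the deep-discharge floor
```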
https://hackaday.com/2023/08/14/adding-power-over-ethernet-s...
Makes sense, thanks!
Surely a matter of time until someone does this…
Might be a struggle I suspect!
The problem comes when you try to design a large network and need random PoE ports on end devices where you can't home-run a cable back.
I have a Unifi Pro XG 48 PoE and I love it, but I still don't use PoE for everything. The cost of a (non-Unifi) PoE device plus the cost of using one of those ports always exceeds that of a simple power adapter on the other side (where possible).
I think about this a lot.
Does anyone know if the old bulky ones will hit 10G speeds on the same hardware?
I assume I can get a few old TB2 models and adapters on the cheap, and that they'll run cool enough and stably enough for constant 1G internet and occasional 10G intranet use.
Is this just my hardware? It's hard to imagine these issues would be so prevalent with how many people use these on linux...
For cables, I think everything converged on Cat6a a while ago, which is both reasonably cheap and perfectly fine for 10G (up to 100m, from what I remember).