Posted by homebrewer 2 days ago
only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
It's a shared medium, so it's not even dedicated half duplex, unlike the dedicated full duplex you would typically get with an ethernet cable to a switch port.
The fact that Wi-Fi achieves what it does with this limitation, and how it co-ordinates the dance of multiple unknown clients using the same medium - and in the presence of other RF technologies to boot - is indeed an incredible technology story, but this Achilles heel is the single most defining thing about Wi-Fi performance.
That’s not correct. You and your neighbor can use the same channel at the same time. On your network, the transmissions of the other network will appear as noise. As long as the other devices are far enough away, however, your devices will still be able to make out their own signal.
When you and your neighbour _appear_ to be transmitting at the same time, each adapter is actually spending most of its time waiting for a clear medium and for various backoff timers to expire before attempting to transmit.
"Appear as noise" is not defined for Wi-Fi adapters. There is only "I received a frame addressed to me and acknowledged it" or "I sent a frame and either did or didn't get an acknowledgement back from the receiver". Receivers do not know why they didn't receive a frame, or, if they received a corrupted frame, why it was corrupted. They just wait for a retransmit. Senders ordinarily wait a certain time to receive an acknowledgement, and if they don't, they start the transmit wait cycle again. But they often then reduce the data rate to increase the odds of a successful transmission.
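The sender-side behaviour described above can be sketched as a toy model (hedged heavily: the rate ladder, retry limit, and contention-window bounds below are illustrative placeholders, not values from the standard or any real driver):

```python
import random

# Toy model of the sender loop: transmit, wait for an ACK, and on failure
# back off (binary exponential) and step down the data rate.
RATES_MBPS = [600, 300, 150, 54, 24, 12, 6]  # hypothetical fallback ladder
CW_MIN, CW_MAX = 15, 1023                    # contention window, in slots

def send_frame(transmit, max_retries=7):
    """transmit(rate_mbps) -> True if an ACK came back, else False."""
    cw, rate_idx = CW_MIN, 0
    for _attempt in range(max_retries):
        # real hardware would wait this many idle slots before transmitting
        _backoff_slots = random.randint(0, cw)
        if transmit(RATES_MBPS[rate_idx]):
            return True                      # ACK received, done
        # no ACK: widen the contention window and drop to a slower rate
        cw = min(2 * cw + 1, CW_MAX)
        rate_idx = min(rate_idx + 1, len(RATES_MBPS) - 1)
    return False                             # give up after max_retries
```

The key point the sketch captures is that the sender never learns _why_ an ACK didn't arrive; it only widens its backoff window and slows down.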
I'm glossing over some complexity here, because there's a sender and receiver to consider, and each has a different view of the RF environment, but the point is always correct when all transmitters and receivers (let's say two APs, each with one client) are in audible range of each other. And this is most of the time. Note that "audible range" (where the signal is such that the medium is deemed as busy by the adapter) is much larger than the "usable range" (where data can be transmitted at reasonable speeds). So transmitters create interference in a much larger area than they actually operate in.
That means your neighbour transmitting at 6Mbps to his AP will indeed degrade the performance of your client who wants to transmit at 600Mbps because your client has to wait ~100 times longer for a clear medium.
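The ~100x figure is just airtime arithmetic. For a full 1500-byte frame, ignoring preambles, ACKs, and contention overhead:

```python
# Airtime for a 1500-byte frame at the two rates (all overheads ignored)
frame_bits = 1500 * 8
t_slow = frame_bits / 6e6      # at 6 Mbps:   2 ms on the air
t_fast = frame_bits / 600e6    # at 600 Mbps: 20 us on the air
ratio = t_slow / t_fast        # ~100x longer medium occupancy
```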
That's not correct. WiFi is "listen before talk." Radios listen to the channel, trying to decode preambles from other networks, before transmitting. In that process, they can detect other signals well below the threshold where they'll consider the medium in use (the CCA threshold). If you have an otherwise clean channel, the noise floor might be -95 dBm. Radios typically can decode the preambles 3-4 dB above the noise floor. Conventionally, the WiFi standards set the CCA threshold at -82 dBm. So the radio can "hear" a lot of signals that won't cause it to trigger collision avoidance. More recent standards allow using a CCA threshold as high as -62 dBm under certain circumstances to facilitate spatial reuse: https://arista.my.site.com/AristaCommunity/s/article/Spatial....
Also, what the WiFi standards do is less aggressive than what radios could do. The CCA thresholds are set to facilitate orderly use of the spectrum--they're not physical limits. To receive a transmission, you just need sufficient signal-to-noise ratio. An adjacent network transmission raises the noise floor, but if your radio is close enough to your AP, you might still have sufficient SNR.
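A rough numerical sketch of that point (every dBm figure below is made up for illustration, and taking the louder of two sources with max() is a crude stand-in for proper power summation in linear units):

```python
# Illustrative link budget: a neighbour's signal can be clearly audible
# (decodable preamble) yet still sit below the CCA busy threshold, and a
# nearby AP can keep a workable SNR despite the raised noise floor.
CCA_THRESHOLD_DBM = -82      # conventional busy threshold
noise_floor_dbm = -95        # clean-channel noise floor

neighbor_dbm = -88           # audible, but below the CCA threshold
your_ap_dbm = -60            # your own AP, close by

medium_busy = neighbor_dbm >= CCA_THRESHOLD_DBM            # False here
# crude effective noise: take the louder of the two sources
snr_db = your_ap_dbm - max(noise_floor_dbm, neighbor_dbm)  # 28 dB
```

With these (fictional) numbers the radio would not defer to the neighbour, and 28 dB of SNR is still enough headroom for fairly dense modulation.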
OFDMA on wifi7/802.11be: https://blogs.cisco.com/networking/wi-fi-7-mru-ofdma-turning...
As a general rule of thumb, the best version of WiFi x only comes with WiFi x+1. So for all the problems with OFDMA to be solved and ironed out, it will be WiFi 8. And for all the promises of Ultra-High Reliability to be kept, it will have to be WiFi 9.
WiFi is clearly moving closer to 4G and 5G with every version. I just hope that someday it really is good enough when many people are using it at the same time.
But at a fundamental level, the channel space (~60 channels across all bands in the best case) is extremely limited, while the potential growth in transmitters is unbounded. It's like a linear hack to an exponential problem. It seems to work at first, but under very high load conditions performance still degrades ever faster until it falls off a cliff. Then there's all sorts of complex dynamic behaviour like the hidden node problem to add to this, but it all boils down to needing air-time and SNR.
You’re overlooking the spatial dimension: https://en.wikipedia.org/wiki/Spatial_multiplexing
802.11 is in general a vast swag of cool tricks, and when enough ideas are thrown at a wall, many do end up sticking, but for the most part the benefits are cumulative. MIMO being one major exception.
Another thing is that features like beamforming and higher QAM, let's say, are going to matter more in ideal scenarios where APs are in their sweet spot relative to clients, and you get to take advantage of high SNRs. Is that going to help when someone buys a Netgear WiFi 7 AP only to flip it upside down behind the couch in their apartment, in an environment where 2.4 and even 5 GHz are basically gone from all their neighbors' use? Still, faster data rates mean clients get on and off the air quicker overall, saving airtime and battery if applicable. So I think there are mainstream and highly specialized features rolling out simultaneously.
I think part of it is that if there isn't a regular and practiced process for bumping standards, then gaps between revisions can grow quite large and stagnation can set in, and any significant improvements will take longer to come to fruition than with regular revisions that are only modest most of the time. Looking at a few other things that come to mind: USB had an 8 year gap between 2 and 3 as well, PCIe had a 7 year gap between 3 and 4 (and although there was only a 3 year gap between the 5 and 6 specifications, it still took 3 more years (2025) for the first PCIe 6 devices to appear, and I still can't buy a consumer-level PCIe 6 motherboard, which is a separate mess), C++ had an 8 year gap between C++03 and C++11, Java had a 5 year gap between 6 and 7 (and another 3 years after 7 to get to Java 8); all of these things now have more rapid cycles.
Just taking a swing at it, but I don't play that sport so probably a big whiff
The "old" cellular bands aren't generally open, at least in the States. We tend to use them for newer licensed stuff in cellular-land instead of the old licensed stuff we used to do. (Old modulation techniques die out and get replaced, but licensed RF bandwidth is still licensed RF bandwidth.)
This is surprising to me. I'd have guessed it decreases quadratically (i.e. due to the inverse square law), not exponentially.
The paragraph below seems to contain an explanation, but I don't really understand it (mainly because I don't know what that percentage "Coverage" column actually means, or what is meant by "the total distance at each QAM step").
Each data rate in the standard uses a different encoding technique. "Faster" encoding techniques cram more data into a given transmission interval but require a higher signal to noise ratio to be received without error. Since SNR declines with distance you can have a rough idea at what distance from a transmitter you will be able to receive at what data rate.
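The distance-to-rate relationship above can be sketched with the free-space 20·log10(d) path-loss rule and a made-up SNR-to-modulation table (hedged: the real MCS thresholds depend on coding rate, channel width, and hardware, and the 45 dB starting SNR is a fictional calibration point):

```python
import math

# Illustrative minimum SNR per modulation step (NOT the standard's values)
MODULATION_TABLE = [   # (min SNR in dB, constellation)
    (25, "256-QAM"),
    (19, "64-QAM"),
    (13, "16-QAM"),
    (7,  "QPSK"),
    (2,  "BPSK"),
]

def snr_at(distance_m, snr_at_1m_db=45):
    # free space: ~6 dB lost per doubling of distance (20*log10 rule)
    return snr_at_1m_db - 20 * math.log10(distance_m)

def best_modulation(snr_db):
    for min_snr, label in MODULATION_TABLE:
        if snr_db >= min_snr:
            return label
    return None  # out of usable range
```

With these numbers, 10 m still supports 256-QAM while 100 m is down to BPSK; the exact distances are fictional, but the shape of the falloff is the point.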
However, people and vendors focus far too much on maximum throughput. I've seen data showing that even in the best conditions, clients spend about 1% of their time transmitting or receiving at the highest data rates, because they are dynamically adjusting the data rate based on the perceived SNR.
Individual clients' peak throughput also works against _aggregate_ throughput when talking about wireless networks with multiple users. If you have 100 clients, do you want one to be able to dominate the others or everyone get a more or less equal share? These peak speeds assume configurations that I would never deploy in practice, because they favour individual users and cripple aggregate throughput - things like 160 MHz wide channels.
But the sticker speed is what sells.
If you have a good connection and are successfully able to transmit packets to your AP at 600Mbps, and your neighbour has a poor connection and is transmitting at 6Mbps to his AP at that moment, you literally have to wait ~100 times as long for a free medium before you can attempt to transmit. And that's for every single frame. Then you have to hope his client is well-behaved enough not to transmit while you are transmitting. Otherwise you end up having to wait again and retransmit anyway.
You might not notice this with only 2 clients. It might be the difference between a 80MBps and a 50MBps download for example. But it decays exponentially with the number of clients.
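The decay comes from airtime sharing: if every client gets roughly equal numbers of frames on the air, aggregate throughput collapses toward the harmonic mean of the individual rates (the well-known 802.11 "performance anomaly"). A toy calculation, assuming equal-sized frames and ignoring contention overhead:

```python
# Aggregate throughput when N clients each send equal numbers of
# equal-sized frames: airtime is dominated by the slowest rates.
def aggregate_mbps(rates_mbps):
    return len(rates_mbps) / sum(1 / r for r in rates_mbps)

fast_pair = aggregate_mbps([600, 600])  # ~600 Mbps: both clients fast
mixed_pair = aggregate_mbps([600, 6])   # ~11.9 Mbps: one slow client
                                        # drags the whole cell down
```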
But you're technically correct!
But then again, the sentence uses the term "signal strength", not "throughput", so that would suggest quadratically. But I guess "signal strength" could be meant colloquially and mean more than just the raw signal power received by the antenna, here.
It's all very fuzzy to me, as it stands.
Because the variable is the base, not the exponent.
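Concretely (a toy comparison; the 0.5 decay factor is arbitrary): free-space loss is a power law in distance, so the distance is the base, while absorption through material is what actually behaves exponentially, with distance in the exponent:

```python
# Power-law vs exponential decay: d is the BASE in 1/d**2,
# but the EXPONENT in 0.5**d. The two match at d = 4, then diverge.
distances = [1, 2, 4, 8]
inverse_square = [1 / d**2 for d in distances]  # 1.0, 0.25, 0.0625, 0.015625
exponential = [0.5**d for d in distances]       # 0.5, 0.25, 0.0625, 0.00390625
```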
The 4x4 makes all the difference. Sitting in my car, the 6+ would fight with my 4G for internet and cause maps to be super slow; now I'm off the property before it's unusable.
I had intended to put APs in multiple rooms, but there doesn't seem like much point now.
Now every other brand is dead to me.
I have a Netgear WAX218, one of the last cheap business-class APs I could find that don't require a cloud service to manage. WAY better than the pro-sumer wifi routers I was running before in access point mode. I'll have to look into Zyxel offerings a bit more when I'm ready to replace my Netgear.
Finding it increasingly difficult to avoid bottlenecks though. Even with WiFi 7 I still get 1.3 Gbps on my Mac and 0.5 Gbps on my iPhone. More than enough realistically, but the upstream internet is 1.7 Gbps, so it's a tiny bit unfortunate.
Think I'm just going to wire the place with 10 gig fiber
>The speed advantages that Access Points have over mesh systems will become much more obvious with Wi-Fi 7.
From what I've read, mesh devices can generally detect when they've got wired backhaul, so they can stay in mesh mode for the clean handovers while not relying on it for actually moving data.
And yeah, you pretty much already need a visible line of sight to get anything even close to 1 Gbps. And you still need to be on channels with little interference. (DFS helps if you're not near radar; detecting radar intentionally kicks you off those channels, and you lose the connection entirely.) And even then you might have to mess about a lot with positioning, because of reflections and multipath propagation generally.
I'd say it's not worth the headache. I would love to lay down Ethernet cable, even if it were cabling only suitable for 1 Gbps (though there's no good reason to stop there; might as well do 10 Gbps).
But yeah, any mesh system worth its salt figures out the topology and absolutely favors wired links over WiFi for the backhaul. Anything else wouldn't make any sense at all; there is basically no situation where you'd prefer an RF channel over a wire, unless the wire is maybe made of wet string.
If one considers that the higher speeds in 802.11ac and 802.11be require 256QAM modulation or better, this is completely expected (assuming 5 GHz band of course, which doesn't go through material very well at all). If you've seen a live eyeball chart of a 256QAM or 1024QAM constellation on test equipment for clear-air microwave link purposes, and seen how quickly it can degrade or get fuzzy if there's anything in the way of the link, it becomes more readily apparent. MCS levels 8 and onwards here:
https://en.wikipedia.org/wiki/Wi-Fi_7
"Clean" eyeball example of 256QAM: https://www.everythingrf.com/community/what-is-256-qam-modul...
Examples of "fuzzy QAM" with 16QAM; the same principle applies to denser QAM:
https://www.researchgate.net/figure/Typical-eye-diagram-Symb...
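The "fuzzy" degradation falls out of constellation geometry: square M-QAM packs sqrt(M) points per axis, so the spacing between points (and hence the noise tolerance) shrinks at every step. A rough sketch, leaving out power normalisation and exact SNR requirements:

```python
import math

def qam_info(m):
    """Bits per symbol and relative point spacing for square M-QAM."""
    bits = int(math.log2(m))
    # sqrt(M) points per axis on a unit grid -> spacing ~ 1/(sqrt(M)-1);
    # each 4x step in M roughly halves it, costing ~6 dB of noise margin
    spacing = 1 / (math.sqrt(m) - 1)
    return bits, spacing

# 16-QAM:   4 bits/symbol, spacing 1/3
# 256-QAM:  8 bits/symbol, spacing 1/15
# 1024-QAM: 10 bits/symbol, spacing 1/31
```

So 1024-QAM carries 2.5x the bits of 16-QAM per symbol, but its constellation points sit roughly ten times closer together, which is why the eyeball chart fuzzes out so fast when anything degrades the link.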
https://www.hp.com/us-en/shop/tech-takes/what-is-a-powerline...
If you have a set of full capability 802.11be clients you'll see the best performance with a 3x3 AP and 160 MHz channels.
It's unfortunately consumer-grade TP-Links, so while they have actually been pretty good... you don't get a lot of knobs to tweak.
Still need to try MLO at some stage. They're currently acting as a bridge (i.e. WiFi backhaul), so I think it might get better once I've laid fiber backhaul between them.