This has been a controversial question amongst AP manufacturers and Wi-Fi engineers. The AP manufacturers are busy pushing out 802.11ac wave 2 access points, while we Wi-Fi engineers cannot resist playing with the “latest and greatest” Wi-Fi technology.
That said, as always, the answer to the appropriateness of deploying 802.11ac wave 2 is “it depends on your requirements.” In the SMB market, where I specialize, the answer is almost always going to be “no”.
(If you are interested in more technical depth, I refer you to my previous blog post, which explains in detail how MIMO and MU-MIMO work. While 802.11ac is a single standard, the Wi-Fi industry has implemented it in two phases, known in the market as “waves.” Thus, it is convenient and practical to present “802.11ac wave 1” and “802.11ac wave 2” as two different and successive generations of the Wi-Fi standard, even though, strictly speaking, this is incorrect.)
802.11ac wave 1 was introduced in 2014 and provided two major improvements over 802.11n:
- 80 MHz channels at 5 GHz. This more than doubles the throughput of a 40 MHz channel at 5 GHz with 802.11n, though the number of independent non-overlapping channels is now only 5-6 at 80 MHz, vs 12 at 40 MHz. If the DFS (i.e. radar) portions of the band need to be omitted, generally due to close proximity to airports, military installations, or weather stations, this reduces to only 2 independent channels at 80 MHz vs. 4 independent channels at 40 MHz. Fortunately, most SMB applications are not high density and generally not in close proximity to locations where DFS is an issue, so using 80 MHz channels is perfectly appropriate.
- 256 QAM at 5 GHz: This is a more complex modulation scheme that, at its maximum, squeezes an additional 33% of throughput above the maximum 64 QAM MCS rate used in 802.11n. The major limitation of 256 QAM is that the signal-to-noise ratio (SNR) must exceed 29 dB for the receiver to successfully resolve the denser modulation. In practical terms, this typically translates into the client device needing to be within 10 to 15 feet of the access point, with no major obstructions (e.g. walls) and no significant co-channel interference. Clients with lower SNR (whether due to distance or interference) will still connect at the same MCS rates available in 802.11n.
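The 33% figure falls straight out of the bits-per-symbol arithmetic; here is a quick back-of-the-envelope check (plain Python, nothing Wi-Fi-specific assumed):

```python
from math import log2

# Bits carried per subcarrier per symbol for each modulation scheme:
# an M-QAM constellation encodes log2(M) bits per symbol.
bits_64qam = log2(64)    # 6 bits (802.11n maximum)
bits_256qam = log2(256)  # 8 bits (802.11ac wave 1)

gain = (bits_256qam - bits_64qam) / bits_64qam
print(f"256 QAM gain over 64 QAM: {gain:.0%}")  # prints "256 QAM gain over 64 QAM: 33%"
```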
Thus, while 802.11ac wave 1 could be argued to be a “minor” improvement over 802.11n at 5 GHz, the throughput gains from using 80 MHz channels can generally be realized, making it a sound technology investment for any new SMB Wi-Fi network.
802.11ac wave 2 brings three additional “improvements” over 802.11ac wave 1, though the practical gains from these for the SMB market are non-existent.
“Improvement” 1: 160 MHz Channels
These can be either a contiguous 160 MHz channel or two non-contiguous 80 MHz channels on the 5 GHz band. Given the frequencies currently allowed by the FCC, either configuration yields only two independent 5 GHz channels. Any practical multi-AP deployment needs an absolute minimum of three independent channels to keep co-channel interference effects manageable. More independent channels are always better, which is why higher density deployments tend to avoid even 80 MHz channels (5-6 independent channels). Hence, unless significantly more 5 GHz frequency space is opened up for Wi-Fi use (which is being considered by both Congress and the FCC), it is unlikely that 160 MHz channels will be practical for any type of multi-AP deployment.
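The channel-count arithmetic above can be captured in a few lines. The counts below are the US/FCC figures cited in this post, and the three-channel floor is the rule of thumb just mentioned:

```python
# Independent (non-overlapping) 5 GHz channel counts cited in this post (US/FCC).
# Using 6 for 80 MHz, the upper end of the 5-6 range quoted above.
INDEPENDENT_CHANNELS = {40: 12, 80: 6, 160: 2}  # channel width (MHz) -> count

# Rule of thumb: a multi-AP deployment needs at least 3 independent
# channels to keep co-channel interference manageable.
MIN_FOR_MULTI_AP = 3

for width, count in INDEPENDENT_CHANNELS.items():
    viable = count >= MIN_FOR_MULTI_AP
    print(f"{width} MHz: {count} channels -> multi-AP viable: {viable}")
```

As the output shows, 160 MHz is the only width that falls below the floor, which is the whole argument of this section in two dictionary entries.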
Where 802.11ac using 160 MHz channels will be useful is for short-distance, very-high-capacity point-to-point wireless backhaul links. Conceivably, practical throughput speeds above 250 Mbps should be achievable. This will require highly directional antennas and a distance limit of a few hundred feet, both to minimize the effects of surrounding Wi-Fi systems on overlapping channels and to keep the SNR above 29 dB so that MCS rates using 256 QAM can be sustained. There are a lot of applications where such links are useful, especially for multi-building IP camera surveillance.
“Improvement” 2: 8-Stream Multiple-Input, Multiple-Output (MIMO)
Both 802.11n and 802.11ac wave 1 allow a maximum of 4 spatial streams per band. When the spatial streams are used to pass different data sets between an AP and a client device, a technique known as spatial multiplexing, the throughput is effectively doubled, tripled, or quadrupled compared to a single stream. Because of the way spatial multiplexing works, the number of streams has to match on both ends: the device with the lower number of spatial streams dictates how many streams are actually used. Because of power and size constraints, most smartphones have only one spatial stream, and most tablets have only one or two. Some higher-end laptops have three spatial streams, but 3x3:3 is a practical maximum for client devices. Accordingly, no enterprise AP manufacturer ever commercially offered a 4x4:4 stream access point for 802.11n or 802.11ac wave 1, even though the 802.11 specs allowed for it. Hence, increasing the number of spatial streams is of no benefit to spatial multiplexing with single-user MIMO (SU-MIMO). Emerging 802.11ac wave 2 APs are 4x4:4 stream, but the motivation for the additional streams is MU-MIMO.
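The “lower device wins” rule for SU-MIMO spatial multiplexing is easy to sketch (a minimal illustration, not vendor code):

```python
def usable_streams(ap_streams: int, client_streams: int) -> int:
    """SU-MIMO spatial multiplexing runs at the lower of the two stream counts."""
    return min(ap_streams, client_streams)

# A 4x4:4 wave 2 AP talking (SU-MIMO) to a 1-stream smartphone
# still gets only one stream's worth of throughput...
print(usable_streams(4, 1))  # prints 1

# ...and even a 3-stream laptop leaves the AP's fourth stream idle.
print(usable_streams(4, 3))  # prints 3
```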
“Improvement” 3: Multi-User Multiple-Input, Multiple-Output (MU-MIMO)
MU-MIMO is intended to let an AP transmit to multiple client devices simultaneously. While this technique looks impressive on paper, it remains dubious whether MU-MIMO can be made to work in actual practice, despite most AP vendors racing to produce MU-MIMO AP models. Even if MU-MIMO can be made to work in real-world environments, its application is fundamentally limited.
MU-MIMO requires position feedback from all client devices engaged in a simultaneous communication session, which means the chipsets and drivers in the client devices must support calculating and providing that feedback. Furthermore, the client devices sharing a simultaneous session need to be spatially separated (with respect to the AP location) while connected at the same MCS rates, so that the communication to each client takes the same amount of time. Unlike past Wi-Fi performance improvements, which have generally focused on establishing and maintaining ever-faster connections between APs and clients, MU-MIMO does not increase connection speed; it increases airtime efficiency. The logic is as follows: if an AP can talk to 2-3 clients at once, it can support more clients at the same connection speeds, or (this is where we collectively “cross our fingers”) support more data throughput to the same number of clients.
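The airtime-efficiency logic can be sketched with idealized numbers. This assumes perfect client grouping on every transmission, which, as noted above, is exactly what real deployments struggle to achieve:

```python
from math import ceil

def airtime_slots(n_clients: int, group_size: int) -> int:
    """Downlink transmissions needed to serve every client once.

    SU-MIMO serves one client per transmission (group_size=1);
    ideal MU-MIMO serves up to group_size clients per transmission,
    provided their feedback and geometry allow grouping.
    """
    return ceil(n_clients / group_size)

print(airtime_slots(12, 1))  # SU-MIMO: prints 12
print(airtime_slots(12, 3))  # ideal 3-client MU-MIMO groups: prints 4
```

Same connection speeds, a third of the airtime; that is the entire (theoretical) pitch.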
Hence, MU-MIMO will only be of any practical benefit in very high density deployments, such as stadiums and conference centers. It isn’t particularly clear whether K-12 and higher education environments are dense enough to really benefit from MU-MIMO, even though that is clearly the largest target market for MU-MIMO technology.
For the SMB space, the growth of devices on Wi-Fi networks is coming from the Internet of Things (IoT), which, according to the hype, consists of an ever-growing array of wearable sensors and Wi-Fi appliances to monitor our health, our environment, our security, and our activities. Some of these devices, like Google Glass and the Apple Watch, are already on the market and have a decent adoption rate. That said, the IoT in the short and medium term has characteristics that make it incompatible with 5 GHz 802.11ac with MU-MIMO. Most importantly, most Wi-Fi-compatible IoT devices operate only at 2.4 GHz, and are likely to stay that way for the foreseeable future. Google Glass contains a 2.4 GHz 802.11g chip (2003 technology), and the just-debuted Apple Watch contains a 2.4 GHz 802.11n 1x1:1 stream chip (2009 technology). Why? Older-generation NICs are cheap, and most IoT devices require minimal bandwidth. Monitoring someone’s health vitals requires << 1% of the throughput of a streaming 4K video. These devices are also designed with consumers in mind, so making them sleek and sexy, as well as functional, is far more important than maintaining optimal performance of a third-party Wi-Fi network. Even if IoT manufacturers eventually succumb to pressure to include 5 GHz Wi-Fi, it is unlikely that they’ll install anything better than a 5 GHz 802.11n 1x1:1 stream chip, because the data requirements simply aren’t there.
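To make the “<< 1%” claim concrete, here is a rough comparison using assumed numbers: a 4K stream at roughly 20 Mbps and vitals telemetry at a couple of kbps. Both figures are my illustrative assumptions, not measurements:

```python
# Assumed, illustrative bit rates -- not measurements.
video_bps = 20e6   # a 4K video stream, roughly 20 Mbps
vitals_bps = 2e3   # periodic health-vitals telemetry, roughly 2 kbps

fraction = vitals_bps / video_bps
print(f"vitals telemetry uses {fraction:.2%} of a 4K stream")  # prints "...uses 0.01%..."
```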
So what’s the recommended technology to invest in for an SMB Wi-Fi network?
Most new or renovated Wi-Fi network deployments in the SMB market should be installing 802.11ac wave 1. Any system being installed today has a life expectancy of at least 5 years, so for that reason alone, the latest and greatest technology should be used, as it will be “less antiquated” over that 5-year period. We don’t really know how a network deployed today will be used over the next several years, but it is reasonable to expect even more devices consuming even more data.
I would only recommend 802.11n dual-band deployments on an exception basis: in cases where project budgets are really squeezed (though 802.11ac APs are not substantively more expensive than dual-band 802.11n APs), or in specific cases where density and/or DFS constraints make it impractical to use 80 MHz channels. As for those of you still deploying 2.4 GHz only, please stop! Unless you’re deploying in a mine shaft, where 5 GHz simply doesn’t propagate, a properly designed and implemented network will always provide better Wi-Fi performance on the 5 GHz band.