Monday, June 29, 2015

IoT: A Renaissance of 802.11b


I recently saw this question posted on LinkedIn: Are energy-efficient 802.11b chipsets for IoT applications a boon or a bane for modern WLANs?  (See Arsen Bandurian's blog post on this.)


If one believes even 1/10 of the hype about IoT, it is reasonable to expect a surge of IoT devices on enterprise, SMB, and even private home networks over the next few years.  The reality is that most of these devices will probably use 802.11b for wireless connectivity.  Most IoT devices require neither high data rates nor real-time responsiveness, so 2.4 GHz 802.11b is perfectly suitable for such applications.  It is in the IoT manufacturer's best interest to rely on 2.4 GHz rather than 5 GHz, as 2.4 GHz has better range, especially given the low Tx power and the small, low-gain antennas needed to keep device size minimal.  Additionally, such devices will work in legacy / non-upgraded environments.  At 2.4 GHz, a single-stream 802.11n chipset does not provide the benefit of 802.11b/g antenna diversity, and a dual-stream 802.11n chipset requires more radio chains, thus more power and more space.

Those of us in the Wi-Fi industry can decry the use of 802.11b in IoT, but the hard truth is that there are neither financial nor performance incentives for IoT devices to use anything BUT 802.11b.


So what are we to do?  Fortunately, most high-data-volume and latency-sensitive devices have been moving, and continue to move, toward 5 GHz (802.11ac).  We are rapidly converging on a Wi-Fi reality in which the 2.4 GHz band will be even "junkier" than it is today, meaning that any meaningful / critical data transmission must be shifted to 5 GHz for any kind of QoS.

My colleague and friend Devin Akin (@DevinAkin on Twitter) has been declaring that "2.4 GHz is dead."  For conventional throughput and latency applications, he is correct.  However, there is still a use for the 2.4 GHz band in Wi-Fi deployments, and that's to support IoT.

I'll even go one further:  I foresee the time in the not-too-distant future when we will be deploying dual-band access points with the 2.4 GHz radios purposely set to "802.11b only".

Of course, vendors will continue to manufacture and sell dual-band APs that are 802.11b/g/n on the 2.4 GHz band and 802.11a/n/ac/?? on the 5 GHz band.  As a network designer and installer, however, I expect that we'll purposely want to limit the 2.4 GHz band to 802.11b ONLY, especially if most 2.4 GHz clients are IoT devices with low-power 802.11b chipsets.

When there is at least one 802.11b client associated to an AP, every single 802.11g or 802.11n transmission by associated clients, as well as 802.11g/n transmissions from the AP itself, must be preceded by either an RTS/CTS exchange between the sender and the receiver, or a CTS-to-Self message by the sender.  These control frames are sent at a DSSS rate the 802.11b stations can decode, typically the lowest 802.11b rate of 1 Mbps, and are implemented as a protection mechanism so that the 802.11b clients, which cannot decode 802.11g or 802.11n (OFDM) frames, will set their NAV timers and refrain from transmitting during the 802.11g / 802.11n portion of the frame exchange.
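To put rough numbers on that overhead, here is a minimal back-of-the-envelope sketch in Python.  The timing constants are textbook approximations I'm assuming (long DSSS preamble, 20 MHz OFDM), not figures from a capture; the point is simply that a single protection frame at 1 Mbps occupies more airtime than the 54 Mbps data frame it is protecting:

```python
import math

# Back-of-the-envelope 802.11 airtime math.  The timing values are textbook
# approximations (long DSSS preamble, 20 MHz OFDM); real gear will vary.

def dsss_airtime_us(frame_bytes, rate_mbps):
    """802.11b (DSSS/CCK) frame: 192 us long PLCP preamble + header, then
    the frame body at the given rate."""
    return 192 + frame_bytes * 8 / rate_mbps

def ofdm_airtime_us(frame_bytes, rate_mbps):
    """20 MHz OFDM (802.11g/n) frame: 20 us preamble/SIGNAL, then 4 us
    symbols carrying SERVICE + frame + tail bits."""
    bits_per_symbol = rate_mbps * 4                 # 54 Mbps -> 216 bits/symbol
    return 20 + 4 * math.ceil((16 + frame_bytes * 8 + 6) / bits_per_symbol)

# Protection frames go out at 1 Mbps so the 802.11b stations can decode them:
print(f"RTS (20 B) @ 1 Mbps:              {dsss_airtime_us(20, 1):.0f} us")   # ~352 us
print(f"CTS / CTS-to-Self (14 B) @ 1 Mbps: {dsss_airtime_us(14, 1):.0f} us")  # ~304 us
# ...versus the 1500-byte data frame (plus ~28 B MAC header/FCS) being protected:
print(f"1500 B data frame @ 54 Mbps:       {ofdm_airtime_us(1528, 54):.0f} us")  # ~248 us
```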

In practice, this protection mechanism adds a lot of overhead to the channel.  While 802.11g and 802.11n devices transmit their data at OFDM MCS rates, the additional overhead means that the effective throughput is substantially lower, and can even fall below 11 Mbps (i.e. the fastest 802.11b rate).  There is also the reality of implementation: while access points tend to do an RTS / CTS exchange, most client devices opt to do only the faster CTS-to-Self.  This can create a hidden node problem.  If an 802.11b client is far away from the AP and on the opposite side of the AP from the 802.11n client, the 802.11b client may not hear the CTS-to-Self message transmitted by the 802.11n client.  The 802.11b client therefore won't set its NAV timer and could potentially transmit during the 802.11n transmission, causing a collision.  Obviously, CTS-to-Self is really only suitable for use by the AP, since by definition all associated clients are in range of the access point, whereas client devices should all use RTS / CTS, on the assumption that the 802.11b client will hear either the sender or the receiver, if not both.  In reality, it doesn't get implemented that way.

Regardless of the hidden node issue, 802.11g / n client devices and the AP must use the RTS / CTS or CTS-to-Self protection mechanism, which drastically reduces the effective throughput of these transmissions.  As mentioned earlier, this can lower the effective throughput to below 11 Mbps, the fastest 802.11b rate.  Accordingly, the 802.11g or 802.11n devices could actually get better throughput if they transmitted at 11 Mbps and didn't bother with the protection mechanism at all.
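Extending the sketch above to a whole frame exchange (again with assumed textbook values: DIFS, SIFS, 20 µs slots, average CWmin backoff, no retries or contention), the effective throughput of a protected 802.11g exchange does indeed drop below the nominal 11 Mbps, and for the short frames typical of sensor traffic the plain 11 Mbps exchange actually comes out ahead:

```python
import math

# Rough single-exchange effective throughput, using assumed textbook timings:
# DIFS 50 us, SIFS 10 us, average backoff of 15.5 x 20 us slots, no retries.
DIFS, SIFS, AVG_BACKOFF = 50, 10, 15.5 * 20

def protected_54mbps_us(payload_bytes):
    """DIFS + backoff + RTS/CTS at 1 Mbps + data at 54 Mbps + ACK at 24 Mbps."""
    data_bits = 16 + (payload_bytes + 28) * 8 + 6      # SERVICE + MPDU + tail
    data_us = 20 + 4 * math.ceil(data_bits / 216)      # OFDM preamble + symbols
    return (DIFS + AVG_BACKOFF
            + (352 + SIFS)          # RTS at 1 Mbps
            + (304 + SIFS)          # CTS at 1 Mbps
            + (data_us + SIFS)      # data frame at 54 Mbps
            + 28)                   # ACK at 24 Mbps

def plain_11mbps_us(payload_bytes):
    """DIFS + backoff + data at 11 Mbps (long preamble) + ACK at 1 Mbps."""
    data_us = 192 + (payload_bytes + 28) * 8 / 11
    return DIFS + AVG_BACKOFF + (data_us + SIFS) + 304

for size in (1500, 256):
    bits = size * 8
    print(f"{size:>4} B payload: protected 54 Mbps ~{bits / protected_54mbps_us(size):.1f}"
          f" Mbps effective, plain 11 Mbps ~{bits / plain_11mbps_us(size):.1f} Mbps effective")
```

With these assumptions, a full-size 1500-byte frame yields roughly 9 Mbps effective for the protected 54 Mbps exchange (already below the nominal 11 Mbps), and at 256-byte payloads the unprotected 11 Mbps exchange edges it out, since the 1 Mbps protection frames dominate the airtime.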

Additionally, there's the issue of airtime utilization.  For equivalent data frames, an 802.11b device will consume far more airtime than an 802.11n device, since time = data volume / speed.  (Physics, especially high school physics, is a harsh mistress.)  While some AP vendors do have mechanisms to enforce airtime fairness, these operate only on the AP side and are thus of limited effectiveness, especially when the majority of clients are 802.11b.  Since most IoT devices will spend most of their airtime uploading data from whatever they are responsible for sensing, airtime fairness mechanisms on the AP are simply irrelevant.
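The ratio is easy to quantify.  Here is a quick sketch, using the same assumed preamble/PLCP timings as above, of how long one 1500-byte frame occupies the channel at various rates:

```python
import math

# Airtime for one 1500-byte frame (plus ~28 bytes of MAC header/FCS) at
# various PHY rates, using the same assumed preamble timings as above.
mpdu_bits = (1500 + 28) * 8

def dsss_us(rate_mbps):      # 802.11b: 192 us long preamble + frame body
    return 192 + mpdu_bits / rate_mbps

def ofdm_us(rate_mbps):      # 802.11g/n: 20 us preamble + 4 us symbols
    return 20 + 4 * math.ceil((16 + mpdu_bits + 6) / (rate_mbps * 4))

for label, us in [("802.11b  1 Mbps", dsss_us(1)),
                  ("802.11b 11 Mbps", dsss_us(11)),
                  ("802.11g 54 Mbps", ofdm_us(54)),
                  ("802.11n 65 Mbps (MCS 7)", ofdm_us(65))]:
    print(f"{label:<25} {us:>7.0f} us on the air")
```

A single frame from a sensor stuck at 1 Mbps ties up the channel for as long as roughly sixty frames from an 802.11n client at MCS 7; no AP-side fairness scheduler can give that airtime back.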

One objection to the "802.11b only" scenario is that there are several 2.4 GHz-only 802.11n devices out there, especially for patient monitoring.  Yes, this is true.  However, such chipsets are backward compatible with 802.11b.  Furthermore, many healthcare appliances and capital equipment (even newly manufactured) still run 802.11b anyway, as medical vendors (a) don't want to go back to the FDA for a design change and (b) still want to use the cheapest possible network components, even in a $30k device.  The simple reality is that any equipment manufacturer putting data-capacity- or latency-sensitive applications on the 2.4 GHz band is doing themselves and their customers a disservice.

Another objection I've heard is "it will all get fixed in 802.11ax", when, amongst other things, bi-directional MU-MIMO becomes a reality.  Please read my prior blog post on why uni-directional MU-MIMO is likely to be ineffective in most environments; the argument readily extends to bi-directional MU-MIMO.  Furthermore, 802.11ax won't be around until ~2019 at the earliest - the spec is still being negotiated, and I would be very surprised to see a finalized spec before the Wi-Fi industry gets a real understanding of how MU-MIMO works in actual practice vs. the theory on paper.  I hope to have many, many networks to deploy between now and the emergence of the mythical 802.11ax.  That said, even if I am totally wrong about MU-MIMO, and it turns out to be an invention comparable in societal impact to fire, the wheel, the automobile, the airplane, sliced bread, and central air conditioning, as the industry analysts and AP vendor marketing folks are all touting it to be, there is still no financial or performance incentive for IoT devices to adopt any Wi-Fi technology beyond 802.11b.

So, to quote President Obama only slightly out of context, "...something that rhymes with 'bucket'..."   (Source)

Instead of fighting this trend, we network designers should embrace it.  The surge of IoT devices is great for our business, as it will mean lots of networks to install, upgrade, and re-vamp.  But we have to do it intelligently.  If you have data-hogging and/or latency-sensitive applications, put them on 5 GHz 802.11ac and optimize the 5 GHz networks for maximum performance.  If you have IoT, put it on 2.4 GHz and let the 2.4 GHz networks run 802.11b only; then we don't need to worry about protection mechanisms, or even about throughput or real-time performance, so long as the 2.4 GHz AP channel and Tx power settings are set to mitigate co-channel interference.  (Yes, you can turn off some of the 2.4 GHz radios if you want...)
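For what it's worth, here is a purely illustrative sketch of that split as a vendor-neutral radio plan.  The profile names and values are hypothetical and expressed as a Python structure rather than any particular controller's syntax:

```python
# Hypothetical, vendor-neutral radio plan for the split described above.
# Names and values are illustrative only -- adapt to your controller's syntax.
radio_plan = {
    "5GHz": {
        "standards": ["11a", "11n", "11ac"],  # capacity / latency-sensitive traffic
        "channel_width_mhz": 40,              # wider channels where spectrum allows
        "min_basic_rate_mbps": 24,            # prune low rates to tighten cells
    },
    "2.4GHz": {
        "standards": ["11b"],                 # IoT only; no protection mechanism needed
        "channels": [1, 6, 11],               # non-overlapping 20 MHz channels
        "tx_power_dbm": 8,                    # small cells to limit co-channel interference
        "radios_enabled": "as_needed",        # it's fine to turn some 2.4 GHz radios off
    },
}
```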
