Friday, August 28, 2015

An Explanation of Channel and Transmit Power on Wi-Fi Networks

When designing and installing a Wi-Fi network consisting of multiple access points (APs), it is generally best practice to set up a static channel and static transmit power architecture for your access points.  This is often a confusing topic for Wi-Fi novices and experts alike, as there are different frequency bands, different channel sizes, and different tradeoffs associated with different transmit power settings.

This blog post goes through the definitions of channel and transmit power, and establishes best practices for channel and transmit power planning on both the 2.4 GHz and 5 GHz bands.

Defining Wi-Fi Channels

The Federal Communications Commission (FCC) and similar governmental agencies in other countries regulate the use of radio frequency spectrum.  Most spectrum is licensed, meaning that government agencies or commercial entities must purchase or lease a portion of spectrum, and then have exclusive rights to use that spectrum.  Some frequency bands are unlicensed, meaning that anyone can broadcast in that portion of the spectrum without getting a license to use that spectrum, so long as they meet certain maximum transmit power limitations.  This is the portion of the spectrum where Wi-Fi operates. 

Each Wi-Fi access point broadcasts a signal on a particular channel, which is specified as a particular center frequency and channel width.  With 802.11n and 802.11ac, there has been a push to use larger channel widths (40 MHz in 802.11n, and 80 MHz or 160 MHz with 802.11ac), as larger channels enable more data to be sent within the same time window, increasing the throughput of the link.  However, since the sizes of the unlicensed bands used by Wi-Fi are fixed, larger channel sizes lead to fewer independent, non-overlapping channels.  In addition, larger channels are subject to higher noise floors and more interference from neighbors, making the use of larger channels a tradeoff between potential throughput and achievable signal quality.
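
As a very rough illustration of that tradeoff, the short Python sketch below simply divides the available spectrum by the channel width; the band sizes are the approximate figures used later in this post, and the 5 GHz figure treats the band as one block, ignoring the gaps and regulatory exclusions within it.

    # Rough sketch: how many non-overlapping channels fit in a fixed band?
    # Band sizes are approximations taken from later in this post.
    BANDS_MHZ = {"2.4 GHz (channels 1-11)": 72, "5 GHz (UNII-1/2/2e/3)": 555}

    for band, band_width in BANDS_MHZ.items():
        for channel_width in (20, 40, 80, 160):
            count = band_width // channel_width
            print(f"{band}: {channel_width} MHz channels -> {count} non-overlapping")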

Wi-Fi signals interfere if their transmissions occur on the same or overlapping channels in the same space.  In practice, a receiver (e.g. a wireless client) at a particular point in space hears transmissions from multiple sources simultaneously and is incapable of distinguishing between them, so the data it receives is a mashup of the signals from the different sources.  The checksum on the received data then fails, indicating that the transmission is corrupted, which ultimately requires the original intended transmission source to retransmit the data.

Most enterprise access point vendors also provide a feature called band steering, which encourages dual-band capable client devices to connect on the 5 GHz band to obtain higher speeds, thanks to the larger channel sizes and fewer sources of external interference than on the 2.4 GHz band.  This feature is also not part of the 802.11 standard, so implementation is vendor-specific, but it is a very good idea.  Especially with the emergence of new IoT network appliances that are not only limited to 2.4 GHz but often use low-power 802.11b chipsets, it is generally advisable that all clients that can operate on the 5 GHz band be directed to do so.

Defining Transmit Power

The transmit power of an access point radio is directly related to its effective range: the higher the transmit power, the more distance that a signal can travel, and/or the more physical materials that it can effectively penetrate and still have data successfully resolved at the receiver.  A stronger signal at a given distance generally results in a higher signal-to-noise ratio, which typically allows for more complex modulation and coding schemes (MCS) and thus faster data speeds.

In early Wi-Fi deployments, which were primarily driven by coverage requirements, it was common to turn the power on the AP transmitter up as high as allowed by FCC regulations.  This approach worked when most clients had reasonably strong transmitters themselves, such as laptops.  With the emergence of smartphones, tablets, and network appliances, however, there is often a transmit power mismatch that leads to a range mismatch.  Most smartphone, tablet, and IoT appliances use relatively weak transmitters in order to preserve both space and battery life.  As a result, the client device can receive a relatively strong transmission from the access point, but the access point cannot receive the relatively weak transmissions of the client device in response.  Think of it this way:  the access point is shouting but the client device is whispering.  Accordingly, though non-intuitively, the effective coverage area is driven by the client devices, and the AP power levels must be set so as to minimize the mismatch between the range of the access point and the corresponding range of the client devices.
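
To put hypothetical numbers on that mismatch, here is a back-of-the-envelope Python sketch; the 20 dBm, 14 dBm, and 75 dB figures are purely illustrative assumptions, not measurements from any particular device.

    # Hypothetical numbers: an AP "shouting" at 20 dBm while a phone
    # "whispers" back at 14 dBm, over the same one-way path loss.
    ap_tx_dbm = 20        # assumed AP transmit power
    client_tx_dbm = 14    # assumed smartphone transmit power
    path_loss_db = 75     # assumed path loss at some distance from the AP

    rssi_at_client = ap_tx_dbm - path_loss_db     # downlink: -55 dBm
    rssi_at_ap = client_tx_dbm - path_loss_db     # uplink:   -61 dBm

    print(f"client hears AP at {rssi_at_client} dBm")
    print(f"AP hears client at {rssi_at_ap} dBm")
    # The downlink looks healthy while the uplink is 6 dB weaker, so the
    # usable cell edge is set by the client radios, not by the AP.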

Furthermore, in high density deployments where hundreds or even thousands of client devices can be operating within the coverage area of a single AP (e.g. college lecture halls, conference centers, stadiums, etc.), more access points are needed simply from a capacity standpoint.   This necessitates using lower transmit power levels, using directional antennas, and planning channels very carefully to prevent co-channel interference. 

Finally, as compared to 5 GHz, 2.4 GHz has less free-space path loss and less attenuation through standard building materials, giving it a larger effective range at a given transmit power level.  When using a dual-band access point, one generally wants the coverage areas of the two bands to be equivalent.  For a typical SMB environment, one generally needs to set the 2.4 GHz transmit power level about 6 dB lower than the 5 GHz transmit power to get roughly equivalent coverage.  Even so, balancing coverage can be difficult.  It is not uncommon to design an AP layout for 5 GHz coverage and then have to disable (i.e. turn off) the 2.4 GHz radios on some of the access points so as to not cause co-channel interference on the 2.4 GHz band.
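
That rough 6 dB figure falls straight out of the free-space path loss equation.  A quick Python sketch, assuming mid-band center frequencies of 2437 MHz and 5300 MHz purely for illustration:

    import math

    def fspl_db(distance_m, freq_mhz):
        """Free-space path loss in dB for a distance in meters and frequency in MHz."""
        return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

    for d in (10, 30):
        loss_24 = fspl_db(d, 2437)   # channel 6
        loss_5 = fspl_db(d, 5300)    # roughly mid 5 GHz band
        print(f"{d} m: 2.4 GHz {loss_24:.1f} dB, 5 GHz {loss_5:.1f} dB, "
              f"difference {loss_5 - loss_24:.1f} dB")
    # The gap is ~6.7 dB regardless of distance, which is why dropping the
    # 2.4 GHz radio roughly 6 dB below the 5 GHz radio approximately equalizes
    # free-space coverage; wall attenuation shifts the picture further.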

Wi-Fi Channels on the 2.4 GHz band

On the 2.4 GHz band (802.11b/g/n) in North America, there are 11 channels of 20 MHz size allowed by the FCC.  Some or all of channels 12-14 are allowed in some other countries, such as Japan.  Unfortunately, the center frequencies of channels 1-13 are only 5 MHz apart, leading to only three non-overlapping channels, as shown in Figure 1. 

Figure 1:  20 MHz channels on the 2.4 GHz frequency band. [1]

While it is not generally known, Channels 12 and 13 (2467 MHz and 2472 MHz) are actually allowed by the FCC at low power levels.  While channels have a width of 20 MHz, there is some additional side-band leakage, typically at a level below -30 dB of the peak signal.  For Channels 12 and 13, there can be out-of-band emissions in the restricted frequency band 2483.5-2500 MHz (encompassing Channel 14), which is used by the mobile satellite service in the United States; hence Channels 12 and 13 essentially serve as a guard band.  Channel 14 is only allowed in Japan, and then only for DSSS / CCK (802.11b), not OFDM (802.11g/n). [4]
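
The channel arithmetic behind Figure 1 (including the Channel 14 oddity) is simple enough to sketch in a few lines of Python, using an idealized 20 MHz "brick" model that ignores side-band leakage:

    # Idealized model of the 2.4 GHz channels: channels 1-13 are 5 MHz apart
    # starting at 2412 MHz, and channel 14 sits by itself at 2484 MHz.
    def center_mhz(channel):
        return 2484 if channel == 14 else 2407 + 5 * channel

    def overlap(ch_a, ch_b, width_mhz=20):
        # Two same-width channels overlap if their centers are closer than the width.
        return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

    print(overlap(1, 6))    # False: 2412 vs 2437 MHz, 25 MHz apart
    print(overlap(1, 4))    # True:  2412 vs 2427 MHz, only 15 MHz apart
    print(overlap(6, 11))   # False: hence the classic 1 / 6 / 11 plan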

The 802.11n spec allows for the optional use of 40 MHz channels on the 2.4 GHz band by bonding two neighboring channels together.  However, given that the entire usable band in 2.4 GHz is only 72 MHz wide (encompassing Channels 1 - 11), no two 40 MHz channels are independent, as shown in Figure 2.  This makes the use of 40 MHz channels completely impractical in multi-AP deployments, though it is still unfortunately fairly common to see in practice, as most vendors allow this channel width in their default settings.

Figure 2:  40 MHz channels on the 2.4 GHz frequency band. [1]

Given the restrictions on the number of independent channels, and how that number decreases as the channel width increases, poor channelization will create AP-to-AP interference and thus degrade both capacity and coverage.  On the 2.4 GHz band, only 20 MHz channel sizes should be used, and channels should be deployed across APs with an alternating static 1, 6, 11 scheme, both horizontally and vertically.
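
A static plan like this is easy to generate mechanically.  The Python sketch below stripes 1 / 6 / 11 across an idealized rectangular grid of APs; a real floor plan obviously requires adjusting for geometry, walls, and neighbors.

    # Stripe channels 1 / 6 / 11 across a grid of APs so that no AP shares a
    # channel with its immediate horizontal or vertical neighbor.
    CHANNELS = (1, 6, 11)

    def channel_for(row, col):
        # Offsetting each row by one position produces diagonal striping, so
        # adjacent cells differ in both directions.
        return CHANNELS[(row + col) % len(CHANNELS)]

    rows, cols = 3, 4
    for r in range(rows):
        print("  ".join(f"ch{channel_for(r, c):>3}" for c in range(cols)))
    # ch  1  ch  6  ch 11  ch  1
    # ch  6  ch 11  ch  1  ch  6
    # ch 11  ch  1  ch  6  ch 11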

Wi-Fi Channels on the 5 GHz band

The 5 GHz band is much larger (over 555 MHz, semi-contiguous), and thus makes selecting independent channels and using larger channel widths via bonding neighboring channels much simpler.  802.11a allowed the use of 20 MHz channels.  802.11n allows the use of 40 MHz channels, and 802.11ac allows the use of up to 80 MHz or 160 MHz channels.   This is shown in Figure 3.

Figure 3:  Channels on the 5 GHz frequency band. [1, 2, 3]

The use of 40 MHz channels at 5 GHz with 802.11n is fairly common practice.  In most SMB deployments, unless the design calls for high client density (e.g. convention meeting space, large classrooms, etc.), or there is an explicit need to avoid the DFS channels (rarely a problem for indoor deployments, sometimes a concern for outdoor deployments), we can generally use 80 MHz channels with 802.11ac, and thus double the wireless throughput.  This is the primary advantage of deploying 802.11ac access points vs. 802.11n access points.

The full list of 20 MHz channels available in North America is shown in Table 1.   Governmental regulatory agencies in other countries may restrict the use of one or more of these frequency bands and/or the maximum transmit power at those frequencies.  Most access points require that a country be selected in the configuration, which dictates what channels and maximum transmit powers are available.

Table 1: 20 MHz channels on the 5 GHz band.

Creating 40 MHz (and larger) channels involves bonding multiple neighboring channels together.  Each bonded channel has a primary 20 MHz channel that is used when an 802.11n or 802.11ac access point communicates with a legacy 802.11a client (or an 802.11n or 802.11ac client that is artificially limited to smaller channels).  The other bonded channels are “extension” channels, and can be either immediately above (upper) or below (lower) the primary channel.
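
As an illustration of that arithmetic: 5 GHz channel n sits at 5000 + 5n MHz, consecutive 20 MHz channels are four channel numbers apart, and the center of a bonded channel is simply the average of its 20 MHz member channels.  A minimal Python sketch:

    # 5 GHz channel numbers map to center frequencies as 5000 + 5 * n MHz, and
    # consecutive 20 MHz channels are 4 channel numbers (20 MHz) apart.
    def center_freq_mhz(channel):
        return 5000 + 5 * channel

    def bonded_center(members):
        """Center channel of a bonded channel, given its 20 MHz member channels."""
        return sum(members) / len(members)

    forty = (36, 40)            # primary 36 + upper extension, or primary 40 + lower
    eighty = (36, 40, 44, 48)   # the bonded range "36-48"

    print(bonded_center(forty), center_freq_mhz(bonded_center(forty)))    # 38.0 5190.0
    print(bonded_center(eighty), center_freq_mhz(bonded_center(eighty)))  # 42.0 5210.0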

Unfortunately, there are multiple “standards” for referring to bonded 5 GHz channels, which makes it very confusing for Wi-Fi novices and experts alike.  The three basic methods are to refer to the bonded channel range, the primary channel with extension (two variants for 40 MHz, four variants for 80 MHz), or the center channel (i.e. center frequency).  These are shown in Table 2 for 40 MHz channels and in Table 3 for 80 MHz channels.

Table 2: 40 MHz channels on the 5 GHz band.

Table 3: 80 MHz channels on the 5 GHz band.


The (*) is for channel 144.  This channel was opened up for Wi-Fi use in the United States in March 2014 as part of the 802.11ac specification.  You will therefore generally not see it as a valid channel option on older 802.11n access points.  Furthermore, even on 802.11ac access points, many AP vendors still have firmware that complies with the older FCC rules (i.e. pre-March 2014) and so do not recognize Channel 144 as being valid for use in the United States.  Accordingly, Channel 144 (20 MHz), Channel 140 (40 MHz), and Channel 132 (80 MHz) often cannot be used in static channel plans.

Note that the UNII-2 and UNII-2e bands (which cover roughly two-thirds of the 5 GHz frequency space) are still in use by legacy military and commercial weather radar systems.  This leads to a requirement known as dynamic frequency selection (DFS), which requires Wi-Fi devices to periodically check for the presence of such legacy radar systems and move off the channel for a period of time if radar is detected.  Currently, the access point and client devices are each responsible for detecting DFS interference from radar devices and, if it is detected, moving off the channel.  Prior to March 2014, only access points were required to make that detection and channel move, notifying their connected clients of the channel change so as to encourage the clients to follow.  This was part of the original 802.11h amendment when UNII-2 and UNII-2e were opened up for Wi-Fi.  The older rules made more sense from a Wi-Fi operations perspective, as client devices associate with an access point and thus follow the access point’s channel.  Unfortunately, many legacy client devices didn’t know how to interpret the “I’m about to change from channel x to channel y” message from the AP and therefore didn’t move off the channel fast enough, which is probably what prompted the rule change.

The unintended consequence of this is that many consumer Wi-Fi device manufacturers decided it wasn’t worth investing in the code to do the DFS detection, and as a result just won’t operate at all on any of the UNII-2 (52-64) or UNII-2e (100-144) channels.   This is why many 802.11n consumer devices supported the UNII-2 and UNII-2e channels, but their newer 802.11ac counterparts do not. Ironically, this also tends to be a limitation of the consumer wireless router products from manufacturers that also make enterprise access point equipment that supports DFS detection. 

Fortunately, most phone and tablet manufacturers are not so shortsighted, so iPhones / iPads and most mainstream Android phones / tablets with 802.11ac capability will work on the UNII-2 and UNII-2e bands.  Also fortunately, most consumer client devices these days are dual-band, so if they do roam to an AP with a 5 GHz channel they don’t recognize, they will still connect on the 2.4 GHz radio and be treated as a 2.4 GHz-only client.  Where it can become problematic is with 5 GHz-only consumer devices, such as USB dongles and 802.11ac wireless bridges.

Why not just let the AP figure it out on its own?

Many vendors provide features to do automated radio resource management (RRM), commonly also called auto-channel and auto-power. 

For each access point with auto-channel enabled, the AP senses the surrounding environment and then selects the “best channel”, i.e. the channel that is presumably least used by surrounding APs.  This is not officially part of the 802.11 standard, so each vendor implements the feature differently.  It is generally intended to make Wi-Fi deployments easier, and to react to changes in interference from the external environment without requiring a static channel plan.

The problem with this approach is that these algorithms, while great for vendor marketing, tend not to work in actual practice, and in fact can make network performance worse.  The standard implementation has each AP perform a periodic scan of all of the channels, typically on the order of 250 ms per channel, to give it enough time to hear at least two beacon frames from surrounding APs.  Once the access point has measured all of the channels, it selects the one that is least noisy.  This approach, however, is fundamentally flawed: the AP never gets a true understanding of channel usage over time, and the snapshot can be deceiving, especially on the 2.4 GHz band, where a channel can appear “clear” even when there is a lot of traffic on overlapping channels.  On the 5 GHz band, there are many more channels to scan, and the number increases further with bonded 40 MHz, 80 MHz, and 160 MHz channels (though as of 802.11ac, which introduced 80 MHz and 160 MHz bonded channels on the 5 GHz band, beacons are broadcast on all of the 20 MHz sub-channels).  Additionally, such algorithms tend to be convergent, meaning that neighboring APs have a tendency to settle on the same or overlapping channels, thereby increasing co-channel interference between APs.
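
As a caricature of that scan-and-pick approach, the logic reduces to something like the Python sketch below; the "busy score" here is a random stand-in, since real vendor measurement and selection logic is proprietary and varies.

    import random
    import time

    DWELL_SECONDS = 0.250   # ~250 ms per channel, long enough to hear ~2 beacons

    def measure_busy_score(channel):
        # Stand-in for the real measurement (beacons heard, energy detected,
        # etc. during the dwell).  Here it is just random, purely illustrative.
        time.sleep(DWELL_SECONDS)
        return random.random()

    def pick_channel(channels=(1, 6, 11)):
        # The naive algorithm: one scan pass, then keep the "quietest" channel.
        scores = {ch: measure_busy_score(ch) for ch in channels}
        return min(scores, key=scores.get)

    print(pick_channel())
    # Flaws baked in: a single 250 ms snapshot says little about usage over
    # time; on 2.4 GHz, traffic on overlapping channels (e.g. 3 or 4) is not
    # charged against 1, 6, or 11, so a "clear" channel may not be; and every
    # AP running the same logic tends to converge on the same answer.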

Auto-power is similar in form, function, and limitations to auto-channel.  As the name implies, auto-power dynamically adjusts the transmit power levels used by each AP, with the intent of reacting to changes in interference from the external environment by adjusting the effective coverage area of each AP to minimize co-channel interference.

It should be noted that some vendors have proposed more sophisticated proprietary methods to automatically control channel and/or transmit power.  Generally, such alternatives tend to be divergent, i.e. they change channels very frequently and never settle down.  It is therefore considered best practice to turn off auto-channel and auto-power, and use static channel and transmit power plans for both the 2.4 GHz and 5 GHz bands.

References


  1. Coleman, D. and Westcott, D.  CWNA Certified Wireless Network Administrator Official Study Guide: Exam PW0-105.  3rd edition.  John Wiley & Sons, Inc., Indianapolis, IN.  ISBN 978-1-118-12779-7.  Copyright 2012.
  2. Jackman, S., Swartz, M., et al.  CWDP Certified Wireless Design Professional Official Study Guide: Exam PW0-250.  John Wiley & Sons, Inc., Indianapolis, IN.  ISBN 978-0-470-76904-1.  Copyright 2011.
  3. Hintersteiner, J.  EnGenius Certified Operator.  EnGenius Technologies, Inc. certification program course.  Copyright 2014-2015.
  4. Comments of the National Telecommunications and Information Administration, 2005.

6 comments:

  1. Thanks for sharing, Jason!

    One clarification, channels 12 and 13 (@2.4GHz) are actually allowed in North America. The difference with respect to the rest of the world (where channels 1-13 can be used in the same conditions) is that the FCC imposes several additional restrictions to the transmissions in channels 12 and 13 (regarding tx power). It is true that channel 14 is not ISM and, I think (please, correct me), only Japan opens channel 14, but only for DSSS transmissions. That’s why I’d recommend configuring channels 1, 5, 9 and 13 in dense deployments (not in FCC domain), instead of the traditional 1, 6 and 11 (rough capacity increase of 33%!).

  2. Great wifi primer, Jason.

    One question I do have is whose best practice are you referencing in regards to using a static channel plan? I haven't come across this best practice myself. I've seen Aruba recommend this, but only in a VHD deployment. Static channels/power don't scale well in large enterprises, but I can see why having that level of control over your channel plan is appealing.

  3. Most vendors tout their Radio Resource Management (RRM) solutions for auto-channel and auto-power, though I have yet to see a vendor's system where this actually works well in practice. While there are notable exceptions, in most deployments I've found the biggest source of co-channel interference is your own access points interfering with each other because of bad channel / power settings. While generating a static channel plan can be laborious and tedious, especially for large networks, a Wi-Fi network typically has better overall behavior if the channels are static (and planned well) than if they're constantly "adapting" to a changing environment, especially if you are constantly adapting to your own APs.

    Now, you do lose the ability to adapt to external interference sources (e.g. a neighbor's AP). In many venues, however, external interference isn't a big issue. Where it is, if your neighbor is set to auto-channel, hopefully the neighbor will eventually adapt to you vs. both of you attempting to adapt to each other, which makes the channel a moving target which then keeps rippling through your network.

    In a handful of cases one has to go in and re-channelize because of exterior interference, but it is more unusual than you'd think.

    Now take a modern apartment building where the telco or cable provider puts in a Wi-Fi router designed for a 2500 sq. ft. house and puts it into an 800 sq. ft. apartment, surrounded by other 800 sq. ft. apartments with 2500 sq. ft. access points, and the reality is that you're screwed no matter what, because there are so many interfering sources that there will be no good channel available. This is especially true on 2.4 GHz, but is becoming more true with 5 GHz with 80 MHz channels on 802.11ac and most consumer routers not using the UNII-2 or UNII-2e bands, leaving you only 2 channels (36-48 and 149-161).

    Replies
    1. Thanks for the reply. Fair points.

      I recently toyed with the idea of using a static channel plan in an all-wireless deployment, but was discouraged from doing so.

      Aruba's ARM works decently. I have no real-world experience with RRM. ARM has some settings that can be tweaked if you're not happy with channel/power assignments, but I have yet to go down that road.

      Keep up the great work, Jason.

  4. My colleagues generally use auto mode for the first week of deployment to let the radios select channels for non-overlap, and also limit power levels to the lowest 3 levels, and then use that as a static plan when clients are deployed.
    We have seen auto-tune systems completely select channel 1 over a period of days when staff use microwave ovens, making channels 6 and 11 unavailable, so channel 1 gets reused.

  5. I had a Ruckus site once (not using ChannelFly) using mesh, so all APs could see each other on 5 GHz (and, by definition, 2.4 GHz as well). Had users complaining of slow speeds, disconnects, etc. Found 19 of the 24 APs all on Channel 6. First thing I did was move it to a 1,6,11 alternating static channel scheme. Over 90% of complaints stopped after that.
