Friday, August 28, 2015

An Explanation of Channel and Transmit Power on Wi-Fi Networks

When designing and installing a Wi-Fi network consisting of multiple access points (APs), it is generally Best Practice to set up a static channel and static transmit power architecture for your access points.  This is often a confusing topic for Wi-Fi novices and experts alike, as there are multiple frequency bands, multiple channel sizes, and different tradeoffs associated with different transmit power settings.

This blog goes through the definitions of channel and transmit power, and establishes the Best Practices of channel and transmit power planning for both 2.4 GHz and 5 GHz.

Defining Wi-Fi Channels

The Federal Communications Commission (FCC) and similar governmental agencies in other countries regulate the use of radio frequency spectrum.  Most spectrum is licensed, meaning that government agencies or commercial entities must purchase or lease a portion of the spectrum and then have exclusive rights to use it.  Some frequency bands are unlicensed, meaning that anyone can broadcast in that portion of the spectrum without obtaining a license, so long as they meet certain maximum transmit power limitations.  This is the portion of the spectrum where Wi-Fi operates.

Each Wi-Fi access point broadcasts a signal on a particular channel, which is specified as a particular center frequency and channel width.   With 802.11n and 802.11ac, there has been a push to use larger channel widths (40 MHz in 802.11n, and 80 MHz or 160 MHz with 802.11ac), as larger channel sizes enable more data to be sent within the same time window, increasing the throughput of the link.  However, since the size of the unlicensed bands used by Wi-Fi are fixed, larger channel sizes lead to fewer independent, non-overlapping channels.   In addition, larger channel sizes are subject to higher noise floors and more interference from neighbors, making the use of larger channels a tradeoff between potential throughput and achievable signal quality.
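To put a rough number on the noise-floor part of that tradeoff: the thermal noise floor scales with bandwidth, rising about 3 dB every time the channel width doubles.  A minimal sketch, assuming only the standard room-temperature thermal noise density of -174 dBm/Hz (nothing Wi-Fi-specific):

```python
import math

def thermal_noise_floor_dbm(width_mhz):
    """Thermal noise floor at room temperature: -174 dBm/Hz + 10*log10(bandwidth in Hz)."""
    return -174 + 10 * math.log10(width_mhz * 1e6)

for width in (20, 40, 80, 160):
    print(f"{width:>3} MHz channel: noise floor ~ {thermal_noise_floor_dbm(width):.0f} dBm")

#  20 MHz -> ~ -101 dBm
#  40 MHz -> ~  -98 dBm   (each doubling of width costs ~3 dB of SNR)
#  80 MHz -> ~  -95 dBm
# 160 MHz -> ~  -92 dBm
```

The wider channel can carry more data per unit time, but it starts each transmission with a few dB less signal-to-noise margin, before any neighbor interference is even counted.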

Wi-Fi signals interfere if their transmissions occur on the same or overlapping channels in the same space.  In reality, what happens is that the receiver (e.g. a wireless client) at a particular point in space hears transmissions from multiple sources simultaneously and is incapable of distinguishing between the different sources.  The resultant received data is therefore a mashup of signals from the different sources.  The checksum of the received data then fails, indicating that the transmission is corrupted, which ultimately requires the original intended source to retransmit the data.

Most enterprise access point vendors also provide a feature called band steering, which encourages dual band capable client devices to connect on the 5 GHz band to obtain higher speeds, owing to the larger channel sizes and fewer sources of external interference than on the 2.4 GHz band.  This feature is also not part of the 802.11 standard, so implementation is vendor-specific, but it is a very good idea.  Especially with the emergence of new IoT network appliances that not only are limited to 2.4 GHz but often use low-power 802.11b chipsets, it is generally advisable that all clients that can operate on the 5 GHz band be directed to do so.

Defining Transmit Power

The transmit power of an access point radio governs its effective range – the higher the transmit power, the farther a signal can travel, and/or the more physical materials it can effectively penetrate while still having its data successfully resolved at the receiver.  A stronger signal at a given distance generally results in a higher signal to noise ratio, which typically allows for more complex modulation and coding schemes (MCS) and thus faster data speeds.

In early Wi-Fi deployments, which were driven primarily by coverage requirements, it was common to turn up the power on the AP transmitter as high as FCC regulations and the 802.11 specifications allowed.  This approach worked when most clients, such as laptops, had reasonably strong transmitters themselves.  With the emergence of smartphones, tablets, and network appliances, however, there is often a transmit power mismatch that leads to a range mismatch.  Most smartphone, tablet, and IoT appliances use relatively weak transmitters in order to preserve both space and battery life.  As a result, the client device can receive a relatively strong transmission from the access point, but the access point cannot receive the relatively weak transmissions of the client device in response.  Think of it this way:  the access point is shouting but the client device is whispering.  Accordingly, though non-intuitively, the effective coverage area is driven by the client devices, and the AP power levels must be set so as to minimize the mismatch between the range of the access point and the corresponding range of the client devices.
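A minimal link-budget sketch makes the shouting-versus-whispering asymmetry concrete.  The specific numbers here (23 dBm for the AP, 12 dBm for a phone, a -70 dBm usable threshold) are illustrative assumptions, not vendor specifications, and free space loss ignores walls:

```python
import math

def fspl_db(distance_m, freq_mhz):
    """Free space path loss in dB for distance in meters and frequency in MHz."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

AP_TX_DBM = 23       # assumed AP transmit power
CLIENT_TX_DBM = 12   # assumed smartphone transmit power
THRESHOLD_DBM = -70  # assumed minimum usable receive level

for d in (10, 20, 30, 40):
    loss = fspl_db(d, 2437)          # channel 6, free space only (no walls)
    downlink = AP_TX_DBM - loss      # what the client hears from the AP
    uplink = CLIENT_TX_DBM - loss    # what the AP hears back from the client
    print(f"{d:>3} m: downlink {downlink:6.1f} dBm, uplink {uplink:6.1f} dBm")

# The uplink is always 11 dB weaker than the downlink here, so at whatever
# distance (with real wall losses added) the downlink fades to the threshold,
# the uplink fell below it earlier -- the client's weak transmitter, not the
# AP's strong one, defines the effective cell edge.
```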

Furthermore, in high density deployments where hundreds or even thousands of client devices can be operating within the coverage area of a single AP (e.g. college lecture halls, conference centers, stadiums, etc.), more access points are needed simply from a capacity standpoint.   This necessitates using lower transmit power levels, using directional antennas, and planning channels very carefully to prevent co-channel interference. 

Finally, as compared to 5 GHz, 2.4 GHz has less free space path loss and attenuation through standard building materials, giving it a larger effective range at a given transmit power level.  When using a dual band access point, one generally wants to have the coverage area equivalent for both bands.  For a typical SMB environment, one generally needs to set the 2.4 GHz transmit power level to be 6 dB lower than the 5 GHz transmit power to get a rough equivalency in coverage.  Even so, balancing coverage can be difficult.  It is not uncommon to design an AP layout for 5 GHz coverage, necessitating that some of the access points have their 2.4 GHz radios disabled (i.e. turned off) so as to not cause co-channel interference on the 2.4 GHz band.
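The "about 6 dB" rule of thumb falls out of the free space path loss equation: path loss grows with the logarithm of frequency, and the distance term cancels when comparing the two bands at the same range.  A quick check, using representative center frequencies as an assumption:

```python
import math

# FSPL difference between the two bands at the same distance:
# only the frequency ratio matters, since the distance term cancels.
delta_db = 20 * math.log10(5500 / 2437)
print(f"FSPL difference, ~5.5 GHz vs ~2.4 GHz: {delta_db:.1f} dB")   # ~7.1 dB

# Wall and obstruction losses also hit 5 GHz harder in practice, which is why
# a ~6 dB offset on the 2.4 GHz radio is a reasonable starting point rather
# than an exact figure; a site survey should confirm the final values.
```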

Wi-Fi Channels on the 2.4 GHz band

On the 2.4 GHz band (802.11b/g/n) in North America, there are 11 channels of 20 MHz size allowed by the FCC.  Some or all of channels 12-14 are allowed in some other countries, such as Japan.  Unfortunately, the center frequencies of channels 1-13 are only 5 MHz apart, leading to only three non-overlapping channels, as shown in Figure 1. 

Figure 1:  20 MHz channels on the 2.4 GHz frequency band. [1]
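The overlap shown in Figure 1 is easy to verify from the channel plan itself: 2.4 GHz center frequencies are 2407 + 5·n MHz for channels 1-13, so two 20 MHz channels only stay clear of each other if their centers are at least 20 MHz apart.  A minimal sketch:

```python
# North American channels 1-11; center frequency is 2407 + 5 * channel (MHz).
centers = {ch: 2407 + 5 * ch for ch in range(1, 12)}

def non_overlapping(a, b, width_mhz=20):
    """Two channels of the given width avoid each other if their centers are >= width apart."""
    return abs(centers[a] - centers[b]) >= width_mhz

# Channels 1, 6, and 11 are mutually non-overlapping...
print(all(non_overlapping(a, b) for a, b in [(1, 6), (6, 11), (1, 11)]))   # True
# ...but closer picks, such as 1 and 4, are not.
print(non_overlapping(1, 4))                                               # False
```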

While it is not generally known, Channels 12 and 13 (2467 MHz and 2472 MHz) are actually allowed by the FCC at low power levels.  While channels have a width of 20 MHz, there is some additional side-band leakage, typically at a level below -30 dB of the peak signal.  For Channels 12 - 13, there can be out-of-band emissions into the restricted frequency band 2483.5-2500 MHz (encompassing Channel 14), which is used by the mobile satellite service in the United States; hence Channels 12 and 13 essentially serve as a guard band.  Channel 14 is only allowed in Japan, and then only for DSSS / CCK (802.11b), not OFDM (802.11g/n). [4]

The 802.11n spec allows for the optional use of 40 MHz channels on the 2.4 GHz band by bonding two neighboring channels together.  However, given that the entire usable band in 2.4 GHz is only about 72 MHz wide (encompassing Channels 1 - 11), there is no pair of independent, non-overlapping 40 MHz channels, as shown in Figure 2.  This makes the use of 40 MHz channels completely impractical in multi-AP deployments, though it is still unfortunately fairly common to see in practice, as most vendors allow this channel width in their default settings.

Figure 2:  40 MHz channels on the 2.4 GHz frequency band. [1]

Given that the number of independent channels is limited and shrinks as the channel width increases, poor channelization will create AP-to-AP interference and thus degrade both capacity and coverage.  On the 2.4 GHz band, only 20 MHz channel sizes should be used, and channels should be deployed across APs with an alternating static 1, 6, 11 scheme, both horizontally and vertically.
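One simple way to express an alternating 1, 6, 11 plan is as a repeating pattern over the AP grid, staggering the offset on each row (and again on each floor) so that no two side-by-side APs share a channel.  This is only a sketch of the idea under an idealized rectangular grid; a real plan should follow a site survey:

```python
CHANNELS = [1, 6, 11]

def channel_for(row, col, floor=0):
    """Alternate 1/6/11 across columns, shifting the pattern on each row and floor."""
    return CHANNELS[(col + row + floor) % 3]

# A 3 x 4 grid of APs on one floor:
for row in range(3):
    print([channel_for(row, col) for col in range(4)])
# [1, 6, 11, 1]
# [6, 11, 1, 6]
# [11, 1, 6, 11]
```

With only three usable channels, some diagonal reuse is unavoidable; the goal is simply to keep same-channel APs as far apart as the floor plan allows.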

Wi-Fi Channels on the 5 GHz band

The 5 GHz band is much larger (over 555 MHz, semi-contiguous), and thus makes selecting independent channels and using larger channel widths via bonding neighboring channels much simpler.  802.11a allowed the use of 20 MHz channels.  802.11n allows the use of 40 MHz channels, and 802.11ac allows the use of up to 80 MHz or 160 MHz channels.   This is shown in Figure 3.

Figure 3:  Channels on the 5 GHz frequency band. [1, 2, 3]

The use of 40 MHz channels at 5 GHz with 802.11n is fairly common practice.  In most SMB deployments, unless the design calls for high client density (e.g. convention meeting space, large classrooms, etc.), or there is an explicit need to avoid the DFS channels (rarely a problem for indoor deployments, sometimes a concern for outdoor deployments), we can generally use 80 MHz channels with 802.11ac, and thus double the wireless throughput.  This is the primary advantage of deploying 802.11ac access points vs. 802.11n access points.

The full list of 20 MHz channels available in North America is shown in Table 1.   Governmental regulatory agencies in other countries may restrict the use of one or more of these frequency bands and/or the maximum transmit power at those frequencies.  Most access points require that a country be selected in the configuration, which dictates what channels and maximum transmit powers are available.

Table 1: 20 MHz channels on the 5 GHz band.
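As a computable companion to Table 1: 5 GHz channel numbers map to center frequencies as 5000 + 5·channel MHz, and the North American 20 MHz channels fall into the four UNII sub-bands listed below (availability of individual channels, notably 120-128 and 144, has varied with FCC rule changes, as discussed later):

```python
UNII_CHANNELS = {
    "UNII-1":  [36, 40, 44, 48],
    "UNII-2":  [52, 56, 60, 64],             # DFS required
    "UNII-2e": list(range(100, 145, 4)),     # 100 through 144, DFS required
    "UNII-3":  [149, 153, 157, 161, 165],
}

def center_mhz(channel):
    """Center frequency in MHz for a 5 GHz band channel number."""
    return 5000 + 5 * channel

for band, channels in UNII_CHANNELS.items():
    freqs = [center_mhz(ch) for ch in channels]
    print(f"{band:8s} ch {channels[0]}-{channels[-1]}: {freqs[0]}-{freqs[-1]} MHz")
```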

Creating 40 MHz (and larger) channels involves bonding multiple neighboring channels together.  Each bonded channel has a primary 20 MHz channel that is used when an 802.11n or 802.11ac access point communicates with a legacy 802.11a client (or an 802.11n or 802.11ac client that is artificially limited to smaller channels).  The other bonded channels are “extension” channels, and can be either immediately above (upper) or below (lower) the primary channel.

Unfortunately, there are multiple “standards” for referring to bonded 5 GHz channels, which makes it very confusing for Wi-Fi novices and experts alike.  The three basic methods are to refer to the bonded channel range, the primary channel with extension (two variants for 40 MHz, four variants for 80 MHz), or the center channel (i.e. frequency).  These are shown in Table 2 for 40 MHz channels and in Table 3 for 80 MHz channels.

Table 2: 40 MHz channels on the 5 GHz band.

Table 3: 80 MHz channels on the 5 GHz band.


The (*) is for channel 144.  This channel was opened up in March 2014 for use by Wi-Fi in the United States as part of the 802.11ac specification.  You will therefore generally not see it as a valid channel option on older 802.11n access points.  Furthermore, even on 802.11ac access points, many AP vendors still have firmware that complies with the older FCC specifications (i.e. pre March 2014), and so do not recognize Channel 144 as being valid for use in the United States.  Accordingly, Channel 144 (20 MHz), along with the 40 MHz (140-144) and 80 MHz (132-144) bonded channels that include it, often cannot be used in static channel plans.
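As a concrete example of the three naming conventions from Tables 2 and 3, take the lowest 80 MHz channel: it can be written as the range 36-48, as primary channel 36 with upper extension channels, or as center channel 42 (5210 MHz).  The sketch below derives the member channels and center from a primary channel and width; for simplicity it assumes the primary is the lowest member of the bond and that members are spaced four channel numbers apart, which holds throughout the 5 GHz plan:

```python
def bonded_channel(primary, width_mhz):
    """Return (member 20 MHz channels, center channel) for a bond starting at `primary`."""
    n = width_mhz // 20                          # number of 20 MHz members
    members = [primary + 4 * i for i in range(n)]
    center = (members[0] + members[-1]) / 2      # e.g. (36 + 48) / 2 = 42
    return members, center

print(bonded_channel(36, 40))    # ([36, 40], 38.0)            -> range 36-40, center 38
print(bonded_channel(36, 80))    # ([36, 40, 44, 48], 42.0)    -> range 36-48, center 42
print(bonded_channel(149, 80))   # ([149, 153, 157, 161], 155.0) -> range 149-161, center 155
```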

Note that the UNII-2 and UNII-2e bands (which cover 2/3 of the frequency space) are still in use by legacy military and commercial weather radar systems.  This leads to a requirement known as dynamic frequency selection (DFS), which requires Wi-Fi devices to periodically measure for the presence of such legacy radar systems and move off the channel for a period of time if one is detected.  Currently, the access point and the client devices are each responsible for detecting DFS interference from radar devices and, if it is detected, moving off the channel.  Prior to March 2014, only access points were required to make that detection and channel move, notifying their connected clients of the channel change so as to encourage the clients to follow.  This was part of the original 802.11h amendment when UNII-2 and UNII-2e were opened up for Wi-Fi.  The older rules made more sense from a Wi-Fi operations perspective, as client devices associate with an access point and thus follow the access point’s channel.  Unfortunately, many legacy client devices didn’t know how to interpret the “I’m about to change from channel x to channel y” message from the AP and therefore didn’t move off the channel fast enough, which is probably what prompted the rule change.

The unintended consequence of this is that many consumer Wi-Fi device manufacturers decided it wasn’t worth investing in the code to do the DFS detection, and as a result just won’t operate at all on any of the UNII-2 (52-64) or UNII-2e (100-144) channels.   This is why many 802.11n consumer devices supported the UNII-2 and UNII-2e channels, but their newer 802.11ac counterparts do not. Ironically, this also tends to be a limitation of the consumer wireless router products from manufacturers that also make enterprise access point equipment that supports DFS detection. 

Fortunately, most phone and tablet manufacturers are not so shortsighted, so iPhones / iPads and most mainstream Android phones / tablets with 802.11ac capability will work on the UNII-2 and UNII-2e bands.  Also fortunately, most consumer client devices these days are dual-band, so if they do roam to an AP with a 5 GHz channel they don’t recognize, they will still connect on the 2.4 GHz radio and be treated as a 2.4 GHz only client.  Where this becomes problematic is with 5 GHz only consumer devices, such as USB dongles and 802.11ac wireless bridges.

Why not just let the AP figure it out on its own?

Many vendors provide features to do automated radio resource management (RRM), commonly also called auto-channel and auto-power. 

For each access point with auto-channel enabled, the AP senses the surrounding environment and then selects the “best channel”, i.e. the channel that is presumably least in use by surrounding APs.  This is not officially part of the 802.11 standard, so each vendor implements this feature differently.  It is generally intended to make Wi-Fi deployments easier, and to react to changes in interference from the external environment without requiring a static channel plan.

The problem with this approach is that these algorithms, while great for vendor marketing, tend not to work in actual practice, and in fact can make network performance worse.  The standard implementation has each AP perform a periodic scan of all of the channels, typically on the order of 250 ms per channel, to give it enough time to hear at least two beacon frames from surrounding APs.  Once the access point has measured all of the channels, it selects the channel that is the least noisy.

This approach, however, is fundamentally flawed:  not only does the AP not get a true understanding of channel usage over time, but the snapshot can be deceiving, especially on the 2.4 GHz band, where a channel can appear “clear” even when there is a lot of traffic on overlapping channels.  On the 5 GHz band, there are many more channels to scan, and the number of channels to evaluate increases further as you go to bonded 40 MHz, 80 MHz, and 160 MHz channels (though as of 802.11ac, which introduced 80 MHz and 160 MHz bonded channels on the 5 GHz band, beacons are broadcast on all of the 20 MHz sub-channels).  Additionally, such algorithms tend to be convergent, meaning that neighboring APs have a tendency to settle on the same or overlapping channels, thereby increasing co-channel interference between APs.
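To make the limitation concrete, here is roughly what the naive scan described above amounts to, expressed as an illustrative sketch rather than any vendor's actual algorithm: a short dwell on each channel, a utilization estimate, and a one-shot pick of the "quietest" channel.  The measurement itself is stubbed out with a random number, since real implementations are hardware-specific:

```python
import random
import time

def measure_utilization(channel, dwell_s=0.25):
    """Listen on a channel for ~250 ms and return an airtime-busy estimate (stubbed here)."""
    time.sleep(dwell_s)
    return random.random()          # stand-in for a real busy-time measurement

def naive_auto_channel(channels=(1, 6, 11)):
    samples = {ch: measure_utilization(ch) for ch in channels}
    return min(samples, key=samples.get)   # pick whichever channel looked quietest *right now*

print("Selected channel:", naive_auto_channel())

# The flaw is visible in the structure: one 250 ms snapshot per channel says little
# about utilization over time, it ignores traffic on overlapping channels, and if
# every AP runs the same logic at roughly the same moment, neighbors tend to
# converge on the same "quiet" answer.
```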

Auto-power is similar in form, function, and limitations to auto-channel.  As the name implies, auto-power dynamically adjusts the transmit power levels used by each AP, with the intent of reacting to changes in interference from the external environment by adjusting the effective coverage area of each AP to minimize co-channel interference.

It should be noted that some vendors have proposed more sophisticated proprietary methods to automatically control channel and/or transmit power.  Generally, such alternatives tend to be divergent, i.e. they change channels very frequently and never settle down.  It is therefore considered best practice to turn off auto-channel and auto-power, and use static channel and transmit power plans for both the 2.4 GHz and 5 GHz bands.

References


  1. Coleman, D. and Westcott, D.  CWNA Certified Wireless Network Administrator Official Study Guide: Exam PW0-105.  3rd edition.  John Wiley & Sons, Inc., Indianapolis, IN.  ISBN 978-1-118-12779-7.  Copyright 2012.
  2. Jackman, S., Swartz, M., et al.  CWDP Certified Wireless Design Professional Official Study Guide: Exam PW0-250.  John Wiley & Sons, Inc., Indianapolis, IN.  ISBN 978-0-470-76904-1.  Copyright 2011.
  3. Hintersteiner, J.  EnGenius Certified Operator.  EnGenius Technologies, Inc. certification program course.  Copyright 2014-2015.
  4. Comments of the National Telecommunications and Information Administration.  Copyright 2005.

Friday, August 21, 2015

Why You Shouldn't Worry About Wi-Fi Radiation Exposure



 
Source:  https://pbs.twimg.com/media/CM8NvZFUsAEsup1.png


I was recently approached by a client to comment on the growing trend of pseudo-science alarmist claims related to the alleged health risks of radio frequency (RF) exposure from Wi-Fi.   

The concern over RF radiation exposure is not new.  However, despite an ever-growing mountain of scientific evidence debunking claims that exposure to RF radiation from Wi-Fi and cellular phones causes health issues, there seems to be an ever-growing number of subscribers to this belief.  Unfortunately, this is not a debate that can be won with reason or scientific evidence.  If that were possible, it would have been resolved a long time ago.  Rather, this is the telecommunication industry's version of the anti-vaccination movement.  It has nothing to do with logic or scientific proof, and is instead all about fear-mongering to push a radical anti-science and anti-technology agenda with religious zealotry.  Quite plainly, this is one of, unfortunately, several 21st century Luddite movements.

Nonetheless, this blog post is an attempt to compile several sources of scientific studies on this topic conducted by government health agencies around the world, along with the World Health Organization.  It is not by any means an exhaustive list, but is provided as a general reference for a community that keeps being plagued by questions on this issue.

The general consensus of the scientific community is that radio frequency exposure from Wi-Fi in one’s home or work environment is negligible, especially when compared to other types of RF exposure (e.g. cell phones, radio transmitters, microwaves, cordless phones, etc.) that people are regularly exposed to in today’s modern environment.  

Unfortunately, by definition, it is scientifically impossible to prove the absence of something, which is what gets latched on to in order to keep this debate alive.

 

The Fundamentals

Wi-Fi networks function by using radio waves to transmit information from wired access points to the wireless adapter cards in laptop computers, smartphones, tablets, and network appliances (IoT).  The reality is that Wi-Fi APs are limited to around 1 W (30 dBm) of output power at the radio, which is at least 2-3 orders of magnitude less RF energy than the always-on sources that we have accepted in our environment without protest for the last 100 years, including cell towers, radio stations, analog TV stations (prior to digital conversion), weather radar systems, and the like.  For comparison with Wi-Fi on 2.4 GHz and 5.2 - 5.8 GHz, licensed cellular runs on frequencies between 700 MHz and 2.1 GHz, and weather radar is on the UNII-2e portion of the 5 GHz band (roughly 5.5 - 5.7 GHz).  These frequencies are in the range of the electromagnetic spectrum defined as "radio frequency (RF) radiation".  RF radiation (non-ionizing radiation) is much lower in energy than the radiation emitted by x-ray machines (ionizing radiation).  The biological effect associated with each type of radiation is very different.

Exposure to high levels of radiation results in tissue and cell heating, which is harmful to health.  The debate centers around the question of whether low level radiation, where cell and tissue heating is not detected, is also harmful.  Several studies have been conducted to look for links between low level RF radiation exposure and cancer, or for effects on pregnant women.  Most studies until now have been concerned with RF emissions from mobile phones, which (as you will see below) provide several hundred times the exposure to RF radiation of a Wi-Fi access point.  The close distance between the head and the mobile phone and the higher power levels used in mobile devices mean that a frequent mobile phone user is exposed to much higher levels of radiation than a person in a residential or commercial Wi-Fi environment.  Studies by highly regarded researchers have not shown harmful health effects resulting from either cellular or Wi-Fi RF radiation exposure at levels below US regulatory limits.

Several examples of such studies are summarized or referenced below.  Prior to the studies, the assertion that a person gets much more RF radiation exposure from a cell phone or microwave oven than from Wi-Fi access points can be shown mathematically.

 

Let the Math Happen

(Thanks to Daniel Koczwara at EnGenius for inspiring me to include this section.)
The propagation of all radio signals is subject to Free Space Path Loss (FSPL), which is a mathematical expression of the geometric property that the farther away you are from the source of a radio transmission, the more the energy level of that signal drops, falling off as the square of the distance.  You can think of throwing a pebble into a pond; as the wave ripples out, the energy is spread over a wider and wider area, and the level of energy at any one point is proportionally smaller.

While this is a geometric effect, wavelength is included in the calculation in order to account for the fact that, mathematically, transmission energy is defined as coming from a point source known as an isotropic antenna.  An isotropic antenna is defined as an antenna that radiates energy evenly in a perfect sphere with 0 dBi of gain.  While defining such an antenna is mathematically convenient, it is physically impossible to build.
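For reference, the standard free space path loss formula, with distance in meters and frequency in MHz, is FSPL(dB) = 20·log10(d) + 20·log10(f) − 27.55.  A small sketch reproducing the numbers behind the graph that follows:

```python
import math

def fspl_db(distance_m, freq_mhz):
    """Free space path loss in dB (distance in meters, frequency in MHz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

for d in (1, 3, 10, 30):
    print(f"{d:>2} m: 2.4 GHz {fspl_db(d, 2437):5.1f} dB, 5 GHz {fspl_db(d, 5500):5.1f} dB")

#  1 m: 2.4 GHz  40.2 dB, 5 GHz  47.3 dB
#  3 m: 2.4 GHz  49.7 dB, 5 GHz  56.8 dB
# 10 m: 2.4 GHz  60.2 dB, 5 GHz  67.3 dB
# 30 m: 2.4 GHz  69.7 dB, 5 GHz  76.8 dB
```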

The following graph shows the free space path loss for Wi-Fi at 2.4 GHz and 5 GHz.

Source:  http://www.imperialnetsolutions.com/
Per FCC and other worldwide government regulations, a Wi-Fi signal, at most, has a maximum initial power of 30 dBm (1 W, or 1000 mW), and within the first 3 feet over 40 dB of signal strength is lost (a factor of 10,000), meaning that the level of exposure 3 feet away is below -10 dBm (0.0001 W, or 0.1 mW).

Comparative Example 1:  Microwave Oven   A microwave oven operates on the 2.4 GHz band at around 1000 W (60 dBm).  Granted, microwave ovens are shielded, but the shielding is not perfect and deteriorates over time, which is why microwave ovens typically interfere with Wi-Fi when in operation:  they put out more energy on the 2.4 GHz band than an AP and thus flood the channel, causing wideband interference.  Three feet away from a leaky microwave oven, the 2.4 GHz energy level decreases by 40 dB to 20 dBm (0.1 W, or 100 mW), or about 1000x higher than a Wi-Fi access point.

Comparative Example 2:  Ham Radio  A ham radio typically operates at 50 W (47 dBm) at 440 MHz.  The FSPL at three feet away (i.e. where the operator is sitting) is about 25 dB (a factor of roughly 300), leading to an exposure level of 22 dBm (0.16 W, or 160 mW), or about 1600x higher than a Wi-Fi access point.

Comparative Example 3:  Cell Phone  A typical cellular phone operates at 23 dBm (0.2 W, or 200 mW).  However, it operates very close to your head when on a call (about 2 inches), which provides a minimal FSPL of only 3.5 dB (a factor of roughly 2) at 700 MHz (Verizon LTE).  This leads to an exposure level of 19.5 dBm (0.09 W, or 90 mW), or roughly 900x higher than a Wi-Fi access point.


Comparative Example 4:  Cell Tower  It is somewhat interesting that the RF output of a nearby cell tower serving an entire city or town is never brought up in the debate, though again the math explains why.  A typical cellular tower operates at around 40 W (46 dBm).  At 700 MHz (Verizon LTE), the FSPL at about 1/2 mile - 1 mile away is about 90 dB, leading to an exposure level of -44 dBm (0.00004 mW, or 0.00000004 W), or about 2500x lower than from a Wi-Fi access point.
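The four comparisons above can be reproduced with the same FSPL formula.  A rough re-derivation is sketched below; the distances are rounded to metric, shielding, antenna gains, and body losses are ignored, and small differences from the figures quoted above come from that rounding:

```python
import math

def fspl_db(distance_m, freq_mhz):
    """Free space path loss in dB (distance in meters, frequency in MHz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

cases = [
    # (label,                 tx dBm, freq MHz, distance m)
    ("Wi-Fi AP",                  30,     2437,       0.9),   # ~3 feet
    ("Leaky microwave oven",      60,     2450,       0.9),   # treated as unshielded, per text
    ("Ham radio",                 47,      440,       0.9),
    ("Cell phone at head",        23,      700,       0.05),  # ~2 inches
    ("Cell tower",                46,      700,    1000.0),   # ~1 km, within 1/2 - 1 mile
]

wifi_mw = dbm_to_mw(30 - fspl_db(0.9, 2437))   # baseline: Wi-Fi AP exposure at ~3 feet
for label, tx, freq, dist in cases:
    rx_dbm = tx - fspl_db(dist, freq)
    ratio = dbm_to_mw(rx_dbm) / wifi_mw
    print(f"{label:22s} {rx_dbm:7.1f} dBm  (~{ratio:.3g}x the Wi-Fi exposure at 3 ft)")
```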

  
Short answer: you get more exposure to RF energy from using your cell phone on the cellular network than you do from Wi-Fi.

Some Highlighted Scientific Studies

The Federal Communications Commission (FCC), the Occupational Safety and Health Administration (OSHA), the Institute of Electrical and Electronics Engineers (IEEE), and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) have established standards and guidelines for general public and occupational exposure to radio frequency radiation.

Prestigious organizations have conducted surveys of the RF radiation field levels encountered in the presence of wireless networks. For example, a paper published in 2007 entitled Radiofrequency Exposure from Wireless LANs Utilizing Wi-Fi Technology discusses a study in which measurements were conducted at 55 sites in four countries, under conditions that would result in the higher end of exposures from such systems. An excerpt from the abstract states “…In all cases, the measured Wi-Fi signal levels were very far below international exposure limits (IEEE C95.1-2005 and ICNIRP) and in nearly all cases far below other RF signals in the same environments.”

In 2007, Princeton University commissioned the New Jersey Department of Environmental Protection (NJDEP) to perform a survey of the RF levels associated with the wireless network at Princeton’s Firestone Library.  Spot RF levels in the line of sight with the antennae were measured in addition to spatially-averaged levels in the vicinity of the antennae. NJDEP limits for RF radiation exposure are defined specifically in terms of spatially averaged values, averaged over the dimensions of the human body, rather than for ‘spot’ measurements.  Measurements were also made within an Office of Information Technology (OIT) closet, a student carrel (because cable runs overhead and a map indicated that an antenna was located overhead), and at the ceiling in the vicinity of a radiating (‘leaky’) cable.  One of the most noteworthy points of these measurements is that the RF levels present in all locations were so low that the levels were close to the lower limit of detection of the RF survey equipment. The maximum spatially-averaged level measured was 10.9 volts²/meter², directly below an access point antenna. This measurement should be compared to NJDEP’s allowable limit of 20,000 volts²/meter², spatially averaged over the dimensions of the human body. The NJDEP limit does not differentiate between exposure of the general public and occupational exposure.

Another survey report, ACRBR EME In Homes Survey: Final Report, was conducted by the Australian Centre for Radiofrequency Bioeffects Research in 2009.  Radio frequency radiation levels were measured in 20 suburban homes in Melbourne, Australia. The survey assessed RF levels from individual devices (cordless phones, Wi-Fi, etc.) in isolation, as well as their cumulative exposure. The results generally indicated very low levels, with the highest level being less than 1% of the limits in the Australian RF Standard. For individual devices, the highest level was less than 10% of the limits in the Standard. Having all RF devices on simultaneously in a home (cumulative exposure) had little effect on the results. The authors conclude that the use of RF devices in an average suburban home results in very low RF exposure (compared to the Australian Standard), even when numerous RF devices are operating simultaneously.

The ultimate authority on this in the USA is the FCC.  They have an official statement, primarily related to cell phones but, as demonstrated above, also applicable to Wi-Fi.  Basically, it states that the FCC regulates all radio devices sold in the USA and that there is no scientifically proven causal relationship between RF energy exposure and adverse health effects.  Health Canada presents, in my opinion, a more in-depth analysis of the issue, reaching the same conclusions as the FCC.  As a reminder, Wi-Fi's regulated frequencies and power limits are identical in the USA and Canada, and fairly similar throughout the rest of the world.

On the access point front itself, there is a 2005 position white paper by Cisco indicating that they are in compliance with FCC, European, and World Health Organization regulations and guidelines for their equipment.  All AP vendors, including EnGenius, comply with these, else they would not be able to sell equipment in these world markets.  The paper has some interesting secondary references as well.

 

Additional Resources

If the above was not sufficient to convince you, here are several additional studies and papers that I've come across.

  1. Lin, J, Update of IEEE Radio Frequency Exposure Guidelines, IEEE Microwave Magazine; 2006.
  2. Lin, J, Safety Standards for Human Exposure to Radio Frequency Radiation and Their Biological Rationale, IEEE Microwave Magazine; 2003.
  3. Osepchuk, JM, and Petersen, RC, Safety Standards for Exposure to RF Electromagnetic Fields, Microwave; 2001
  4. World Health Organization. Electromagnetic Fields and Public Health: Mobile Phones. Fact Sheet No. 193; October 2014
  5. Australian Radiation Protection and Nuclear Safety Agency. Maximum Exposure Levels to Radiofrequency Fields - 3 kHz to 300 GHz.  Radiation Protection Series Publication No. 3; 2002.
  6. Australian Radiation Protection and Nuclear Safety Agency.  An Explanatory Question & Answer Guide to the ARPANSA Radiation Protection Standard for Maximum Exposure Levels to Radiofrequency Fields – 3 kHz to 300 GHz. 2002.
  7. M. Kundi.  The Controversy about a Possible Relationship between Mobile Phone Use and Cancer. Environmental Health Perspectives; 2009.