Tuesday, June 30, 2015

Wi-Fi Sense: How Microsoft Has Effectively Broken Wi-Fi Security

First introduced in smartphones running Windows Phone 8.1, Wi-Fi Sense is a standard feature of the upcoming Windows 10 operating system. It is designed to automatically connect Windows 10 devices to "shared Wi-Fi networks". The idea is that it automatically shares your Wi-Fi SSIDs and passphrases with friends over social media, eliminating the need to communicate them the "old fashioned way" and type in the passphrases by hand.

While this may sound attractive enough to technophobes, it is in actual fact an extremely dangerous and irresponsible system that effectively breaks the security of all existing Wi-Fi networks relying upon WPA2 Personal (i.e. passphrase) security.

Here is the fundamental problem: the decision of what constitutes a "shared Wi-Fi network" rests with the operator of a Windows 10 device connected to the network, not with the owner or administrator of the Wi-Fi network. Thus, any Windows 10 client device, even one with a legitimate connection to your network, such as a member of your household or an employee of your small/medium business, becomes a vector for sharing that security information across social media.

The really disturbing part:  as network administrators, there is virtually nothing we can do to prevent it (more on that below).

I would go so far as to classify Wi-Fi Sense as an entirely new category of phishing attack. It isn't actually a problem with the 802.11 standard itself, per se. Rather, it is a coordinated effort by one external entity (i.e. Microsoft) to use distributed nodes (i.e. the network devices running Windows 10) to gather and spread Wi-Fi security information over social media, generally without even the direct knowledge or informed consent of the user or the network owner / administrator.

How Wi-Fi Sense Works

The information in this section is based on an article on howtogeek.com.  

Wi-Fi Sense shares a user's Wi-Fi login information (i.e. SSIDs and passphrases) with their friends, specifically their contacts on Outlook.com and Skype, as well as (optionally) Facebook. Similarly, when enabled, the network information from your friends is shared with you, so that your Windows 10 device can (and will) automatically connect to Wi-Fi networks shared by your friends.

When a Windows 10 user connects to a Wi-Fi network, they'll get a "Share network with my contacts" check box. 

Presumably, the box is unchecked by default. However, if the user checks the box, the SSID and passphrase are uploaded to Microsoft's servers and shared with the user's social media contacts. If someone then connects to your shared network with their Windows 10 device, they can similarly choose to share the connection details with all of their friends, and so forth.


The security ramifications are readily apparent. The following two examples illustrate the main issues for network administrators.

(1) Home network: Most home networks use consumer-grade wireless routers, which are only capable of providing either open (no encryption) or WPA2-Personal (i.e. passphrase) security. Multiple SSIDs and client isolation are generally not standard features, though some consumer-grade wireless devices do come with the ability to set up a separate "guest network" for visitors. When your teenage son connects to your home network on the Windows 10 laptop he uses for school, the SSID and passphrase are shared with all of his Facebook friends. When his friends come over with their Windows 10 laptops, instead of connecting to your guest network, they are connected to your main home network and have full access to everything else on it, including your PC with your financial and health records, multimedia applications, smart-home appliances, etc.

(2) SMB network: A network in a small / medium business (e.g. small office, restaurant, retail outlet, hotel, etc.) may use either consumer-grade or enterprise-grade equipment, and will typically have multiple SSIDs: one for public access (i.e. visitors, customers, hotel guests, etc.) and one for private staff access for operations. The public access network may or may not use a WPA2 passphrase (I generally recommend against it - see my blog post on VLANs), but the staff network typically does. Note that a staff network may carry financial and operational data, credit card transactions requiring PCI-DSS compliance, and, in medical professional offices and assisted living facilities, patient health records requiring HIPAA compliance. An employee with a legitimate need to connect to the staff network may inadvertently share that SSID and passphrase over social media via Wi-Fi Sense. Any friend coming to the place of business then connects automatically to the staff network and shares those credentials via Wi-Fi Sense with their friends. Now, not only is the network compromised in terms of PCI-DSS and/or HIPAA compliance, but financial and operational records are also exposed to anyone with an indirect social media connection.

While I write this from the perspective of a network designer and administrator, there is also a notable vulnerability for the Windows 10 clients themselves. An attacker can set up a malicious Wi-Fi network designed for man-in-the-middle attacks in a public venue and distribute the credentials via social media. Anyone connecting to that network will further spread the credentials. Since Windows 10 clients are designed to automatically connect to networks shared over social media, a client device could connect to the malicious Wi-Fi network without its owner even realizing it, leaving the device and its data subject to sniffing and attack.

Microsoft also claims that a user on a shared network is isolated from other devices on the LAN. To achieve isolation completely and effectively, the client isolation settings on the AP, along with appropriately defined ACL rules on your Layer 2 managed LAN switch, need to be configured. Instead, the isolation Wi-Fi Sense offers is illusory, created by the OS itself. In Windows, when you connect to a Wi-Fi network, it normally asks whether the network is "Home", "Work", or "Public". If "Public", then Windows sets up internal firewall rules to block incoming pings and other traffic from the LAN. This covers only inbound traffic, however: from the PC you can still see and access all other devices on the LAN. So nobody on the LAN can hack you, but you are still free to hack anyone else on the LAN. As a network administrator, this does not meet the definition of "client isolation": I am now trusting the security of my network to the cluelessness of users whom, as an administrator, I never invited to connect to my network in the first place!

What Can Be Done To Prevent This?

Unfortunately, there isn't much in the way of practical solutions at this time.   Here are the options, such as they are:

(1) Opt Out: Microsoft has "generously" given network operators the ability to opt out of having their networks shared. To opt out, you need to append the characters "_optout" to the end of your SSID. Let that sink in: you must actively change the SSIDs on your networks to opt out of a feature you never implemented or asked for. If you have an SMB network consisting of multiple access points, you need to change the SSID on all of them. And when you change the SSID, you need to touch every legitimate device on your network to make sure it connects to the new SSID. If that isn't painful enough, Microsoft is also vague on what opting out actually entails: it is not clear whether the SSID and passphrase data will simply not be collected by Microsoft, or whether the data is collected but flagged in their database as not to be shared. The latter is unfortunately more likely and significantly less secure, since Microsoft is still collecting your data without your permission, and you must rely upon Microsoft to ensure that your Wi-Fi network information is never shared, even accidentally due to a mistake in their 100% guaranteed bug-free code.
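For anyone retrofitting the suffix across many networks, note that 802.11 limits an SSID to 32 bytes, so appending "_optout" can push a long SSID over the limit. A minimal sketch (the helper name is mine, not anything Microsoft ships):

```python
# Sketch: derive an opted-out SSID while respecting the 32-byte SSID limit
# from 802.11. The helper name is mine, not anything Microsoft ships.
MAX_SSID_BYTES = 32
SUFFIX = "_optout"

def optout_ssid(ssid: str) -> str:
    raw = ssid.encode("utf-8")
    keep = MAX_SSID_BYTES - len(SUFFIX)      # bytes left for the base name
    return raw[:keep].decode("utf-8", "ignore") + SUFFIX
```

For a short SSID this just appends the suffix; a long SSID gets truncated first, which of course means touching every client yet again.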

(2) Implement WPA2 Enterprise Security: Larger businesses and enterprises typically implement WPA2 Enterprise, which requires using 802.1X with a RADIUS server. It is true that WPA2 Enterprise is not subject to the security issues created by Wi-Fi Sense. That said, most small businesses have neither the money, the time, nor the IT acumen to properly configure all of the pieces of WPA2 Enterprise and keep them properly maintained. Thus, WPA2 Enterprise is an unrealistic solution for private home and most SMB networks. The whole point of WPA2 Personal was to provide secure Wi-Fi connectivity for consumer and SMB networks. [Update: Use of PPSK (personalized pre-shared key) has been suggested, where every device gets its own WPA2 passphrase. Since PPSK is not part of the 802.11 standard, only 3-4 AP vendors support it. Furthermore, client devices still need to be registered, either manually or via an automated software download to the device, which is logistically infeasible in most SMB environments.]

(3) Ban Windows 10 Devices from Your Network: While I find this option viscerally appealing, it is eminently impractical. Large businesses can sometimes maintain control over what devices connect to their staff network, but even in that space the "bring your own device" (BYOD) trend has been accelerating for several years and shows no signs of abating. In private home and SMB networks, BYOD has been the norm since long before the term "BYOD" was coined. Furthermore, as Windows 10 gains inevitable traction in the marketplace, it will be impossible to prevent these devices from connecting to your network - like all things Microsoft, eventually you won't be able to buy a new PC or laptop without Windows 10 on it, pre-installed for your convenience. It's true that Wi-Fi Sense has been around for about a year on Windows Phone 8.1, but it hasn't been much of a security issue so far because those devices have virtually no market share.

(4) Beacon Information Element:  One of the solutions being discussed amongst the professional Wi-Fi community is having the AP beacon (an advertisement message sent out by an AP every ~100 ms for each SSID) include an additional information element that tells client devices to either "opt in" or "opt out" of Wi-Fi Sense.   While technically this is doable, it is an impractical fix, at least in the short term, for numerous reasons:
  •  Microsoft and the Wi-Fi community would first have to develop a standard on what this information element looks like, and Microsoft would need to agree to respect it.
  • Microsoft would have to patch all Windows 10 devices and NIC drivers to recognize and properly act upon the information element. [Correction: Existing NIC drivers can handle an additional information element and pass it up via existing mechanisms. Microsoft would still need to tell the Windows 10 OS how to interpret such an information element.]
  • Every single consumer and enterprise AP manufacturer would need to provide a firmware update for every single (active) model of AP to provide the information element.
  • If your Wi-Fi network is older and has APs that are beyond the manufacturer's end of life (EOL), don't expect any patches from the manufacturer.  The APs and network will need to be physically upgraded.   
  • Every Wi-Fi network that you want to secure would need to have its firmware upgraded across all of its APs, and the new setting to enable this information element enabled appropriately.
This is an enormous and costly undertaking, comparable in scope to when WEP was cracked in 2001 and patched in 2003. Considering that there are thousands of times more Wi-Fi networks deployed in 2015 than in 2003, the scope quickly becomes overwhelming.
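For the curious, 802.11 information elements are simple type-length-value structures, so the proposed opt-out signal would be small on the air. Here is a hypothetical opt-out flag packed as a vendor-specific element (Element ID 221); the OUI and one-byte flag are purely illustrative, since no such element has been standardized:

```python
import struct

# Hypothetical opt-out flag packed as an 802.11 vendor-specific information
# element (Element ID 221): ID byte, length byte, 3-byte OUI, then payload.
# The OUI and the one-byte flag below are illustrative, not standardized.
def vendor_ie(oui: bytes, payload: bytes) -> bytes:
    body = oui + payload
    return struct.pack("BB", 221, len(body)) + body

ie = vendor_ie(b"\x00\x50\xf2", b"\x01")   # 6 bytes total on the air
```

The hard part, as the bullets above note, is not the encoding but getting every AP to transmit it and every client OS to honor it.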

(5) Convince Microsoft to remove this ill-conceived feature from Windows 10. Yeah, I don't know how to go about doing that either. However, that doesn't mean we shouldn't try: by writing blogs like this, preaching the message on social media, and delaying our transitions to Windows 10 as long as possible. Perhaps if enough of us in the professional Wi-Fi community mount a concerted and organized effort, we can bring sufficient pressure to bear, but I'm not optimistic.

Monday, June 29, 2015

IoT: A Renaissance of 802.11b

I recently saw this question posted on LinkedIn: Are energy-efficient 802.11b chipsets for IoT applications a boon or a bane for modern WLANs? (See Arsen Bandurian's blog post on this.)

If one believes even 1/10 of the hype about IoT, it is reasonable to expect a surge of IoT devices on enterprise, SMB, and even private home networks over the next few years. The reality is that most of these devices will probably use 802.11b for wireless connectivity. Most IoT devices require neither high data rates nor real-time responsiveness, so 2.4 GHz 802.11b is perfectly suitable for such applications. It is in the IoT manufacturer's best interest to rely upon 2.4 GHz vs. 5 GHz, as 2.4 GHz has better range, especially given the low Tx power and small low-gain antennas used to keep device size minimal. Additionally, the devices will work in legacy / non-upgraded environments. At 2.4 GHz, a single-stream 802.11n chipset does not provide the benefit of 802.11b/g antenna diversity, and a dual-stream 802.11n chipset requires more streams, thus more power and more space.

Those of us in the Wi-Fi industry can decry the use of 802.11b in IoT;  the hard truth is that there are neither financial nor performance incentives for IoT devices to use anything BUT 802.11b.

So what are we to do? Fortunately, most high-volume data and latency-sensitive devices have been, and continue to be, moving towards 5 GHz (802.11ac). We are rapidly converging on a Wi-Fi reality in which the 2.4 GHz band will be even "junkier" than it is today, meaning that any meaningful / critical data transmission must be shifted to 5 GHz for any kind of QoS.

My colleague and friend Devin Akin (@DevinAkin on Twitter) has been proclaiming that "2.4 GHz is dead." For conventional throughput and latency applications, he is correct. However, there is still a use for the 2.4 GHz band in Wi-Fi deployments, and that's to support IoT.

I'll even go one further:  I foresee the time in the not-too-distant future when we will be deploying dual-band access points with the 2.4 GHz radios purposely set to "802.11b only".

Of course, vendors will continue to manufacture and sell dual-band APs that are 802.11b/g/n on the 2.4 GHz band and 802.11a/n/ac/?? on the 5 GHz band. As a network designer and installer, however, I expect that we'll purposely want to limit the 2.4 GHz band to 802.11b ONLY, especially if most 2.4 GHz clients are IoT devices with low-power 802.11b chipsets.

When at least one 802.11b client is associated to an AP, every 802.11g or 802.11n transmission by associated clients, as well as 802.11g/n transmissions from the AP itself, must be preceded by either an RTS/CTS exchange between the sender and the receiver, or a CTS-to-Self message by the sender. These control frames are sent at the lowest 802.11b rate of 1 Mbps, and are implemented as a protection mechanism so that the 802.11b clients, which cannot decode 802.11g or 802.11n frames, will reset their NAV timers so as not to attempt a transmission during the 802.11g / 802.11n portion of the frame exchange.

In practice, this protection mechanism adds a lot of overhead to the channel. While 802.11g and 802.11n devices transmit their data at OFDM MCS rates, the additional overhead means that the effective throughput is substantially lower, and can even fall below 11 Mbps (i.e. the fastest 802.11b rate). There is also the reality of implementation: while access points tend to do an RTS / CTS exchange, most client devices opt for the faster CTS-to-Self alone. This can create a hidden node problem. If an 802.11b client is far away from the AP and on the opposite side of the AP from an 802.11n client, the 802.11b client may not hear the CTS-to-Self message transmitted by the 802.11n client. The 802.11b client therefore won't reset its NAV timer and could potentially transmit during the 802.11n transmission, causing a collision. A CTS-to-Self is really only suitable for use by the AP, since by definition all associated clients are in range of the access point; client devices should all use RTS / CTS, on the assumption that the 802.11b client will hear either the sender or the receiver, if not both. In reality, it doesn't get implemented that way.

Regardless of the hidden node issue, 802.11g / n client devices and the AP must use the RTS / CTS or CTS-to-Self protection mechanism, which drastically reduces the effective throughput of these transmissions. As mentioned earlier, this can lower the effective throughput to below 11 Mbps, the fastest 802.11b rate. Accordingly, the 802.11g or 802.11n devices could actually get better throughput by transmitting at 11 Mbps and not bothering with the protection mechanisms at all.
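A back-of-the-envelope calculation illustrates the cost. The timing constants below are approximate (long 802.11b preamble, OFDM preamble, SIFS) and the exchange is simplified (no DIFS, backoff, or retries), but it shows how a CTS-to-Self sent at 1 Mbps can drag a small-frame 54 Mbps exchange below the 11 Mbps 802.11b ceiling:

```python
# Back-of-the-envelope airtime model. Timing constants are approximate and
# the frame exchange is simplified (no DIFS/backoff/retries); illustrative only.
SIFS = 10e-6              # seconds
LONG_PREAMBLE = 192e-6    # 802.11b long preamble + PLCP header at 1 Mbps
OFDM_PREAMBLE = 20e-6     # 802.11g/n preamble + SIGNAL field

def cts_to_self() -> float:
    # 14-byte CTS frame sent at the 1 Mbps 802.11b rate
    return LONG_PREAMBLE + 14 * 8 / 1e6

def ofdm_frame(total_bytes: int, rate_bps: float) -> float:
    return OFDM_PREAMBLE + total_bytes * 8 / rate_bps

def effective_throughput(payload_bytes: int, rate_bps: float = 54e6,
                         protection: bool = True) -> float:
    t = 0.0
    if protection:
        t += cts_to_self() + SIFS
    t += ofdm_frame(payload_bytes + 28, rate_bps)   # + MAC header and FCS
    t += SIFS + ofdm_frame(14, 24e6)                # ACK at 24 Mbps
    return payload_bytes * 8 / t                    # bits per second
```

Under these assumptions, a 256-byte frame at 54 Mbps lands below 11 Mbps of effective throughput with protection on, and well above it with protection off.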

Additionally, there's the issue of airtime utilization. For equivalent data frames, an 802.11b device will consume more airtime than an 802.11n device, since time = data volume / speed. (Physics, especially high school physics, is a harsh mistress.) While some AP vendors do have mechanisms to enforce airtime fairness, these operate only on the AP side and thus are of limited effectiveness, especially when the majority of clients are 802.11b. Since most IoT devices will spend most of their airtime uploading data from whatever they are responsible for sensing, airtime fairness mechanisms on an AP are simply irrelevant.
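The arithmetic is straightforward: for the same 1500-byte frame, the 802.11b station at 11 Mbps consumes the lion's share of the airtime compared to a single-stream 802.11n station at 65 Mbps (payload time only, ignoring preambles and ACKs):

```python
def airtime(frame_bytes: int, rate_bps: float) -> float:
    # time = data volume / speed (payload only; preambles/ACKs ignored)
    return frame_bytes * 8 / rate_bps

t_b = airtime(1500, 11e6)    # 802.11b at its top rate
t_n = airtime(1500, 65e6)    # single-stream 802.11n, MCS 7
share_b = t_b / (t_b + t_n)  # fraction of airtime the 802.11b frame consumes
```

The 802.11b frame eats well over 80% of the combined airtime for the pair, which is why one slow client degrades the whole cell.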

One objection to the "802.11b only" scenario is that there are several 2.4 GHz-only 802.11n devices out there, especially for patient monitoring. Yes, this is true. However, such chipsets are backward compatible with 802.11b. Furthermore, many healthcare appliances and capital equipment (even newly manufactured) still run 802.11b anyway, as medical vendors (a) don't want to have to go back to the FDA for a change and (b) still want to use the cheapest possible network components, even in a $30k device. The simple reality is that any equipment manufacturer putting data capacity / latency sensitive applications on the 2.4 GHz band is doing themselves and their customers a disservice.

Another objection I've heard is that "it will all get fixed in 802.11ax", when, amongst other things, bi-directional MU-MIMO becomes a reality. Please read my prior blog post on why uni-directional MU-MIMO is likely to be ineffective in most environments; the argument readily extends to bi-directional MU-MIMO. Furthermore, 802.11ax won't be around until ~2019 at the earliest - the spec is still being negotiated, and I would be very surprised to see a finalized spec before the Wi-Fi industry gets a real understanding of how MU-MIMO works in actual practice vs. the theory on paper. I hope I have many, many networks to deploy between now and the emergence of the mythical 802.11ax. That said, even if I am totally wrong about MU-MIMO, and it turns out to be an invention comparable in societal impact to fire, the wheel, the automobile, the airplane, sliced bread, and central air conditioning, as the industry analysts and AP vendor marketing folks are all touting, there is still no financial or performance incentive for IoT devices to adopt any Wi-Fi technology above 802.11b.

So, to quote President Obama only slightly out of context, "...something that rhymes with 'bucket'..."   (Source)

Instead of fighting this trend, we network designers should embrace it. The surge of IoT devices is great for our business, as it will mean lots of networks to install, upgrade, and re-vamp. But we have to do it intelligently. If you have data-hogging and/or latency-sensitive applications, put them on 5 GHz 802.11ac and optimize the 5 GHz network for maximum performance. If you have IoT, put it on 2.4 GHz and let the 2.4 GHz network run 802.11b only; then we don't need to worry about protection mechanisms, or even throughput or real-time performance, so long as the 2.4 GHz AP channel and Tx power settings mitigate co-channel interference. (Yes, you can turn off some of the 2.4 GHz radios if you want...)

IPv4 Subnet Masks Made Simple

Subnet masking is one of those "black magic" topics of networking that you either get or you don't.  Unfortunately, most published explanations of subnet masking do a very poor job of explaining it to the lay person or to the beginning IT engineer. 

Accordingly, this blog post is an attempt to offer a simple explanation for people who need to understand this.  The contents of this blog primarily come from the EnGenius Certified System Engineer training course, written by Jason D. Hintersteiner, Copyright 2014-2015.

What is a subnet mask?

When configuring a static IPv4 address on a network device (e.g. PC, laptop, network switch, AP, etc.), you need to provide at least four parameters:

  • IP Address:  This is the unique 32 bit address of the network device
  • Subnet Mask:  This defines the scope (i.e. size) of the LAN
  • Default Gateway:  This defines the location of the router on the LAN which will route traffic intended for external networks (e.g. Internet)
  • DNS Server(s):  These define the IP addresses of domain name system (DNS) servers, which convert human-friendly (well, friendlier) server names (e.g. www.emperorwifi.com) into their corresponding IP addresses

The subnet mask, therefore, defines the size of the local area network (LAN), so that the device can determine whether other network devices it communicates with are on the internal LAN or on the external wide area network (WAN).

Why do we distinguish between LAN and WAN?

Quite simply, a network device needs to know how to address traffic.  Every network interface card (NIC), whether wired or wireless, has a MAC address.  This is a unique 48 bit address, expressed in 6 hexadecimal octets as in 88:dc:96:ab:cd:ef, that is burned into the hardware when the NIC is manufactured (though it can be spoofed in most operating systems).   When network devices intercommunicate, the first address used in that communication is the MAC address, not the IP address.  MAC addresses, though, do not leave the realm of a LAN.   If two devices are on the same LAN, they will communicate directly by specifying each other's MAC addresses.  Note that the gateway / router is not involved in this communication, and will probably never even see the traffic depending on how the network is structured.  When a device wants to communicate with another device not in the LAN, the device communicates to the MAC address of the default gateway, which in turn routes the traffic on to the WAN.

How is a subnet mask structured?

By definition, a subnet mask must be of equal length to the IP address, as it is used as a filter to express what portion of the IP address is "local" vs. "global". In IPv4, therefore, the subnet mask is a 32 bit number, usually expressed in dotted decimal notation. Similarly, in IPv6, the subnet mask is a 128 bit number, expressed in hex.

Unlike an IP address, however, a subnet mask only has a specific set of values.   To understand this, it makes more sense to look at the subnet mask in binary.   The subnet mask consists of a contiguous set of "1"s followed by a contiguous set of "0"s.    The "1"s indicate the bits in the IP address used for comparison - when that portion of the IP address matches between two hosts, the two hosts are on the same LAN.  If that portion of the address doesn't match, then they are on different networks and traffic must be sent through a router / gateway in order to communicate. 

As a shorthand, classless inter-domain routing (CIDR) notation is used with a "/" after the IP address.  The CIDR notation is actually more intuitive than the dotted decimal notation, as it indicates the number of "1"s in the subnet mask.   The following table shows the complete set of valid IPv4 subnet masks.

The larger the subnet, the lower the CIDR number, and the more hosts can share the same LAN. The total number of addresses on the LAN is 2^(32 - CIDR).

Practical Examples and Reserved Addresses

Most consumer and enterprise equipment comes with a (/24) subnet mask by default, and this is the one people are generally familiar with.

A network that is has 256 total addresses on the LAN [i.e. 2^(32-24) = 2^8 = 256], with an address range of -

A network that is has 16 total addresses on the LAN [i.e. 2^(32-28) = 2^4 = 16], with an address range of -

A network that is has 2048 total addresses on the LAN [i.e. 2^(32-21) = 2^11 = 2048], with an address range of -

In IPv4, there are also reserved addresses, as follows:
  • Network Address:   This identifies the subnet.  This is always the first address of the range. 
  • Broadcast Address:   This is the address that is used to communicate broadcast messages to all devices within a LAN.   (An example is DHCP responses from the DHCP server).  This is always the last address of the range.
The network address and the broadcast address must be defined together with the subnet mask in order to provide context. As an example, the three networks above (,, all have the same network address, namely, but different broadcast addresses (i.e.,, Similarly, two different networks can share the same broadcast address (e.g. is a valid broadcast address for as well as

One additional note: because of the network address and broadcast address, the number of LAN addresses that can be used for hosts is always 2 less than the number of addresses on the subnet. In practice it is 3 less, because a host address also has to be reserved for the gateway that routes traffic to the external network (e.g. the Internet). By convention and convenience, the gateway is assigned either the first or the last usable host IP address (e.g. or in our /24 example), though strictly speaking it can be any valid host address on the subnet.

Why Do We Care?

We care about subnet masking when architecting wired and wireless networks, to ensure that we can accommodate all of the hosts we expect to see on the network. As the number of devices per user continues to increase (who doesn't travel with a smartphone, tablet, e-reader, and laptop, and perhaps a game console and other network-enabled devices as well?), and as the IoT trend introduces a torrent of small devices on our person and in our environments to measure all sorts of things, the typical /24 subnet mask is no longer sufficient.

Take the example of a 100 room hotel that has a guest network. We generally want all of our guests on the same subnet, to facilitate their ability to roam wirelessly around the hotel (i.e. they can use a device in their room and stay connected when they go down to the hotel's pool or restaurant). If we use a (/24) subnet mask, we have 253 possible host addresses to accommodate guests. When guests average 1 device per user, 100 devices on the network falls well within our 253 possible hosts. At an average of 2 devices per user, 200 devices starts to come uncomfortably close to our 253 address limit. At an average of 3 devices per user (3-4 devices per guest is currently "common"), we've exceeded our address pool, meaning that 47 devices are not getting an address and not getting on the network, leading to complaints, aggravated guests, aggravated staff, and future lost business. The same scenario in a 200 room hotel reaches that 253 host limit a lot more quickly.

However, if we use a subnet that is a /22 (1021 usable addresses for guests) or /21 (2045 usable addresses for guests), we have increased the size of the local area network and thus the total address pool.
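Sizing the subnet can be done mechanically: pick the smallest prefix whose address pool covers the expected device count plus the three reserved addresses. A sketch (the function name is mine):

```python
import math

# Sketch: smallest prefix whose address pool covers the expected device
# count plus the network, broadcast, and gateway addresses.
def prefix_for_devices(n_devices: int) -> int:
    needed = n_devices + 3                       # hosts + reserved addresses
    return 32 - max(2, math.ceil(math.log2(needed)))
```

For the hotel example, 300 guest devices call for at least a /23, and 600 devices for at least a /22.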

Hopefully you found this explanation clear and straightforward.  Any questions?  Please post a comment.

Friday, June 19, 2015

The elephant in the room: Will MU-MIMO work at all?

There has been a rash of discussions regarding 802.11ac wave 2 with multi-user MIMO, and how it is going to "dramatically change the Wi-Fi industry". According to the analysts and marketeers, everybody needs to run out immediately and buy new enterprise switches that overdrive existing CAT5e up to 5 Gbps, and wave 2 APs will obviously displace wave 1 APs, so you'll be getting MU-MIMO whether you need it or not.

For the record, the overhead in the Wi-Fi protocol generally means that the throughput on the wire is, at best, 40% - 50% of the consumed airtime throughput, so you won't be hitting 1 Gbps on the wire for quite some time.   Also, it's been 6 years since 802.11n was introduced, and AP vendors are still managing to sell quite a number of 802.11n 2x2:2 APs; 802.11ac wave 1 APs won't be disappearing from shelves anytime soon.

These discussions have primarily ignored the elephant in the room.  MU-MIMO adds A LOT OF COMPLEXITY to Wi-Fi.  Given that, even in high-density environments, can MU-MIMO actually provide any practical benefit?

I've blogged before somewhat obliquely on this topic (see my blogs on how MU-MIMO works and whether MU-MIMO is appropriate for the SMB market). In low to medium density environments, where the airtime is not actually fully consumed, there is no need to make it more efficient by talking to multiple clients simultaneously. The benefits of MU-MIMO in such environments are, at best, de minimis.

But does MU-MIMO fare any better in high density environments, where we have a bunch (say >>50) of 802.11ac wave 2 compatible clients all trying to communicate at once?  In the past, Wi-Fi engineers would put multiple APs in the area, carefully controlling channel and power settings to minimize co-channel interference, perhaps using highly directional antennas so as to divide the space into the smallest cells possible.  But now, we apparently don't need to worry about such things; just put in a big honking wave 2 AP with a ludicrous number of antennas and we'll carry on multiple conversations simultaneously.  Easy, right?!   Not so much.

The race to push wave 2 product to the market means that not all of the technical complexities have been solved by the AP vendors, despite marketing claims. Since there are virtually no 802.11ac wave 2 clients to actually put these new APs through their paces, these vendors still have time to fix it in firmware.

One glaring example of this technology shortfall: how does an AP determine WHICH clients should be part of a simultaneous communication session?  Such an algorithm is NOT part of the 802.11ac spec, and thus is left to the individual vendors.  No vendor has published how they are doing this, and it is by no means an easy problem to solve.  It has been observed that for each MU-MIMO group, you need your clients spatially separated with respect to the location of the AP, and you need their transmissions to take roughly the same amount of time (though not necessarily the same volume of data or MCS rate).  Now add in QoS, because all client devices are equal, but some client devices are "more equal" than others.   In a high density / very high density environment, we have by definition a lot of clients to choose from, and these choices have to be made in near real-time (every few milliseconds) for processing that 5 Gbps torrent of data streaming in on the wired port(s).  This, of course, assumes that the AP is frequently winning contention, though the hundreds of client devices that are now able to associate to a single AP are also competing for the same air time, quite effectively cutting down the number of contention rounds that an AP can win in a given interval.  
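To make the difficulty concrete, here is a purely illustrative toy grouper; it is not any vendor's algorithm, and the client fields (bearing from sounding feedback, estimated airtime) are assumptions. It greedily packs clients that are angularly separated and have similar estimated airtimes. A real scheduler must redo this every few milliseconds, with QoS weights folded in:

```python
# Purely illustrative toy MU-MIMO grouper. Real vendor algorithms are
# unpublished; the client attributes below are assumed inputs.
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    angle_deg: float   # bearing from the AP (e.g. from sounding feedback)
    airtime_us: float  # estimated time to drain its queue

def greedy_groups(clients, max_group=4, min_sep=30.0, airtime_tol=0.25):
    # Longest queue first; add clients that are spatially separated from
    # every group member and whose airtime is within tolerance of the lead's.
    ungrouped = sorted(clients, key=lambda c: c.airtime_us, reverse=True)
    groups = []
    while ungrouped:
        lead = ungrouped.pop(0)
        group = [lead]
        for c in list(ungrouped):
            sep_ok = all(abs(c.angle_deg - m.angle_deg) >= min_sep for m in group)
            time_ok = abs(c.airtime_us - lead.airtime_us) <= airtime_tol * lead.airtime_us
            if sep_ok and time_ok and len(group) < max_group:
                group.append(c)
                ungrouped.remove(c)
        groups.append(group)
    return groups
```

Even this toy ignores angle wraparound, per-stream MCS, QoS classes, and client motion, which hints at how much harder the real-time, standards-unspecified problem is.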

While I personally lack the multiple mathematics Ph.D.s required, I can envision that such algorithms can be defined to take all of this information into account and make some rational and reasonably fair set of choices for selecting groups of clients for MU-MIMO.  Alas, however, we cannot just solve the problem once, or even once per client device connecting to / disconnecting from the AP.  Transmit beamforming updates the position of each client 40 times per second (i.e. every 25 ms).  While this loop of sounding PPDUs and feedback is (presumably) happening in parallel with the data and other overhead frames being exchanged with client devices, it does mean that the grouping of clients won't remain fixed, because client devices may be in motion; the determination of these groups must therefore be recalculated continuously.
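
The sounding overhead alone illustrates why this is hard.  A rough back-of-the-envelope calculation — where the 250 µs per feedback frame is purely my assumption; real compressed beamforming reports vary with channel width and stream count — shows how quickly the airtime disappears:

```python
def sounding_airtime_fraction(n_clients, sounds_per_sec=40, feedback_us=250):
    """Fraction of each second consumed by beamforming feedback alone,
    assuming every client is sounded 40 times per second and each
    compressed feedback frame occupies ~250 microseconds of airtime."""
    return n_clients * sounds_per_sec * feedback_us / 1e6

# With 50 clients, half of all airtime would go to feedback alone:
# sounding_airtime_fraction(50) -> 0.5
```

In practice an AP sounds a group with a single NDP and may throttle the feedback rate, so the real overhead is lower — but the tension between tracking clients accurately and leaving airtime for actual data is very real.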

Accordingly, I'm highly skeptical about the ability of such an algorithm to solve the problem "well", defined as a noticeable improvement in client capacity (# of clients per AP) and/or client throughput (i.e. download speed per client device) when compared to the "conventional approach" of using multiple APs with tightly controlled coverage areas.

For those of us who've suffered through the CWAP, we know that there are plenty of technologies cluttering up the 802.11n spec that increase complexity in the name of improved performance, yet never got implemented in practice.  Remember the multiple ways of doing transmit beamforming, including implicit and explicit with a bunch of calibration and antenna selection parameters? How about Space Time Block Coding (STBC)? DSSS/CCK mode? L-SIG TXOP Protection? Reverse Direction (RD) protocol? How about asynchronous MCS rates, where the different streams transmit at different MCS rates?  For that matter, what about A-MPDU?  All of these technologies looked interesting enough on paper to get themselves put in the 802.11n spec, yet few if any vendors offered these features commercially.

The main difference between MU-MIMO and the other deprecated performance technologies above is the huge marketing push, in both time and dollars, that's been spent promoting it.  This is undoubtedly because "talking to multiple clients at once" is undeniably sexy; the concept is really easy to explain, and the promised result of higher throughput per client and/or increased client count is readily apparent.

Just because it's sexy, however, doesn't mean it's going to work. 

Wednesday, June 17, 2015

Wi-Fi in Assisted Living: It’s all about Operations Infrastructure

As the baby boom generation continues to age, the demand for assisted living is going to continue to grow dramatically.  Tech-savvy seniors living in these facilities will expect at least the same quality of wireless Internet access that they had at home.  To be competitive and attract residents, assisted living facilities will need to keep up their Wi-Fi infrastructure.

That said, it’s hard to justify large capital expenditures on Wi-Fi, just so Millie can Skype with her grandkids.

In reality, however, resident Internet access is of only secondary or even tertiary importance.  The true power of Wi-Fi in assisted living facilities is the ability for the staff to access electronic patient records securely and to use an ever-growing array of wireless sensors to monitor residents, improving operational efficiency while maintaining the health and safety of the residents.  The Wi-Fi capital investment is actually one of infrastructure, as Wi-Fi is a critical component of the daily operations of the facility.

The emerging Internet of Things (IoT) is creating a network of wireless-enabled wearable and other sensing devices to collect data, report on abnormalities, and even make appropriate adjustments.   Some of these devices, like the Fitbit, Nest thermostat, and Apple Watch, are already on the market.   But while these may still be considered in the realm of high-tech toys, the real applications will be not just life-enhancing, but life-sustaining.  Consider the following representative applications for continuous, real-time measurements:

  • Wearable sensors: 
    • Vital signs, including blood pressure, pulse rate, and temperature
    • Orientation:  “I’ve fallen and I can’t get up” without pushing a button
    • Location:  Where are your residents within your facility?  Absolutely essential for residents suffering from Alzheimer’s or dementia
  • Room sensors and actuators:
    • Pressure sensors in the bed and other furniture:  When a resident gets up in the middle of the night to use the restroom, do they get back to bed in a reasonable amount of time?
    • Environmental sensors measuring and adjusting room temperature and humidity
    • Motion:  Has anyone been in to provide meals or check on the resident?
    • Lighting and Electrical Usage:  Is the resident keeping to their standard living pattern?  Can lights / televisions / etc. be turned off to save electricity?
    • Safety:  Heat, smoke, and carbon monoxide detection
  • Facility sensors and actuators:
    • Location:  Always know where drug carts, medical devices, and other facility assets are located
    • Security:  Motion sensors, access control, video surveillance
    • Safety:  Heat, smoke, and carbon monoxide detection
    • Lighting and Electrical Usage:  Are the commercial kitchen facilities wasting energy due to poor freezer seals, high-energy devices being left on, etc.?
    • Lighting and Electrical Control:   Turn off lights, televisions, and the like in unused common areas.

Figure 1:  Concept of the Wireless Internet of Things to monitor patients and assisted living residents.

To take full advantage of these applications, a properly designed and functioning Wi-Fi system is essential.   If your Wi-Fi doesn’t work properly, your critical applications won’t work effectively.   Infrastructure Wi-Fi™ from WIoT Solutions offers a holistic approach to making sure that the Wi-Fi you implement will meet all of your operational needs.

First and foremost, every Wi-Fi deployment needs to be tailored to the specific layout, building materials, and constraints of a facility.  This is required to ensure proper coverage on both the 2.4 GHz and 5 GHz bands by selecting the correct AP locations, channel, and transmit power settings.   In a facility serving tens or even hundreds of residents, such a system needs to be engineered and optimized for the space, requiring qualified and knowledgeable Wi-Fi engineers and installers.  The access points should also be centrally managed from either an on-site controller or a cloud-based application, so that proper functionality and usage statistics can be monitored and changes can be rolled out consistently and universally.  The figures below show a predictive model of Wi-Fi coverage in an assisted living facility for both the 2.4 GHz and 5 GHz bands, using EnGenius Neutron EWS360AP 802.11ac dual-band centrally-managed access points.  

Figure 2:  AP positions and expected coverage from predictive modeling (2.4 GHz).

Figure 3:  AP positions and expected coverage from predictive modeling (5 GHz).

Long gone are the days where you can simply go to your local electronics superstore, buy a box for $50, slap it on the wall, and call it usable Wi-Fi.   

Compared to the 5 GHz band, the 2.4 GHz band has lower throughput and is subject to more external sources of interference, though it also travels further and attenuates less rapidly.   This is also the band preferred by IoT devices, and will likely remain so for many years to come:  2.4 GHz Wi-Fi technology is older, and therefore cheaper, for IoT device manufacturers to incorporate.   Furthermore, the data requirements per device are minuscule:  most IoT devices only need to report small amounts of data every few minutes.   While a large number of IoT devices can start to add up to significant data consumption, an AP is more likely to reach its simultaneous connection limits before the trickle of IoT data becomes a torrential flood.
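
A quick back-of-the-envelope calculation — where the per-report size and reporting interval are my assumptions, typical of simple telemetry — shows why association limits, not throughput, are the binding constraint:

```python
def iot_offered_load_kbps(n_devices, bytes_per_report=200, interval_s=60):
    """Aggregate offered load from periodic IoT telemetry, in kbps."""
    return n_devices * bytes_per_report * 8 / interval_s / 1000

# 500 sensors each reporting 200 bytes per minute offer only ~13 kbps in
# total, yet 500 associations far exceeds what a single AP will accept.
```

Even scaling the report size or frequency up by an order of magnitude leaves the aggregate load well below what a single 802.11n/ac radio can carry.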

Of course, this assumes that the IoT devices are not competing for wireless resources with high data applications, such as video conferencing on Skype or accessing patient medical records on an employee’s tablet.  Fortunately, most data devices are already capable of operating on either the 2.4 GHz or 5 GHz bands.  Enterprise APs include band steering to encourage connections on 5 GHz, which utilizes wider channels and is subject to less external interference than the 2.4 GHz band. 

In terms of access, a typical assisted living facility is likely to have at least three SSIDs / VLANs per band:

  • Staff:   This is the network for facility-owned computing devices, such as tablets and laptops, for both resident management and facility operations.   To maintain HIPAA compliance, the network incorporates WPA2 Enterprise security, with a central RADIUS or Active Directory server managing the users and devices that can log onto this network.
  • Resident / Visitor:   This is the network for residents of the facility, their guests, and any personal devices brought in by staff members.   The SSID should be unencrypted, to allow for easy access to the network.   However, client devices should be fully isolated from one another, so that they can only get Internet access and not access to each other.  The use of a (well-designed) captive portal for guests, as well as bandwidth limitations per device to prevent abuse, is also of key importance.
  • Infrastructure:  This is the network for the array of IoT sensors and actuators used to monitor residents and measure / control the environment.  As much of the data will be patient-specific and medical in nature, HIPAA compliance also applies, necessitating the use of WPA2 Personal security to prevent unauthorized access.   Client device isolation is also required, with appropriate exceptions for the on-site servers or monitoring computers that need real-time access to the data.

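For the curious, here is a rough sketch of what that three-SSID separation might look like on a standalone Linux AP running hostapd; an enterprise deployment would configure the equivalent through its controller or cloud dashboard, and VLAN tagging, the captive portal, and per-device bandwidth limits would live upstream of the AP.  The interface names, bridge names, RADIUS address, and passphrase below are all placeholders.

```
# /etc/hostapd/hostapd.conf -- illustrative only; all values are placeholders
interface=wlan0
driver=nl80211
hw_mode=g
channel=1

# BSS 1: Staff -- WPA2 Enterprise, authenticated against a RADIUS server
ssid=Facility-Staff
bridge=br-staff
ieee8021x=1
wpa=2
wpa_key_mgmt=WPA-EAP
rsn_pairwise=CCMP
auth_server_addr=192.0.2.10
auth_server_port=1812
auth_server_shared_secret=CHANGEME

# BSS 2: Resident / Visitor -- open, clients isolated from one another
bss=wlan0_guest
ssid=Facility-Guest
bridge=br-guest
ap_isolate=1

# BSS 3: Infrastructure (IoT) -- WPA2 Personal, clients isolated
bss=wlan0_iot
ssid=Facility-IoT
bridge=br-iot
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=CHANGEME-long-random-passphrase
rsn_pairwise=CCMP
ap_isolate=1
```
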
There may also be additional VLANs / SSIDs required for other applications, such as video surveillance, access control, and / or Voice over IP.  

In addition to the Wi-Fi, an appropriate supporting Information Technology infrastructure is required, incorporating network security and firewalls, Active Directory / RADIUS servers, video surveillance, access control, Voice over IP, and cloud-based backup of mission critical data.  

I specialize in designing and troubleshooting these types of Wi-Fi deployments.  If you’d like more information, please respond to this blog, contact me on Twitter (@EmperorWiFi), or email me at jason@imperialnetsolutions.com.