Wednesday, September 21, 2016

Best Practices for Managed Network Switches

This blog post provides guidelines on best practices for selecting the right quantity and size of managed network switches for particular applications.

Application Guidelines:  Frequently Asked Questions

How do network switches work?

In the early days of networking, wired Ethernet network devices were interconnected through a device called a “hub”, where a frame arriving on one port was simply broadcast out to all other ports.  While relatively simple, this technology generated a lot of unnecessary traffic and consumed a lot of capacity on wired networks, as every message was sent to every device connected to the hub, regardless of its intended destination.

With network switches, each switch port creates a point-to-point link with the device it is connected to.  The network switch maintains a database of MAC addresses indicating which devices are connected on which ports.  When a frame enters the switch on a particular port, the switch examines the source MAC address and destination MAC address.  The source MAC address is used to update the database to indicate that the client is reachable on that port.  If the destination MAC address is already in the database as being connected to a different switch port, the frame is forwarded out only on the indicated port.  If the destination MAC address is not in the database, or if it is a broadcast message (e.g. a DHCP request), the frame is sent out all other ports, as is done in a hub.
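To make the learning and forwarding behavior concrete, here is a minimal Python sketch of the switch port database logic described above.  The port numbers and MAC addresses are hypothetical, and this is an illustration of the concept, not any vendor's actual implementation:

```python
# Minimal sketch of a switch's MAC learning / forwarding decision.
# Port numbers and MAC addresses are hypothetical examples.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class Switch:
    def __init__(self, ports):
        self.ports = ports       # e.g. [1, 2, 3, 4]
        self.mac_table = {}      # MAC address -> port number (the switch port database)

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: the source MAC is reachable on the ingress port.
        self.mac_table[src_mac] = in_port

        # Forward: a known unicast destination goes out one port; an unknown
        # destination or a broadcast is flooded out every other port (hub-like).
        if dst_mac != BROADCAST and dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

sw = Switch(ports=[1, 2, 3, 4])
print(sw.handle_frame(1, "aa:aa:aa:aa:aa:01", BROADCAST))            # flood: [2, 3, 4]
print(sw.handle_frame(2, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # known: [1]
```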

What are the differences between unmanaged, smart, and managed network switches?

An unmanaged switch maintains its MAC address database but is inaccessible via any interface (e.g. web, CLI, SNMP, etc.).  It is simply present on the network to forward frames to the appropriate ports.  Unmanaged switches are also incapable of handling any type of advanced Layer 2 features, including VLANs.

A managed switch has a full set of OSI Layer 2 features used for managing the wired traffic on a network.  It is addressable via an IP address and can generally be accessed via both a web interface (e.g. http or https) and a CLI (e.g. telnet or SSH).  Managed switches are capable of supporting a long list of industry-standard OSI Layer 2 features, including but not limited to the following:


  • VLANs
  • Viewable dynamic MAC address table (i.e. the switch port database)
  • Link Aggregation with Link Aggregation Control Protocol (LACP)
  • Spanning Tree Protocol (STP)
  • Access Control Lists (ACLs)
  • SNMP
  • Logging (local and remote)
  • Port mirroring
  • Cable and other diagnostics

A smart switch is a limited managed switch: it is typically less expensive than a fully managed switch, but supports only a subset of the features found on one.  Smart switches will typically only have a web interface and support a limited set of VLANs.  However, unlike managed switches, there is no industry standard for the term “smart switch”, and what constitutes a “smart switch” can vary widely both between vendors and between different switch models from the same vendor.

It is best practice to use managed switches on the LAN side of a network.  This ensures that the full set of OSI Layer 2 features is available, and it facilitates troubleshooting because network devices can be monitored and managed remotely.  Unmanaged switches should generally be avoided.

What are the differences between non-PoE, PoE, and PoE+?

A non-PoE switch is a switch that provides network connectivity only, and does not supply DC power to connected devices.   These switches are suitable when there are a large number of non-powered network devices on the network, such as PCs and laptops.  Such switches are commonly deployed in offices, as well as in hotels, student housing, assisted living, and other multi-dwelling unit (MDU) environments where there is a wired Ethernet wall jack in each unit.  

Power-over-Ethernet (PoE) switches provide both DC power and data connectivity over a single Ethernet cable.  These are extremely useful for connecting powered network devices to a network, as only one cable needs to be run to the device, as opposed to separate cables for data and for power.  Per the IEEE standards, a PoE switch first detects whether the connected device presents a valid powered-device signature, and only supplies power on ports where such a device is detected, so non-PoE equipment is not sent power.  When using managed Power-over-Ethernet switches, the connected device can also be rebooted remotely by turning the power on the Ethernet port off and on, which is very useful when doing network troubleshooting.

A PoE switch conforms to the IEEE 802.3af standard, which provides 48V up to 15.4 W per port.  PoE (802.3af) is sufficient for powering older generation access points (i.e. pre-802.11ac) and for most other powered network devices, such as IP cameras, VoIP phones, access control locks, etc.

A PoE+ switch conforms to the IEEE 802.3at standard, which provides 48V up to 30 W per port.  PoE+ (802.3at) is generally required for 802.11ac access points because of the larger number of radio chains used for MIMO and MU-MIMO.

It is best practice not to fully load a PoE (802.3af) or PoE+ (802.3at) switch, to ensure that the total power budget of the switch is not exceeded.  I generally recommend a “3/4 rule”, meaning that a network design should plan on using only ¾ of the ports for powered network devices, as follows, with the remaining ports reserved for non-powered network devices, backhaul to other infrastructure (e.g. other switches or routers), or spares (a simple sketch of this check appears after the list):

  • 8 port PoE/PoE+:   Only 6 ports should be used for powered network devices
  • 16 port PoE/PoE+:  Only 12 ports should be used for powered network devices
  • 24 port PoE/PoE+:   Only 18 ports should be used for powered network devices
  • 48 port PoE/PoE+:   Only 36 ports should be used for powered network devices
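As a concrete illustration of the 3/4 rule together with a total power budget check, here is a small Python sketch.  The switch power budget (185 W) and the per-device power draws below are hypothetical values, not specifications for any particular switch model:

```python
# Sketch of a PoE design check: the "3/4 rule" plus a total-power check.
# The power budget and per-device draws below are hypothetical examples.

def check_poe_design(total_ports, powered_devices, budget_watts, per_port_watts):
    """powered_devices: list of (name, expected draw in watts)."""
    max_powered_ports = (total_ports * 3) // 4          # the 3/4 rule
    total_draw = sum(watts for _, watts in powered_devices)

    ok = True
    if len(powered_devices) > max_powered_ports:
        print(f"Too many powered devices: {len(powered_devices)} > {max_powered_ports}")
        ok = False
    if any(watts > per_port_watts for _, watts in powered_devices):
        print(f"A device exceeds the per-port limit of {per_port_watts} W")
        ok = False
    if total_draw > budget_watts:
        print(f"Total draw {total_draw} W exceeds the switch budget of {budget_watts} W")
        ok = False
    return ok

# Hypothetical 24-port PoE+ switch with a 185 W budget (802.3at: 30 W per port).
devices = [("AP", 20.0)] * 6 + [("IP camera", 6.0)] * 10
print(check_poe_design(24, devices, budget_watts=185, per_port_watts=30))
```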

Most PoE and PoE+ switch models come with some non-PoE ports for backhaul, consisting of Ethernet ports, SFP ports (for mini-GBIC fiber modules), or both.  For a detailed explanation of SFP modules, please read the following blog:


How do you determine the number and size (port count) of switches needed for a project?

The telecom wiring in a property is divided into “verticals” and “horizontals”.  The distinction is actually not about the orientation of the cable run, but rather distinguishes backhaul cabling used to interconnect switches in different telecom closets (verticals) from cabling that connects a telecom closet to endpoints on the network, such as wall jacks in units, access points, cameras, etc. (horizontals).  This terminology comes from a high-rise building, where each floor has a telecom closet, stacked vertically floor-to-floor, with the endpoints on each floor connected horizontally to the telecom closet on that floor.

Every property must have one telecom closet which contains the Internet bandwidth circuit from the provider and the router for the network.  This closet is referred to as the main distribution frame (MDF).  For smaller properties, there may only be the one telecom closet, and all endpoint devices are “home run” from this closet to their desired locations.  (In such a scenario, there are no “verticals”, but there will be “horizontals”.)  As Ethernet wiring has a distance limitation of 100 meters / 328 feet, it is usually necessary and convenient for larger properties to establish additional telecom closets and connect the endpoints to these intermediate locations.  These additional telecom closets are referred to as intermediate distribution frames (IDFs).  In a high-rise building, IDFs are commonly stacked in the same location floor-to-floor and are located on every floor or every third floor.  Larger facilities may even have multiple IDFs on the same floor.  In multi-building environments, each building usually has its own telecom closet, which serves as that building's IDF.  The IDF(s) require some type of backhaul connection to the MDF, typically using Ethernet, fiber, or wireless point-to-(multi)point links.

Accordingly, the number and size of the network switches required ultimately depend on the number of MDF and IDF closets, as well as the number of “horizontals”, i.e. the number of powered and unpowered network devices connected into each telecom closet.
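To illustrate the arithmetic, here is a rough Python sketch that estimates how many 24-port PoE switches each telecom closet would need, applying the 3/4 rule for powered devices and reserving two uplink ports per switch.  The closet names and device counts are hypothetical:

```python
import math

# Sketch: estimate how many 24-port PoE switches each closet needs, applying
# the 3/4 rule for powered devices and reserving uplink ports for backhaul.
# Closet names and device counts are hypothetical examples.

def switches_needed(powered_devices, unpowered_devices, switch_ports=24, uplinks=2):
    usable_powered = (switch_ports * 3) // 4    # ports available for powered devices
    usable_total = switch_ports - uplinks       # ports left after the uplinks

    by_power = math.ceil(powered_devices / usable_powered) if powered_devices else 0
    by_total = math.ceil((powered_devices + unpowered_devices) / usable_total)
    return max(by_power, by_total, 1)

closets = {"MDF": (10, 8), "IDF-1": (30, 12), "IDF-2": (16, 40)}
for name, (powered, unpowered) in closets.items():
    print(name, switches_needed(powered, unpowered))
```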


What is link aggregation?

Link aggregation is a feature available in managed switches that allows multiple physical ports to act as a single virtual port with the aggregated capacity of all of the physical ports.  It is commonly deployed for backhaul between the MDF and IDF(s) in networks requiring very high local data capacity, such as networks using storage area networks (SANs) or networks with several surveillance IP cameras streaming data to a network video recorder (NVR) in the MDF.  An example application is shown in Figure 1.  An aggregated link can also provide redundancy (at reduced capacity) should one of the connections be broken.

Figure 1:  Example of Link Aggregation to connect a switch to a storage area network (SAN).

As an example, on EnGenius switches, a link aggregation group (LAG) can be established under L2 Features --> Link Aggregation --> Port Trunking, as shown in Figure 2.  Up to eight link aggregation groups can be defined on a particular switch; these are referred to as “trunk groups” and are numbered t1 – t8.  A physical port can be a member of only one trunk group.  The ports that make up a group need not be sequential, though it is often convenient to use sequential ports from a wiring perspective.  There is also no limit on how many physical ports can be aggregated into a single group, short of physically running out of ports on the switch.

There are two modes for establishing a trunk group.  In “static” mode, the ports are always considered part of the trunk group, and the switch will always load balance outbound traffic on the trunk group across all of the physical ports.  In “LACP” mode, the switch uses Link Aggregation Control Protocol (LACP) to periodically verify that each physical link is established end-to-end, so LACP must be running on both sides of the link (i.e. on both switches connected via the aggregated link).  It is best practice to use LACP mode to establish an aggregated link between two switches.
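For illustration, here is a simplified Python sketch of how a switch might choose which physical member of a trunk group carries a given frame: it hashes the source and destination MAC addresses and takes the result modulo the number of member ports, so frames belonging to the same conversation always take the same link.  The actual header fields and hash function vary by vendor and are often configurable, so this is only a conceptual model:

```python
import zlib

# Sketch of member-port selection within a trunk group: hash the MAC pair so
# that frames of the same conversation always use the same physical link
# (which preserves per-flow frame ordering). Ports and MACs are hypothetical.

def lag_member_port(src_mac, dst_mac, member_ports):
    key = zlib.crc32(f"{src_mac.lower()}|{dst_mac.lower()}".encode())
    return member_ports[key % len(member_ports)]

trunk_t1 = [25, 26, 27, 28]   # hypothetical 4-port trunk group
print(lag_member_port("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:99", trunk_t1))
```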


Figure 2:  Setting up a link aggregation group on an EnGenius managed switch.

What is spanning tree protocol (STP)?

As mentioned above, a switch maintains a database of MAC addresses and ports.  When there is a wiring loop in the network, there are multiple physical paths between switches and endpoint devices, meaning that a switch will see the same MAC address on multiple ports.  This generally causes a broadcast storm, where the same frame loops through the network and is re-flooded repeatedly, eventually exhausting the switch's memory and causing it to slow down significantly or simply crash.

Physical wiring loops in the network can occur accidentally during a service operation.  Wiring loops are also often desirable from a redundancy standpoint, to ensure that the loss of a single cable or switch in the network does not take down everything downstream.

Spanning tree protocol (STP) is a feature in managed switches that is designed to detect network loops and block redundant paths.  The simple explanation of the protocol is that it calculates a cost for each path through the network and then allows only the least-cost path to operate, blocking the higher-cost paths.  Should the least-cost path fail (e.g. a physical cable gets disconnected), the algorithm automatically fails over to the next least-cost path.  This is shown in Figure 3.

The STP algorithm allows a priority number to be set for each switch in the network (and even for each port on a switch), where a smaller priority number makes that switch more preferred, so that a desired path can be established.  One switch is elected the “root bridge”: the switch with the lowest bridge ID, which is the priority followed by the MAC address.  In most network topologies, the root bridge should be the switch in the MDF connected directly to the LAN port of the router.  If the priorities are not specified (i.e. all switches are left at their default priority value of 32768), the STP algorithm will automatically designate the switch on the network with the smallest numerical MAC address as the root bridge.
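As a simple illustration of the election, here is a Python sketch that picks the root bridge as the switch with the lowest bridge ID, comparing priority first and using the MAC address as the tie-breaker.  The priorities and MAC addresses below are hypothetical:

```python
# Sketch of root-bridge election: the bridge with the lowest bridge ID wins,
# where the bridge ID is the priority followed by the MAC address.
# Priorities and MAC addresses below are hypothetical.

def elect_root(bridges):
    """bridges: list of (priority, mac_string); returns the winning bridge."""
    def bridge_id(bridge):
        priority, mac = bridge
        return (priority, int(mac.replace(":", ""), 16))
    return min(bridges, key=bridge_id)

switches = [
    (32768, "00:aa:bb:00:00:10"),   # default priority
    (32768, "00:aa:bb:00:00:02"),   # default priority, lowest MAC
    (4096,  "00:aa:bb:00:00:99"),   # core switch in the MDF, priority lowered
]
print(elect_root(switches))   # -> (4096, '00:aa:bb:00:00:99')
```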



Figure 3:  Generic image of spanning tree blocking a loop in the network.

It is best practice to have spanning tree enabled on all networks.  Most managed switches have rapid spanning tree protocol enabled by default.  The priority of each switch can be set manually under L2 Features --> STP --> CIST Instance Settings, as shown in Figure 4.  It is generally only necessary to change the default on the core switch in the MDF.  In networks consisting of several switches in a complex tree topology, it can be desirable to lower the priorities on some of the intermediate core switches.


Figure 4:  Setting up spanning tree protocol (STP) on an EnGenius managed switch.
