Ubiquitous connectivity, to anything, anywhere, and at any time. That is the promise of the Internet of Things (IoT), and we’re all aware that it will enable tremendous changes to the way people live. Outside plant (OSP) is usually left out of the conversation on this topic. However, OSP plays a crucial role in providing the very connectivity that IoT needs to be realized. Without OSP, there is no IoT.
Providing that ubiquity is a constant battle for wide area network operators, and OSP is the front line. Network disruptions come in the form of squirrels, copper thieves, dump trucks driving down the highway with their beds extended (seriously), storm damage, power outages, and more. Each and every one of these events results in a network impact. The mission of OSP leaders is to take action to prevent damage and, most importantly, keep any network impact from becoming a disruption to the customer. That means we need to harden the network.
Hardening the Network
Network hardening is something that every Information and Communications Technology leader should be continually focused on. Because today’s services and customer expectations are no longer forgiving of disruptions, the answer “A technician is working on your problem” has often become unacceptable. The bar is being raised to the point where no network disruptions are expected. Connectivity needs to be always on, and there are many ways technology leaders can ensure this is the case.
The best option available is logical protection. If physical diversity is available, especially on backbone or critical services, protecting those services through the network is the best way to ensure no single event will cause an outage, even if it’s a copper-thieving squirrel driving a dump truck down the highway. While there are many ways to go about protection, from MPLS and ERPS to statically routed diverse connections, the first step to enabling any of them is to already have multiple physical paths in the OSP.
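The prerequisite above, that "diverse" paths truly share no physical infrastructure, can be checked mechanically. The sketch below (segment IDs and the function name are hypothetical, assuming routes are recorded as lists of conduit or structure identifiers) flags any shared segment, which would be a single point of failure that defeats MPLS- or ERPS-style protection:

```python
def paths_are_diverse(path_a, path_b):
    """Return (is_diverse, shared_segments) for two candidate routes.

    Each path is a list of physical segment IDs (e.g. duct banks,
    bridge crossings, pole lines). Any segment appearing in both
    routes means one dig-up or pole hit can take down both paths.
    """
    shared = sorted(set(path_a) & set(path_b))
    return len(shared) == 0, shared

# Hypothetical segment IDs for two routes between the same endpoints.
primary = ["duct-101", "duct-102", "bridge-7", "duct-203"]
backup = ["duct-150", "bridge-7", "duct-250"]

diverse, shared = paths_are_diverse(primary, backup)
# Both routes cross "bridge-7": logically protected, physically not.
```

In practice the segment inventory would come from OSP records or a GIS system; the point is that logical protection is only as good as the physical diversity underneath it.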
Having children, I have had to employ the “out of sight, out of mind” approach with things at times. The same applies to network infrastructure. If vandals, dump trucks, or squirrels can’t see it, then they can’t damage it. Said differently, one of the best ways to deter several forms of cable damage is to put the cable underground. Where it makes financial sense, direct-buried cables are some of the best protected against rodents, storm damage, vandals, and other dangers that above-ground plant is exposed to.
Since buried construction can be somewhat cost-prohibitive in areas where aerial plant is possible, using armored fiber (copper cables are already armored) for above-ground plant can keep rodents at bay, especially if the cable route is through a wooded area. Although there are various rodent deterrents available, I have found deploying armored fiber for new construction or damage replacements to be a more effective solution, both practically and financially. I would even advocate using armored fiber in urban areas through manhole and conduit systems where possible. Mice can find their way into those ducts during winter and chew the cable for nesting material. Having that armored cable jacket is another layer of defense against those dreaded outages and customer disruptions.
Sealing duct openings and properly installing U-guards on risers are also critical components of preventing fiber damage. These mundane, often overlooked items are simple ways to keep mice out of cables each time the mercury drops. The cost saved by skipping inspections for these items can be quickly lost to outage credits on SLA-backed services.
Beyond the choice of construction method, material selection is another important aspect of hardening the network. Above and beyond the armored vs. non-armored discussion, cable sizing, if used smartly, can also provide additional defense. For example, in an area where 144-fiber might be called for, upsizing to 192-fiber allows for a central core of fibers that can contain the most critical services, leaving the outermost tubes to handle non-backbone services. If and when something starts to impact the integrity of the cable, these outermost fiber tubes will show a problem condition first, allowing for more response time before the most critical services are affected.
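The upsizing arithmetic in that example works out neatly if we assume a common loose-tube layout of 12 fibers per buffer tube (actual tube counts and geometry vary by cable design):

```python
# Assumed loose-tube layout: 12 fibers per buffer tube.
FIBERS_PER_TUBE = 12

required_fibers = 144   # what the build actually calls for
deployed_fibers = 192   # upsized cable

spare_fibers = deployed_fibers - required_fibers
sacrificial_tubes = spare_fibers // FIBERS_PER_TUBE
# 48 spare fibers fill 4 outer tubes, which can carry non-backbone
# services and act as an early-warning layer: damage working its way
# into the cable faults there before reaching the inner, critical tubes.
```

The marginal cost of the larger cable buys both spare capacity and reaction time, which is often an easier business case than either benefit alone.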
Power is another key factor to preventing widespread outages. Anyone who has seen a critical network node alarm a “battery discharge” condition knows the sense of panic that ensues to get the problem fixed and avoid disaster. Having a robust network power plant is just as important as protecting the physical layer from a falling tree in a storm.
Even though Tier 3 data-center-grade power redundancy is not a realistic approach for most, having backup power in place is still essential. Points-of-presence (POPs) should have batteries or flywheels capable of holding the power load at least long enough for an on-site, permanent generator to start and come up to operating speed. For an unmanned POP carrying the most critical services, however, holding just long enough for a generator to spool up is not going far enough. Generators can fail to start and power-transfer switches can malfunction, so the on-site power reserve should be capable of carrying the load long enough for an on-call technician to get to the site and troubleshoot a failure, at least one hour.
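That one-hour hold time translates directly into battery plant sizing. A rough back-of-the-envelope sketch, with illustrative numbers only (the -48 V bus and 80% usable-capacity derating are assumptions, not engineering guidance):

```python
def required_battery_ah(load_watts, bus_voltage=48.0, hold_hours=1.0,
                        usable_fraction=0.8):
    """Rough amp-hour capacity needed to carry a DC load for hold_hours.

    usable_fraction derates for depth-of-discharge limits and aging.
    Figures are illustrative; real sizing follows battery-vendor and
    industry practice, not this simplification.
    """
    load_amps = load_watts / bus_voltage
    return load_amps * hold_hours / usable_fraction

# A hypothetical 2.4 kW POP on a -48 V plant with a one-hour hold:
ah = required_battery_ah(2400)  # 2400 W / 48 V = 50 A; 50 / 0.8 = 62.5 Ah
```

Running the same arithmetic against the actual equipment load is also a quick sanity check on whether the strings already installed can still meet the hold-time target as load grows.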
Having spent well over a decade in this industry, I know full well the condition of many of the batteries out there, with some locations even having the batteries removed and power coming directly off the AC rectifiers. With continually increasing competition and price pressure, battery and generator maintenance has often been left on the cutting room floor. Batteries that are cracked, leaking, or just cannot handle the equipment load for a realistic amount of time are not only dangerous, but are of no use when the power goes out.
Likewise, generators need regular maintenance to verify proper running condition and to check whether the diesel fuel needs replacement. The petroleum industry recommends that diesel fuel be stored no longer than 12 months, so generators with diesel tanks that have been sitting for 5, 10, or even more years need to be looked at. The network is tremendously exposed when you ignore needed maintenance on the backup power infrastructure.
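A fuel-age check like the one described is trivial to fold into a maintenance-tracking script. A minimal sketch, assuming fill dates are recorded somewhere queryable (the function name and dates are hypothetical):

```python
from datetime import date

MAX_DIESEL_STORAGE_MONTHS = 12  # per the storage guidance cited above


def fuel_overdue(fill_date, today=None):
    """True if stored diesel has exceeded the 12-month guideline."""
    today = today or date.today()
    months = (today.year - fill_date.year) * 12 + (today.month - fill_date.month)
    return months > MAX_DIESEL_STORAGE_MONTHS


# A tank last filled five years ago is badly overdue for replacement.
overdue = fuel_overdue(date(2015, 6, 1), today=date(2020, 6, 1))
```

Automating even simple checks like this keeps "invisible" maintenance items from being silently deferred year after year.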
In addition, a failure of the HVAC system can lead to a serious condition. While this is not often a concern during the winter months, summer heat can make a failed HVAC system and the resulting high-temperature alarms as worrisome as a generator failure. It is important to ensure that the thermal capacity of the POP allows enough time for a technician to travel to the site (if it is not manned) and begin ventilating to prevent overheating. In the most critical locations carrying the most sensitive services, backup HVAC units can be a smart addition to eliminate the crisis caused by a simple compressor failure.
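The ride-through time after an HVAC failure can be estimated with a lumped-mass model. This is deliberately crude (it ignores heat leaking through walls, so it understates the real time) and every number below is illustrative:

```python
def ride_through_minutes(heat_load_kw, thermal_mass_kj_per_c,
                         start_temp_c=25.0, alarm_temp_c=40.0):
    """Rough minutes until a sealed POP warms from start to alarm temp.

    Treats the room air plus equipment as one lumped thermal mass
    and assumes all equipment heat goes into warming it. Real sites
    need measurement, not this simplification.
    """
    delta_t = alarm_temp_c - start_temp_c
    seconds = (thermal_mass_kj_per_c * delta_t) / heat_load_kw
    return seconds / 60.0

# Hypothetical: 3 kW of equipment heat, 1800 kJ/°C of thermal mass,
# a 15 °C margin before the high-temperature alarm threshold.
minutes = ride_through_minutes(3.0, 1800.0)
```

If the estimate comes out shorter than a technician's worst-case travel time to the site, that is the signal to add thermal mass, automated ventilation, or the backup HVAC unit mentioned above.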
The bottom line is about having backup. Just as a skydiver wouldn’t jump without a reserve chute, critical network elements should have a backup where practical. However, even with the best planning, the best maintenance, and the best equipment available, the reality is that we will still have network disruptions. With the right preparation, we can avoid many of them becoming customer-impacting. But Murphy’s Law dictates that many will still lead to out-of-service situations. When this happens, the speed of outage response becomes paramount.
Laying out a solid outage response plan, with appropriate materials, crews, and equipment staged throughout the network, is the best way to ensure that when things do go out, the customer disruption is minimized. After all, a fast response matters to both you and your customers.
From OSP Magazine, by: Brian Riley