Data growth. The world revolves around digital data. We now rely on data to conduct business, engage in social activities and manage our lives. There is no sign of slowing in the production of, and demand for, more data – as well as faster access to it. According to the 2014 IDC Digital Universe Study sponsored by EMC [1]: “Like the physical universe, the digital universe is large – by 2020 containing nearly as many digital bits as there are stars in the universe. It is doubling in size every two years, and by 2020 the digital universe – the data we create and copy annually – will reach 44 zettabytes, or 44 trillion gigabytes.”

Cloud. Among several other factors, the increase in cloud storage will drive the need for data throughput. “In 2013, less than 20% of the data in the digital universe [was] ‘touched’ by the cloud, either stored, perhaps temporarily, or processed in some way. By 2020, that percentage will double to 40% [1].”

The Internet of Things. Another factor contributing to the exponential growth of information is the advent of the Internet of Things (IoT). “Fed by sensors soon to number in the trillions, working with intelligent systems in the billions, and involving millions of applications, the Internet of Things will drive new consumer and business behavior that will demand increasingly intelligent industry solutions… [1].”

This exponential growth in information means processing speeds have to increase as well, so as not to slow access to data. High-performance cabling that can transfer data over 40/100G Ethernet will be a necessary addition to data centers looking to keep up with this digital data growth.
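To put the study's "doubling every two years" claim in perspective, a short sketch can project data volume forward or backward from the 44 ZB 2020 figure. (The function name and the pure-doubling assumption are illustrative, not from the IDC study itself.)

```python
def projected_size_zb(base_zb, base_year, target_year, doubling_years=2):
    """Project data volume assuming it doubles every `doubling_years` years."""
    return base_zb * 2 ** ((target_year - base_year) / doubling_years)

# Under pure doubling, 44 ZB in 2020 implies roughly 88 ZB by 2022:
print(projected_size_zb(44, 2020, 2022))  # 88.0
```

Even a rough model like this makes the scale of the problem concrete: at a two-year doubling rate, capacity planned for today's traffic is undersized within a single hardware refresh cycle.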

Virtualization: the double-edged sword. Virtualization can help data centers save on capital expenses, improve operational efficiency and create more agile infrastructures.

There are many types of virtualization, from desktop to storage to server virtualization. Server virtualization, in particular, calls for fewer, more efficient servers, which translates to fewer server connections. Because there are fewer connections, it is important that these connections work properly. Unfortunately, most data centers do not contain cabling infrastructure designed to meet the high-performance demands of virtualization. This is particularly true for data centers built in the 1980s, before high-performance cabling even existed.

Decreasing tolerance for downtime. When data transactions are interrupted due to network downtime, it translates to a very real loss of revenue. When Amazon.com went down in August 2013, the company lost $66,240 per minute [2]. Considering how quickly lost revenue can add up, it makes sense that there is an extremely low tolerance for network downtime.

The effect of downtime on revenue is even greater when considering end-user experience. According to one source, network downtime measured for user experience and business needs costs an average of $5,600 per minute [3]! Network administrators should have a contingency plan in place in the event of network failure. However, one of the most effective ways to mitigate this issue is to make sure the existing network is able to meet the demands of increasing data throughput, including upgrading networks to be capable of handling 40/100G speeds.
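The arithmetic behind these figures is worth making explicit: a modest per-minute cost compounds quickly over a realistic outage window. The sketch below uses the article's $5,600-per-minute average; the function name and the one-hour outage scenario are illustrative assumptions.

```python
def downtime_cost(minutes, cost_per_minute=5600):
    """Estimated revenue lost for an outage of the given length, in dollars."""
    return minutes * cost_per_minute

# A single one-hour outage at the cited average rate:
print(downtime_cost(60))  # 336000
```

At that rate, one hour of downtime costs more than a third of a million dollars, which is why even a significant up-front cabling investment can be easy to justify.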

Managing capital expenses. While migrating to 40/100G Ethernet creates an up-front capital expense, it saves data centers money in the long run by future-proofing infrastructure. Not only will data centers be prepared for the increasing demands on data throughput, but the high-performance cabling infrastructure required for 40/100G Ethernet can grow with future hardware upgrades. This will reduce the need to tear out and replace cabling with each upgrade.
