Networks

Viewing the Network as an Ecosystem

Many of us have operated, or currently operate, in a stovepiped or siloed IT environment. For some, this is simply a fact of professional life. But regardless of how the organization is structured, a broad and complete understanding of the environment makes for a smoother and more efficient system overall. As the separation of duties continues to blur in the IT world, it is increasingly important for systems and network professionals to shift how we view both the individual components and the overall ecosystem. As these tidal shifts occur, Linux is appearing in the switching and routing infrastructure, servers are consuming BGP feeds and making intelligent routing choices, and orchestration workflows are automating the network and the services it provides. All of these things are slowly creeping into more enterprises, more data centers, and more service providers. What does this mean for the average IT engineer? It means that we, as professionals, need to keep abreast of workflows and IT environments as a holistic system, rather than as a set of distinct silos or disciplines.

This mentality is especially important in the monitoring aspects of any IT organization, and it is a good habit to build even before these shifts occur. Understanding the large-scale behavior of IT in your environment allows engineers and practitioners to accomplish significantly more with less, and that is a win for everyone. Understanding how your servers interact with the DNS infrastructure, the switching fabric, the back-end storage, and the management mechanisms (i.e., handcrafted curation of configurations or automation) naturally leads to a faster mean time to repair, because it reflects a deeper understanding of the IT organization as a whole, rather than of a single piece or service within it.

One might think, “I don’t need to worry about Linux on my switches and routing on my servers,” and that may be true. However, expanding the knowledge domain from a single box to a large container filled with boxes allows a person to understand not just the attributes of their own box, but the characteristics of all of the boxes together. For example, understanding that a new application will make a DNS query for every single packet it sees, where past applications did local caching, can dramatically decrease the downtime that occurs when the underlying systems hosting DNS become overloaded and slow to respond. The same can be said for moving to cloud services: having a clear baseline of link traffic, both internal and external, will make it obvious when a new cloud application requires more bandwidth and perhaps less storage.
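The traffic-baselining idea above can be sketched in a few lines. The following is a minimal, illustrative example, not a production tool: it assumes you already have periodic interface byte-counter samples (as you might poll via SNMP ifHCInOctets), converts them to rates, and flags intervals that deviate sharply from the running baseline. The counter values shown are invented for illustration.

```python
# Minimal sketch: derive a link-utilization baseline from periodic
# interface byte-counter samples, then flag intervals that deviate
# sharply from that baseline. Sample values are illustrative only.

def rates_bps(samples, interval_s):
    """Convert cumulative byte-counter samples into bits-per-second rates."""
    return [(b - a) * 8 / interval_s for a, b in zip(samples, samples[1:])]

def flag_anomalies(rates, threshold=2.0):
    """Return the rates that exceed `threshold` times the mean rate."""
    baseline = sum(rates) / len(rates)
    return [r for r in rates if r > threshold * baseline]

# Five-minute polls of a byte counter (hypothetical values): steady
# traffic, then a large jump after a new cloud application is deployed.
counters = [0, 150_000_000, 300_000_000, 450_000_000, 2_500_000_000]
rates = rates_bps(counters, interval_s=300)
print(flag_anomalies(rates))  # only the final interval stands out
```

A real deployment would also handle counter wrap and rollover, and would compare against a longer historical baseline (per hour of day, per day of week) rather than a single mean, but the principle is the same: you can only recognize “the new application needs more bandwidth” if you measured the link before it arrived.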

Fear not! This is not a cry to become a developer or a SysAdmin. It’s not a declaration that there is a hole in the boat, or a dramatic statement that “IT as we know it is over!” Instead, it is a suggestion to look at your IT environment in a new light: see it as a functioning system rather than a set of disjointed bits of hardware with different uses and diverse managing entities (i.e., silos). The network is the circulatory system, and the servers and services are the intelligence. The storage is the memory, and the security is the skin and immune system. Can they stand alone on technical merit? Not really. When they work in concert, is the world a happier place to be? Absolutely. Understand the interactions. Embrace the collaborations. Over time, the overall reliability will be far, far higher.

Now, while some of these correlations may seem self-evident, piecing them together and, more importantly, tracking them for trends and patterns can dramatically increase the number of well-informed, fact-based decisions, and that makes for a better IT environment.


Nick has been involved in the networking industry in various roles since 1997 and currently works on the network planning and architecture team for a major international research network. Prior to his current role, Nick was employed by the University of Illinois as the Lead Network Engineer working on research and HPC, campus, and wide-area connectivity. In this role, Nick also served as the lead network engineer and IP architect for UC2B, the National Association of Telecommunications Officers and Advisors (NATOA) broadband project of the year. Nick has also held network engineering positions at early regional broadband internet providers as well as at the National Center for Supercomputing Applications. Nick has participated in the SCinet working group on many occasions and has been involved in R&E, high-performance networking, and security for the last 15 years. In addition to network engineering positions, Nick has been involved in cybersecurity from the campus, enterprise, and service provider perspectives, as well as with the Federal Bureau of Investigation, and has been involved in numerous software-defined networking projects over the last 8 years.