Viewing the Network as an Ecosystem
July 20, 2018
Network
Many of us have operated, or currently operate, in a stovepipe or silo IT environment. For some, this may simply be a fact of professional life. But regardless of how the organization is structured, a wide and full understanding of the environment makes for a smoother and more efficient system overall. As the separation of duties continues to blur in the IT world, it is becoming increasingly important to shift how we as systems and network professionals view the individual components and the overall ecosystem. As these shifts occur, Linux appears in the switching and routing infrastructure, servers consume BGP feeds and make intelligent routing choices, and orchestration workflows automate the network and the services it provides. All of these things are slowly creeping into more enterprises, more data centers, and more service providers. What does this mean for the average IT engineer? It means that we, as professionals, need to keep abreast of workflows and IT environments as a holistic system rather than as a set of distinct silos or disciplines.
This mentality is especially important when monitoring any IT organization, and it is a good habit to build even before these shifts occur. Understanding the large-scale behavior of IT in your environment allows engineers and practitioners to accomplish significantly more with less, and that is a win for everyone. Understanding how your servers interact with the DNS infrastructure, the switching fabric, the back-end storage, and the management mechanisms (i.e., hand-curated configurations or automation) naturally lends itself to a faster mean time to repair, because you understand the IT organization as a whole rather than just one piece or service within it.
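As one small, concrete example of that server-to-DNS interaction, here is a minimal sketch of a resolution-latency probe that could be run from the servers themselves and trended over time. The hostnames and the "slow" threshold are illustrative placeholders, not values from any particular environment:

```python
#!/usr/bin/env python3
"""Tiny probe of DNS resolution latency, the kind of interaction worth
trending alongside server and application metrics."""

import socket
import time

NAMES = ["example.com", "example.org"]   # placeholder names to resolve
SLOW_THRESHOLD = 0.5                     # seconds; tune to your environment

for name in NAMES:
    start = time.monotonic()
    try:
        socket.getaddrinfo(name, None)   # resolve via the system resolver
        elapsed = time.monotonic() - start
        flag = "SLOW" if elapsed > SLOW_THRESHOLD else "ok"
        print(f"{name}: {elapsed * 1000:.1f} ms ({flag})")
    except socket.gaierror as err:
        print(f"{name}: resolution failed ({err})")
```

Run periodically and graphed, even something this simple makes a slow or overloaded resolver visible long before it becomes an outage.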
One might think, "I don't need to worry about Linux on my switches or routing on my servers," and that may be true. However, expanding the knowledge domain from a single box to a large container filled with boxes allows a person to understand not just the attributes of their own box, but the characteristics of all of the boxes together. For example, knowing that a new application makes a DNS query for every single packet it sees, where past applications cached locally, can dramatically reduce the downtime when the underlying systems hosting DNS become overloaded and slow to respond. The same can be said for moving to cloud services: having a clear baseline of link traffic, both internal and external, makes it obvious when the new cloud application requires more bandwidth and perhaps less storage.
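To make the link-traffic baseline idea concrete, here is a rough sketch of sampling interface byte counters on a Linux host. The interface name and sample interval are assumptions; in practice the readings would feed whatever monitoring or time-series system is already in place:

```python
#!/usr/bin/env python3
"""Rough link-traffic baseline: sample a Linux interface's byte counters
from /proc/net/dev and print throughput per sample window."""

import time

IFACE = "eth0"      # placeholder interface name; adjust for your host
INTERVAL = 10       # seconds between samples

def read_bytes(iface):
    """Return (rx_bytes, tx_bytes) for iface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError(f"interface {iface} not found")

rx0, tx0 = read_bytes(IFACE)
while True:
    time.sleep(INTERVAL)
    rx1, tx1 = read_bytes(IFACE)
    # Convert byte deltas to megabits per second over the sample window.
    rx_mbps = (rx1 - rx0) * 8 / INTERVAL / 1_000_000
    tx_mbps = (tx1 - tx0) * 8 / INTERVAL / 1_000_000
    print(f"{time.strftime('%H:%M:%S')}  rx {rx_mbps:.2f} Mb/s  tx {tx_mbps:.2f} Mb/s")
    rx0, tx0 = rx1, tx1
```

A few weeks of numbers like these, collected before a migration, is exactly the baseline that makes the "why is the WAN link suddenly full?" conversation short.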
Fear not! This is not a cry to become a developer or a SysAdmin. It's not a declaration that there is a hole in the boat or a dramatic statement that "IT as we know it is over!" Instead, it is a suggestion to look at your IT environment in a new light. See it as a functioning system rather than a set of disjointed bits of hardware with different uses and diverse managing entities (i.e. silos). The network is the circulatory system, and the servers and services are the intelligence. The storage is the memory, and the security is the skin and immune system. Can they stand alone on technical merit? Not really. When they work in concert, is the world a happier place to be? Absolutely. Understand the interactions. Embrace the collaborations. Over time, when this happens, overall reliability will be far, far higher.
Now, while some of these correlations may seem self-evident, piecing them together and, more importantly, tracking them for trends and patterns can dramatically increase the number of well-informed, fact-based decisions, and that makes for a better IT environment.