New technologies inevitably create new business opportunities, but they also create new problems for IT pros to solve. It’s a natural cycle, and one operations teams have become reasonably comfortable with: we look at a new platform/protocol/architecture/interface/tool, search a bit on Google, and generally find the manageability challenges fall somewhere under a normal curve, or at least within the guardrails. But this new wave of change—cloud, hybrid IT, containers, microservices, CI/CD, DevOps, SDX, {insert buzzword-y thing here}—drags unfamiliar requirements along with it. We’re not required to adopt all of them wholesale and at once to keep our systems running, but one is increasingly impossible to avoid: distributed application components. And there’s always heartburn when we’re accountable for infrastructure our tools can’t see.
One of the core tenets of DevOps culture is ensuring feedback loops whose robustness is directly proportional to how primitive the underlying services are. I don’t mean primitive in the pejorative sense—far from it—but primitive in the sense of intentional simplicity and less opinionated interfaces. Cloud-native is a great example: lots of independent, simple services are easier to repurpose over time. Once you break monolithic applications into distributed architectures, the question isn’t whether the number and interconnected complexity of components will increase, but by how much.
This creates the need for all tech pros and DevOps teams to prioritize new tech and, more importantly, the new learning that will facilitate innovation and lead to the opportunities our businesses demand. Unfortunately, it’s an awkward time for the industry, with every vendor seemingly at a different point on the path toward bringing application performance monitoring to parity with the tools teams have relied on for years, or even decades. So how are technology pros reaching new heights of expertise and systems performance, even while experiencing some growing pains along the way?
Modern Applications Deserve a Modern Approach to Monitoring
As you think about the components of business services, consider applications the prime interaction channel for customers, driving the experiences that define brands. Digital user experiences are no longer optional and, in many ways, “mean time to delight” is a prime
competitive differentiator. When the modern application cycle emerged, the need to add new features and respond to user experience on a more frequent basis shook the app development landscape, with many organizations still playing catch-up in
monitoring applications, let alone
observing them. As the ease of development, frequency, and automation of deployment increases, so does the likelihood of introducing dark corners that the ops team may have no idea exist.
Apps become much more difficult to monitor not because they’re intrinsically more complicated, but because legacy applications are increasingly being deconstructed into new, alien forms. Polling-based monitoring methods remain key for infrastructure, networks, and packaged applications, but they can’t see the huge volume of distributed back-end transaction steps behind a single mobile app screen refresh. At the same time, understanding actual user experience, with real user transaction data for troubleshooting, becomes critical. So, once again, Application Performance Monitoring (APM)—with a broad focus on observability, and not just monitoring—has come to the forefront as an important tool. Spend a little time chatting with trade show attendees, students studying for certification, or technical user communities, and you’ll find APM techniques like transaction tracing aren’t yet standard-issue in monitoring toolboxes. They weren’t really necessary before custom apps and continuous delivery culture arrived.
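To make “transaction tracing” concrete, here’s a minimal sketch using the open-source OpenTelemetry Python SDK; the article doesn’t name a specific tool or stack, so the library choice, service names, and span names here are my own illustrative assumptions. The idea is that one user-visible action becomes one trace, and each distributed back-end step becomes a child span inside it, which is exactly the view polling can’t provide.

```python
# A minimal distributed-tracing sketch with the OpenTelemetry Python SDK
# (pip install opentelemetry-sdk). Names are hypothetical; in practice you'd
# swap ConsoleSpanExporter for your APM backend's exporter.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("mobile-api")

def refresh_screen(user_id: str) -> None:
    # One user-visible action becomes one trace...
    with tracer.start_as_current_span("screen-refresh") as span:
        span.set_attribute("user.id", user_id)
        # ...and every distributed back-end step becomes a child span,
        # so the whole call chain is visible as a single transaction.
        with tracer.start_as_current_span("fetch-profile"):
            pass  # e.g., call the profile microservice
        with tracer.start_as_current_span("fetch-recommendations"):
            pass  # e.g., call the recommendations microservice

refresh_screen("user-1234")
```

Run as-is, this prints a parent span and two children sharing one trace ID, the breadcrumb trail a polling check of CPU or uptime will never surface.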
This seems to be the year where a growing number of administrators are asking vendors for just that: extensions of what was once
DevOps-driven instrumentation into existing NOC dashboards. It’s natural to assume, given the lag between new application technology and the tools to manage it, that you may be implementing a 5th (or 15th, or even 25th) monitoring tool to fill the gap. But IT’s deeply ingrained tendency to wait, be conservative, and let things shake out is an advantage in this case. Skipping over bleeding-edge tools to land on an increasingly mature, if still new, reality offers integrated options that didn’t exist even 12 months ago.
Emerging on the scene (finally) is a new breed of tools focused on overall observation of applications, built on the same model as the distributed services they monitor. The whole point of smaller, single-purpose services is to be “brick-in, brick-out,” replacing individual components over time to finally escape “big bang” deployments. Likewise, these new monitoring tools are designed to augment capabilities as needed and evolve over time. A dedicated search for tools that integrate with and broaden the capabilities of your current toolset may be a slight change of habit, but it offers the best of both worlds: operations gains additional and shifting capabilities without the burden of a completely new solution, or duplication of functions you already own.
Visualization Across Environments
The need to gather meaningful insight from applications spanning on-prem, hosted, hybrid IT, and cloud environments has never been greater. Performance issues, bottlenecks, and downtime lie at the heart of poor user experiences, and the same telemetry can drive orchestration, dynamic resource allocation, and more. Without the right tools, the causes of these problems and the complexity of troubleshooting them aren’t exactly returning us to the dark ages, but they are beginning to keep admins up at night. Combining traditional monitoring capabilities with tracing (and ideally, events and logs) synthesizes polled and observed metrics into a useful, actionable whole.
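One common way teams stitch these signals together is to stamp the active trace ID onto every log line, so a polled alert, a trace, and the matching log entries can all be joined on a single key. Here’s a small sketch, again assuming the OpenTelemetry API from the earlier example; the logger name and log format are hypothetical.

```python
# Correlating logs with traces: attach the active trace ID to each log record
# so metrics, traces, and logs can be joined in whatever backend you use.
# Assumes opentelemetry-api is installed; logger/format names are hypothetical.
import logging

from opentelemetry import trace

class TraceContextFilter(logging.Filter):
    """Stamp each record with the current trace ID (all zeros outside a span)."""
    def filter(self, record: logging.LogRecord) -> bool:
        ctx = trace.get_current_span().get_span_context()
        record.trace_id = format(ctx.trace_id, "032x")  # 128-bit ID as hex
        return True  # never drop the record, just enrich it

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s trace=%(trace_id)s %(message)s")
)
logger = logging.getLogger("checkout")
logger.addFilter(TraceContextFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("payment authorized")  # carries the trace ID when inside a span
```

With that one field in place, a spike on a dashboard can lead to a trace, and the trace to the exact log lines it produced, without guesswork about timestamps.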
For modern tech pros working with full-stack development teams, pivoting between application metrics, traces, and infrastructure statistics quickly becomes second nature, not just to keep the lights on or to solve performance problems, but to identify opportunities for improvement. And that’s something the business is always excited to see. If it seems like environments are becoming more distributed and complex to keep an eye on, it’s because they are. And while we’re not yet at a new golden age of chameleon tools that seamlessly work together to provide complete pictures of our environments, those of us with some time on our Hobbs meters can see it from here. Perhaps it will be application performance data and operations feedback loops that finally bridge the gap between application developers and operations. I’m pretty sure both have the same answer to the question, “Would you like uninterrupted weekends?” “Yes, thank you very much!”
Looking for a log management solution? Download a free trial of SolarWinds® Log Manager for Orion®.