DevOps

Making Dev + Ops Possible with Log Management

You know how the saying goes—you can’t optimize what you can’t measure, and you can’t measure what you can’t monitor.

Perhaps this isn’t a time-tested proverb just yet, but it is one that should become a mantra for the DevOps world. The idea points to a key need in log management.

In development environments, one of the most important ingredients for success is the ability to constantly improve. When it comes to the cloud, this is no easy feat since your logs are everywhere—some might be in AWS, some on-prem—and each application is generating subcomponent logs, database logs, infrastructure logs, and more. This creates a multitude of potential sources to check when an issue arises, meaning developers and operations teams have a big task on their hands.

Log management provides a realistic and holistic view that allows development and operations teams to scope out room for improvements and feed into the continuous DevOps cycle. It is, most simply, the enabler of visibility into application information at the most granular level—visibility that can mean the difference between identifying and solving a problem immediately or searching arduously for the root cause.

The three elements of log visibility

Breaking it down, a full log solution includes the management piece, but also monitoring (alerts) and metrics. These three elements combined truly provide the highest level of visibility into the depths of your application environments. More specifically, this includes:

  1. Log Management – Collecting and collating logs to see what all elements in the environment are doing at the same time
  2. Alerts – Setting a search query to identify specific errors, then routing the resulting notifications through collaboration channels to confirm code is running correctly—or through operations tools to make changes in the staging or production environment
  3. Metrics – Rolling those alerts into a complete picture to understand not only current state but also forward-looking elements
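In this model, an alert is essentially a saved search plus a threshold and a notification target. The sketch below is purely illustrative—it does not reflect any specific product's API, and every name in it is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """A saved log search with a threshold and a notification target (hypothetical model)."""
    query: str       # substring to match in log lines
    threshold: int   # fire when matches in the window exceed this count
    channel: str     # e.g., a chat webhook or an operations tool

    def evaluate(self, log_lines):
        """Count matching lines; return a notification string if the threshold is exceeded."""
        matches = [line for line in log_lines if self.query in line]
        if len(matches) > self.threshold:
            return f"[{self.channel}] {len(matches)} hits for '{self.query}'"
        return None

alert = Alert(query="server unavailable", threshold=2, channel="#ops")
logs = [
    "GET /health 200",
    "ERROR server unavailable (db-1)",
    "ERROR server unavailable (db-2)",
    "ERROR server unavailable (db-1)",
]
print(alert.evaluate(logs))  # three matches exceed the threshold, so a notification is returned
```

A real log management tool evaluates queries like this continuously over indexed data, but the shape—query, threshold, destination—is the same.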

Why is this so important? Because “you can’t optimize what you can’t measure, and you can’t measure what you can’t monitor.” Development and operations teams are constantly working within a continuous improvement cycle to identify a problem, develop a fix, test and release that fix, and measure its impact.

These logs are your first indication a problem is occurring. So, the ability to collect, decipher, and identify issues within the logs is a critical component to enabling the continuous improvement cycle.

As an example, applications are emitting log events constantly. Sometimes it’s easy to tell whether alert volume is stable; other times it’s difficult to spot an anomaly in the number of ‘server unavailable’ errors you’re receiving.

With a comprehensive log management tool, you can eliminate the guesswork by understanding the history of an error message over time, and then identifying other errors associated with the alert in question (even if they are coming from other systems, and even if you don’t have alerts set up for those errors).
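The history-based check described above can be sketched as a simple baseline comparison—a minimal illustration of the idea, not how any particular product implements it:

```python
from statistics import mean, stdev

def is_anomalous(history, current, sigma=3.0):
    """Flag the current window's error count if it deviates from the
    historical mean by more than `sigma` standard deviations."""
    mu = mean(history)
    sd = stdev(history)
    return abs(current - mu) > sigma * sd

# Hourly counts of 'server unavailable' errors over the past half day (example data)
hourly_counts = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 6, 5]
print(is_anomalous(hourly_counts, current=5))   # False: within the normal range
print(is_anomalous(hourly_counts, current=40))  # True: well outside it
```

The value of a log management solution is that the “history” here is already collected and searchable across every system, so the comparison doesn’t have to be built by hand for each error type.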

With this level of visibility in a log management solution, elements of the unknown in the application error landscape are eliminated. Alert volume, historical context, and the state of the broader application environment are readily available, so developers and operations can identify and fix problems quickly.

Making Dev + Ops possible with log management

With so much potential for error, latencies, and issues in any given application environment—and so much at risk in terms of what an error can mean for app availability—why not have the ability to drill down into your logs?

This level of visibility and knowledge about log environments also facilitates the continuous improvement inherent to DevOps. It enables you to continually refine alerts based on what you’re learning from your log management solution. It gives you assurance that your log management is sound and complementary to your other internal monitoring functions, like metrics and traces, as well as the external user experience monitoring capabilities that combine to provide an inside-out view into cloud environments.

When application performance is the end goal, distributed tracing can get to the heart of a problem, but log management will help you find it faster. It can help you pinpoint a problem in a specific database, for example, rather than digging through each database instance, process, etc., to identify the root cause.
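That kind of pinpointing often comes down to a quick aggregation across collected logs. As a minimal sketch (with a hypothetical record shape—real log records and field names will differ), grouping errors by source surfaces the likely culprit before you start digging into individual instances:

```python
from collections import Counter

def top_error_sources(log_records):
    """Group ERROR-level records by their source field, most frequent first."""
    errors = [r["source"] for r in log_records if r["level"] == "ERROR"]
    return Counter(errors).most_common()

# Example records collected from several databases (hypothetical field names)
records = [
    {"level": "INFO",  "source": "db-orders",  "msg": "query ok"},
    {"level": "ERROR", "source": "db-orders",  "msg": "timeout"},
    {"level": "ERROR", "source": "db-orders",  "msg": "timeout"},
    {"level": "ERROR", "source": "db-billing", "msg": "timeout"},
]
print(top_error_sources(records))  # [('db-orders', 2), ('db-billing', 1)]
```

A log management solution does this aggregation at search time across every source it collects, which is what makes the narrowing-down step fast.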

Finding, fixing, and continuously improving while spending more time optimizing the application is a win-win for developers and operations as it truly embodies the DevOps principles.


Melanie Achard is a director of product marketing at SolarWinds. Achard’s career in technology started out as a favor to a friend when she wrote her first software program on a Morrow Micro System. Her curiosity piqued, she accepted a software development position at a startup and was instantly hooked. Since then, she has worked in software engineering, product management, and product marketing at a variety of companies—from small, three-person startups to large Fortune 500® organizations—all with a focus on solving problems that people care about.

With 30 years of experience across multiple technology disciplines—from systems administration, IT security, and regulatory compliance to external device management and mobile applications—she continues to advance her knowledge and understanding of the continually evolving technology landscape. In 2018, Melanie joined SolarWinds to focus on the growth of the company’s DevOps products—Pingdom®, AppOptics, Papertrail, and Loggly®.