The Ins and Outs of Microservices

Microservices are an architectural style that can help DevOps teams innovate faster, iterate more efficiently, and create a better end-user experience. By letting independent service teams choose their own languages and technologies, microservices support continuous delivery of improvements. Each service is developed and deployed individually rather than as part of one monolith, enabling agility and supporting the DevOps cycle. However, with these opportunities come challenges.

The full benefit of microservices can only be realized with proper monitoring and management: the sheer number of moving pieces and additional services adds a great deal of complexity, both in the environment and among teams.

Consider this scenario: a single service runs across 10 containers, and an application is composed of five such services. Tracking all of those containers can be mind-boggling, especially because these moving pieces literally move around the cluster.

Unlike the static, consistent VMs that traditional monitoring tools support, microservices require tools that provide a higher degree of observability. A DevOps team can't know how an app is performing without visibility into where issues occur, yet an application instance might be spun up or down in another location that conventional tooling simply can't see.

Microservices deployment also creates a need for new organizational dynamics. This architecture enables individual teams to perform updates independently, so communication and collaboration are critical to ensuring continuous delivery and agility.

Adding Service Mesh Architecture

So, what does it take to successfully deliver and operate microservices?

Fewer than 10 microservices can likely be managed with existing tools and security practices, but once a team runs more than that, a service mesh architecture becomes necessary. A service mesh provides policy-based networking for microservices, describing the desired behavior of the network in the face of constantly changing conditions and topology.

The additional layer of tooling delivers observability, independent control over microservices, and enhanced security. Ultimately, service meshes result in a developer-driven, services-first network: one primarily focused on relieving application developers of the need to build network concerns into their application code.

Service meshes create a network that empowers operators to define network behavior, node identity, and traffic flow through policy. They deliver visibility, resiliency, traffic management, and control of distributed application services, offering immediate observability into request volume, distributed tracing, and latency based on response time.
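As an illustrative sketch (the article doesn't prescribe a particular mesh), an Istio-style VirtualService shows what "defining traffic flow through policy" looks like in practice. The service name and version subsets here are hypothetical:

```yaml
# Hypothetical Istio VirtualService: the operator declares desired traffic
# behavior as policy instead of coding it into each service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: checkout            # hypothetical service name
spec:
  hosts:
    - checkout
  http:
    - route:
        - destination:
            host: checkout
            subset: v1
          weight: 90        # 90% of requests go to the stable version
        - destination:
            host: checkout
            subset: v2
          weight: 10        # 10% canary traffic to the new version
```

Because the mesh's proxies enforce this policy, the split can be adjusted or rolled back without redeploying either version of the service.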

By deploying microservices on top of a service mesh, DevOps teams get immediate metrics, logs, and tracing without making application code changes. A service mesh provides visibility into why apps are running slowly, one of the most bothersome issues for end users.

Implementation Tips

The type of infrastructure will help determine which service mesh to use. Some work well with Kubernetes or Docker Swarm, for example, but when no services run inside containers, the only benefits would be security and observability. The use case will determine whether a highly capable service mesh like Istio or simpler tooling like Linkerd 2.0 is needed.

Because microservices in a service mesh architecture communicate over the network, it's also important to understand the network's criticality. The network should be as intelligent and resilient as possible, routing traffic away from failures to increase the aggregate reliability of the cluster.

The network should avoid unwanted overhead like high-latency routes or servers with cold caches.
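As a hedged sketch of how that resilience is expressed as policy, an Istio-style configuration can retry failed requests, bound per-request latency, and eject unhealthy endpoints from the load-balancing pool. The `inventory` service name and the specific thresholds are hypothetical:

```yaml
# Hypothetical Istio policy: retry transient failures with a per-try
# timeout so requests are routed away from slow or failing endpoints.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: inventory           # hypothetical service name
spec:
  hosts:
    - inventory
  http:
    - route:
        - destination:
            host: inventory
      retries:
        attempts: 3         # retry up to 3 times...
        perTryTimeout: 2s   # ...giving each attempt at most 2 seconds
      timeout: 10s          # overall request deadline
---
# Outlier detection: temporarily eject endpoints that keep failing,
# increasing the aggregate reliability of the cluster.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: inventory
spec:
  host: inventory
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5   # eject after 5 consecutive 5xx errors
      interval: 30s             # how often endpoints are evaluated
      baseEjectionTime: 60s     # minimum ejection duration
```

The point is the shape of the approach: failure handling lives in declarative network policy, not in every service's code.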

It should ensure that the traffic flowing between services is secure against trivial attacks and provide insight by highlighting unexpected dependencies and the root causes of service communication failures.

Instrumenting code with the right observability framework from the start, and taking a cloud-native approach to designing scalable, independently delivered services, can be hugely beneficial.

Microservices offer significant benefits to a DevOps culture, benefits that are fully realized when a service mesh architecture is implemented. When considering a service mesh, first gauge necessity. Once you're sure a service mesh is the right fit, weigh the use case before choosing a provider. Finally, become educated on the inner workings of the network.


Lee Calcote is the Head of Technology Strategy at SolarWinds, where he stewards strategy and innovation across the business. Previously, Calcote led software-defined data center engineering at Seagate, up-leveling the systems portfolio by delivering new predictive analytics, telemetry, and modern management capabilities. Prior to Seagate, Calcote held various leadership positions at Cisco, where he created Cisco's cloud management platforms and pioneered new, automated, remote management services. In addition to his role at SolarWinds, Calcote advises a handful of startups and serves as a member of various industry bodies, including the Cloud Native Computing Foundation (CNCF), the Distributed Management Task Force (DMTF), and the Center for Internet Security (CIS). As a Docker Captain and Cloud Native Ambassador, Calcote is an organizer of technology conferences, an analyst, an author, and a speaker in the technology community. Calcote holds a bachelor's degree in Computer Science and a master's degree in Business Administration from California State University, Fresno, and retains a list of industry certifications.