2019: The Year of Meshing Around and Creating Chaos

It’s that time of year again: time to reflect on the past year and anticipate the one ahead. For a number of years now, organizations of all sizes have been running containers in production. With containers, microservices, and functions interweaving through modern application design, DevOps teams have begun harnessing the value a service mesh promises when incorporated into the technology stack. One significant challenge posed by distributed systems is the lack of homogeneous, reliable, and unchanging networks. Service meshes address this challenge directly by providing a new layer of cloud native visibility, security, and control.

In 2019, I predict interest in and deployments of service meshes will continue to rise. Expect new entrants in the space, and expect a number of existing offerings to affix “Mesh” to their names as they ride the coattails of this hot technology. While some service meshes help modernize existing, non-containerized workloads, they are particularly valuable in sophisticated, distributed systems, where deployments only exacerbate the need for visibility, control, and security of the network. The good news for the non-network-savvy is that service meshes take on hard-to-solve networking challenges out of the box, decoupling developers and operators: each can now exert control over their services’ networks declaratively and independently, without rolling a new release. As something of a third step in containerized deployments, service meshes captured much mind share in 2018 and will only grow in popularity and adoption in 2019.
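To make that declarative, release-free control concrete, consider a traffic-splitting rule in the style of Istio’s VirtualService API. The “reviews” service and its version subsets below are hypothetical names used only for illustration; any service mesh with weighted routing works similarly:

```yaml
# Hypothetical Istio VirtualService: shift 10% of traffic for the
# "reviews" service to v2, keeping 90% on v1 -- applied by an operator
# as configuration, with no new application release required.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Because this rule lives in the mesh’s configuration rather than in application code, the weights can be adjusted independently of the development team’s release cycle.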

In addition to the increased implementation of service meshes, this year brought with it the nascent practice of chaos engineering. In 2019, the principles and tools of this emergent practice will evolve and expand in use, as the complexity and rate of change of large-scale distributed systems demand new techniques for increasing reliability and resiliency. Some organizations will push past tools such as Chaos Monkey, which induces machine failures, skip past Chaos Kong, which evacuates entire regions, and move to Gremlin to perform precise experiments on their path to improved resiliency through orchestrated chaos. By exploring the impact of increased latency and methodically failing specific services, service teams will build confidence in their systems’ ability to withstand turbulent conditions in production, and begin to sleep more soundly in 2019.
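A toy sketch of such an experiment in Python: inject fixed latency (and, optionally, probabilistic failure) around a service call and observe how the caller behaves. The `chaotic_call` wrapper and `get_inventory` service are illustrative inventions, not part of any tool named above; real tools such as Gremlin inject faults at the infrastructure layer rather than in application code.

```python
import random
import time

def chaotic_call(func, latency_s=0.0, failure_rate=0.0):
    """Invoke func with injected latency and probabilistic failure."""
    if random.random() < failure_rate:
        raise ConnectionError("injected failure")  # simulated network fault
    time.sleep(latency_s)  # simulated added latency
    return func()

def get_inventory():
    # Stand-in for a call to a downstream service.
    return {"widgets": 42}

# Experiment: add 50 ms of latency and verify the caller still succeeds.
start = time.monotonic()
result = chaotic_call(get_inventory, latency_s=0.05)
elapsed = time.monotonic() - start
print(result, round(elapsed, 3))
```

Raising `latency_s` or `failure_rate` step by step turns this into a crude experiment: at what point do timeouts, retries, or fallbacks in the caller start to misbehave?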


Lee Calcote is the Head of Technology Strategy at SolarWinds, where he stewards strategy and innovation across the business. Previously, Calcote led software-defined data center engineering at Seagate, up-leveling the systems portfolio by delivering new predictive analytics, telemetry, and modern management capabilities. Prior to Seagate, Calcote held various leadership positions at Cisco, where he created Cisco’s cloud management platforms and pioneered new, automated, remote management services. In addition to his role at SolarWinds, Calcote advises a handful of startups and serves as a member of various industry bodies, including the Cloud Native Computing Foundation (CNCF), the Distributed Management Task Force (DMTF), and the Center for Internet Security (CIS). As a Docker Captain and Cloud Native Ambassador, Calcote is an organizer of technology conferences, and an analyst, author, and speaker in the technology community. Calcote holds a bachelor’s degree in Computer Science and a master’s degree in Business Administration from California State University, Fresno, and retains a list of industry certifications.