
Building and Monitoring Robust Automatic Deployments

Have you ever heard the mantra “Don’t deploy on Fridays”? There are blog posts, tweets, and t-shirts shouting this slogan to anyone who will listen. Maybe your organization follows this policy. The purpose of not deploying on Fridays is to give the team better work/life balance by preventing them from having to work on the weekend if something goes wrong during or after the deployment. A popular opposing idea is that you should be able to deploy on any day because you have confidence in the tools and processes you have in place to support your deployments. Reaching this point requires very mature deployment practices, usually including robust testing and an automated deployment process.

If you’ve been hearing everyone is on the continuous deployment bandwagon and your organization can barely get through a manual deployment, you’re not alone. While continuous deployment sounds great, it’s not appropriate for every situation. Even if it could be a good fit for your organization, automate small steps and build out from there. Value is often found in partial automation of deployments.

No organization’s deployments are error-free all the time, but that doesn’t mean you can’t work toward automated deployments that reduce the risk of error. Deployments can fail for all sorts of reasons, but a major one is someone forgetting a step somewhere in a manual process. Perhaps a folder in a file system wasn’t configured with the correct permissions, or someone left an old value in a config file. Another reason for difficult deployments is insufficient testing: maybe you have good code coverage, but you didn’t do much performance/load testing before moving to production. It’s also easy to let the scope of a deployment get too large. A few changes that didn’t make it into the last sprint are thrown into the current one, and a few new features grow larger in scope than anticipated. Then, when you finally deploy the large batch of changes and find a bug, it’s difficult to tell what caused it.

Successful Automated Deployments

Automated deployments can ensure the deployment process is documented, a human didn’t forget a step, and you can roll back if something goes wrong. Your deployment code can check whether a folder has the right permissions and replace a value in a configuration file. Tools such as Terraform, Ansible, and Jenkins, along with YAML pipeline definitions, allow you to write your deployment process as code, store it in source control, and test it. When the actual act of deployment is closer to the click of a button than a three-hour manual ordeal, you can deploy small batches of changes more frequently. It also becomes easier to test deployments in a staging or pre-production environment before they go to production.
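
As a minimal sketch (not tied to any particular tool), a scripted pre-deployment step might verify folder permissions and update a configuration value before the release proceeds. The paths, file names, and values below are hypothetical:

# Minimal sketch of a scripted pre-deployment check (hypothetical paths and values).
import stat
from pathlib import Path

APP_DIR = Path("/var/www/myapp")        # hypothetical application folder
CONFIG_FILE = APP_DIR / "app.conf"      # hypothetical config file

def folder_is_writable_by_group(path: Path) -> bool:
    """Return True if the group write bit is set on the folder."""
    mode = path.stat().st_mode
    return bool(mode & stat.S_IWGRP)

def replace_config_value(path: Path, key: str, new_value: str) -> None:
    """Rewrite 'key=...' lines with the new value, leaving other lines untouched."""
    lines = path.read_text().splitlines()
    updated = [
        f"{key}={new_value}" if line.startswith(f"{key}=") else line
        for line in lines
    ]
    path.write_text("\n".join(updated) + "\n")

if __name__ == "__main__":
    if not folder_is_writable_by_group(APP_DIR):
        raise SystemExit(f"Deployment halted: {APP_DIR} lacks group write permission")
    replace_config_value(CONFIG_FILE, "environment", "production")
    print("Pre-deployment checks passed; configuration updated.")

Because a script like this lives in source control alongside the application, the checks are documented, repeatable, and testable rather than living in someone’s head.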

Gates and Approvals

Many deployment automation tools allow you to build gates and approvals into your process. Gates can be used to require approvals outside the deployment pipeline, perform quality validation such as code coverage or test pass rate, and collect health signals from external services before the deployment can be completed.
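
For instance, a simple quality gate might compare code coverage and test pass rate against agreed thresholds before the pipeline is allowed to continue. The thresholds and input values below are placeholders, not a specific tool’s configuration:

# Minimal sketch of a quality gate (hypothetical thresholds and inputs).
MIN_COVERAGE = 0.80       # require at least 80% code coverage
MIN_PASS_RATE = 0.98      # require at least 98% of tests passing

def gate_passed(coverage: float, tests_passed: int, tests_total: int) -> bool:
    """Return True only if all quality signals meet their thresholds."""
    pass_rate = tests_passed / tests_total if tests_total else 0.0
    return coverage >= MIN_COVERAGE and pass_rate >= MIN_PASS_RATE

if __name__ == "__main__":
    # These values would normally come from the build/test tooling, not be hard-coded.
    if gate_passed(coverage=0.83, tests_passed=489, tests_total=495):
        print("Gate passed: deployment may proceed")
    else:
        raise SystemExit("Gate failed: deployment blocked")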

Automated Rollback

Another critical part of successful automated deployments is a rollback strategy. The use of feature flags and blue-green deployments can make it easy to return to the previous state instead of taking frantic manual steps to undo a change. Feature flags allow you to separate deployment from release, which helps you roll out new features to a subset of users. After monitoring shows the new features are successful, you can roll them out to more users and monitor again, until all users have the new features. If a feature causes problems in the limited rollout, it can simply be turned off instead of requiring a rollback of code. In a blue-green deployment, you deploy to a separate production environment and then swap it with the current production environment, allowing you to swap back if issues are encountered.
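
As a rough illustration of how a feature flag separates deployment from release, a flag check can route only a percentage of users to the new code path. The flag name and rollout logic here are illustrative, not a particular flag service’s API:

# Minimal sketch of a percentage-based feature flag (illustrative, not a real flag service).
import hashlib

FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_percent": 10},  # hypothetical flag
}

def flag_is_on(flag_name: str, user_id: str) -> bool:
    """Deterministically bucket a user so the same user always gets the same answer."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

# Turning the feature off for everyone is a configuration change, not a redeploy:
# set enabled to False (or rollout_percent to 0) and the old code path takes over.
if flag_is_on("new_checkout_flow", user_id="user-42"):
    print("Show new checkout flow")
else:
    print("Show existing checkout flow")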

Monitoring

Automated deployments should be accompanied by automated monitoring. In a data-driven application, you should watch for changes in system metrics such as:
  • Memory usage
  • Disk usage
  • Errors logged
  • Database throughput
  • Database average response time
  • Long-running queries
  • Concurrent database connections
  • SQL query performance
If you have mature monitoring systems in place, it’s easy to get a pre-deployment baseline and watch for deviations after the deployment. Holistic hybrid cloud monitoring tools that alert you to errors or abnormal patterns are an important part of feature flags and blue-green deployments. They’re the indicators to let you know if you need to turn off a feature or swap back to the previous production environment.
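
As an illustration of that baseline comparison, a post-deployment check might flag any metric that drifts too far from its pre-deployment value. The metric names, baseline values, and threshold below are placeholders:

# Minimal sketch of a post-deployment metric comparison (placeholder values).
BASELINE = {  # captured before the deployment
    "db_avg_response_ms": 42.0,
    "errors_per_minute": 0.5,
    "memory_used_mb": 1800.0,
}
ALLOWED_INCREASE = 0.20  # alert if a metric rises more than 20% over its baseline

def deviations(current: dict) -> list:
    """Return the metrics that exceed the allowed increase over baseline."""
    alerts = []
    for name, before in BASELINE.items():
        after = current.get(name, before)
        if before > 0 and (after - before) / before > ALLOWED_INCREASE:
            alerts.append(f"{name}: {before} -> {after}")
    return alerts

if __name__ == "__main__":
    current = {"db_avg_response_ms": 61.0, "errors_per_minute": 0.4, "memory_used_mb": 1850.0}
    for alert in deviations(current):
        print("Possible regression:", alert)  # a signal to turn off a feature or swap back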

Tools, Process, and Culture

While deployment and monitoring tools alone don’t ensure a successful deployment, they certainly help. But it’s also important to build a DevOps culture of good communication, design reviews throughout development, and thorough testing. As shown in Figure 1, automated deployments are just one part of the DevOps lifecycle.

Figure 1: The DevOps lifecycle

You can decide where automation brings value in the cycle and create the automation in small chunks over time. Automated deployments can reduce risk and required effort, so you can deploy on a Friday if you need to. Their high ROI often makes them a great place to start automating with DevOps best practices in mind.

Deployments and Monitoring

Automated deployments reduce manual errors and ensure a documented process, but they should be accompanied by good monitoring tools and tested rollback plans. Is your organization spending more time reacting to issues than focusing on impactful activities that drive the business? Learn how SolarWinds Observability Self-Hosted (formerly known as Hybrid Cloud Observability) can help make your teams more collaborative and efficient through cross-domain visualizations and automation.
Meagan Longoria
Meagan Longoria is a business intelligence consultant at Denny Cherry & Associates Consulting, blogger, speaker, author, technical editor, and Microsoft Data Platform MVP.