Real World Experiences of Hybrid IT – Part 6
March 28, 2019
Networks
As we head into the final post of this series, I want to thank you all for reading this far. For a recap of the other parts, please find links below:
Part 1 – Introduction to Hybrid IT and the Series
Part 2 – Public Cloud Experiences, Costs, and Reasons for Using Hybrid IT
Part 3 – Building Better On-premises Data Centers for Hybrid IT
Part 4 – Location and Regulatory Restrictions Driving Hybrid IT
Part 5 – Choosing the Right Location for your Workload in a Hybrid IT World
To close out the series, I’d like to talk about my most recent journey in this space: using DevOps tooling and continuous integration/deployment (CI/CD) pipelines. A lot of this will come from my experiences using Azure DevOps, as I’m most familiar with that tool. However, there are a lot of alternatives out there, each with their own pros and cons depending on your business or your customers.
I’ve never been a traditional programmer or developer, but I’ve picked up development skills over the years because that knowledge benefits many areas of IT. Being able to build infrastructure as code or create automation scripts served me well long before public cloud consumption made those skills commonplace. I feel it’s an important skill set for all IT professionals to have.
More recently, I’ve found that the relationship between traditional IT and developers is growing closer. IT departments need to provide tools and infrastructure to the business to speed development and get products out the door quicker. This is where the DevOps culture has come to the forefront. It’s no longer good enough to just develop a product and throw it over the fence to be managed. The systems we use and the platforms available to us mean that we must work together. To help this new culture, it’s important to have the right DevOps tools in place: good code management repositories, artifact repositories, container registries, and resource management tools like Kanban boards. These all play a role for developers and IT professionals. Bringing all this together into a CI/CD process, however, involves more than just tools. Processes and business practices may need to be adjusted as well.
I’m now working more in this space. It’s a natural extension of the automation work I did previously, and it overlaps quite nicely. Working with businesses to set up pipelines and gain velocity in development has taken me on a great journey. I won’t go into detail on the code side of this, as that’s a subject for a different blog. What’s important and relevant in hybrid IT environments is how these CI/CD processes integrate with the various environments. As I discussed in my previous post, choosing the right location for your workloads is important, and that carries over into these pipelines.
During the software development life cycle, there are stages you may need to go through. Unit, functional, integration, and user acceptance testing are commonplace. Each of these stages brings different requirements for infrastructure and services. From a hybrid IT perspective, having the tools at hand to deploy to multiple locations of your choice is paramount. Short-lived environments can use cloud-hosted services such as hosted build agents and cloud compute. Medium-term environments that run in isolation can again be cloud-based. Longer-term environments, or those that rely on legacy systems, can be deployed on-premises. The toolchain gives you this flexibility, as the sketch below shows.
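To make that concrete, here’s a minimal sketch of a multi-stage Azure Pipelines definition that runs tests on a short-lived hosted agent and then deploys through a self-hosted pool on-premises. The stage names, the hosted image, the "OnPremAgents" pool, and the scripts are placeholders for illustration, not a definitive setup:

```yaml
# Hypothetical multi-stage pipeline: hosted agent for testing,
# self-hosted on-premises agents for the production deployment.
trigger:
  - main

stages:
  - stage: Test
    jobs:
      - job: UnitTests
        pool:
          vmImage: 'ubuntu-latest'   # short-lived, Microsoft-hosted build agent
        steps:
          - script: make test        # assumed build/test entry point

  - stage: ProductionDeploy
    dependsOn: Test
    jobs:
      - deployment: DeployOnPrem
        pool:
          name: 'OnPremAgents'       # assumed self-hosted pool inside the data center
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh   # assumed deployment script
```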
As I previously mentioned, I work mostly with Azure DevOps. Building pipelines there gives me an extensive set of options, as well as a vibrant marketplace of extensions built by the community and vendors. If I want to deploy code to Azure services, I can call on Azure Resource Manager (ARM) templates to build an environment. If I include cloud-native services, I have richer plugins available to deploy configurations to things like API Management. When it comes to on-premises deployments, I can have DevOps agents deployed within my own data center, allowing build and deployment pipelines to run there. I can configure groups of deployment agents that connect me to my existing servers and services. There are options to run PowerShell scripts, call external APIs from private cloud management platforms like vRealize Automation, or hook into Terraform, Puppet, Chef, and so on.
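As an example of the Azure side, an ARM template deployment step in a pipeline might look something like the following. The service connection name, resource group, region, and template path are assumptions for the sketch:

```yaml
# Sketch of deploying an environment from an ARM template.
steps:
  - task: AzureResourceManagerTemplateDeployment@3
    inputs:
      deploymentScope: 'Resource Group'
      azureResourceManagerConnection: 'my-azure-connection'  # hypothetical service connection
      subscriptionId: '$(subscriptionId)'                    # supplied as a pipeline variable
      action: 'Create Or Update Resource Group'
      resourceGroupName: 'rg-app-test'                       # assumed resource group
      location: 'West Europe'
      templateLocation: 'Linked artifact'
      csmFile: 'templates/environment.json'                  # assumed ARM template path
      deploymentMode: 'Incremental'
```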
I can also hook these deployment processes into container orchestrators like Kubernetes, storing and deploying Helm charts or Docker Compose files. These are ideal opportunities for traditionally siloed teams to work together. Developers know how the application should work, and operations and infrastructure people know how they want the system to look. Pulling together code that describes how the infrastructure deploys, heals, upgrades, and retires needs input from all sides. When using these types of tools, you’re aiming for an end-to-end system of code build and deployment. Putting in place all the quality gates, deployment processes, and testing reduces human error and speeds up business outcomes.
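For the Kubernetes case, a Helm deployment step in Azure Pipelines could look roughly like this; the service connection, namespace, chart path, and release name are hypothetical:

```yaml
# Sketch of deploying a Helm chart to a Kubernetes cluster.
steps:
  - task: HelmDeploy@0
    inputs:
      connectionType: 'Kubernetes Service Connection'
      kubernetesServiceConnection: 'my-k8s-connection'  # hypothetical connection
      namespace: 'app-test'                             # assumed target namespace
      command: 'upgrade'
      chartType: 'FilePath'
      chartPath: 'charts/myapp'                         # assumed chart location in the repo
      releaseName: 'myapp-test'
      install: true                                     # install if the release doesn't exist yet
```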
Outside of those traditional SDLC use cases, I’ve found these types of tools to be beneficial in my daily tasks as well. Working in automation and infrastructure as code follows a similar process, and I maintain my own projects in this structure. I keep version-controlled copies of ARM templates, CloudFormation templates, Terraform code, and much more. The CI/CD process allows me to bring non-traditional elements into my own deployment cycles: testing infrastructure with Pester, checking security with AzSK, or just making sure I clean up my test environments when I’ve finished with them. From my experiences so far, there’s a lot for traditional infrastructure people to learn from software developers and vice versa. Bringing teams and processes together helps build better outcomes for all.
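As a final illustration, here’s one way a Pester test run could be wired into a pipeline so infrastructure tests publish results alongside application tests. The test path and results file name are assumptions:

```yaml
# Sketch of running Pester infrastructure tests and publishing the results.
steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: |
        # Run Pester tests against the deployed infrastructure; './tests' is an assumed path
        Invoke-Pester -Path './tests' -OutputFile 'testresults.xml' -OutputFormat 'NUnitXml'
  - task: PublishTestResults@2
    inputs:
      testResultsFormat: 'NUnit'
      testResultsFiles: 'testresults.xml'
```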