Hybrid IT Opens the Door for New Skills

Hybrid IT merges private on-premises data centers with public cloud vendors. These vendors offer global reach and near-limitless resources, and they drive everything through an API. Tapping into this resource is creating new technologies and new career paths. Every company today can run a business analytics system. That was never feasible in the past, because these systems traditionally took months to deploy and required specialized analysts to maintain and run. Public cloud providers now offer business analytics as a service, and people can get started as easily as swiping a credit card.

We’re moving away from highly trained, specialized analysts. Instead, we need people with a more holistic view of these systems. Businesses aren’t looking for specialists in one area; they want people who can adapt and combine their foundation with skillsets that cross silos. People are gravitating more and more toward development skills. Marketing managers, for example, are learning SQL to analyze data and see whether a campaign is working. People want skills that let them interact with the cloud through programming languages, because while the public cloud may act like a traditional data center, it’s a highly programmable one.
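To make the marketing example concrete, here is a minimal sketch of the kind of SQL a non-developer might run, using Python's built-in sqlite3 module and entirely hypothetical campaign data (the table and numbers are invented for illustration):

```python
import sqlite3

# Hypothetical example: comparing average daily conversions with and
# without a campaign running. All data here is made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (day TEXT, campaign TEXT, conversions INTEGER)")
conn.executemany(
    "INSERT INTO signups VALUES (?, ?, ?)",
    [
        ("2024-05-01", "none", 12),
        ("2024-05-02", "none", 15),
        ("2024-05-03", "spring-sale", 31),
        ("2024-05-04", "spring-sale", 28),
    ],
)

# Average conversions per day, grouped by campaign.
rows = conn.execute(
    "SELECT campaign, AVG(conversions) FROM signups "
    "GROUP BY campaign ORDER BY campaign"
).fetchall()
for campaign, avg in rows:
    print(campaign, avg)  # e.g. spring-sale averages ~29.5/day vs ~13.5 without
```

A query like this answers the "is the campaign working?" question directly, without waiting on a specialized analyst.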

The IT systems operator must follow the same trend and keep adding software development and scripting abilities to their toolbelt. The role of DevOps engineer emerged because of hybrid IT. It was created to break down the silo between software developers, who are primarily concerned with code, and the systems operations team, who must support everything once it hits production, and to create more communication between the two. As time has passed, the two groups have merged because they rely on each other: developers showed operations how to run code effectively and efficiently, and SysOps folks showed developers how to monitor, support, and back up the systems their code runs on.

A lot of new skills are emerging, but I want to focus on the three I feel are becoming the heavy favorites: infrastructure as code, CI/CD, and monitoring.

Infrastructure as Code (IaC)

According to this blog from Azure DevOps, “Infrastructure as Code is the management of infrastructure (networks, virtual machines, storage) in a descriptive model,” or, put simply, a way to use code to deploy and manage everything in your environment. A lot of data centers share similar hardware from the same set of manufacturers, yet each is unique. That’s because people configure their environments manually. If a change is required on a system, they make the change and close the task. Nothing is documented; nobody knows what the change was or how it was implemented. When the issue happens again, someone else repeats the task, or does it differently. As this goes on, our environments morph and drift with no way to reset them, turning each one into a unique snowflake.

A team that practices good IaC will always make their change in code, and for good reason. Once a task is captured in code, that code is a living document that can be shared and run by others for consistent deployment. The code can be checked into a CI/CD pipeline to test before it hits production and to version it for tracking purposes. As the list of tasks controlled by code grows, we can free up people’s time by putting the code in a scheduler and letting our computers manage the environment. This opens the door for automation and lets people work on meaningful projects.
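The core of the descriptive model is that you state the end state and let a tool work out the changes. The following is a toy sketch of that idea, not any real IaC tool; the resource names and specs are invented to show how a reconcile step compares desired state against actual state:

```python
# Toy illustration of the IaC idea (hypothetical, not a real tool):
# the environment is described declaratively, and a reconcile step
# computes the actions needed to converge actual state to desired state.
desired = {
    "web-01": {"size": "t3.small", "ports": [80, 443]},
    "db-01":  {"size": "t3.medium", "ports": [5432]},
}
actual = {
    "web-01": {"size": "t3.small", "ports": [80]},  # drifted: missing 443
    "old-vm": {"size": "t2.micro", "ports": [22]},  # no longer in the model
}

def reconcile(desired, actual):
    """Return the create/update/delete actions needed to converge."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

print(reconcile(desired, actual))
# [('update', 'web-01'), ('create', 'db-01'), ('delete', 'old-vm')]
```

Because the desired state lives in code, the drift on web-01 and the forgotten old-vm are caught automatically instead of accumulating into a snowflake.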

Continuous Integration/Continuous Delivery (CI/CD)

When you’re working in a team of DevOps engineers, you and your team are going to be writing a lot more code. You may be writing IaC code for deploying an S3 bucket while a teammate works on deploying an Amazon RDS instance, each of you in an isolated environment. The team needs a way to centrally store, manage, and test that code. Continuous integration merges those separate pieces into one unit, and the frequency of merging is very high. Continually merging and testing code shortens feedback loops and reduces the time it takes to find bugs. Continuous delivery is the practice of taking what’s in the CI code repo and continuously delivering and testing it in different environments. The end goal is to deliver a bug-free code base to our end customer, whether that customer is an internal employee or someone external paying for our services. The more frequently we can deliver our code to environments like test or QA, the better our product will become.
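The merge-test-deliver loop above can be sketched in a few lines. This is a hypothetical skeleton of a pipeline, not a real CI system: the stage functions are stand-ins, and the environment names simply echo the test/QA/production progression described in the text:

```python
# Minimal sketch (hypothetical) of a CI/CD pipeline: every merge runs the
# same ordered stages, and a failure stops the code before it reaches the
# next environment, which keeps the feedback loop short.
def build(code):
    return "syntax error" not in code      # stand-in for a real build step

def unit_test(code):
    return "bug" not in code               # stand-in for a real test suite

def deploy_to(env):
    return True                            # stand-in for a real deployment

def run_pipeline(code, environments=("test", "qa", "production")):
    for stage in (build, unit_test):
        if not stage(code):
            return f"failed at {stage.__name__}"
    for env in environments:
        if not deploy_to(env):
            return f"failed deploying to {env}"
    return "delivered"

print(run_pipeline("def handler(): return 'ok'"))  # passes every stage
print(run_pipeline("def handler(): bug"))          # stopped before any deploy
```

The point of the structure is that a broken merge fails fast in CI and never reaches QA or production.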


Monitoring

We can never seem to get away from monitoring, and that makes sense given how important it is. Monitoring provides the insight we need into our environments. As we deploy more of our infrastructure and applications through code and deliver those resources quickly through automation, we need a monitoring solution to alert us when there’s an issue. The resources we provision, whether through IaC or manually, will be consumed, and our monitoring solution helps tell us how long those resources will last and whether we need to add more to prevent loss of service. Monitoring also gives us historic trends, so we can compare how things are doing today against how they were doing yesterday. Did that new global load balancer help our traffic or hurt it? In today’s security landscape, we need eyes everywhere, and a monitoring solution can notify us of a brute-force attack or a single user trying to log in from another part of the country.
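Two of the checks just described, capacity forecasting and brute-force detection, reduce to very small pieces of logic. This is a hypothetical sketch with invented numbers and thresholds, just to show the shape of such checks:

```python
# Hypothetical monitoring checks (invented thresholds and figures):
# one projects when a consumed resource runs out, the other flags a
# possible brute-force attack from repeated failed logins.
def days_until_exhausted(capacity_gb, used_gb, daily_growth_gb):
    """Project how many days of headroom remain at the current growth rate."""
    if daily_growth_gb <= 0:
        return float("inf")                # not growing: no projected exhaustion
    return (capacity_gb - used_gb) / daily_growth_gb

def brute_force_alert(failed_logins_per_minute, threshold=10):
    """Alert when failed logins exceed a simple rate threshold."""
    return failed_logins_per_minute > threshold

print(days_until_exhausted(capacity_gb=500, used_gb=440, daily_growth_gb=4))  # 15.0
print(brute_force_alert(25))   # True: well above the threshold
print(brute_force_alert(3))    # False: normal login noise
```

Real monitoring tools wrap this kind of logic in historical baselines and alert routing, but the trend comparison at its core is this simple arithmetic.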

Hybrid IT is a fast-growing market, bringing change to businesses and creating new careers and skillsets. These careers are emerging from the need to merge skillsets from traditional IT, development, and the business units, and the combination is stretching people’s knowledge. The most successful people are merging what they know about traditional infrastructure with the new offerings of the public cloud.

As the senior systems administrator at a healthcare organization, Aaron leads a team of highly skilled engineers tasked with keeping core clinical and business applications online for patients and customers. He likes getting nerdy with engineers and architects and discussing complex problems and solutions. Aaron also enjoys interpreting these discussions by communicating with decision makers and end users in non-technical terms. His 10+ years of hands-on experience in application virtualization, server virtualization, storage management, data center monitoring, and automation gives him the tools he needs to guide organizations on how best to protect their applications and data.