If you’re just joining the series now, you can find the first two posts here.
Last time, I talked about the various decision points and considerations when using the public cloud. In this post, I’m going to look at how we build better on-premises data centres for hybrid IT. I’ll cover some of the technologies and vendors I’ve used in this area over the last few years, what’s happening in the market, and some of the ways I help customers get the most from their on-premises infrastructure.
First of all, let’s start with some of my experience in this area. Over the years, I have spoken at conferences on this topic, I’ve recorded podcasts (you can find my most recent automation podcast here), and I’ve worked on automation projects across most of the U.K. In my last permanent role, I worked heavily on a project productizing FlexPod solutions, including factory automation and off-the-shelf private cloud offerings.
Where Do You Begin?
Building better on-premises infrastructure doesn’t start where you might expect. It isn’t about features and nerd-knobs; those should be low on the priority list. Over the last few years, and arguably since the mainstream adoption of smartphones and tablets, end users have had much higher expectations of IT in the workplace. The simplicity of on-demand apps and services has set the bar high; turning up at work to a Windows XP desktop and a three-week wait for a server to be provisioned just doesn’t cut it.
I always start with the outcomes the business is trying to achieve. Are there specific goals that would improve time-to-market or employee efficiency? Once you understand those goals, look at the current processes (or lack thereof) and get an idea of where bottlenecks occur, or where processes span multiple teams and delays creep in as tasks pass between them.
Once you’ve established these areas, you can start to map technologies to the desired outcome.
What Do I Use?
From a hardware perspective, I’m looking for solutions that support modularity and scalability. I want to be able to start at the size I need now and grow if I must. I don’t want to be burdened later with forklift replacement of systems because I’ve outgrown them.
Horizontal growth is important. Most, if not all, of the converged infrastructure and hyper-converged infrastructure platforms offer this now. These systems often allow some form of redistribution of capacity as well. Moving under-utilized resources out to other areas of the business can be beneficial, especially when dealing with a hybrid IT approach and potential cloud migrations.
I’m also looking for vendors that support or work with the public cloud, allowing me to burst into resources or move data to where I need it, when I need it there. Many vendors now have at least some form of “Data Fabric” approach, and I think this is key. Giving me the tools to make the best use of resources, wherever they are, makes life easier and gives me options.
When it comes to software, there are a lot of options for automation and orchestration. The choice will generally come down to what sort of services you want to provide, and to which part of the business. If you’re providing an internal service within IT as a function, you may not need the self-service portals that would better suit end users. If you’re providing resources on demand for developers, you may want to offer API access for consumption.
Whatever tools you choose, make sure that they fit with the people and skills you already have. Building better data centres comes from understanding the processes and putting them into code. Having to learn too much all at once detracts from that effort.
When I started working on automating FlexPod deployments, the tool of choice was PowerShell. The vendors already had modules available to interact with the key components, and both I and the others working on the project had a background in using it. It may not be the choice for everyone, and it may seem basic, but the result was a working solution, one that could evolve in the future if need be.
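The FlexPod work itself relied on vendor PowerShell modules, which I won’t reproduce here. As a rough illustration of the underlying pattern, though, the sketch below shows the same idea in Python: a deployment expressed as an ordered sequence of steps, with the vendor API calls stubbed out as hypothetical functions (the function names and the config shape are illustrative, not any vendor’s actual API).

```python
# Illustrative sketch only: the real project used vendor PowerShell modules.
# The provision_* functions are hypothetical stand-ins for vendor API calls.

def provision_network(config):
    # e.g. create VLANs and uplinks on the switches
    return f"network ready for {config['name']}"

def provision_storage(config):
    # e.g. create volumes and exports on the storage array
    return f"storage ready for {config['name']}"

def provision_compute(config):
    # e.g. apply service profiles to the compute blades
    return f"compute ready for {config['name']}"

def deploy(config):
    """Run each step in order, collecting results for an audit trail."""
    steps = [provision_network, provision_storage, provision_compute]
    return [step(config) for step in steps]

if __name__ == "__main__":
    for line in deploy({"name": "pod-01"}):
        print(line)
```

The value isn’t in the language; it’s in capturing the process as repeatable, ordered steps that can be reviewed, rerun, and extended.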
For private cloud deployments, I’ve worked heavily with the vRealize suite of products. This was a natural choice at the time due to the size of the VMware market and the types of customer environments. What worked well here was the extensible nature of the orchestrator behind the scenes, allowing integration into a whole range of areas, from backup and disaster recovery through to more modern offerings like Chef and Ansible. It was possible to create a single customer-facing portal with Day 2 workflows, providing automation across the entire infrastructure.
More recently, I’ve begun working with containers and orchestration platforms like Kubernetes. The technologies are different, but the goals are the same: getting the users the resources that they need as quickly as possible to accelerate business outcomes.
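To make that concrete, a minimal Kubernetes Deployment manifest is a good example of resources on demand: a few lines of declarative config give a developer a running, self-healing service. The app name and image below are placeholders, not from any real environment.

```yaml
# Illustrative manifest; the name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                 # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25   # stand-in for the team's own image
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f`, the platform handles scheduling and recovery, which is exactly the kind of self-service speed users now expect.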
You only have to look at the increasing popularity of Azure Stack or the announcement of AWS Outposts to see that the on-premises market is here to stay; what isn’t are the old ways of working. Budgets are shrinking, teams are expected to do more with less (equipment, people, or both), and businesses are more competitive than ever. If you aren’t agile, a start-up can easily come along and eat your dinner.
IT needs to be an enabler, not a cost center. We in the industry all need to be doing our part to provide the best possible services to our customers, not necessarily external customers, but any consumers of the services that we provide.
If we choose the right building blocks and work on automation as well as defining great processes, then we can all provide a cloud-like consumption model. Along with this, choosing the right vendors to partner with will open a whole world of opportunity to build solutions for the future.
Next time, I will be looking at how location and regulatory restrictions are driving hybrid IT. Thanks for reading!