This month, we’ve spent time discussing how cloud will affect traditional on-premises IT operations staff. Many IT pros have given feedback on how their organizations view cloud computing, whether it’s a strategic solution, and what you give up when you go with a cloud solution. There doesn’t seem to be much middle ground between people who are doing cloud and those who aren’t, which is indicative of the tribalism that’s developed around cloud computing in the last decade.
So instead of beating this horse to death once more, let’s consider what nascent technologies lie in wait for us in the next decade. You’re probably tired of hearing about these already, but we should recall that we collectively viewed the cloud as a fad in the mid-2000s.
I present to you, in no particular order, the technologies that promise to disrupt the data center in the next decade: artificial intelligence (AI) and machine learning (ML).
I know, you all just rolled your eyes. These technologies are the stuff of glossy magazines in the CIO’s waiting room. Tech influencers on social media peddle these solutions ad nauseam, and they’ve nearly lost all meaning in a practical sense. But let’s dig into what each has to offer and how we’ll end up supporting them.
AI
When you get into AI, you run into Tesler’s Theorem, which states, “AI is whatever hasn't been done yet.” This is a bit of snark, to be sure. But it’s spot-on in its definition of AI as a moving, unattainable goal. Because we associate AI with the future, we don’t consider any of our current solutions to be AI. For example, consider any of the capacity planning tools that exist for on-prem virtualization environments. These tools capitalize on the data that your vCenter Servers have been collecting for many years, combining that history with near-real-time information to predict future capacity availability. Analytics as a component of AI is already here; we just don’t consider it AI.
One thing is certain about AI: it requires a ton of compute resources. We should expect that even modest AI workloads will end up in the cloud, where elastic compute resources can be scaled to meet these demands. Doing AI on-premises is already cost-prohibitive for most organizations, and the more complex and full-featured these workloads become, the more cost-prohibitive they will be on-premises for all companies.
ML
You can barely separate ML from AI. Technically, ML is a discipline within the overall AI field of research. But in my experience, the big differentiator here is in the input. To make any ML solution accurate, you need data. Lots of it. No, more than that. More even. You need vast quantities of data to train your machine learning models. And that should immediately bring you to storage: just where is all this data going to live? And how will it be accessed? Once again, because so many cloud providers offer machine learning solutions (see Google AutoML, Amazon SageMaker, and Microsoft Machine Learning Studio), the natural location for this data is in the cloud.
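To make the storage question concrete, a quick back-of-the-envelope calculation shows how fast training data outgrows a typical on-prem array. The sample count, per-sample size, and replication factor below are illustrative assumptions, not figures from any particular project.

```python
# Back-of-the-envelope sizing for an ML training set.
samples = 10_000_000           # labeled examples (assumed)
bytes_per_sample = 100 * 1024  # ~100 KB each, e.g. small images (assumed)

raw_tb = samples * bytes_per_sample / 1024**4   # raw footprint in TiB
with_copies_tb = raw_tb * 3                     # replicas and experiment
                                                # snapshots (assumed 3x)
```

Roughly 1 TiB of raw data becomes about 3 TiB once you account for durability copies and experiment versions, and that's before the next round of data collection. That growth curve is why cloud object storage, sitting next to the cloud ML services, becomes the path of least resistance.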
See the commonality here? These two areas of technological innovation are now cloud-native endeavors. If your business is considering either AI or ML in its operations, you’ve already decided to, at a minimum, go to a hybrid cloud model. While we may debate whether cloud computing is appropriate for the workloads of today, there is no debate that cloud is the new home of innovation. Maybe instead of wondering how we will prepare our data centers for AI/ML, we should instead wonder how we'll prepare ourselves.