In part one of this series, I tried to apply context to the virtualization market—its history, how it’s changed the way we deploy infrastructure, and how it’s developed into the de facto standard for infrastructure deployment.
Over the last 20 years, virtualization has had a very positive impact in enabling us to deliver technology in a smarter, quicker way, bringing more flexibility to our infrastructure and giving us the ability to meet the demands placed upon it much more effectively.
But virtualization, or maybe more specifically, server virtualization, is now a very traditional technology designed to operate within our data centers. While it’s been hugely positive, it’s created its own set of problems within our IT infrastructures.
Perhaps the most common problem we witness is server sprawl. The original success of server virtualization came from allowing IT teams to reduce the waste caused by over-specified, underused, and expensive servers filling racks in their data centers.
This quickly evolved as we recognized how virtualization allowed us to deploy new servers much more quickly and efficiently. Individual servers for specific applications no longer needed new hardware to be bought, delivered, racked, and stacked. This made it an easy decision to build them for every application. However, this simplicity has created an issue similar to the one we set out to solve in the first place. The ease of creating virtual machines meant that instead of dealing with tens of physical servers, we had hundreds of virtual ones.
The impact of virtual server sprawl introduced a wide range of challenges, from the simple practicality of managing so many servers, to securing them, networking them, backing them up, building disaster recovery plans, and even understanding why some of those servers exist at all. An infrastructure at such a scale also becomes cumbersome, reducing the flexibility and agility that made virtualization such a powerful and attractive technology shift in the first place.
With size comes complexity. Multiple servers, large storage arrays, and complex networking design make implementation more difficult, as of course does the management and protection of more complex environments.
The complexity highlighted in the physical infrastructure environment is also mirrored in the application infrastructure it supports. Increasingly, enterprises struggle to fully understand the complexity of their application stack: dependencies are unclear, and changes to one part of the infrastructure can have unknown impacts across the rest. Complex environments carry an increased risk of failures, security breaches, and higher running costs, and they are slower and more difficult to change, innovate within, and adapt to the demands placed upon them.
The Evolution of Virtualization
With all of this said, the innovation that gave us virtualization technology in the first place hasn’t stopped. The challenges created by growing virtual environments have been recognized. Significant advances in virtualization management now allow us better control across the entire virtual stack, and innovations like hyperconverged infrastructure (HCI) are making increasingly integrated, software-driven hardware, networking, and storage elements much more readily available to our enterprises.
In the remaining parts of this series, we are going to look at the evolution of virtualization technologies and how continual innovation has moved virtualization far beyond the traditional world of server virtualization. It has become more software-driven, with increased integration with the cloud, improved management, better delivery at scale, and the adoption of innovative deployment methods. All of this ensures virtualization continues to deliver the flexibility, agility, innovation, and speed of delivery that have made it such a core component of today’s enterprise IT environment.