When Should You NOT Virtualize That System?
November 11, 2015
Systems
Virtualization is an amazing and valuable tool for so many reasons. I’ve been a proponent of the concept since GSX Server and the early Workstation products. In the early days, I used it to keep a fresh image loaded on a workstation for a development crew who required a pristine testing environment. When their code inevitably blew up the image, all we did was recreate the test machine’s OS from the clean image already stored on that machine.
Things have only gotten better, with increases in disk size, processor capacity, and the amount of RAM that can be allocated to an individual machine.
But are there circumstances in which a machine should NOT be virtualized? Perish the thought!
I can envision only a few cases.
For example, one might say that a machine that requires all the resources a host has to offer shouldn’t be virtualized. I still say that even in this case, a VM is preferable to a physical machine. Backup and recovery are easier, and uptime can be far better, in that DRS allows the movement of the virtual machine off the host and onto another one for hardware maintenance, and so on. However, licensing may make this unacceptable; when you have an ELA in place and can virtualize as much as you want, this actually does become a great solution.
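To make that trade-off concrete, here is a minimal sketch of the sizing math in Python. All of the host and VM figures are hypothetical, as is the assumed hypervisor overhead; the point is simply that a monster VM only preserves the uptime benefit if at least one other host in the cluster can absorb it during maintenance.

```python
# Sizing check for a "monster VM": does it fit on a host, and can at
# least one other host in the cluster accept it during maintenance?
# All figures below are hypothetical examples, not recommendations.

HYPERVISOR_OVERHEAD = 0.10  # assume ~10% of host resources reserved for the hypervisor

def fits(host, vm, overhead=HYPERVISOR_OVERHEAD):
    """Return True if the VM fits on the host after hypervisor overhead."""
    return all(vm[k] <= host[k] * (1 - overhead) for k in ("cpu_ghz", "ram_gb"))

cluster = [
    {"name": "esx01", "cpu_ghz": 48.0, "ram_gb": 512},
    {"name": "esx02", "cpu_ghz": 48.0, "ram_gb": 512},
]
monster_vm = {"cpu_ghz": 40.0, "ram_gb": 448}

homes = [h["name"] for h in cluster if fits(h, monster_vm)]
print(f"Hosts that can run the VM: {homes}")
# With fewer than two candidate hosts, there is nowhere to move the VM
# for hardware maintenance, which undercuts the uptime argument above.
print("Safe to evacuate for maintenance:", len(homes) >= 2)
```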
Maybe, in another case, the application hosted on that server is not approved by the vendor. In my experience the OS is the key, and while the app may not have the blessing of its creator, testing often makes that a non-issue. However, there may be circumstances in which the app is tied to a physical piece of hardware, and virtualizing it keeps it from functioning. I would call this poor application development, but these things are often hard to get around. A similar case, seen in many older apps, is when the server requires a hardware dongle or serialized device connected to a physical port. These create challenges, though they often can be overcome with the assistance of the app’s creator.
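If you do attempt it, the hypervisor’s USB passthrough feature is usually the first thing to test. Here’s a minimal sketch, using the pyusb library, of verifying from inside the guest that the dongle is actually visible; the vendor and product IDs below are placeholders you’d replace with the dongle’s real ones.

```python
# Sanity check inside the guest after configuring USB passthrough:
# can the VM actually see the license dongle? Requires pyusb and a
# libusb backend; the vendor/product IDs below are placeholders.
import sys

import usb.core  # pip install pyusb

DONGLE_VENDOR_ID = 0x1234   # hypothetical vendor ID for the license dongle
DONGLE_PRODUCT_ID = 0x5678  # hypothetical product ID

dongle = usb.core.find(idVendor=DONGLE_VENDOR_ID, idProduct=DONGLE_PRODUCT_ID)
if dongle is None:
    sys.exit("Dongle not visible to the guest; check USB passthrough config.")
print(f"Dongle found on bus {dongle.bus}, address {dongle.address}")
```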
I would also posit that a reliance on tight time synchronization may pose an issue. An example is a RADIUS or RSA SecurID server, in which the connecting device must be in sync with the authentication server as part of remote access. Even assuming you’ve configured all hosts to connect to an authoritative NTP source, and that the connection to it is both consistent and redundant, there still exists some small likelihood of time drift. Most importantly, one must be aware of this issue and ensure all efforts to resolve it have been made before virtualizing that server.
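A simple pre-flight (and post-migration) check is to measure the guest’s offset against your authoritative source. Here’s a minimal sketch using the ntplib library; the server name and drift tolerance are example values, not recommendations.

```python
# Drift check before (and after) virtualizing a time-sensitive server:
# compare the local clock against an authoritative NTP source.
import ntplib  # pip install ntplib

NTP_SERVER = "pool.ntp.org"  # substitute your authoritative source
MAX_DRIFT_SECONDS = 1.0      # hypothetical tolerance; token-based auth
                             # typically tolerates far less drift than
                             # generic workloads

response = ntplib.NTPClient().request(NTP_SERVER, version=3)
print(f"Local clock offset from {NTP_SERVER}: {response.offset:+.3f}s")
if abs(response.offset) > MAX_DRIFT_SECONDS:
    print("Drift exceeds tolerance; investigate before trusting time-synced auth.")
```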
And, finally, operating system compatibility should be considered before making a move toward virtualization. I recently (and remember, this is the latter half of 2015) had a customer ask me to virtualize their OpenVMS server. OpenVMS is not an x86 operating system. Contrast that with OpenSolaris, which is a port of the original RISC-based OS to x86 and therefore can be virtualized; OpenVMS remains a proprietary, hardware-reliant OS, and thus cannot simply be virtualized onto x86 architectures. I am aware that there is a way to virtualize it, but the technology involved on the hypervisor side is far from standard.
Generally, any x86-based operating system or application is fair game. While it’s unlikely that we’ll achieve 100% virtualization nirvana anytime soon, there are real benefits to ensuring that an application resides within a virtual environment.