Networks

How to Hyper-converge Effectively

Hyper-converged offerings are gaining popularity, and more vendors are entering the arena. Because everything is software-defined, the technology cannot be separated out the way it can in a converged offering, and that creates a new challenge for monitoring. In a converged offering, you still have separate storage, compute, network, and hypervisor layers. Hyper-convergence brings all of that together in software and can add capabilities such as backups, disaster recovery, and performance assurance. So how does an organization monitor a truly software-defined data center offering? With such tightly integrated components, can a traditional monitoring solution work? Yes and no.

Traditional SNMP-based application monitoring can work in this space, but it does not give you a full perspective of your environment on its own: you have to monitor the hardware, the virtual environment, the storage, and the application itself. Just as hyper-convergence breaks down infrastructure silos, monitoring solutions are breaking down theirs. Much like a hyper-converged offering that scales linearly, monitoring solutions offer add-on features to drill down into what is happening in a customer's environment. The management information base (MIB) from a hyper-converged vendor may already be integrated into your favorite monitoring solution, and that kind of integration is imperative for a true look inside. Most offerings bundle their own monitoring, but at its core that is still application monitoring plus infrastructure monitoring, and a dual approach is necessary to drill down to the root cause. Infrastructure usually gets the brunt of the blame, but now a sysadmin can point out that a bad SQL query, not the storage, is causing the latency.
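As a rough illustration of that dual approach (this is a hypothetical sketch, not any vendor's actual tooling), correlating application latency against metrics from each layer can show which one the slowness actually tracks. The sample values below are invented for the example:

```python
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length metric series.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-minute latency samples (ms) pulled from app,
# storage, and database monitors over the same window.
app_latency     = [12, 14, 95, 110, 13, 12, 105, 15]
storage_latency = [ 2,  2,  3,   2,  2,  3,   2,  2]
sql_duration    = [ 5,  6, 88, 102,  5,  5,  97,  7]

# App slowness tracks SQL duration, not storage latency, so the
# query -- not the infrastructure -- is the likely culprit.
print(round(pearson(app_latency, sql_duration), 2))     # near 1.0
print(round(pearson(app_latency, storage_latency), 2))  # near 0
```

In practice a monitoring platform does this correlation across far more metrics and with smarter statistics, but the principle is the same: the blame goes where the signal is.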

Hyper-convergence simplifies the infrastructure piece and can bring much-needed focus to application monitoring. Most issues come down to slowness, not a complete system failure. Baselining applications, real-time analytics, end-to-end transaction tracing, and code visibility all lead to faster resolution of application issues. Most environments have more than one solution, and different teams often have their own monitoring preferences. While a single pane of glass is always desired, there are still major differences in the way infrastructure and application monitoring tools operate today, so a best-of-breed approach is still common. The focus at the end of the day is to prevent failures and to shorten them as much as possible. Hyper-convergence reduces the complexity and the number of vendors you have to go after, but you may still need multiple tools.


Amy Manley is a Systems Engineer Specialist for the University of Chicago Medical Center with over a decade of experience in IT, crossing platforms such as networking, data center, telephony, web design, virtualization, and automation. As she came up through the ranks, she was on a team that won the SIM/AITP Most Effective IT Team award two years in a row. For the past four years, her focal point has been virtualization and automation. Amy is a two-time vExpert and has presented at Chicago and Wisconsin VMUGs. She has also been a Virtualization Field Day 4 and 5 delegate, and has been featured on VMware's PowerCLI user spotlight. She enjoys bridging the gap between business needs and IT's capabilities to enhance a company's chance for success.