Hybrid IT can create issues for IT operations. Here's an article from my colleague, Mav Turner, that suggests ways to keep things running smoothly.
Trying to find the root cause of IT problems can often feel like looking for a needle in a haystack. Worse, there are often multiple haystacks, and sometimes the haystack you need to search is located on a completely different farm.
Issues may exist on-premises, buried in complex application stacks, or they could exist far away, somewhere in the cloud. Without visibility into all aspects of the network, it can be very difficult to tell where the problem lies, and finding those proverbial needles can be nearly impossible.
Today, many federal network managers do not have the ability to continuously monitor both on- and off-site environments. Tools used to monitor what happens on-premises will not necessarily pick up all of the interactions throughout hybrid infrastructures.
A Hybrid IT Canvas
Today’s federal IT managers need a broad network view that offers visibility into all of their resources, including virtualization, storage, applications, servers, cloud and internet providers, users, and more. Managers must be able to see, correlate, and understand the data being collected from all of these resources and share it with their colleagues.
Network managers must also deploy methods that allow them to compare data types side by side to more easily identify the cause of potential issues. Timelines can be laid on top of this information to further identify the cause of slowdowns or outages. For instance, a manager who is alerted to a non-responding application at 11:15 a.m. can review the disparate data streams and look for warning signs in those streams around the time that the issue first occurred. Managers can share these dashboards with their teams to get everyone on the same page and verify that the problems are resolved quickly.
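As a rough illustration of that side-by-side review (not tied to any particular monitoring product), the sketch below gathers the samples from a few hypothetical data streams that fall in the window just before an alert, so they can be examined together:

```python
# A minimal sketch with made-up stream names and readings: given several metric
# streams, pull the samples recorded shortly before an alert time so the
# disparate data can be reviewed side by side.
from datetime import datetime, timedelta

# Hypothetical data: (timestamp, value) samples per stream.
streams = {
    "app_response_ms": [(datetime(2023, 5, 1, 11, 0), 120), (datetime(2023, 5, 1, 11, 10), 950)],
    "db_cpu_percent":  [(datetime(2023, 5, 1, 11, 5), 45),  (datetime(2023, 5, 1, 11, 12), 97)],
    "wan_latency_ms":  [(datetime(2023, 5, 1, 11, 8), 30),  (datetime(2023, 5, 1, 11, 14), 31)],
}

def samples_before(alert_time, window_minutes=30):
    """Return, per stream, the samples that fall in the window before the alert."""
    start = alert_time - timedelta(minutes=window_minutes)
    return {
        name: [(ts, value) for ts, value in points if start <= ts <= alert_time]
        for name, points in streams.items()
    }

alert = datetime(2023, 5, 1, 11, 15)  # application reported non-responsive at 11:15 a.m.
for name, points in samples_before(alert).items():
    print(name, points)
```

Lining the streams up against the same window makes it easy to spot, for example, that database CPU spiked just before response times degraded while WAN latency stayed flat.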
Dependency mapping can be critical in complex environments where one application depends on another. Unlike in traditional IT environments, dependencies in the cloud are highly dynamic. Databases can move around, and containers can pop up and disappear. Being able to quickly and automatically identify dependencies and the impact that events can have on connected resources, whether on-premises or hosted, can save precious problem-solving time.
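To make the dependency idea concrete, here is a minimal sketch, using hypothetical service names, of walking an inverted dependency map to list everything affected when one resource fails:

```python
# A minimal sketch: given a map of "service -> services it depends on",
# invert the edges and walk them breadth-first to find everything impacted
# by a failed resource. Service names are hypothetical.
from collections import defaultdict, deque

depends_on = {
    "web_portal":   ["auth_service", "orders_api"],
    "orders_api":   ["orders_db"],
    "auth_service": ["user_db"],
}

# Invert the edges so we can ask "who is affected if X fails?"
dependents = defaultdict(list)
for service, deps in depends_on.items():
    for dep in deps:
        dependents[dep].append(service)

def impacted_by(failed):
    """Breadth-first walk over the dependents of the failed resource."""
    seen, queue = set(), deque([failed])
    while queue:
        current = queue.popleft()
        for svc in dependents.get(current, []):
            if svc not in seen:
                seen.add(svc)
                queue.append(svc)
    return seen

print(impacted_by("orders_db"))  # {'orders_api', 'web_portal'}
```

In a real hybrid environment the map itself would be discovered automatically and refreshed continuously, since containers and databases rarely stay put.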
A Window into the Future
Reacting to an incident is usually more time- and resource-intensive than preventing the problem in the first place. It’s far better to use predictive analytics to avoid the issues altogether. By collecting and analyzing all of the aforementioned network and systems data, federal IT managers can better predict when capacity problems or failures may happen and take steps to mitigate issues before they occur. Using trend analysis, anomaly detection, and other techniques, managers can be alerted before an event occurs, receive insight into its potential impact, and get advice on how best to react.
For example, at some point in the past, an agency may have experienced an issue with CPU and memory being oversubscribed on a set of virtual machines. If the events that led up to that issue recur, the manager can receive recommendations on how to address the problem before it becomes a real concern. Those recommendations could include relieving memory problems or high CPU usage by moving a VM from one host to another, allowing IT managers to optimize workloads and avert problems.
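As a simple illustration of the trend-based idea (not any vendor's actual algorithm), the sketch below fits a linear trend to hypothetical memory readings for one host and estimates how soon usage will cross a threshold, at which point moving a VM off the host would be the suggested remedy:

```python
# A minimal sketch: fit a least-squares linear trend to recent memory-usage
# samples for a hypothetical host and project when usage will cross a
# threshold. Readings and threshold are made up for illustration.
def hours_until_threshold(samples, threshold=90.0):
    """samples: list of (hour_offset, percent_used).
    Returns projected hours past the most recent sample until the threshold
    is crossed, or None if the trend is flat or decreasing."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope_num = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    slope_den = sum((x - mean_x) ** 2 for x, _ in samples)
    slope = slope_num / slope_den
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    crossing_x = (threshold - intercept) / slope
    return crossing_x - samples[-1][0]

# Hypothetical memory readings over the last five hours for one host.
readings = [(0, 62.0), (1, 68.0), (2, 73.0), (3, 79.0), (4, 84.0)]
eta = hours_until_threshold(readings)
if eta is not None and eta < 8:
    print(f"Memory projected to pass 90% in ~{eta:.1f} hours; "
          "consider moving a VM off this host.")
```

A production system would of course combine many more signals and smarter models, but the principle is the same: spot the trajectory early enough that rebalancing is routine rather than an emergency.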
A Network that Runs Smoothly
One of the primary jobs of any federal IT manager is to keep their network running smoothly so the user experience does not degrade. Sometimes that involves sorting through increasingly complex hybrid IT environments to find that one little needle. Managers must discover and implement new ways to gain complete network and system visibility and continuously monitor all of their resources.
Find the full article on Government Computer News.