I’ve covered a lot in this series of posts around artificial intelligence (machine learning): from the beginning (Not Another AI Post…), to why I love it (My Affection for AI), to what I hate about it (My Animosity Toward Artificial Intelligence), to what scares me about it (AI-nxiety), and even why we need to decide what words to use when talking about it (Words Matter). One common theme: many of the discussions about artificial intelligence and machine learning aren’t about the technology directly. In my experience, most conversations I’ve had about AI and ML center on the politics, the marketing, the high-level capabilities, and the immediate problems they’ll solve. When it comes to the actual technology and how to implement, feed, and care for it properly, things get complicated. I’m not a data scientist or a software developer, so there are a lot of concepts I can’t--or don’t care to--wrap my head around. Most people don’t care about the methods, just the results, and I’m no different.
I covered a few scenarios along the way that offer some huge benefits, like large-scale historical analysis that correlates events across disparate system logs, as well as failure prediction for infrastructure systems--but those are things we can get now (with enough time and money). What does the future hold for artificial intelligence and machine learning in the enterprise network and systems space? It’s a massive field that’s growing and changing all the time with newfound capabilities, so predicting what’s coming is extremely tough. But I can tell you what I’d like to see (and maybe give some ideas to a vendor or two stealthily working on a new product).
First, as I covered in my post "AI-nxiety," I want to be able to inform an AI system of my office politics. Whether it’s as simple as supplying an organizational chart or as complicated as mapping MAC addresses and applications to varying roles, this is something that’s definitely needed before I’ll begin to let HAL 9000 take the wheel. By creating a system that assigns each user or application a value based on importance, issues can be routed better and responded to faster when needed--or left for tomorrow when not--automatically. That’s the whole point of these systems: limiting and avoiding human intervention. I could go on about possible ways to configure or even teach such a system, but I’m staying high-level here.
Second, I want an AI system that sits 100% in my company’s data center. I deal with a lot of sensitive data, and giving a cloud provider unfettered access to all of it just doesn’t sit well with me (or the regulators that oversee my networks). There has been a lot of development in the custom silicon space as of late, and if these chips are even remotely affordable, I don’t see an issue with bringing this kind of workload in-house. The usual response is "Well, we won’t monitor that information," or "We only grab metadata." No dice. That data is important and needs to be monitored, and under some regulatory bodies, if a system touches the data, it’s in scope--and that includes gathering it.
Third (and last for now), I don’t want a laser-focused AI system that only sees applications or servers or the WAN or the WLAN. I want something that sees it all and takes everything into account. As a wireless architect, I’m no stranger to the difference between actual issues and issues that present themselves to clients deceptively. When a RADIUS or DHCP service fails, all the user knows is that the wireless is down. I need something that sees into the back-end, the front-end, and everything in between, constantly watching and correlating every single packet along the way. From the border firewalls to the cloud apps to the wireless clients, it’s all important and should be treated as such.
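Here’s a minimal Python sketch of that kind of cross-layer correlation--the event data, the source names, and the five-minute window are all assumptions made up for the example, not output from any real tool:

```python
# Hypothetical sketch: lining up a user-facing "wireless is down" symptom
# with back-end service events that happened around the same time.
from datetime import datetime, timedelta

# Fabricated event log spanning multiple layers of the stack.
events = [
    {"time": datetime(2024, 1, 5, 9, 0, 12), "source": "radius",
     "msg": "auth service unreachable"},
    {"time": datetime(2024, 1, 5, 9, 0, 40), "source": "wlan",
     "msg": "client 802.1X failures spiking"},
    {"time": datetime(2024, 1, 5, 13, 2, 0), "source": "dhcp",
     "msg": "scope exhausted"},
]

def correlate(symptom_time, window=timedelta(minutes=5)):
    """Return back-end events close in time to a user-facing symptom."""
    return [e for e in events
            if e["source"] != "wlan"
            and abs(e["time"] - symptom_time) <= window]

# The 802.1X failure spike lines up with the RADIUS outage, not with DHCP.
for e in correlate(datetime(2024, 1, 5, 9, 0, 40)):
    print(e["source"], "-", e["msg"])
```

A real system would obviously need far more than timestamp proximity--topology, dependency mapping, learned baselines--but even this toy version shows why a tool that only sees the WLAN could never answer "why is the wireless down?"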
Maybe this stuff is already out there. If so, those vendors need to step up their marketing. Maybe if I cobbled a few solutions together I could get my wishlist. If so, those companies should look at partnering up and showing off what they can do together.