Speaker 1: You ever watch a movie or TV show where someone is using a computer to hack into the mainframe and think to yourself, “That’s not how that works. That’s not how any of this works.” Movies and Mainframes, the fun podcast that explores the representation of technology in media. Join hosts Tom and Andy as they review your favorite movies and TV shows and run a query if the tech is done right. Download Movies and Mainframes wherever you get your podcasts.
Sean Sebring: Hello and welcome to another episode of SolarWinds TechPod. In this episode, we look to the future, specifically the future of AIOps. We’ll try to gain an understanding of AI’s role in technology today, where it’s heading, and maybe even some of the ethical considerations when designing and implementing AI. I’m your host, Sean Sebring, joined by fellow host Ashley Adams. Over to you, Ashley.
Ashley Adams: Hello. We have a SWUG plug. Are you ready for an amazing opportunity to network, learn, and share with other SolarWinds users? If so, you do not want to miss the SolarWinds User Group in Charlotte, North Carolina on August 8th and 9th. Register today and join us in Charlotte for a memorable event.
Sean Sebring: Thank you, Ashley. For this episode, we’re joined by an expert with a welcome obsession with automation, tools, and platforms, most specifically in the cloud and infrastructure space. With a very impressive resume, I want to welcome Aswin Kumar and give him a chance to add and append to my introduction. Welcome, Aswin. We’re thrilled to have you with us. Please take a few minutes, introduce yourself to the audience.
Aswin Kumar: Yeah, thanks Sean, thanks Ashley for this opportunity. Great to be here. This is Aswin, and I lead the AIOps and Observability practice at Infosys Limited with a global team. I have around 23-plus years of experience across all the tools and automation in service management, AIOps, SRE, DevOps, and the process consulting side of it. My experience spans multiple organizations, whether in financial services, insurance, manufacturing, the services construct, or even digital consulting practices. I’m happy to be part of this dialogue today.
Sean Sebring: Yeah, I know. I’m super excited for it and yeah, impressive resume. I was just stalking you a little bit on LinkedIn and I was like, man, I don’t know where to start. I’ll get a little bit in here and I’ll let Aswin introduce himself for the most part. Yeah, I think you’re incredibly qualified for this. This is exciting. I love talking about the future and I love learning about things, and AI is a very fascinating subject. There’s a lot that circles AI, cool movies as well, not just technology, which is what we mostly specialize in. I think what would be good to get us started is having you provide us an overview of what AIOps is and its significant role in modern IT operations.
Aswin Kumar: That’s a very good question to start with. AIOps, in our view and in my experience, is a transformation framework. It’s a framework enabled by automation, by people change, and by the global revolution in data analytics, artificial intelligence, and now we can even say generative AI. It is a combination of processes, people, and mindsets, and I’ll come back in a bit on why I feel so.
Sean Sebring: I like that you said a transformation framework. I didn’t have a specific expectation for how you would introduce that, but that’s a really cool way to look at what AIOps is, and a transformation framework is really neat to me. Do you think it’s always a transformation, or is it a transformation framework right now because it’s so new and emerging? Do you think maybe there will be a point where it’s less transformative, more expected, normalized, as it sometimes is nowadays, right? Expand on transformation framework for me if you can.
Aswin Kumar: Yeah, so your point is right. Since this space is evolving very fast, the hype cycles are also coming down, and that’s because of a lot of factors. One factor is that there’s a lot of data available now, data across IT operations and across the various other applications and platforms. There is a revolution of data happening, and there has also been good innovation in the use of AI, which is making it more useful and convenient to work with this data. This data has been around, but were we able to use it? The answer is no, because there is a limit to human intelligence and how much data a person can travel through at a time and still make decisions.
The complex part is that there will be multiple teams: some will be SRE teams, some DevOps teams, some operations, some cloud engineers, and maybe application developers. It’s humanly not possible to make them all collaborate, to make them talk on every issue. The beauty and the transformative aspect of AIOps is that it has to actually help in predicting rather than only reacting. On one side there is customer experience, there’s customer success. On the other side there are applications, third-party systems, and the cloud environment. Both have to work together in an ecosystem manner. Then what happens is, a lot of telemetry data is emitted across the value streams, whether it is the operations, the processes, or the applications delivering feature sets. Now this data gets ingested into a typical AIOps platform, and that is where the birth of AIOps happens.
Sean Sebring: That’s really neat. Our last two episodes were actually about databases and how necessary they are... were you going to say the same thing, Ashley?
Ashley Adams: Go for it.
Sean Sebring: It’s just fascinating to me to think about how we’ve evolved as, I guess, just service providers in the global economy. Everything is complementing one another, and it makes sense what the next step is. As data becomes more available and we get smarter at how we contain and transform that data, it’s now being leveraged by AI, and AI is able to do more with it than we can. It’s just really cool to see these sequences of events. What’s more important to us now is making smart data, right? Collecting and making smart data, and then using tools like AIOps and this technology to leverage that smart data to make it better, make it more efficient, take it places we couldn’t, like you said, things beyond human capacity. What were you going to add, Ashley?
Ashley Adams: Well, I was going to ask: I know telemetry is a word that is used more frequently now, and for some of our listeners who may not be as familiar, could you explain or define it for us a little bit?
Aswin Kumar: Telemetry, what we mean here, is that there are logs coming in from applications, there are alerts coming in from infrastructure devices and from cloud environments, and there are traces. The telemetry is across all of this. It gets logically correlated, because when you see, suppose, an alert coming from a CPU or an alert coming from an application, it gives a point-in-time picture of that entity, that configuration item. It becomes intelligent telemetry when all of this starts correlating a little bit. What actually happens on the ground is that there is noise, alerts that are not correct, not urgent I mean. The first purpose of AIOps is to reduce that noise, so that what is real surfaces and is shown to the right personas, and those personas can be SRE engineers, cloud engineers, application engineers, and more.
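The noise-reduction step Aswin describes can be sketched in a few lines of Python: collapse repeated alerts for the same entity and metric inside a time window into one correlated event. This is a toy illustration, not any particular AIOps platform’s API; the alert shape, field names, and window size are all invented for the example.

```python
from collections import defaultdict

def reduce_noise(alerts, window_seconds=300):
    """Collapse repeated alerts for the same (entity, metric) within a
    time window into single correlated events with a repeat count."""
    groups = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["ts"]):
        bucket = groups[(a["entity"], a["metric"])]
        # Fold into the previous event if it started within the window
        if bucket and a["ts"] - bucket[-1]["first_ts"] <= window_seconds:
            bucket[-1]["count"] += 1
            bucket[-1]["last_ts"] = a["ts"]
        else:
            bucket.append({"entity": a["entity"], "metric": a["metric"],
                           "first_ts": a["ts"], "last_ts": a["ts"],
                           "count": 1, "severity": a["severity"]})
    return [e for events in groups.values() for e in events]

raw = [
    {"entity": "db-01",  "metric": "cpu",     "ts": 0,   "severity": "warn"},
    {"entity": "db-01",  "metric": "cpu",     "ts": 60,  "severity": "warn"},
    {"entity": "db-01",  "metric": "cpu",     "ts": 120, "severity": "crit"},
    {"entity": "web-01", "metric": "latency", "ts": 130, "severity": "warn"},
]
events = reduce_noise(raw)
# Four raw alerts collapse into two correlated events
```

A real platform would correlate across entities too (the CPU alert and the application latency alert above are likely related), but even this simple de-duplication shows how alert fatigue gets cut before anything reaches an engineer.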
Ashley Adams: That’s perfect. We always talk about alert fatigue, the poor engineers who suffer from too many alerts and have to figure out which ones are the most important.
Sean Sebring: This is actually a perfect transition. My next question was going to be about challenges, and you just mentioned something AIOps is helping to address, which is that noise, the alert fatigue. I want to know a little bit more about what it’s trying to address and how it’s different from traditional IT operations, right? How is it being folded into organizations, not just as a tool or as a practice, but folded into operations as opposed to traditional IT operations management?
Aswin Kumar: See, like I mentioned, AIOps is like a marathon. It’s a journey where the whole IT operations organization gets a new life. Now why do we say this? Generally, IT operations teams are mostly reactive, driven by what they see, today’s problems and outages, whatever has become highly critical. The move is from this reactive state to what I call hyperresponsive IT operations. Why do I say hyperresponsive? Because we have the data, and when we ingest it into the AIOps platforms and their diligent routines, it starts converting into correlated surrounding events and giving insights. Once we start getting those insights, say there is an issue in a database cluster, and there is a web application working through this database, and both are in different environments, one may be in AWS, one may be in Azure, and both admin engineers are working separately, they’re not able to see that.
This is one simple example where an insight can say: while you saw a database becoming slow, or the load was higher and it became nonresponsive, or was about to become nonresponsive, at the same time there is slowness in the application, and that engineer or admin will not know whether there is a reason behind it. The beauty of the way AIOps works is that it will give predictive alarms. It will start spotting anomalies much in advance, so that there can actually be a remediation plan. That remediation can be partial self-healing or full self-healing, and it can maybe scale the database or route to a different database. You can see it’s all happening, and it should happen, in a sort of autonomous way. How much autonomy is something we will speak about down the line; that is where the responsible use of AI comes in a little bit.
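The predictive-alarm idea Aswin describes can be sketched with something as simple as a linear trend projection over recent samples. Real AIOps platforms use far richer models; every name, threshold, and number below is invented purely for illustration.

```python
def predict_breach(samples, threshold, horizon):
    """Fit a least-squares line to (timestamp, value) samples and return
    True if the metric is projected to cross `threshold` within
    `horizon` seconds of the last sample. A toy stand-in for the
    predictive models an AIOps platform would actually use."""
    n = len(samples)
    if n < 2:
        return False
    xs = [t for t, _ in samples]
    ys = [v for _, v in samples]
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    if denom == 0:
        return False
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / denom
    if slope <= 0:
        return False  # flat or improving trend: no predicted breach
    intercept = mean_y - slope * mean_x
    projected = slope * (xs[-1] + horizon) + intercept
    return projected >= threshold

# Database load climbing ~5 points per minute: breach of 90 within 10 min?
load = [(0, 50), (60, 55), (120, 60), (180, 65)]
print(predict_breach(load, threshold=90, horizon=600))  # True
```

A predictive alarm like this is what turns “the database went down at 2 AM” into “the database will likely saturate in ten minutes, reroute or scale now,” which is the reactive-to-hyperresponsive shift the conversation is describing.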
Sean Sebring: When I heard you say hyperresponsive, I was curious at first what you meant. Let me try and make sure I understand, ’cause I really like this. One way to put it is moving from being reactive to proactive. Something else I gathered from hyperresponsive is: I don’t want my day-to-day to be only fixing; instead I’d rather enhance things before they need to be fixed. Is that a fair statement?
Aswin Kumar: Yeah, I think you added a good aspect to it. What I want to add is that there is also feedback going into the development teams. It is not only about preventing the issue, which always happens; the real value also comes when that feedback goes to the development teams, those who are the creators of those applications or those clusters, so they can permanently resolve it. That’s also a true purpose, so that we don’t end up preventing the same issue every time. That’s where I can bring in something from service management: we can call it effective problem management. By the way, even after AIOps, there will always be relevance for ITSM and service management. It’s not going anywhere.
Sean Sebring: Right. I love that. And you said one of my favorite F words, feedback. Love feedback. Feedback, and using that feedback for improvement, is something I’ll never stop saying and never give up on. I’m here for ITSM forever. That also made me happy. Thank you, Aswin.
Ashley Adams: We talk to a lot of IT leaders and IT managers in their respective fields, whether it be database or security. How do you see IT teams managing AI today? What roles are responsible for implementing AI, and how do you see that evolving? Do you think it’s still in nascent stages in terms of having somebody be the director of AI within an organization, or are we headed in that direction?
Aswin Kumar: That’s an excellent question, and I’m sure many will benefit from this. My personal take is that AIOps programs are not succeeding the way we want them to. There are a couple of reasons, and I’ll give you my view and try to put it across. One is that the composition of the teams is not appropriate. It is taken as a tools transformation program, but AIOps implementation is not a tools implementation exercise. That points the way forward: what should the teaming around it be? The teaming has to be the kind you said: the AI leads have to come together. There have to be org change champions, because there are a lot of process changes to happen. The data governance teams have to come together.
There has to be selection of the right observability tooling, because there is a lot of tool sprawl; any large enterprise has more than seven observability tools. There has to be learning and culture management, because the associates, the engineers, the champions have to get used to using AI and being assisted by AI. There has to be a mindset shift on that, and that will make them responsible, which in turn can make for a responsible AI function. Both are important here.
One thing that is also missing as of now, which keeps the total value from coming out, is the value engineering concept. What I mean here is that there has to be a shared understanding of the investment and the value across the organization, not only by the tools program manager; it should be understood by the DevOps owner, by the CIO, even by the marketing officer, because in the end AIOps makes ops look good, which makes the application run very well, and that results in huge customer experience and success. It’s a very commonsensical relationship. This value engineering should have cross-disciplinary teams, where it can bring even the statisticians, the tools engineers, and the process people together, so they can really understand what value points they want to uncover.
Every organization is different. There is no cookie-cutter approach where what works for an organization of one size will work for another, even of the same size. Not at all. These are the things where we feel that, along with the mindset improvement that we have to work alongside AI, can bring the real value. But as of now, I repeat, it’s taken as a mere tools implementation exercise. Most AIOps programs are not yielding the right results, but there is huge, huge potential for success if we do it right.
Ashley Adams: Yeah, I think the idea of connecting AIOps across an internal organization, we gather lots of intelligence from different internal products, whether it be marketing insights or in-product insights. If a team is working collaboratively to bring all that information together, that can bring better success for sales and everybody likes that.
Aswin Kumar: That’s a very good point, Ashley. It gives me an idea to mention one analogy. See, in our lives we keep working on a technology lifecycle. We are on the technology side, but there’s a whole crowd, a whole leadership, working on ensuring that the customer’s journey is good, the way customers consume the systems. It is well thought out, it’s mature, and not even for a single second should the customer feel bored or that the system is less than optimal. What I feel is, if I do AIOps well, it has the potential to bring those systems and hardware technology lifecycle insights together with the customer experience journey metrics.
I can tell you, there is no ready-made solution for this, but working together can really give a good correlation to the business KPIs: whether sales is healthy, whether a new market is being acquired, whether we are secure enough, whether there is a competitive threat, and similarly whether the business is succeeding at innovation. What I feel is, if we do AIOps well, then it creates a foundation where it can start talking with the customer journey, because you’ll have to mature both sides so that they can talk through systems. To make this humanly possible through multiple meetings across stakeholders is not possible. Let us embrace AI, let us embrace this systems way of thinking, and let IT operations get the right seat at the table as well.
Sean Sebring: I’m getting more and more trust in you, Aswin, as you talk about the customer journey and value engineering, things I hold close to my heart. Something you said I find very, very important, and I actually use it every day. My job is selling software; that’s on my resume right now. I’m a solutions architect, and people come to me and say, “Show me the product.” I say, “Okay, here it is,” but the product can’t fix what you’re not doing. The product can do things, but if you as an organization aren’t practicing, embracing, if you’re not using the feedback, if you’re not leveraging the data, if you aren’t doing it, then you’re not going to be successful with just a tool. I hear you loud and clear when you say a lot of people misunderstand this as just a tooling endeavor, when it’s actually something cultural you need to address, with potentially adding tools into the environment. Love that. Love that.
Aswin Kumar: AIOps is not meant or designed to tell you only the what. It is designed to help every stakeholder with the why part. Yes, we understand this problem is there, but a team alone is unable to express systematically why it is happening and what we think the problem may be. AIOps has a heavy bias towards that. At the same time, like you said, if we attack it like a tools problem, then a hundred percent we will fail, because the tool can only sit on top of our data and our process. It cannot be the other way around. From our experience, one of the basic issues we find is that users and stakeholders, due to multiple constraints, are not able to take ownership of the data around it and govern it properly. Like I said in the beginning, AIOps is fueled by the revolution in data analytics, and AI is coming to the rescue because the data has become humanly impossible to analyze and use for prevention and constructive purposes. AI is the rescue, actually; it is not going to kill the data.
Sean Sebring: I love that. A good next topic, or part of the topic rather, and you mentioned this very briefly while you were giving that explanation, is KPIs. It can help with that, make you look good; I think you even said it makes you look good. What are some measurements you would implement to know if your AI operations are successful? You said a couple of times earlier, “if we do it right”: what are some KPIs we can look at to show we’re doing AIOps well?
Aswin Kumar: That’s a very, very good question to ask, because the value has to be set up front, and how do we measure it? The number one KPI, the success category, is mean time to recovery. Are we seeing improvement in recovering faster and smarter, or are we spending the same time recovering from the same issues with the same people? If so, something is really wrong; it’s not working. The second is: is your customer experience improving, or is it getting worse and there is a satisfaction issue? That can come from feedback from customers. We also talk about a net promoter score, but it can even be fundamental, basic feedback. The third one is at a very high level.
When we do all this, then what is the net-net cost takeout? That touches the ethical part of AI also, because if we do all of this, there will also be a huge productivity improvement. Now, that productivity improvement has to be smartly tapped. What I mean is that it cannot mean ten engineers sitting free after AIOps is done. No, what we mean is that the same engineer will have partial bandwidth available to become smarter, to become proactive, to see what real self-healing and self-remediation can be done. That same persona has to invest time in those things, and also work with the application teams to make sure issues are prevented right at the development and test level.
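The mean-time-to-recovery KPI Aswin puts first is straightforward to compute from incident records, which is one reason it anchors most AIOps business cases. A minimal sketch, with invented incident fields and numbers:

```python
from statistics import mean

def mttr_minutes(incidents):
    """Mean time to recovery in minutes, from incident records with
    epoch-second `opened` and `resolved` timestamps."""
    return mean((i["resolved"] - i["opened"]) / 60 for i in incidents)

# Hypothetical incident durations before and after an AIOps rollout
before = [{"opened": 0, "resolved": 5400}, {"opened": 0, "resolved": 9000}]
after = [{"opened": 0, "resolved": 1800}, {"opened": 0, "resolved": 2400}]
print(mttr_minutes(before), mttr_minutes(after))  # 120.0 35.0
```

As the conversation notes, the raw number only tells part of the story: the same MTTR improvement is worth far more on a payment path than on a reporting module, so the time metric has to be read alongside risk and impact.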
Sean Sebring: Innovation I think is one, right? If my AI is helping with a percentage of the work I used to have to do, I now have time for innovating, for continuing to try to disrupt the market in my space, so to speak. The two metrics you specifically brought up, mean time to remediate and the net promoter score, the value perceived by the customer, feel like they’ve been pretty standard metrics in traditional IT operations. When we apply them with AIOps in mind, is it about the timeliness: we change something with AI, then see where we are today versus how it improved right after we added some AI components to our services? Is that how you would approach it?
Aswin Kumar: There are two dimensions we approach it from. One is definitely the time, quote unquote, but also the risk and the impact. There is always the potential that even one minute of disruption results in a lost customer or a lost transaction, and we cannot predict which transaction will be lost, whether somebody is trading at a $1 million level or a $100 level. It is subject to guesswork. We always have to weigh not only the time, but also the potential impact. By that I mean: are you losing time on a reporting module, or are you losing time on a mutual fund transaction or a typical financial bank transaction? Each has a different kind of risk and impact, and it can even become geopolitical. If your system has a high security risk, then it always has to be on the radar of the cybersecurity experts. You have to save the system, because a breach can jeopardize public data, result in loss of data and privacy, and cause massive damage to a given state or country.
Sean Sebring: AI doesn’t sleep, so if that outage happened at 3:00 AM and it takes my engineer a while to answer the phone and get to the office, AI was already awake. Yeah, there’s a lot of value there too. I appreciate that. Aswin, do you have any real-world examples of success where AI was able to come in and save the day?
Aswin Kumar: Yeah, that’s very interesting, and we can all relate to that. It is about the applicability of the AI. Take a simple change management example: an issue happened on one of the load balancers in a cluster in an AWS environment, and the engineers were not sure whether it happened in their environment or in another. This organization uses two environments, AWS and Azure. This is just for sample purposes; I don’t represent this organization, I’m just sharing an example. With the use of AI, with the use of predictive intelligence, what happened is it was able to correlate the alert: this system is raising an alert for slowness, and that may be due to some stuck transactions in the database, but that database is not hosted entirely in AWS. It’s also hosted out of Azure, with the load balancing set up in both environments.
That type of issue was found, and it took around five minutes, I think, to understand how the routing was happening and what we should do: route the traffic to another instance, and that resolved it. It’s a simple example where there is human involvement also, because it cannot be done a hundred percent without supervision. That’s why we say there has to be supervised machine learning: when we do these issue resolutions, we also apply feedback back to the machine learning. It has to evolve, it has to learn, and as of now we are not seeing any unsupervised learning happening here. It is mostly supervised, and of course the data matters there again. That is the stage we are at, but it will definitely evolve and innovate further.
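The supervised feedback loop Aswin describes, where engineers approve or reject suggested remediations and the system learns from those decisions, can be sketched very simply. This is a toy illustration with invented names; production systems would use real models, confidence thresholds, and audit trails rather than raw approval rates.

```python
class RemediationRecommender:
    """Toy supervised feedback loop: suggest the remediation with the
    best human-approval rate for an alert signature, and learn from
    every accept/reject decision an engineer records."""

    def __init__(self):
        # (signature, action) -> [times accepted, times suggested]
        self.stats = {}

    def record_feedback(self, signature, action, accepted):
        s = self.stats.setdefault((signature, action), [0, 0])
        s[0] += int(accepted)
        s[1] += 1

    def suggest(self, signature):
        rates = {action: s[0] / s[1]
                 for (sig, action), s in self.stats.items()
                 if sig == signature}
        if not rates:
            return None  # no history yet: hand off to a human
        return max(rates, key=rates.get)

r = RemediationRecommender()
r.record_feedback("db-slow", "reroute-traffic", accepted=True)
r.record_feedback("db-slow", "reroute-traffic", accepted=True)
r.record_feedback("db-slow", "restart-instance", accepted=False)
print(r.suggest("db-slow"))  # reroute-traffic
```

The point of the sketch is the shape of the loop, not the scoring: the human stays the decision maker, and every decision makes the next suggestion better, which is exactly the supervised posture the conversation argues for.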
Ashley Adams: That’s a fantastic example, and it kind of builds us into the next section, maybe pivoting to what a lot of people have been hearing about in the news and what’s topical. You mentioned supervised versus unsupervised learning, so can we talk about your thoughts on generative AI, OpenAI, ChatGPT, and the ability of the common user to start using AI for a number of things today?
Aswin Kumar: Okay, so you have now touched on a very relevant and very hot topic, and I’ll try to do it some justice with respect to AIOps. I personally feel this is good momentum, and of course it depends on how responsibly we use it. The AIOps you see today, I feel, will not be the same in the next 14 to 18 months. It’ll be something like generative AIOps. Why do I say so? Because of all these large language models, all these NLP capabilities; all the products are inheriting them, and partnerships are forming. You can see a typical partnership: ServiceNow is partnering with Nvidia. You can see this type of revolution happening; we heard about it a few weeks back. These are examples of where there will be a combination of innovation and of how this generative AIOps will be used.
I see a couple of purposes. One, it can give an easy understanding to the various stakeholders, because remember, in the beginning I said there has to be a shared understanding of the problem. Our database admin’s understanding of a database alert, which may be 50 characters, may not be the same as what a QA engineer understands, because the alert is designed by the given product vendor and provider. What generative AIOps will also bring is making alerts evenly readable, understandable even to non-technical people. If a marketing owner of the application is reading it, they might as well understand it in the language they want. That type of thing is going to happen, because all these alerts are standardized; it is only a matter of time before they get processed by language models. That’s one clear change I feel is coming very soon.
The second thing I see is that AIOps will move more towards the human portion of it. What I mean is, and we all understand the nudge concept, a nudge also has connotations: it can be a soft nudge, a nudge with data, or a nudge with a lot of data, so that the person acts. I feel generative AIOps will help in delivering the right nudges across the ecosystem, across the value chain, when an issue is about to happen but has not yet happened. If we give the right nudges in the right communication medium and collaborate across those channels very fast, it’ll naturally reduce the impact, because problems will be solved faster, the root cause will be understood faster, and there’ll be a problem management team who can work on it, which as of now they’re not able to do.
If you ask a process person, they may not understand a log given by a database server, a switch, a storage device, or similar. These are the purposes I see. Of course there’ll be a lot of innovation as we go, but generative AI has definitely happened as a good event for this transformation. I’m very, very positive about it.
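The persona-readable alert idea above amounts to framing the same raw alert differently for each audience before handing it to a language model. A minimal sketch of that framing step, with the model call itself deliberately out of scope; the personas, style strings, and the sample alert are all invented for illustration, not any vendor’s API:

```python
def alert_prompt(alert, persona):
    """Build a prompt asking a language model to restate a raw alert
    for a given persona. Only the per-audience framing is shown here;
    actually calling a model is out of scope for this sketch."""
    styles = {
        "dba": "precise technical detail, include metric names and thresholds",
        "sre": "focus on service impact, dependencies, and next actions",
        "marketing": "plain language, describe customer-facing impact only",
    }
    style = styles.get(persona, "plain language")
    return (f"Rewrite this alert for a {persona} audience "
            f"({style}):\n{alert}")

raw = "ORA-00020: maximum number of processes (500) exceeded on prod-db-3"
print(alert_prompt(raw, "marketing"))
```

Because the raw alerts are standardized, as Aswin notes, one such template layer can fan a single event out to every stakeholder in language each of them actually reads.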
Sean Sebring: Good to hear. Me too. It has replaced me as a point of advice at my house. My wife now asks ChatGPT for advice instead of me, and I don’t know if I feel replaced or relieved, but it’s definitely a very neat tool. I’ve leveraged it a handful of times. The ideas of what it can do are just mind-blowing and fascinating. One of the things you said early on, in the AIOps regard, not just about generative AI, is that it can do things the human mind cannot, right? It allows me to reach places I could never reach before. I think it’s super cool, and I’m glad to hear that someone who’s an expert in this space is positive about it as well. That brings up the next question: ethics. How do you ensure the ethical use of this, right? What steps do you take to mitigate the biases of AI or the potential risks associated with it? This can be specifically in the AIOps space if you want; it doesn’t have to be about generative AI.
Aswin Kumar: Oh yes. That’s a very good question, and I don’t think we have innovated around it yet. We are also thinking as we proceed; that’s my frank opinion. But the thoughts are: how can we make it more ethical? The first thing is the people mindset. When we adopt a technology, suppose today we adopt ChatGPT in the environment, we also have to explain to the engineers, explain properly, that this is the way of working. This is an assisted way of working; it is not a replacement way of working. Taking ownership across the organization, by a people leader or a business head, is very important.
The second portion: suppose we have now adopted some part of AI, using a product or doing our own innovation. While we are using it, we also have to see what the value stream across it is. Where are the human touchpoints and where are the AI touchpoints? That shared understanding of the value stream has not emerged as of now, but I’m sure work will happen around this, and then there will be a common understanding of the value stream: these are the AI portions, these are the non-AI portions. Everybody today is talking about the AI portion, but are we so clear about the non-AI portion? Because humans are going to stay. They’ll be the decision makers; there’ll be a person who gives feedback, who applies feedback to the machine learning algorithm, because it needs feedback. It cannot work automatically without feedback.
That part is also going to become clearer, which will help bring more responsible AI and more accurate usage of these algorithms. It is also required because AI can do damage too. If we don’t have the right prompts and a wrong instruction is given, the system will definitely go and trigger a transaction, and after the transaction is done, we may see some bank accounts ruined and things going haywire. Those things might happen. There has to be proper governance around it. Of course, that can be taken as an investment now, until it becomes mainstream. And we are also banking on the policies and processes from the governing bodies, because those are set at an industry-wide level. It’s not only the responsibility of one organization.
Ashley Adams: I was just going to ask, in this case, who really becomes responsible for the ethics of it? Is it the government? Is it corporations? Is it a governing body? Both? You kind of answered that, but I have a follow-up question, because this was explained to me one time in a way I thought was so interesting, and I’m curious about your thoughts. Basically, since the invention of technology, which some might argue was the caveman inventing the wheel, everything we’ve adopted since then, from cars to spaceships to the iPhones we hold in our hands every day, the human brain has invented and has an intellectual concept of exactly how it works. I’ve heard that when we talk about AI, it has the ability to progress much faster than the human brain can conceptualize or internalize, and that’s where some of those ethical risks come in. Would you agree with that statement?
Aswin Kumar: Oh yes, I fully agree with that. The speed of innovation in the AI lifecycle has been much, much faster, thanks to innovation from companies like Nvidia, where a lot of compute power is available, of course for a cost. At the same time, have we done enough to explain the new change, the new way of working, to the people who are going to use it? I don’t think we have. There might be commercial and other reasons for this, but yes, somewhere we will have to tie both together. There is the technology innovation and there is the people side of things. If both do not work together, there will be problems, there will be unexpected disruptions, and that can undercut the value of AIOps.
Sean Sebring: I like that when you were explaining the ethical standpoint and how we don't acknowledge it enough, you mentioned regular touchpoints. You said touchpoints, and I appreciate that. Since you said it's not as appreciated or considered as often as it should be, that leads me to ask: are there current best practice frameworks? For example, with ITSM, we think of ITIL as the best practice framework. Does something like that exist for AIOps today that you can reference?
Aswin Kumar: No, not that I'm aware of. I see various industry analysts talking about it in different ways, and of course everybody has a commercial interest there. Some will call it IT operations management, somebody may say observability, somebody may say AIOps. Everybody has their own way of thinking, and it might change every hundred miles, so to speak. Everybody will define it differently, and we are also working toward collating the best practices. It is not going to be possible for one organization; it has to be an industry effort. I'm sure even the ITSM bodies are seeing it, because in the end it has to roll up into the service management discipline as well, since it helps get IT managed smartly. That's one analogy I can give for who should own it, but as of now there is no taker who can say, "I want to collate the AIOps framework and best practices and propagate them globally." That owner has not emerged yet.
Sean Sebring: Yeah, well, ITSM is definitely one of the touchpoints for sure. A follow-up, then, since there's not really a best practice: with ITIL, I can see what new certifications are out there and train myself on where the industry standards are. How do you stay up to date with the latest advancements and trends in AIOps?
Aswin Kumar: Yes, definitely. Every practitioner should stay current on this, and I'll share some of the things we do and that I personally read. We follow a lot of the open-source academies, where we see what new models are coming up. Open source brings a lot of communities together, with less of a commercial nature, so knowledge gets shared easily. That is one.
There is another one, the DevOps Institute. The DevOps Institute is also taking SRE and AIOps seriously. Of course, their lens is how to excel in the DevOps discipline, which is more on the developer side as of now, though we would want it on the operations side too. There are analysts like Gartner and GigaOm working around it. We keep getting continuous feeds from them on how they are perceiving it, and it's pretty good as of now. But has the knowledge reached the mainstream? Is it well understood the way we understand the ITIL framework? The answer is no. We have to do more work, make it more available, make it more understood in a shared manner. We are not talking about the operations organization only. If AIOps is not well understood by the application and business teams, I repeat, it will not succeed. There has to be a common literature that is easily understood, more of the "for dummies" sort of thing. Yeah.
Sean Sebring: I like that community was a theme in that answer. With a lot of other best practice scenarios, there are individual bodies and parties with their own specific frameworks, agendas, and interests. That doesn't exist yet here, but a lot of what you said was about community. I like to see this as an evolution, as a practice: the community owning AIOps and AI in general, versus a single governing body deciding what the best practice is. That's kind of how AI is learning anyway. It's funny how synonymous that is with how it's being shaped: it's community-driven, open-source, shared, collaborative. We all help each other grow because it needs so much data. It makes sense that that's how it will continue.
Ashley Adams: I agree. I'm a big proponent of AIOps, of the future, of embracing it, being a community, globalization bringing us all together. These are all things that, at the end of the day, I think are the evolution of human nature, hopefully in a positive light.
Aswin Kumar: It's been an excellent interaction here, and I feel that more and more discussion should happen around this discipline, because it is going to place us all, globally, at a different level. The best part is that there are IT operations practitioners and product providers involved, and even SolarWinds is doing a lot in this space. I appreciate that.
In the whole hybrid working space, this is going to become a glue, because the same insight somebody is seeing miles away has to be understood by the creator of the application. This is the only common layer that can make things observable, and it will be very, very crucial for the stability and reliability of the system. It is fundamental, I'll say, but it definitely needs participation and support from the community, from the practitioners, and from the business heads, because they all have to take it as a strategic priority. This is one golden nugget that can serve them very well and place the organization well in front of their end users and customers. At some point, I see customers asking for it also, if that is what the pinnacle is.
Sean Sebring: Absolutely. All right, so we do have a closing segment. I’m not sure if we prepared you for this, but that makes it even more fun. We’re going to do some rapid-fire questions, so feel free to just blurt out an answer with a brief explanation if you want to. Otherwise, you can just respond. I’ll get us started. Rapid-fire questions. Aswin, would you rather travel to the past or future?
Aswin Kumar: Future.
Sean Sebring: Awesome. I had a feeling. Very relevant to this future of AIOps conversation today.
Ashley Adams: Aswin, what is your favorite tech invention?
Aswin Kumar: It's the program I wrote with my son recently. He's working on a mobile app, and I just helped him out during his summer days.
Sean Sebring: Oh, I love that.
Ashley Adams: That’s fantastic.
Sean Sebring: That’s so cool. That’s awesome. If it were an option, Aswin, would you live in a city in space?
Aswin Kumar: Not at all.
Sean Sebring: Okay. Yeah, I think you’re actually the first person to say no. Can you tell me why?
Aswin Kumar: It's about the human part of it. I would like to see humans as humans and be around nature as it is. I love the air and the heat.
Ashley Adams: Love feeling the sun on your face and the wind in your hair.
Sean Sebring: It’s very humbling for an AI specialist to say, “No. I love the human element of being part of this earth.”
Ashley Adams: What talent would you most like to have? If you could pick any talent that you don’t possess today, what would it be?
Aswin Kumar: A very difficult one. It would be if I could read very fast, because I want to read a lot, but the speed and the time I have limit me. Some skill or talent that would let me read enormous amounts, whatever I have shortlisted to read.
Sean Sebring: I just pictured in The Matrix, if you’ve seen those films, where they plug in the back of your head and just upload, man. Yeah, I would. I’m a slow reader by nature, it’s very difficult for me to get through content. That would be a cool skill. Never thought of that one.
Ashley Adams: That’s a great one.
Sean Sebring: If you could choose to come back in another life, what would you be?
Aswin Kumar: I would like to be an artist and a designer rather than an engineer. I miss that aspect.
Sean Sebring: You got a creative bone in you that you need to scratch?
Aswin Kumar: Yeah.
Ashley Adams: I feel like there’s some overlap there sometimes. There’s artistry in engineering, so it goes both ways.
Sean Sebring: Yeah, can be.
Aswin Kumar: Yeah, you are right. We talk about design thinking, but still, we will not be able to get to the lens through which an artist sees it, because we are groomed a certain way.
Sean Sebring: Too analytical, yeah.
Ashley Adams: Okay. Maybe last one. What is your most treasured possession?
Aswin Kumar: Yes. One is a thing and one is my family. The thing is always my iPhone, and then my family. I will always say two things, not one.
Ashley Adams: Perfect.
Sean Sebring: Well, thank you for joining us today on SolarWinds TechPod. Our guest today has been Aswin Kumar. Thank you for joining us, Aswin.
Aswin Kumar: Thanks a lot, Sean and Ashley. Great talking to you about this favorite topic of mine, and I look forward to interacting again. Thanks a lot.
Sean Sebring: Thank you. I’m your host, Sean Sebring, joined by co-host Ashley Adams.
Ashley Adams: Thanks everyone for joining. We’ll see you on the next one.
Sean Sebring: If you haven’t yet, make sure to subscribe and follow for more TechPod content. Thanks for tuning in.