IT Trends and Predictions for 2025 - SolarWinds TechPod 094

SolarWinds Evangelists Sascha Giese and Kevin Kline join hosts Sean Sebring and Chrystal Taylor to discuss Gartner's (and their own) predictions for the technology world in 2025. They explore the potential disruptions AI may bring to various sectors, including healthcare, education, and the legal profession; the future of AR and VR technologies; and the implications of quantum computing in relation to AI.
Sascha Giese

Guest | Head Geek

Sascha Giese holds various technical certifications, including Cisco Certified Network Associate (CCNA), Cisco Certified Design Associate (CCDA), Microsoft Certified Solutions Associate (MCSA), VMware…
Kevin Kline

Guest | Head Geek

Kevin Kline is a Head Geek, noted database expert, and software industry veteran. As a 13-time Microsoft Data Platform MVP and with 35 years' experience…
Sean Sebring

Host

Some people call him Mr. ITIL - actually, nobody calls him that - but everyone who works with Sean knows how crazy he is about…
Chrystal Taylor

Host | Head Geek

Chrystal Taylor is a dedicated technologist with nearly a decade of experience and has built her career by leveraging curiosity to solve problems, no matter…

Episode Transcript

Announcer:

This episode of TechPod is brought to you by THWACK, the ultimate IT community that is here to help you start your year on the right note. With over 200,000 IT pros in your corner, January is the perfect time to dive into fresh missions, connect with global experts, and collect those coveted internet points. Whether you’re solving tech puzzles, sharing insights, or just exploring what’s next, THWACK is your destination to level up and make 2025 your best year yet. Join today at thwack.com.

Sean Sebring:

Welcome to SolarWinds TechPod. I’m your host, Sean Sebring. And with me, as always, is my co-host, Chrystal Taylor. It’s that time of year again, when we look ahead to the new year. We’ll be discussing IT trend predictions, and today we’ll be joined by not one veteran TechPod guest, but two. We have with us today Sascha Giese and Kevin Kline. Sascha, would you give us a quick intro for those who may not have heard from you before?

Sascha Giese:

Is that even possible? I would be… I’m disappointed. Anyway. Yes. I’m Sascha. I’ve been with SolarWinds for a bit over ten years, actually. Ten and a half, almost 11. Whatever. And I’m one of the evangelists, along with Kevin and Chrystal, and, as you can probably hear from my accent… No, I’m not doing my usual introduction, but I’m the international guy, working out of the Berlin office. That doesn’t exist anymore. So my office is the Berlin office. Actually, my office is SolarWinds Germany. Isn’t that amazing? Wow. Yeah. That’s me.

Sean Sebring:

SolarWinds Germany. There you go. Very good. Thank you, Sascha. And Kevin, how about you?

Kevin Kline:

For those of you who don’t know me, my name’s Kevin Kline. I’m a longtime fixture in the world of databases. If you could see it here, I’ve got a database on my shirt. I’ve been working heavily in and writing about databases since the ’90s, actually, and have been on the database tools side of the world since 2001. So I’m thrilled to be here with SolarWinds, been here five years now, going on five years. It’s been a great time, and I’m looking forward to our discussion today.

Sean Sebring:

Me too, Kevin. And while you have the mic: as we were getting started today, you mentioned some potential drama in your topic that we could all discuss and share our opinions on. Would you share that with us, Kevin?

Kevin Kline:

Well, yeah, I do have some predictions that are on the negative side. A lot of times we tend to talk about what’s shiny and new and what’s great about that. What I was going to focus in on is some historical precedent that we can draw on to understand what’s next for us in the world of AI. We know there’s going to be a lot of change coming. So what does that mean? And it’s not all good.

Sean Sebring:

Okay. Well, I’m not surprised AI is coming up. Not surprised AI was coming up first. And does this AI potential negative have anything to do with security?

Kevin Kline:

You know, that is one aspect of it. But in fact, I was not going to focus in on security. So, maybe that will be something that Chrystal or Sascha will bring up.

Chrystal Taylor:

I think there are several potential landmines for an AI conversation. Right? Data integrity. Data privacy. Which kind of leads into security. But I think that bias is a problem. Regulation. One of the Gartner predictions is around AI regulation for the next year. Right. Ethics. There are a lot of questions around AI still, and I think it’s been a popular topic this year. We’ve done several episodes here on TechPod about AI and AI’s use in different things, in marketing and in our products and things like that. And it’s certainly a popular topic right now, and I don’t think that it’s going away. I’m hoping that what we see is a bit more regulation, even if it’s not on, like, a federal or global level. Even if it’s just people starting to take it a bit more seriously, starting to take the security more seriously. So I’m curious, Kevin, what you want to say that is maybe so contentious?

Kevin Kline:

Well, you know, let’s start with the… and this is the way I like to think about topics like this, in the broader context. If we don’t have context about some really important and significant piece of information or new development, then we tend to lose track of the significance, or the second- or third-tier effects, of something new. And in this case, consider the two main perspectives from which people, the broader world, let’s say, have been looking at AI. On the one hand, for those of us who have 401(k)s, AI is our dream in terms of profitability and capitalization. Right? Just a few years ago, if you had bought Nvidia stock, or some of the other GPU producers, the kind of processing chip that’s used so heavily in generative AI, you’ve been sitting on a landmine as you watch it go up and up and up over the last couple of years. That’s one thing that people talk about, even if they don’t necessarily use AI or they’re not technologists like us.

Chrystal Taylor:

I think you meant goldmine.

Kevin Kline:

Yes. Thank you. What did I say?

Chrystal Taylor:

Landmine. Very different things.

Kevin Kline:

I’m embarrassed now. Yes, you’re right. Step on a landmine. But drill in the gold mine. Good grief. Well, I need another cup of coffee too. Yeah. So, you know, historically, we have some precedent. It’s almost exactly 30 years ago. I remember sending my first email in the late ’80s, back when I worked for NASA, and then soon, in the mid-’90s, we started to have websites, the internet began to grow and go mainstream. And this changed the corporate and wider social arc of Western civilization in a big way. It was massive. AI has the potential to do the same thing. However, there are a couple of things that we see in the broader sense, these cascading second- and third-tier effects from it. The first is that these new technologies are very frequently hyped in the early days of their introduction, and then there’s kind of a falloff as we begin to grapple with the reality of it. Gartner has what they call the hype cycle, and after that peak comes what they call the trough of disillusionment, because now we’re disappointed in what it was supposed to do. And when that trough happens, it can lead to, like, an 80 or 90% drop in the value of those companies over time. So that big moneymaker, the goldmine, if you will, has the potential to shrink later on. But that still is not the issue that I’m thinking of, for all of the big generative AI companies that make LLMs and the different kinds of products we’re trying to use. The big thing I think we’re going to see next year, the thing that’s going to get a lot of people talking, is that AI is going to start taking jobs. That’s my prediction: not just that it’s going to help those of us who use technology. For those of us who do that sort of work, AI is quite good and is very, very helpful.

Chrystal Taylor:

What if I told you it was already happening?

Kevin Kline:

Okay, tell me more.

Chrystal Taylor:

There was a news story I saw maybe last week that they had used an AI CEO to do a bunch of stuff, so that’s, like, right from the top. They already started. There have been layoffs recently at several tech companies where they have replaced people with AI. So it is already happening. And Sean and I actually talked about this before in the context of, right, there’s a fear with any new technology, and especially AI right now, there is a fear that it’s going to take your job and whatever. And I think that, just as with any other piece of technology or job that’s becoming outdated, you have to evolve with the times. So while it might take that job, you can learn to do something else. There are still people who have to work with that AI and have to be able to do prompt engineering and things like that. So a more positive spin on that is that you have to learn new skills, right? Like, you can’t just be a DNS admin forever and have that be the only thing you do, right? Especially in technology, where new technology is advancing and coming up faster and faster every year. Going back to historical context: when assembly lines were invented, loads of people lost their jobs. But different people got jobs, right? The lines still need maintaining. They still need work done to them. You still need somebody to make the parts. It’s a give and take, a push and pull, that’s in every part of industry. And technology, for all of its problems, is industry. So if you think about it that way, in a different context, it’s an opportunity to learn different skills, go learn something else, go do something else. The job you’re currently doing might be taken by AI. And that would not be great, right? Really, none of us wants to lose our job to anyone else, or a machine, or a cat, or anything. You don’t want to lose your job to something else. But I think in general we should look at it as evolution of the industry. It’s inevitable. We are already in that place you talked about: it’s taken off with the public. Once it became popularized by the public, AI is out. You can’t put that back in Pandora’s box. So we just have to adapt to it.

Sean Sebring:

And Chrystal, you brought up something that I found really interesting. There have been plenty of talks about it taking jobs and changing which jobs are relevant or necessary. But you mentioned something I hadn’t really thought of, and I think it’s really neat: it took an executive leader’s job. Because what do executive leaders do but make decisions, and analyze strategy and data, and then suggest decisions based on that? So again, you also kind of mentioned, Kevin, the people who use the technology. AI is a great tool for them. But at that decision-making level, analyzing data, analyzing what could happen next? AI is very good at that. And I hadn’t thought of it from the perspective of which jobs it could take. Which is a very interesting point, and I’d love to hear, Sascha, real quick, what your take is on how AI will affect people’s jobs.

Sascha Giese:

Well, I think Chrystal said something that’s very, very important, and that many people kind of ignore because we got used to it. And that is that technological advancement always changed the way we worked. That started, I don’t know, 250 years ago with those big machines that make fabrics. I don’t know what they are called.

Kevin Kline:

Textile mills? That’s right. I was about to go there.

Sascha Giese:

And you, or we as humans, have to adapt or we’re lost. Right. And I think if I look at us here as a group, we are quite lucky, because we are in a position where we can actually use or utilize AI to improve our efficiency, our output, and the way we communicate. So for us, it is a great tool. But yes, many jobs will go down, will disappear, and other jobs will come up. We probably will see 15, 20% more AI prompters. I think someone said that already. If you look at how fast an AI framework is able to do actual work, the amount of training gets shorter and shorter and shorter. But that is still an important topic. And AI trainers. Is that a job already? If not, it will be a job in the future. So we have to adjust. And basically that’s what we’ve done since we became part of the workforce, right? You have to be dynamic. You have to adapt, take your sprints, and then you stay relevant. If you don’t, goodbye, right?

Kevin Kline:

The point that I did not effectively highlight, and that I’d like to come back to right now, is this. Your analogy goes back to the Luddites, and that’s the term we use today for people who are anti-technology. That was a group of people in northern England who rioted and destroyed textile mills and things like that, because it was taking their jobs as weavers and loom workers. Well, that’s what I’m talking about in terms of job losses. It’s not going to be just highly skilled tech people. I think for people like us, it’ll help us get our jobs done and there’ll always be work for us. But I’m talking about the middle-skill kind of jobs, like insurance agents, marketing copywriters, all kinds of jobs that sit in the middle of the career profiles we can choose from. And those jobs number in the millions. So losing 10,000 admins who take care of DNS and things like that, that’s a problem. But if we lose 1 million insurance agent jobs, that’s a whole different ball of wax right there.

Chrystal Taylor:

Yeah. I don’t think that any of us are arguing that it is terrible. I think that it is inevitable, though. It’s kind of been something that we’ve already seen taking over. Right? A new piece of technology comes along and it takes over jobs. Look at McDonald’s, even. They got rid of their cashiers, their order takers. It’s just a machine, a screen, now. That’s not necessarily a highly skilled job or anything like that, but it is something that’s already been ongoing. And Sascha, you, Kevin, and I have all pointed out the historical context for this; it’s something that continues to happen. I think the shock of it, as a human being, is that when you read about it and learn about it in, like, history textbooks, you’re still very separated from it, versus being part of the workforce while it’s happening, which is extremely terrifying in a way. Like, you know, it could happen to anybody. But the reality is, it doesn’t matter what your position is, it doesn’t matter what part of the workforce you’re in, you can lose your job at any time anyway. If you can be forward-thinking enough to see this happening, right? We’re talking about it right now. We know it’s happening. We can see it’s happening. If you can be forward-thinking enough to see that this is likely going to take over whatever role you occupy right now, and take the time to go learn another skill and go do something else, then you can head that off at the pass. I think it is inevitable. I don’t know if that’s going to happen in 2025 or if it’s going to be past that. I think it’s going to be a rolling effect; it’s going to happen continuously as we move forward, as more technology advances, as these large language models get tuned, as regulation gets passed and things are made different ways, right? That’s going to take over different types of jobs. Humans are endlessly innovative and creative, and that means they will find new ways to apply that technology, because, as Sascha pointed out, it is generally more efficient. Even just thinking about computer technology: in the ’80s, it would have taken 20 minutes to run a query that takes two seconds now. It’s insane. And it only advances more quickly every year. And so we, as a society, I think, a global society using technology, have to be prepared for that to happen at any time. Just keep learning stuff. And I know that sucks. Like, the burden of that is on you. But the burden is on you to care for yourself and your family already. So that burden is already on you. You just have to accept it in a different way.

Sean Sebring:

So you said one thing earlier, Chrystal, and I’m going to be a nerd here. You said inevitable, and I was like, oh, God.

Chrystal Taylor:

Yeah. Thanos snapping us.

Sean Sebring:

AI is reminding me of the Terminator or something. “The future is inevitable.” It’s coming, it’s coming. But that was also a great segue for what I think we could discuss as far as a trend or prediction for the coming year, which is: where do we think AI is going to show its face most next year? And if I’m springing this on you and you need a moment to think, that’s fine. But I was going through feature requests. Part of my role is to see what the customers I’m working with are asking for, and so I’m seeing some really interesting stuff. So, if you need a moment to think, I’ll just share something with you. It’s not uncommon. But one of the requests was: as I’m writing a comment, and comments are just, you know, a message responding to an incident, for example, can it suggest grammar, restructuring, and spelling? And at first I was like, oh, do you want it to brush your teeth too?

Chrystal Taylor:

Yeah.

Sean Sebring:

And then I realized I’m actually using AI to rewrite these feature requests with appropriate grammar, in structured ways. I was like, wait a minute. I actually really do like this. So a role that I wouldn’t say is at risk, but would either benefit from or be changed by this, is, like, an editor, right? An editor. So I can see it absolutely being put into anything where you’re typing.

Sascha Giese:

So actually, I had a very similar question today, this morning. A magazine from the Middle East asked for predictions, blah, blah, blah, and a very similar question came up. I answered that two of the sectors that would need an AI disruption most, which are health care and education, won’t see massive disruption because of legislation. So if we move on from that, I could see a big disruption in 2025 for scientific research. They have money. They’re well funded. AI will be super beneficial for them, because they can run tests inside the box and don’t need massive, complex test cases outside. That should speed up whatever research they’re working on. Also, at identifying patterns, AI is a thousand times better than any human could be. So, considering that AI frameworks get cheaper and cheaper and more… approachable is not the right word… easier to deploy, I see their opportunity in the near future, in 2025.

Sean Sebring:

No, it totally makes sense. And I don’t know if I like the concept of AI ever taking over in health care, or if I would still prefer the human interaction of someone telling me I’m OK.

Sascha Giese:

Yeah. Look, look, I had a bit of a health issue last week while I was on PTO, so I visited three doctors this week. Okay. And the amount of guesswork I was confronted with is scary. And I didn’t mention this to the doctors, but, going back to what we said about identifying patterns, software is probably more reliable than the 65-year-old dude in front of me who forgets what he said two minutes ago. You know? Yeah. But, yeah, whatever.

Sean Sebring:

And I’m just thinking of Bones from Star Trek now with his, like, medical wand.

Kevin Kline:

“I’m a doctor, Jim, not a prompt engineer!”

Chrystal Taylor:

That’s good. That’s really good. I think, Sean, though, what you have to remember is something you like to say all the time, which is that AI is just a tool. So I think that AI is a tool that doctors could use. They go through years’ worth of medical school and then they’re out in the world, and they have to frequently re-up their education in some ways. They take these seminars, they go learn about new medications. They have to learn about new procedures. They have to learn about new diseases. They have to learn about all of these things constantly. And I feel like there’s only so much that your brain can parse through. Cullen Childress actually compared the human brain to a computer with me on SolarWinds Day not that long ago, which was a really great analogy. And I think that while that is true, there’s also the concept that you don’t use 100% of your brain. So we’re an inefficient computer, almost, because we can’t use all the resources at our disposal. So if you think about it like that, I think there is definitely room for it. I don’t want AI to diagnose me, but AI could get my doctor to a faster diagnosis. Like you said, it can do pattern matching; it can give them a summarization of my entire medical history so that they can get through it faster, especially for anyone who has chronic problems. They go to doctors a lot, and their files are, you know, like this. And when you go for a regular doctor’s visit, your doctor does not have time to review all of your files. So I think there are definitely places where it would be extremely helpful to have something that could do a summarization, or that could even just bullet-point the major things you need to know for this type of diagnosis, or something that would help them get there. Because doctors are only human and they’re doing their best, I think, in most cases. As I said, they’re only human, so there are certainly bad doctors. But I think that they could use help, basically. And if AI is a way to help them find the patterns and do summarizations of your medical history for you (obviously, this is in the context of me assuming the privacy controls and all of these things that are going to be required for AI to ever make it into healthcare, which is really important), then it can help them get to a diagnosis faster and potentially help them save lives, and maybe even reduce the number of doctor’s visits people have to go on, or at least how long it takes when you’re at the doctor.

Sean Sebring:

Oh, now, that would be a win right there, the wait. But, yeah, my juvenile child brain went to Big Hero 6 and Baymax.

Chrystal Taylor:

La la la la.

Sean Sebring:

A marshmallow-shaped nurse robot offering me medical assistance, which would be cool, but I’m like, would I prefer it? I don’t know. Either way, Kevin, when it comes to where you think AI will show its face most in ’25, what do you think?

Kevin Kline:

Well, consider this. There are about 1.5 million lawyers in the United States, and close to 40,000 new lawyers are minted every year. But the majority of the time lawyers spend in their work is combing through hundreds or thousands of pages of documentation. There’s this whole process in the legal setting: you’re in the courtroom, you have to do what they call depositions here in the United States. And there are entire companies with hundreds and hundreds of employees that do nothing but provide that kind of service back to all the different lawyers across America. The most famous is probably LexisNexis, but there are many others. Now, imagine if you could suddenly summarize and scan through the entire legal corpus, the entire body of law in the United States, looking for precedents, which used to take a whole team. I think we’re going to start to see this in some highly skilled professions very soon. There are many predictions out there that by 2030, maybe 30% of the hours worked, according to McKinsey, will be reduced due to AI. But I think it’s going to hit the verbal professions early, because we’ve already seen how AI can help us with grammar. What if we now expand that to, you know, all writing in the legal corpus, about, let’s say, plagiarism and IP theft? And your point about Baymax also reminded me: what if your doctor could instantly know the newest research? My doctor is 68, and I don’t think he has kept up with, like, the hundreds of new drugs that are being introduced every year.

Chrystal Taylor:

Yeah.

Kevin Kline:

He stays on top of the major stuff, and that’s a lot of work just to do that. So I think the professions that have a heavy dependence on understanding a large amount of data, or a large amount of text or verbiage, will be able to use AI to really extract the signal from all of that noise in a way that we’ve never been able to do before, but that will slow down or even prevent organizations from maintaining the same level of hiring that they have in the past.

Chrystal Taylor:

I didn’t think about lawyers, but that’s actually a super good point, because they do have to comb through so much documentation and law. I mean, that’s why they specialize, right? Because there’s so much. You specialize in family law, you specialize in something else, just because you have to minimize the amount of data you have to consistently take in. So that’s a really good point. I think what you said is super insightful, and I think you’re right about that. There are all these places where we can reduce the amount of time it takes to learn something or to get to the point, basically, which is what you’re talking about. And I think that the companies that are going to be successful at building large language models and all of that are going to be the ones that can prove data integrity, because if you’re using it for something like law or health care, you have to be able to prove that it’s reliable data, from good sources. And I think that right now there’s kind of a problem with that. Right? Even though AI’s been popularized, and people are using it to plan vacations and all kinds of stuff now, regular people that don’t use technology a lot, people who are not in technology. And because of that, now you also have all these memes all over the place, right, of it recommending you put glue on your pizza instead of cheese, all of these things. It’s a joke. And while it’s a joke, actual companies cannot take it seriously. So they have to get to a place where they can provide good data integrity. And I don’t know if that’s going to happen this year. I think that’s going to be a bit of an endeavor, because, as we know, the really big large language models are taking in the entirety of the internet, which is full of nonsense. So combing through that in some way, maybe using AI to point out the stuff that doesn’t matter… I don’t really know what the answer to that is, but I think finding a path to good data integrity is going to be key to success in any industry.

Sascha Giese:

Remember a few days ago, when we briefly discussed what we wanted to talk about? I had this brilliant idea to talk about AI in gaming, right?

Sean Sebring:

Oh, yes.

Sascha Giese:

Yeah. Yeah. And this morning. No. Well, yeah, this morning, I got a cold shower. I read it on… where was it? Give me one second. I think it was in The Verge, where I read that Sony Entertainment and AMD are becoming buddies for AI in gaming, but it’s kind of a four-year plan they are on. So it seems that’s not something that’s going to happen soon. What they want to do is start utilizing machine learning for NPC responses, but also use voice modulation to make them speak, so they don’t have to rely on humans for speaking roles, right? So that is something they’re currently working on, but they say it’s probably something for the PlayStation 6, which is a little bit ahead. Nvidia again, Kevin, you mentioned Nvidia a couple of minutes ago, right. They’re obviously a little bit more advanced on that topic, based on all the stuff they do. But even there, nothing immediate is going to happen quickly. And I’m wondering why. Is it too abstract? Or is it because creating a game is not something you do within a year? You sit 4 to 6 years on a game, on a triple-A game at least, right? And maybe in stuff that’s currently in development, AI just has no place. So it’s probably something we would see in the next generations of games. I don’t know, we’ll see.

Sean Sebring:

I think scale is probably also a big issue with that. The demand of leveraging AI and the millions of prompts happening simultaneously in the games, I can only imagine that scale would be a massive thing they’d have to consider, because if, you know, a million of us are all asking the same NPC a question in a different way, it’s got to work out what its response should be for all million of us. And there’s another 2 million talking to another NPC. And, you know, when I think about it, Sascha and I both shared tenure at a video game company, Blizzard Entertainment. It had a peak concurrent subscription count of, it was over 13 million, I think.

Sascha Giese:

Yeah. Yeah. 2010 or something.

Kevin Kline:

World of Warcraft. Yeah.

Sean Sebring:

And so that many people all, you know, interacting with the model simultaneously is just bizarre to think about. And that’s one game, right? And there’s thousands of games out there.

Kevin Kline:

Sure. Yeah. My favorite right now is Baldur’s Gate 3.

Chrystal Taylor:

Yes. So good.

Kevin Kline:

I’ve heard the number is well over 100,000 every day now, even a year after its release, in the middle hundred thousands; 130,000 is the number I heard. And, you know, when you have games that are really exceptionally good like that, and have, you know, they’ve got legs, it’s going to be around for a while. One of the other big issues you run into, you mentioned scalability, is that these chips aren’t cheap and they’re not quick to produce, you know? So that could be as much a factor. I think, Sascha, like with Sony’s planning, it’s: we’d like to put a GPU in every PlayStation that we’ve got out there. But when are the fabricators going to be able to make enough for us to use a million extra, 2 million extra every year when the new PlayStation is coming out? So scalability and skills, I think, are definite bottlenecks on that.

Chrystal Taylor:

Well, something that is also important to remember is that gaming has pretty much always used some form of AI. So it’s not new to the gaming industry in any capacity; it’s been on a smaller scale. And I think that part of the problem now is that gaming, because of the advent of, like, geek culture, what, ten, 15 years ago, started to become popular, right? Where before, you got made fun of a lot if you were a geek, you were a nerd, you were playing video games. And it has become so popular, and they’re making more and more games, more and more different types of games. Mobile gaming is taking off, right? Because anybody can play a mobile game. What is Microsoft saying right now? Xbox is saying everything is an Xbox, right? Your phone, that’s an Xbox. Your computer, that’s an Xbox. That’s their current campaign, this whole play anywhere thing. The gaming industry as a whole is recognizing that gaming gets more popular every year, with everyone, which is great for them. Monetarily, they love it. But also, going back to the scalability conversation, it means you have to scale up with it, and they don’t have the resources for that. So they’re looking for places to cut costs. And unfortunately, going back to what Sascha said earlier, a lot of those places mean that people in the games industry are losing their jobs already. It’s been happening this year and last year; there have been huge layoffs in the gaming industry, whole companies being shut down, genuinely so many things. Artists: if you can have AI do all the voice work for you or do all the artwork for you, why use them? It’s a big point of contention in the industry right now. And I think that’s just one facet of the tech industry as a whole. And as we see AI go into these other industries and become more visible in those industries, if it goes into law careers, into whatever other things people are doing, health care, whatever, the more visible it gets, the bigger a problem this is going to be. And I have lots of feelings about that, but I’m not going to dig into it right now. I think there’s an issue right now where it’s popular and yet not really trusted. So it’s a weird situation, I feel, because normally when something gets this big and this popular, it’s because everyone wants it and you can rely on it to do whatever it’s supposed to do. And weirdly, it feels like AI is popular almost in spite of it not being reliable. Like, it’s just a troll.

Sean Sebring:

I think that’s probably still part of this next topic, because AI, like you said, Chrystal, is everywhere. But the metaverse was something we had also discussed. And because you made such a pleasant face, Sascha, would you care to kick us off on what that might look like in ’25 or beyond?

Sascha Giese:

It’s going to die a terrible, slow death.

Kevin Kline:

And expensive. You know, Mark Zuckerberg spent $10 billion getting that launched.

Chrystal Taylor:

And they didn’t even have legs.

Kevin Kline:

And it did not. Yeah. It was a bad idea from the start.

Sascha Giese:

Yeah, I mean, it’s… Look. Why? What is the intention behind it? You want to grab your customers by the hats and keep them in a walled garden, right? So the idea is to provide all possible services and offer all things inside the closed ecosystem. And then we had our couple of big players there. Yeah, Amazon didn’t even try, right? Zuckerberg’s company, Meta. Meta tried. I’m not sure where Apple is going now, since they have the hardware, but I don’t know. I don’t see this being a huge thing. We talked about this maybe 2 or 3 years ago. And as Kevin mentioned when he kicked us off, the hype cycle thingy, we went through it with the metaverse. We went through it with NFTs. We went through it with all this stupid stuff that no one really needs, you know? And look, that is what happens with technology. If we can’t, or if vendors can’t, provide use scenarios, use cases, right, that is where stuff goes back in the box, and maybe someone comes up with, hey, we had this technology five years ago, now I have this scenario here. Remember? Oh, this is hard. The IT stuff like ten, 15 years ago, when they started putting compute, storage, and network in one box.

Kevin Kline:

Yes. Hyperconverged. Right?

Sascha Giese:

Yeah, there was just one vendor in the beginning, and the stuff was so expensive that no one really bought it. And this one vendor was, what was it? It was like a company; Cisco, HP, and someone else was involved. I don’t remember all the details. But it was so expensive that it dropped. Okay. And when did I start with SolarWinds? 2014. So around that time the second wave arrived, a lot cheaper, a lot smaller boxes, and that became a thing then, right? So here we’re just talking about a difference of 4 or 5 years, where proper use cases were found. And maybe we see stuff like that for the metaverse. I don’t know, but I have little hope.

Chrystal Taylor:

You mentioned Apple with their headset, and I was just thinking, I think where Apple is doing a better job is that they’re investing in AR over VR. So you still have human connection, whereas VR… I’m not against VR, I do want to be clear about that. I think VR gaming is cool, but I also don’t want it to take over my entire life. I think that is like a dystopian novel. How many dystopian stories have you read where you basically wind up as a drone in a box, in a brown environment, going from this box to that box? That’s what VR is like if you’re trying to replace my office and every part of my life in a virtual space. That’s not great. I don’t want that. People need human connection. Even those of us who are introverted, even people who prefer to spend most of their time alone, need some human connection and interaction, whether that’s just when they go to the grocery store or when they walk their dog or whatever. We are a community species. We need community. And while there are great online communities, like THWACK, that give you some of that and foster some of that, it’s actually one of the biggest challenges we saw when everyone was forced to go to remote work, and when everyone was in lockdown for Covid. People don’t do great in isolation, it turns out. I mean, they use it as a punishment in prisons for a reason, right? It’s not good for you mentally. And so I think that’s where they’re going wrong with this. The technology, I think, is cool. I think they could do some cool stuff with it. And maybe you’re right, Sascha, maybe in, like, ten years; maybe the technology is too soon right now, and they’ll do something with it later when it’s less expensive, because these are crazy expensive. Even the Apple headset, the Apple Vision Pro headset, I looked at that and went, nah, I can’t even invest in that myself. My son has been asking me for a VR headset for Christmas for the last three years in a row, and I still won’t do it. Like, no, no, that’s like 4 or 500 bucks. I don’t know, buddy. And that’s a cheaper one; they’re very expensive. And it’s not good for human connection. You isolate yourself so badly, and it’s so negative, and that’s where you start to see the negative repercussions. If this does get more popular, we’re going to see very negative repercussions in how humans deal with one another, like we already see online. It’s already getting there. Social media is already so negative, because there is enough of a difference between your digital presence and your physical presence that you can dissociate from it completely. So you can just put up these barriers. And that’s why I hope the metaverse doesn’t take off, so those things don’t happen.

Sean Sebring:

I like your point about AR over VR. You agree, Kevin?

Kevin Kline:

Yeah, I was about to say, there are a lot of success stories around AR. For example, I have a background in the Department of Defense and still have many friends working for those kinds of companies. And for many of the mechanical repairs, the mechanics can wear glasses that basically have a heads-up display. So imagine you’re in there repairing an Apache helicopter and you’re like, I remember that intake manifold, but what does it do? And you can tap and, oh, okay, here it is. Here’s the manual on that whole thing. So I think the really key driver of success is that even a capitalist can see how valuable AR is. Whereas with VR, we only have kind of entertainment-style solutions. And because, as you pointed out, Chrystal, it’s not as good as real reality, people will not really gravitate to it in the way that they had hoped.

Chrystal Taylor:

Well, not only that, there’s the physical part of VR as well. There are so many people who can’t wear a VR headset for more than, like, 30 minutes to an hour, because you have this inner-ear situation where you’re stationary but you think you’re moving. There are a lot of weird things that your brain does with it. So there are physical aspects of VR that you don’t have as much with AR, because AR is like an overlay of what you can physically see and touch and feel in the world. So that’s where I think it’s probably not going to be as successful. Even though VR games are fun, most people can only spend 30 minutes to an hour, maybe two hours, inside them.

Kevin Kline:

DARPA, the U.S. Defense Advanced Research Projects Agency, has done a lot of work around VR for training soldiers. So, you know, you could be jogging on a mat that will move under you, with your pack and your weapon. And soldiers experience so much of that, Chrystal, that they called it simulation sickness. It’s a real diagnosis you can have: that disconnect between the real real and the virtual real.

Chrystal Taylor:

Yeah.

Sean Sebring:

Man, that one sounds fun to me. The idea of it. Because, you know, we never thought we’d have phones that were everything in our pockets, right? And I think a HUD is just the next way that we interact with similar technology and access the future. But that’s kind of a hope for me. I think that’d be really neat. But I think a good final topic for us now, which unintentionally may go back to AI, is quantum computing. So think about how AI has exponentially exploded growth and change in how we use and interact with technology, and then about when quantum computing takes off a little bit more as well, and its ability to interact with AI, maybe. I’m just thinking of the potential there. Right. So quantum computing, the ability to do even more computation at rates that are unheard of, like Chrystal mentioned earlier. And this is a breakthrough where, if we think about computing 20 years ago, we can do 20 minutes’ worth of work in two seconds. So what is this going to mean when quantum computing is more available?

Chrystal Taylor:

Hold on. I just want to say that I hate to break it to you, but the 80s was 40 years ago.

Kevin Kline:

Isn’t that the surprise for us all as we get older? Right?

Chrystal Taylor:

I do it, too, all the time. Like, so that was, like, 20 years ago? No.

Sean Sebring:

I didn’t remember how long ago you had said, but either way, yes you are correct.

Chrystal Taylor:

It’s fine. I’m just. I’m just teasing. But it is. It is the reality, right? I would be curious. I don’t know where we are in the technology of quantum computing to really talk about it in any kind of intelligent way. I think it’s still sci-fi for me.

Kevin Kline:

And one of the misunderstandings about quantum computing is that it can solve problems that classical computing would need hundreds of thousands or millions of years of compute time to accomplish. Yes, that is true. But the flip side is that, in terms of those easier tasks that traditional computers can manage, quantum computing is not good for every scenario. So when you hear something like, oh, we can do millions of man-years of work in seconds: yes, but it can’t build the pyramids, right? There are a lot of things that it’s ideal for. Breaking, what is it, 256-bit encryption? It can do that in moments, you know, so if you’re a spy or a crook, then you should be worried. But, on the other hand, some of the things that we really want to do are not going to be solved by quantum computing. But think about things like hurricane models: imagine if we could run more than just one. Right now, if you are watching a weather forecast and it says we have a hurricane, the European model says this, the American model says that. That’s because we both have supercomputers crunching away on it. But what if you could have 500 different quantum computers give you that information back immediately? You put in your query about the weather model, tell us, and we can have many permutations, and it can give you all the possibilities at a speed never before known. So that could save people’s lives. On the other hand, we’re not going to use that to run Visual Studio. So it’s an interesting set of pros and cons.

Sascha Giese:

Well, I am with Chrystal there. Up until maybe two years ago, it was pretty much science fiction to me. But then, what Kevin mentioned: it’s not just a risk, it’s going to be a reality that we need to rethink encryption, of course. Even we as individuals, and we are probably the least important target audience for this topic. I mean, we talk about governments, we talk about organizations, enterprises, et cetera. They will suffer a lot, I suppose, change and risks and yada yada. For us as individuals, it’s probably not a big deal. I don’t know, we’ll see. We’ll see. But yeah, things are happening. Google, Google is quite advanced on that. And I think IBM, right?

Kevin Kline:

Yes. They just had a news release on this.

Sascha Giese:

But the stuff will be so abstract, so expensive, that I don’t see much happening. Definitely not in 2025. There are probably 2 or 3 more years before it becomes really relevant to our day-to-day. That’s my guess.

Sean Sebring:

Very good. I’m stuck as an optimist because of how fast generative AI exploded. I’m just like everything could be happening tomorrow at this point.

Chrystal Taylor:

You’re not wrong. You’re not wrong. But I don’t think that quantum computing is accessible, right? It’s way too expensive at the moment. Generative AI is way more accessible because they opened it up, right? ChatGPT, anyone can use it. That’s how it got popular: literally anyone can use it. So I think that quantum computing is still a ways off from that. Like I said, if it’s more sci-fi to me and Sascha, it’s probably more sci-fi to the general populace. They don’t even have a thought about it other than, hey, I watched Transformers, that’s probably got quantum computing, right? For me, that’s what it is. I have not looked into quantum computing at all. So I don’t see it, because it’s not accessible. I don’t think it’s possible for it to explode in the same way that generative AI did. And while we would all like whatever technology we create, or whatever we’re working on, to be explosive in that way, popular in that way, where people are taking it seriously and learning about it and all of that, I don’t think it happens for everything.

Sean Sebring:

Well, I almost felt shut down. But what doesn’t happen in 25 is still a prediction. So.

Chrystal Taylor:

That’s true.

Kevin Kline:

That’s true. You know, and your point is spot on, Chrystal. Quantum computing right now requires liquid nitrogen to cool it. So anything that requires liquid nitrogen is not going to be a popular thing among the consumer audience, right? It’s reserved for the very biggest of organizations. On the flip side, GPUs, we’ve been using them, and, you know, vector graphics, for really high-end gaming for quite a while. So we’ve got a long track record of that. One reason I bring them both up in this quick statement, if I can impose on you for another minute, Sean, is that both of these have massive sustainability issues. A GPU consumes, oh, gosh, like 30 times as much energy and produces massively more heat than a standard CPU. In fact, I was at a Gartner conference not long ago in which one of the people on the podium said people think of these advanced self-driving cars with AI as the next version of what we will have with EVs. And he said the problem is, if we took every car out there and turned it into an EV with self-driving capabilities, the energy required, and the CO2 produced to make that energy, would be greater than if we kept every car an internal combustion engine. And I thought, okay, it doesn’t sound as great to me as it did before. Now, that’s the self-driving part. An EV that doesn’t have self-driving features doesn’t put that kind of burden on our environment. But most people don’t want to buy one unless it can do that. So, those self-driving features.

Sean Sebring:

Well, hey, at least we’re asking the questions while it’s in development now. So kudos to us for making that part of the design process: how terrible could the results of this be on, you know, Earth.

Chrystal Taylor:

Yeah. Well, I know we’re looking to wrap up, but before we go, I just want to ask, I want to give you guys an opportunity. I’ll start with Sascha. What is your biggest prediction for next year? What’s something that you think is really going to happen next year? It doesn’t have to be about tech, necessarily; whatever you want it to be, make it about whatever you want. What do you think’s going to happen next year?

Sascha Giese:

The amount of blackmailing or extortion leverage regions might have over global players. Does that make sense? I think I brought it up briefly in our pre-call, the thing with Indonesia, remember? So Indonesia, for our listeners who are maybe not aware, said, hey, Mr. Tim Apple, you’re not going to sell iPhones any longer in our nation. But we might reconsider our decision if you invest in, like, a factory or whatever. And that was a couple of months ago. And for us, it was like, okay, Apple is one of the biggest organizations on the globe, and what exactly is the problem here with Indonesia? But then I actually googled, and I wasn’t aware of this, that was my own fault: it is the biggest Muslim country in the world. It is actually quite huge. So they do have something to say when it comes to the annual revenue of Apple. Probably just a low percentage, but, hey, even Tim Apple will probably look into this. And I think we’re going to see more of that in the future. And while I’m talking about Tim Apple, let’s not forget all the trouble the EU puts in front of Apple’s feet, right? Waiting for them to stumble, or whatever it is. And it’s kind of like, it’s your fault, your fault, your fault. It sounds so familiar, right? Like, you know, our customer base. Whatever. I think something’s going to happen there.

Chrystal Taylor:

All right. Kevin, what about you?

Kevin Kline:

Yeah. Very similar to what you’re talking about. I predict, and I’m certain this is going to happen, that we are going to see massive legal battles in the courts about the impact of AI in particular. We already talked about law, medicine, and things like accountancy and investments. I know all the big insurance companies, because they do so much actuarial analysis, all want to make use of these kinds of AI capabilities. But there’s so little that has been firmly established in law about how to govern and regulate these uses. And so, will people, will the American public, will the EU public, say, you know what, we demand a right to privacy for our data? Well, that is something that we really don’t have in the States, and the EU has a much stronger position on it. But I think we’re going to see those things being fought out in the courts, particularly because many regular people are in favor of more regulation so that they don’t get abused by these new capabilities, while many of the capitalists who have the money to influence the legislators want less of it, because that increases their profit. So I think this is going to be a very litigious year, and we’re going to see a lot of decisions come out of the court systems that people like all of us on this call will find dubious, because we’re technology people and we know what’s going on. And, you know, I think the average age of a senator in the United States is, like, toward the late 60s. They can hardly use a smartphone. So I think we’re going to have some bad decisions come out as well, because the people who are involved in actually settling the regulation really don’t know what’s what.

Sascha Giese:

I’m pretty sure there will be some lobbyists who make the decisions.

Kevin Kline:

There you go, exactly.

Sean Sebring:

Thanks, Kevin. Chrystal, since you brought this up, what is your prediction for 2025?

Chrystal Taylor:

I have a couple of predictions. The main one is that we’re going to start to see regulation take effect for AI. I think that, like many things in regulation and governmental restrictions and that kind of stuff, it takes a lot of time. So now that we’re a few years in, where people are starting to see repercussions, there are starting to be lawsuits and all that stuff that we’ve talked about, I think we’re going to start seeing more regulation. And I do not know what that’s going to look like in the US, with the administration that’s going to be changing and all of that. I don’t know what that necessarily is going to look like. But I do think that we are going to start to see more regulation and restriction around AI use in some capacity.

Sean Sebring:

Right on.

Chrystal Taylor:

Yeah, that’s my prediction. I have other predictions, but they’re not as fun, so I won’t share them. Sean, what about you? What do you think is going to happen next year? It doesn’t have to be about tech.

Sean Sebring:

Mine’s not as fun either. But, you know, it’s come up in some of our past episodes where we’ve talked about AI and security. For example, how AI can pretend to be your grandma. I think it’s actually related to what they call a grandma scheme; they’ve named it such, I think. But because of that same concept, where AI can pretend to be somebody to try and get information from you, the same is true in marketing. And so I think in 2025, how marketing is approached is massively going to change, because people are going to have less trust in the material they’re being marketed. And that’s going to present a challenge, and an opportunity for innovation, for marketers. Because, I mean, algorithms already cater ads to me based off of, you know, my phone listening, Amazon listening, whatever’s listening. So they already cater marketing toward me. But even more so, I think scams are going to become more and more possible, because if AI can create the marketing material, including images, and make it look like something that’s not real, it’s going to be so much easier to dupe people into spending money where they shouldn’t. Because, hey, if it’s got the logo, if it’s got the branding right, I assume I’m clicking on the right thing from the right site. So who knows. But yeah, again, not as fun, but I definitely think that there’s going to be more of that. So, take your phishing training seriously at work. Kidding. Not kidding. Yeah, it’s going to change, I think. That’s my prediction.

Chrystal Taylor:

Well, you said that and all I could think was that security teams are going to have to step up their training also in response to that.

Sean Sebring:

Absolutely. Yeah. Well, I’m glad everyone was able to share their predictions. And thank you, Sascha and Kevin, for joining us today.

Kevin Kline:

Thank you. Pleasure to be here.

Sascha Giese:

Thanks for having me.

Sean Sebring:

Yeah. And while these predictions are fun, trying to guess what may come, I eagerly await seeing how 2025 unfolds. And also, thank you, listeners, for joining us on another episode of SolarWinds TechPod. I’m your host, Sean Sebring, joined by fellow host Chrystal Taylor. If you haven’t yet, make sure to subscribe and tune in for more TechPod content. Thanks for tuning in.