Sean Sebring:
Welcome to SolarWinds TechPod. I’m your host, Sean Sebring, and with me, as always, is Chrystal Taylor. Today we’re going to delve into the world of generative AI and prompt engineering, where these technologies are not only transforming industries, but they’re also reshaping how we approach innovation and security.
So joining us today is Doug Bennett. He is a global partner development manager for generative AI and security. Wow, that’s a big one. It’s actually a perfect opportunity for me to say welcome, Doug. Why don’t you tell us a little bit about yourself, what brought you to your role, and what lets you have fun talking with us about these topics?
Doug Bennett:
Yeah, that’s an interesting question, because the introduction that you gave just a moment ago is generally plenty. But if you look at my background, what brought me to GenAI is maybe slightly more interesting, or at least it’s interesting to me.
But I’ve been in consulting, value consulting, that kind of thing for the majority of my career. As part of that, in that role, I spent a number of years as a principal consultant for a company in the IoT space.
So I got interested in this more nebulous type of technology where when somebody asks you what is IoT or what is AI or what is GenAI, me being a consultant at heart and being an analyst at heart, I love the answer of, everybody with me, it depends. So I love the answer it depends. What is it? Well, what do you want it to be? How do you want to apply it? What’s the industry you want to apply it to? What are your use cases? I love those mental gymnastics that you need to go through to define what this type of technology is.
So when I joined Amazon, I joined as a partner development manager, and I manage a variety of different companies that have GenAI applications or connection points to our Bedrock solution or Amazon Q, that kind of thing. But my interest in AI, generally speaking, has been personally driven. I really, really enjoy it as a space. It’s something that I go off and I get geeked out on in my own personal time and do lots of research and try and figure out.
My first question, the one that really got me interested in this, is what is GenAI as a technology? Because it seems magical. When we use the term intelligence, we assume it has intelligence, but intelligence is a human-based concept, and what I think of as intelligence versus what the intelligence is in a system like this are totally different things. If you’re interested, I can give you my quick elevator pitch on what I think GenAI is as a technology, but we can get into that during more of the podcast if you want to, or not at all.
But that’s who I am and that’s why I think AI is interesting. I think AWS has an interesting perspective on AI and interesting set of solutions, but I’m not here to sell anything. I just want to talk about AI.
Sean Sebring:
Fantastic. I think one of my favorite things you said at the very start of your introduction is the nebulous, right? You have to dive into discovering, well, what do you mean about AI, because it’s got so many different facets, angles, perspectives, approaches.
Doug Bennett:
Is it one thing I’m going to lose my job over or is it something that’s going to help my job? I mean these are very human reactions to very nebulous technologies, and it’s fascinating as a subject.
Sean Sebring:
The number of applications that it could have as well is just mind-blowing. That’s where innovation comes in. It’s like, well, where can we use it to benefit next? Example, I was on my way down here to our studio and I was at the elevator. I was like, “What if we had some AI analyzing when the buttons were pressed, which floors needed the elevators the most? So if we have three elevators, can they be at the right spot?”
Doug Bennett:
Yeah, that’s right. You got 16 elevators in a bank and the bank is 100 floors tall. You better have those sequenced out in a way that makes sense and you better have an idea of how many people get on those on each floor. Another very, very interesting problem to solve is that kind of thing.
Chrystal Taylor:
I think the really fun part about it, the part we’re all enjoying and have been talking about in our previous conversations, is that because it’s still nebulous, because it’s a space that is so innovative right now, it’s yet to be determined. We still get to decide what all it is. You have your explanation of what generative AI is, but it still has so many applications that we haven’t even thought of yet, that we haven’t even gotten to yet. It’s still evolving. We’re still finding ways to define what it can even be used for. And so, I think that’s where the fun comes into play, where it’s so interesting.
Doug Bennett:
To your point in terms of how it applies and where, we don’t know yet. This is the internet, whatever, 25 years ago, where we think that we know what it is. We’re not going to recognize it in a year. I mean, look at the trajectory of the internet: in its first five years, we went from bulletin boards to America Online and Netscape and all of these technologies that we look at now and go, “What happened?” Remember AOL Time Warner? AOL bought Time Warner and put AOL in the first part of the name because they thought this was a transformative technology that was never going away, that this is what the internet is and this is what the future’s going to be. Nobody knows what AOL is anymore. There are generations of people that have never seen it.
Chrystal Taylor:
Yeah. And to that point, already we see it now. What are kids doing with this technology already that we as adults won’t think of, because our brains don’t function in the same way? They’re so creative. ChatGPT took off only a couple of years ago now, and even from then to now, how much has it changed?
I think that’s the best part about technology, that it is exponentially growing and changing every year. I say this a lot: if you don’t like what you’re doing, go learn something else. Go do something else. We’re specifically talking about generative AI, but there are a lot of facets to AI as a whole, to what it can be doing and what you are going to be doing.
As we’ve talked about in the past here on the podcast, it’s not going to take your job. It might change your job. You’re going to need to learn to do something else, but it still needs care and feeding. It’s not going to do everything on its own. It’s not sentient yet.
Doug Bennett:
No, and we can get into that in terms of what different roles are going to look like, or what I think they’ll look like anyway. Just as an example, and I don’t know if you want to get into it now, but just as an example: doctors. I mean I rely on them for my health. I don’t want to be diagnosed by an AI.
However, one of the big benefits of GenAI is it is an innovation catalyst. I want my doctor to be innovative. I want my doctor to have all of the useful information from all of the journals that are related to whatever it is that I’m seeing them about, and I want them to be able to be creative with my treatment. I hope my doctor never goes away. I hope they never get replaced by an AI system, because that’s not useful to me as a consumer.
Sean Sebring:
I think this is a great chance for us to shift into our first topic, which is prompt engineering. It’s a great place to start because, well, prompt is also a great word to put in the front. But it’s about how we engage with that intelligence and what we’re actually asking it to do.
So, Doug, when it comes to prompt engineering, it’s a skill, it’s a technique, it’s something that people need to learn. Can you tell us what your thoughts are about prompt engineering and the relevance of it?
Doug Bennett:
Certainly. So this might be a good opportunity to give my really quick elevator pitch on what I think the technology is, because what the technology is can directly inform how we prompt it, how we interrogate it. So if you look at GenAI, what GenAI is is an input layer and an output layer, and in the middle is a stack of what are called hidden layers.
So what’s going on is these words are coming in and they’re being parsed, changed into a set of numbers. Those numbers are then sent through a neural network, a system loosely based on the same sort of wiring our meat computer has. The input goes through these neurons and has scores applied to it, weights and biases, the model’s parameters. It goes through each one of these layers and says, “Well, probabilistically, what these words likely mean and what they’re related to are this, this, and this.”
So the words get transformed into sets of vectors, and those vectors move through the intermediate layers of this technology, each layer saying, “Okay, this is the next level that I need to consider with my parameters.” It works through layer after layer until it gets to what it thinks is, well, “thinks,” what the mathematics indicates is the most probable response to your prompt.
So the reason why I go down this path, and I appreciate your giving me a moment there, is that the challenge for us when we do prompt engineering is to not necessarily look at these systems as we’re speaking to a human. So what we’re doing is we’re speaking to linear algebra and we’re speaking to calculus and we’re speaking to probabilistic functions and all the fun stuff that I loved back in high school or back in college and nobody else did.
When we do prompt engineering, it’s important to think of the way you’re interacting with these systems as interacting with a system, not a person. Take the concept of hallucinations as an example: a hallucination might be directly related to how you prompted the technology, because it’s going to do what it’s got to do. It’s going to run through these layers, run this math and statistics and probabilities across your question, and come up with a pretty good guess. It might sound like it’s human, but it’s not. So I’m not sure if that’s what you’re looking for, but I’ll give you a moment to respond to what I said.
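To make Doug’s journey concrete, here is a minimal sketch in plain Python with NumPy of the flow he describes: a prompt becomes numbers, passes through hidden layers of weights and biases, and ends as a probability distribution over possible next tokens. The vocabulary, random weights, and crude “tokenizer” below are toy placeholders, not any real model’s internals.

```python
import numpy as np

# Toy vocabulary and "tokenizer"; real systems use learned subword tokens.
VOCAB = ["the", "elevator", "is", "on", "floor", "three", "busy"]
token_ids = {w: i for i, w in enumerate(VOCAB)}

rng = np.random.default_rng(0)
EMBED_DIM = 8

# Placeholder "learned" parameters: an embedding table plus two hidden layers.
embeddings = rng.normal(size=(len(VOCAB), EMBED_DIM))
W1, b1 = rng.normal(size=(EMBED_DIM, EMBED_DIM)), np.zeros(EMBED_DIM)
W2, b2 = rng.normal(size=(EMBED_DIM, len(VOCAB))), np.zeros(len(VOCAB))

def next_token_distribution(prompt_words):
    # 1. Words become vectors (here, crudely, the average of their embeddings).
    vecs = np.array([embeddings[token_ids[w]] for w in prompt_words])
    x = vecs.mean(axis=0)
    # 2. Hidden layers apply weights and biases to transform the vectors.
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    # 3. Softmax turns raw scores into probabilities over the vocabulary.
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

probs = next_token_distribution(["the", "elevator", "is"])
for word, p in sorted(zip(VOCAB, probs), key=lambda pair: -pair[1]):
    print(f"{word:10s} {p:.3f}")
```

The output is never a looked-up fact, just the highest-probability continuation given the weights, which is why a confident-sounding answer can still be a hallucination.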
Sean Sebring:
Chrystal has to say something.
Chrystal Taylor:
I do.
Sean Sebring:
Go ahead.
Chrystal Taylor:
This brings up the interesting point that the people building these GenAI technologies often inject personality into them, to humanize them so people will use them. So it does make it a little bit more difficult to separate, especially for the average person. We had talked about this before: it has become so synonymous for the public with helping them do things, they use it to plan vacations, they use it to do all kinds of things now, but it has a personality.
So instead of thinking of it, as you’re saying, as a system, as I’m-talking-to-a-computer … And I think this has also been colored by human beings’ interaction with social media and bots on social media and stuff like that. You no longer see it as a person, but also it’s not a computer. It’s some weird nebulous in-between thing that is hard to define. So people will simultaneously make fun of the computer while also attributing human-level sentience to it.
Doug Bennett:
Exactly. This thing’s stupid.
Sean Sebring:
Doug, I want to thank you, because when I prompted you with that question … I didn’t mean to say prompt. Now it seems like … Anyway, when I gave you that question, I didn’t expect that, and this is why you are such a valuable guest. I never would’ve thought to say, “Doug, please take me on a prompt’s journey,” and that’s what you did: you took me on the prompt’s journey to find out where it went and how it turned itself into the response I was hopefully looking for.
I think about the journey a lot. I’m in service delivery. And so, we talk about the customer journey. I never expected you to take me on a prompt journey.
Doug Bennett:
I mean it’s the same thing in your head. You’ve got a set of neurons in your head. You’ve got vision back here and you’ve got memory up here, and all of these neurons have a different ability to process in different ways. A GenAI system or neural network is set up the same way, in a much more simplified version, but the same way.
So I’m doing this, like we do this. So it’s very, very difficult not to anthropomorphize these systems and say they have human characteristics, because they relate to us. The way I’m relating to you in that prompt is very similar to the way that ChatGPT or Cedric or a Q or any of the other systems would respond to me.
Chrystal Taylor:
Sean has said before on the podcast that we should be thinking of AI as a tool in our toolkit and using it as such, rather than thinking of it as something that is going to replace our jobs. But in that context of it being a tool in your toolkit, the doctor would be using GenAI to hopefully get to a recommendation faster, but there still need to be checks and balances in that.
You were also talking about how it’s formulating statistical probabilities for your answers, and that’s what it gives you. What it spits back at you is what it thinks you’re looking for. Well, “thinks,” there’s that word again. So where do checks and balances come into this?
I think that’s a good part of the context of where we think careers are going as well. It’s all part and parcel of the same thing: the careers are going to be in the checks and balances for all of this stuff.
Doug Bennett:
So when you say checks and balances, are you talking about guardrails and security and privacy and that kind of thing specifically, or are you being more broad than that?
Chrystal Taylor:
Obviously there need to be security and privacy guardrails, I think, but also just validation that the data you’re getting back is actually what you need. In the case of a doctor, if it’s giving me a recommendation for a treatment plan and that treatment plan doesn’t align … Because people are misdiagnosed all the time. No fault to the medical system. This is a regular thing that happens, because so many different diseases and maladies have similar symptoms. So things get misdiagnosed all the time.
So in a case of statistical probabilities, if you put in your symptoms, it’s going to recommend a treatment plan for the most frequent thing that is going to be happening, like this is the statistical probability. 8 times out of 10 it’s this thing. But those other two times it could be something else. And so, the question of where … And I’m specifically using your healthcare example because that’s what you brought up earlier, but this is true in a lot of other cases. There needs to be data verification, validation, and that kind of stuff.
Sean Sebring:
I think another way to sum it up is how do we make sure we’re not blindly trusting AI?
Chrystal Taylor:
Yeah.
Doug Bennett:
This brings up an interesting set of questions. So there are questions related to access to data and privacy. I don’t want the entire breadth of my research to be available to ChatGPT or whatever it is. And you have to be careful not to treat the answer it’s giving you as a human response, not to keep beating this metaphor up, because when you ask a human, our brains are much more elastic than an artificial neural network. Our biological neural network is a lot more elastic, and the experience we apply to answering these questions is a lot more nuanced than something like ChatGPT.
So when you ask ChatGPT a question, as you said, it’s going to take its best guess statistically. Part of that is true, but we can’t leave out of the mix these parameters that we’re talking about.
So when we talk about training a GenAI model, what the system is doing is using calculus to try to minimize the curve of inaccuracies, the gap between its answers and where the answers should go. It uses other prompts and its initial training to look at a response and say, “Ah, okay. My answer to that wasn’t good, so I want to try and minimize this error on this error-propagation curve.”
So it’s important that you at least … Maybe you don’t understand the nuances behind the technology, but it’s important to understand that what you get back isn’t just some neutral probabilistic response. It’s a probabilistic response based on the set of trainings this tool was given, and a set of weights and a set of biases.
So we as humans, to go back to this meat computer idea again, whether we like it or not, whether we are aware of them or not, we have biases and we have weights that we apply to certain data and to certain types of questions. If I phrase a question one way, you’re going to react differently than if I phrase it another way. We’re just humans.
A GenAI system, to come back to the subject, is essentially doing the same thing. It’s just not doing it in the nuanced, elastic way that we can. It’s saying, “Okay. I was trained on these datasets; this is what truth is, this is what meaning is.” So the response you’re getting is within that context. So we’ve got to be careful.
So now we go down the ethics question and now we go down is this a dangerous technology question? Who trained it? What are the prompts that were used and what were the weights that are used in these parameters to train this model? Boy, if you’re not aware at least at some level where that large language model came from, the responses can be tuned to a specific type of output.
So if you look at something like ChatGPT, like GPT-4, it’s a huge model. When I say parameters, we think of Excel or something in a table. GPT-3 has about 175 billion parameters, give or take; I might be off on that. For GPT-4 and beyond, we’re talking trillions, trillions of parameters. Can these be modified and “pruned” by hand? At some level, yes, wholesale. But there’s no human that’s going through and saying, “Okay. Let’s look at this trillion parameters one by one.”
So my point being … And I apologize, I’ve gotten on a little bit of a soapbox here. But the point is, like with all statistics, there are lies, there are damned lies, and there are statistics. Ask yourself the questions: where did the math come from? Who tuned the model? Who trained the model? Is this model something I need to verify? When you ask a question, do you need to validate and verify the response? Ask yourself those questions.
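Doug’s description of training, using calculus to minimize the curve of inaccuracies, is gradient descent. Here is a deliberately tiny sketch of the idea, one weight instead of trillions of parameters, and not how any production LLM is actually trained:

```python
# Minimal gradient descent: learn one weight w so that w * x matches a target,
# by repeatedly stepping downhill on the error curve.

def loss(w, x, target):
    return (w * x - target) ** 2  # squared error: the "curve" being minimized

def loss_gradient(w, x, target):
    return 2 * x * (w * x - target)  # calculus: derivative of the loss w.r.t. w

w = 0.0                # untrained weight
x, target = 3.0, 6.0   # one training example: we want w * 3 == 6, so ideal w is 2
learning_rate = 0.01

for step in range(100):
    w -= learning_rate * loss_gradient(w, x, target)  # follow the slope downhill

print(f"learned w = {w:.4f}, remaining loss = {loss(w, x, target):.6f}")
# Real models repeat this across billions of examples and trillions of weights
# via backpropagation, but the principle is the same: minimize the error curve.
```

Because the learned weights encode whatever the training data contained, the sketch also illustrates Doug’s caution: the response you get is only as good, and as biased, as the data and targets the trainer chose.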
Sean Sebring:
I was going to say, this is just a great segue to one of the other topics I wanted to approach, which is, okay, well, now what advice would you give to businesses and developers? We’re talking about, well, it depends. So this is a perfect segue, I think, for us to engage and talk about what advice would you give then?
Doug Bennett:
Right. So the advice that I would give businesses is when you’re developing an AI … Or we’re talking about GenAI in particular here. When you’re talking about a GenAI solution, the nice thing about when you apply it to a business context is you can be a lot more limited in the scope of your model, of your LLM than you would be with something like … ChatGPT has to be all things to all people. It has to be able to answer any type of question at any time in any context, from any language, I mean on and on and on and on. If we want to keep going down that human direction, it has to be the smartest, most knowledgeable computer that we’ve ever seen.
That’s not what we’re talking about when we’re talking about answering questions within a specific domain or set of domains for a business. So where the vast majority of companies start, especially software companies, is chatbots.
So you’re not talking about licensing ChatGPT for that kind of an application. You can provide a large language model that is specific to your industry, specific to your technology, and you can train it on a set of prompts and parameters that are specific to the types of questions you want it to answer. Because now we need to bring in security, and we need to bring in ethics and morality and things like that. These are not trivial concepts. And also regulations, by the way.
So when we’re talking about EMEA and data protection in that kind of context, there are very, very specific sets of regulations, GDPR, that you need to accommodate, and we’ve got HIPAA and we’ve got all kinds of other regulations. So now we need to get into training these models to provide ethical responses: not race-based where that matters, not sexual-orientation-based where that matters, that kind of thing. These are very, very important concepts when we’re training these models.
But coming back to your original point, we can be fairly specific now in terms of how we apply these models and what types of prompts we expect. So that’s what I would recommend: step back and talk to your chief data officer, which you should have, and talk to your folks that are defining these use cases. Because in the past, we would define software as a set of functional use cases. We would say, “Here’s the common denominator of the folks we want to sell to. Here are the functions that we want this software to provide to our customer base, and this is what we’re going to sell.”
When you’re defining AI solutions, you’re talking about domains. You’re talking about we want to be able to service a set of domains. This absolutely changes how applications are developed.
So if we look at AI as an entirely different type of application, one that’s not functional-based, step back and say, okay, what data do we have? What data do we want to access? What are the controls that we need to apply to these data? What are the domains of the types of questions we want to be able to interrogate these data with?
Start structuring your AI roadmap from that perspective. Start with the data, start with where you want to be, and start with the moralities and the protections and all of this stuff in mind as you build these solutions, because one of the biggest mistakes I have seen working with ISVs developing these types of solutions is they jump straight to the solution: “We have got to get this out on the market now. We are behind. We have got to get a presence in AI because all of our competitors do,” or whatever your concerns are. “We want to monetize AI. We’re not sure how, but we’ve got to get something in the market and start testing it.” Then they don’t pay attention to these guardrails, these very, very important constraints that we have to put on these systems.
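As a hedged illustration of Doug’s “start with the data, the domains, and the guardrails” advice, here is a sketch of how a domain-scoped chatbot request might be checked before it ever reaches a model. The domain names, blocked topics, and the call_llm stub are hypothetical placeholders, not any particular vendor’s API:

```python
# Sketch: guardrails designed in upfront, evaluated around every model call.
# ALLOWED_DOMAINS, BLOCKED_TOPICS, and call_llm are illustrative placeholders.

ALLOWED_DOMAINS = {"billing", "setup", "troubleshooting"}
BLOCKED_TOPICS = ("medical advice", "legal advice", "social security number")

SYSTEM_PROMPT = (
    "You are a support assistant for AcmeSoft products only. "
    "If a question falls outside billing, setup, or troubleshooting, "
    "say you cannot help and refer the user to human support."
)

def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Stand-in for whatever model endpoint you actually use.
    return f"[model reply, constrained by system prompt] {user_prompt}"

def answer(user_prompt: str, domain: str) -> str:
    # Input guardrail: refuse out-of-scope domains before spending a model call.
    if domain not in ALLOWED_DOMAINS:
        return "Sorry, that's outside what this assistant covers."
    # Input guardrail: screen for topics the model must never engage with.
    if any(topic in user_prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic. Please contact support."
    return call_llm(SYSTEM_PROMPT, user_prompt)

print(answer("How do I update my billing address?", domain="billing"))
print(answer("Tell me his social security number.", domain="billing"))
print(answer("What's the weather like?", domain="smalltalk"))
```

Production systems layer on much more, output filtering, audit logging, human escalation, but the point is Doug’s: these constraints are architecture decisions, cheapest when made before the solution ships.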
Sean Sebring:
Once again, Doug, you’ve done it and made me think about something I didn’t realize was part of the equation, and that’s shopping for different models, so to speak. I never thought to look at it as we’ve got different sets. In my head, I actually went to a digital interface where I was like, “Swipe. Which model do I want to choose based off of what is my goal? What am I selling? What am I creating?”
Doug Bennett:
And am I building this from scratch, to your point? Am I choosing a model like Q for manufacturing, which is a great solution, or am I building this thing from scratch?
Sean Sebring:
Is that going to continue to be more available, like shopping for models, so to speak?
Doug Bennett:
I promise that I won’t go back down that path of be careful who developed your model. I mean I am confident. I work for Amazon. I’m confident that these models are developed in a very business-resilient structure for want of a better term. But, yeah, I think that another place where I see a challenge with ISVs … I apologize, I’m using Amazon acronyms. So independent software vendors. So just think of a software company if I make the mistake and go back and use that acronym again.
So with software companies that are developing AI solutions, that’s also a fairly common challenge: they start out with “let’s build the solution.” They don’t start out with: where’s the technology now? Who’s built what? I mean, look, AI is moving so fast that two or three months ago, maybe six months ago, there weren’t Amazon Q solutions by industry. There might not have been, or you might not have heard of them. Go take a look. Whether you choose Amazon’s or anybody else’s, there are a lot of vendor solutions out there that are worth considering.
So you ask yourself the questions, like I said, when you start your journey: what data do we have access to? Then you can include LLMs in that. By the way, as a side note, you don’t necessarily have to use a single LLM, because one of the ways of getting around these security issues and guardrails and data privacy issues is breaking up these LLMs: for this type of domain, I’m going to provide these data and we’re going to be able to solve these types of problems. I’m going to combine that with another LLM that gets a different, more constrained set of data, because now we’re talking about human information or something that is IP related, that kind of thing. We’re going to provide a more limited set of data, which constrains down the prompts that we’re going to be able to be responsive to.
I’m not necessarily saying you must architect solutions across multiple LLMs, because if you want a really, really good GenAI solution, the bigger the dataset the better, and the more appropriately trained the better, not overtrained, not undertrained. And now we’re getting into a lot of the really difficult parts about setting up these models.
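Doug’s aside about splitting responsibilities across multiple LLMs is essentially a router pattern. A minimal sketch follows, with hypothetical model stubs and a crude keyword classifier standing in for whatever real routing and real models a production system would use:

```python
# Router pattern sketch: each domain gets its own constrained model and data.
# The model stubs and the keyword classifier below are hypothetical placeholders.

from typing import Callable

def general_product_model(prompt: str) -> str:
    # Imagined LLM grounded only in public product documentation.
    return f"[general model / public docs] answering: {prompt}"

def hr_restricted_model(prompt: str) -> str:
    # Imagined LLM grounded only in access-controlled HR data, stricter guardrails.
    return f"[HR model / restricted data] answering: {prompt}"

ROUTES: dict[str, Callable[[str], str]] = {
    "product": general_product_model,
    "hr": hr_restricted_model,
}

def classify_domain(prompt: str) -> str:
    # Toy classifier; real systems often use a small model for this routing step.
    hr_words = ("salary", "benefits", "leave", "payroll")
    return "hr" if any(w in prompt.lower() for w in hr_words) else "product"

def route(prompt: str) -> str:
    return ROUTES[classify_domain(prompt)](prompt)

print(route("How do I reset my dashboard password?"))
print(route("What are our parental leave benefits?"))
```

The design choice mirrors Doug’s tradeoff: one big model is simpler and often more capable, while separate domain-scoped models make it easier to keep sensitive data behind narrower doors.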
Chrystal Taylor:
One of the things that I’ve talked a lot about in the past is that when you’re building software, and here we’re talking about building GenAI functionality into software, if you jump straight to a solution, you miss out on things that you should have been doing from the very beginning, like security and accessibility. We were talking about biases. So if you aren’t checking those things from the very beginning, how difficult is it to go back and build that in later?
Doug Bennett:
I mean it depends. I mean it depends on how complex your model is. It depends on how integrated that model is in terms of your broader set of systems, that kind of thing. I mean there are certainly contributing factors to difficulty, but it isn’t necessarily something that’s an impossible problem to overcome, that’s for sure.
The bigger issue to keep in mind, not to keep belaboring this point, is these constraints that you need to put on your model. Look at Air Canada as an example. This is a public case study; I’m not sharing anything IP related. Air Canada had a chatbot, and somebody came in and said, “Hey, you guys have really good bereavement policies. Is it free for me to fly on your airline if I’m attending a funeral?” The system said, “Yes.” Now I’m paraphrasing there, but the system said yes, and it said yes to four or five different people.
The person went to the funeral and they came back and they said, “Okay. Well, great. Thanks, chatbot. So how do I get reimbursed?” That kicked off to a human that said, “What do you mean? Go kick rocks. The answer is no. We don’t have a policy. We’re not going to pay you.” It went to a tribunal, the British Columbia Civil Resolution Tribunal, and the tribunal said, “Pay them.”
So my point being: in the grand scheme of things, that’s a fairly trivial example. They had to pay a few hundred Canadian dollars in restitution. But “I’m going to figure out security later,” boy, that’s the mantra of many, many startups: “We will figure out security and accessibility and policy and that kind of thing later. We need to get something to market.”
The problem is that the risks with an unconstrained system like that, out in the wild, are unknowable, because with how people access these systems through prompts, this prompt engineering, you’re not going to predict every possible phrasing of something that’s going to get you in trouble. So whether it’s more difficult to do later or not, you’d better consider it upfront, because the risks downstream are gigantic.
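The Air Canada lesson reduces to: never let the bot improvise policy. One way to hedge that risk, sketched below with a made-up policy store and a crude keyword match (a real system would use vetted retrieval over a human-reviewed corpus), is to answer policy questions only from approved text and refuse otherwise:

```python
# Sketch: ground policy answers in approved text; refuse when nothing matches.
# APPROVED_POLICIES and the keyword lookup are illustrative placeholders only.

APPROVED_POLICIES = {
    "baggage": "Checked baggage: one bag is included on international fares.",
    "refund": "Refunds are issued per the fare rules attached to your ticket.",
    # Note: no "bereavement" entry exists, so the bot must not invent one.
}

def answer_policy_question(question: str) -> str:
    q = question.lower()
    for topic, text in APPROVED_POLICIES.items():
        if topic in q:
            return f"Per our published policy: {text}"
    # Refusal beats a confident guess that can legally bind the company.
    return ("I don't have an approved policy answer for that. "
            "Please confirm with an agent before making travel decisions.")

print(answer_policy_question("Do you offer bereavement fares for funerals?"))
```

Whatever the mechanism, the constraint Doug argues for is the same: the set of things the system is allowed to assert should be decided before launch, not discovered in court afterward.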
Chrystal Taylor:
Yeah, I think difficult was the wrong word. I was thinking more of the time and resources needed to do it from the start versus having to go back in and put that stuff into it later.
Doug Bennett:
Oh, I mean pay now, pay later. I mean it’s always the same. It’s the same discussion as technical debt. It’s like do we develop systems now that are fragile and are brittle and solve a problem really, really well right now? So what is it that you want? That’s why I made that offhand crack about having a chief data officer, is because these types of discussions, they have to happen at an executive level. It’s like where do you want to be when you grow up with AI? Do you want to start out with a GenAI solution that’s a chatbot or do you want …
Is that going to be a freestanding island of value that we provide to our customer, and we’ll figure out the rest of AI later? Or is it something where you say, okay, great, we have a chatbot that solves IT problems. Do we also want it to help our customers set up their new systems? Do we want that chatbot to handle, “Hey, I want to add some users. I want to apply a different set of security policies to my software”?
I mean it makes sense that you’d have it in the same system based on user access requirements and that kind of thing. You could have that all in one system. What is it you want to build? I’d strongly suggest you consider it as part of your data strategy and your overall application architecture and the rest of that, and that you do it upfront.
Sean Sebring:
Well, this leads us to what could potentially … Because I’ve loved all these topics, and we could definitely go at length on all of them. But I think a good one to bring us close to a close is predictions. What are some future trends of GenAI, do you think, Doug?
Doug Bennett:
That is an interesting question, and the answer is I don’t know. I wish I had a crystal ball. I mean I would be a rich man if I could have done that with the internet.
Sean Sebring:
Let’s do that. What do you want? If you had the magic wand or the crystal ball and a little bit of influence there, what would you say is the future we could have?
Doug Bennett:
Yeah. What I want in the near term, to use that phrase again, is a catalyst for innovation. That’s what I want out of AI. If you look at the coding example: “AI, generate me a connection point between these two databases that does these things.” Is that where we end, the beginning and the end of it? Or is it, “Hey, AI, how would you connect these two databases to provide this type of value to my end user?”
That’s a much more interesting question to me than “I know the answer, develop this thing.” I would hope that as AI, and GenAI in particular, starts to mature, we mature in terms of how we use it. We use it to prompt our brains, our meat computers, to be more creative, to give us more insights, because I don’t necessarily see, or want to see, at least in the near term, an AI that is so intelligent that we just say, “Go, write me this or generate this for me.”
Look, where I see coding going, and we started out with this whole thing about prompt engineering, is that I don’t think “applications” are going to mean much in the future in terms of the structure we see now. What I want out of this is to have creative people start being drawn into these more technical roles. I would love to see people who are more traditionally creative.
For example, people who develop video games. I’m a huge PC gamer, PC Master Race. I would love to see that type of technical creativity in more business-focused applications: people who are able to get these prompts into AI and information out of it, and use that to be very creative in how they provide solutions, whether it’s medical or application development or whatever it is. That’s what I hope for AI.
Sean Sebring:
Well, once again, Doug, you’ve helped me think through a question I didn’t even know I had, and it goes back to prompt engineering, like you just called out: instead of asking it to do something for you, you prompt it with, “Here’s a scenario I’m thinking about and it has a potential problem. What do you think?” So it’s leveraging the AI to help you think, to help you discover and explore potential options, which is innovation, which is just what you said.
So I love that you brought that back to prompt engineering, which we opened with, in that a good way to start thinking about your prompts isn’t to ask it to solve your problem, but to help you explore potential.
Doug Bennett:
Give me examples. Show me how other people solve this problem. Here’s a great prompt, one of my favorites. What data, if I could provide it to you, would allow me to ask better questions? What data, if I were to provide it to you, would enable me to provide better value to my customers and how?
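Doug’s favorite prompts work well as reusable templates. A tiny, vendor-neutral sketch of keeping them parameterized, where the placeholder field names are purely illustrative:

```python
# Reusable "innovation catalyst" prompt templates in the spirit of Doug's examples.
# The placeholder fields (problem, topic, customer_need) are illustrative only.

META_PROMPTS = {
    "examples": "Show me three ways other teams have solved this problem: {problem}",
    "better_questions": (
        "What data, if I could provide it to you, "
        "would allow me to ask better questions about {topic}?"
    ),
    "customer_value": (
        "What data, if I were to provide it to you, would enable me to "
        "provide better value to customers who {customer_need}, and how?"
    ),
}

def build_prompt(name: str, **fields: str) -> str:
    return META_PROMPTS[name].format(**fields)

print(build_prompt("better_questions", topic="elevator scheduling"))
```

The pattern matters more than the code: each template asks the model to widen your thinking rather than hand you a finished answer.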
Chrystal Taylor:
That’s an interesting concept. And going to your coding examples, people are already using it in some respects to help them write code and queries. Historically what we have with computers and programming is that there are popular coding languages of a time. Something new and innovative comes out, like when Python came out and all of a sudden everyone was like, “We’ve got to learn Python.”
Doug Bennett:
Ajax.
Chrystal Taylor:
Yeah, yeah, whatever it is at the time. There’s a new hot coding language and we’ve got to learn this thing. But then you wind up in the place where we are right now, where there are a lot of legacy applications, especially in industries that can’t make changes very quickly, like healthcare and government, where they don’t make big sweeping changes. They’re not staying on the bleeding edge. They’re not changing their applications a lot. So they’re still on C#, they’re still on JavaScript, whatever. They’re not trying to reinvent their applications into the latest things.
And so, do you think … As we’re going forward, engineers already use AI to help them generate queries, generate code, so it’s taking out some of the busywork steps.
Doug Bennett:
What you’re describing is how we got to object-based coding. So it’s taking coding and moving it more and more towards human language, more and more abstracted away from the really “technical” aspects of it.
Chrystal Taylor:
More and more popular now are low-code, no-code options for people as well. We’re leaning more into using things like generative AI to help us write the code, to do the harder technical … Or maybe not more difficult, just tedious, the more tedious technical work that you may not need to do yourself. Yes, you still have to check it. Yes, you probably are going to have to make some changes to it and then prompt your creative brain to say, “Oh, okay. Well, now I need to see this other thing.” But do you think that all of this innovation and the uses we’re already starting to see are leading us toward a universal coding language? We talked before about Esperanto being an attempt at a universal spoken language.
Doug Bennett:
Isn’t that an interesting question in terms of where this is going? What does coding mean now? When we talk about large language models, I think what we mean by a large language model is going to change, because right now when we code, say we’re developing reporting software, we say: go get this data, send it through … That’s a little neuron that fired. Take that data and transform it in some way, mathematically or whatever, combine it with that data. That’s another little neuron.
We do that with Java and we do that with all these coding languages, but what does that mean when we start talking about the concept of a large language model? Do we just now dump the data that we need into a specific type of large language model, not like a ChatGPT? Do we develop our large language models by dumping these data in? Then do we train these data the way that we’re talking about now in terms of prompting and weights and biases and that kind of thing? Where did the coding language go?
I think that’s probably … If I were to make a wild prediction … It’s funny, I’m going to go back and look at this in 10 years and it’s going to be hilarious. But my wild prediction is there’s no coding language when you start talking about neural networks. If we get these neural networks to be “smart” enough, why do we need a coding language? We have these data, we input these data, and we have a domain, a type of set of questions that we want to be able to ask and interrogate these data with.
So we’re going to develop large language models that don’t require this type of coding. We’re going to have maybe data sources that are structured in a way that’s not an S3 bucket anymore. It’s not something that’s highly indexed and highly structured in terms of how we access it. It’s a fascinating … I think we could do a whole new podcast on that. That’s a really fascinating discussion about what we mean now when we say applications, what we mean now when we say data storage and access and that kind of thing.
Sean Sebring:
Well, Doug, this has been just such a blast. Thank you so much. I’ve never been so awakened by the answers to my own questions; I kept thinking, “Oh, wow. I never thought of it that way.” This has just been so much fun, and we couldn’t have done this without the experience and thought leadership you’re able to provide. So thank you so much for being here with us, Doug.
Doug Bennett:
Well, thank you so much for the opportunity, because, as you can hopefully tell, there is nothing that I like to do more than to talk about technology in a way that is hopefully consumable by people who aren’t super duper technical, because I’m not super duper technical, and I really, really enjoy this type of interrogation. Let’s talk about a subject and try and make it interesting. I mean this is so much fun for me, so I really appreciate it.
Sean Sebring:
Thanks again, Doug. This has just been too much fun. I’m your host, Sean Sebring, joined of course by my cohost, Chrystal Taylor. If you haven’t yet, make sure to subscribe and follow for more TechPod content. Thanks for tuning in.