
SolarWinds Lab Episode 61: SAM Template Rematch

The power of monitoring with SAM lies in the ability to create templates - collections of monitoring components which collect data from all manner of sources, from services to logfiles to synthetic tests that impersonate a real user's behavior. Being familiar with all the different options to configure these components is essential to creating effective templates. Back in episode 53, SAM Product Manager Steven Hunt and Head Geek Leon Adato dug into each component type and described how they worked. But with over 50 component types, there were many left out. In this episode, they continue the exploration, looking at port monitors, database query components, and more.


Episode Transcript

Hey everyone, welcome to SolarWinds Lab. Now, you might remember episode 53, when Steven and I dug into SAM components, and we started to talk about-- Do you honestly think people pay attention to episode numbers? Well, I mean, I do. Anyway, back in that episode, which aired in April 2017 (you can find a link in the additional resources section down below), we got through some of the components, but there was a lot left over. For anyone interested in modifying or building templates, whether you've seen the episode or not, understanding how each of the SAM components works can make all the difference, so that's what we're going to talk about today. Now, if you were watching episode 53 but got interrupted, or need a refresher, in addition to the link to the whole episode, we also have links to our Lab Bits, where we cover each of the components in short, bite-size videos. Actually, those are important to mention because, in the interest of showing more here today, we're not going to talk about any of the components we covered previously. But we're wasting precious time, so let's get started. I'm Steven Hunt, product manager for SAM. And I'm Leon Adato. When we talk about SAM templates and components at SolarWinds User Groups, there are always a ton of questions, so please, feel free to ask them in the chat window that you should see over there, and if you don't see a chat window, that means that you aren't watching us live. To do that, go to lab.solarwinds.com to catch future episodes and for an archive of previous shows, complete with transcripts and closed captioning. Are you ready to get started? You mean pick up where we left off? Right, that. So where I'd like to start off for this episode is with a component that I think often gets overlooked, and that is the port monitors. Now, there's a series of them, and we're going to start off with the most basic of them. So we're just looking at this TCP port monitor.
Yep, so within SAM, you have a TCP port monitor, and it allows you to do a very basic check, as you mentioned, around a port. Is it responding? Is it up, is it down? That's it, that's all it's really looking for. And if you look at the screen you can see that. We happen to have port 80, which, obviously, you might have seen with the HTTP monitors that we covered in the last episode, but here we go. But you could put any port number in here. That's important. Any port number you want. Because, if we go to the next one, the DNS monitor, you'll notice that it is pretty much the same screen. It's exactly the same screen, except it's been pre-configured for port 53, so it's that basic TCP port check, but out of the box it has 53 configured. You don't have to go in, create a TCP port monitor specifically for that, and designate that as 53. You can just simply add the DNS monitor. If I put 80 in that port number, would it work? It technically would work. Yes. But I don't know why you would want to do that. Of course, because then you'd have a DNS monitor checking port 80. Okay, but I'm getting to a particular point here. So here we go, next one on the list of the port monitors that we have available. We are looking at the DNS monitor again, port 53, but this time it's-- It's checking UDP. It's using UDP instead of TCP. Exactly, so for those that need to ensure that DNS is up and responding, both TCP and UDP are required for that scenario, so you can just very quickly add those two component monitors; it's out-of-the-box functionality. You don't have to go configure a particular port monitor for it. It's there, it's available, it's quick, it's easy. Right, now, the point I was getting at with "you could put port 80 in the DNS TCP monitor" is that the rest of the port monitors that we're looking at here today, the FTP monitor and the NNTP monitor, are the same. Just taking a quick look at the screen, the FTP monitor is also a port monitor.
We happen to have stuck 21 in the port number. It could be any number. Especially if you've maybe configured FTP to run on a non-standard port. Right, you could put your port number in there, but it could also be 80, because it's just a port monitor. You could be checking to see if SFTP works. Correct, or SSH, or any of those, and the same thing with the NNTP monitor. It's just a port monitor that we've pre-configured for port 119. So, to be clear, it is simply several pre-configurations of the TCP port monitor. Whatever you need to do from a TCP port monitoring standpoint, all you have to do is add the TCP port monitor to any template where you want to monitor that capability. If you see one that is required for you, you need to monitor DNS, you need to monitor FTP, just add those. You don't have to worry about actually configuring the port number. And the key is that if you have an application that's listening on a particular port, then you can use these monitors here also. So let's talk about, really quickly, just so we can highlight, we talked about, it's simple, it's a check, right? Right. But there are still warning and critical thresholds for it, just like in a lot of the component monitors. What you'll notice is, when you're configuring this, there is a warning and critical value for response time, so not only are you checking to see, is it responding, but what is the actual response time of the particular port? And all the same functionality that you see in the component monitors, such as defining a static value or using baselines, is also included in this component monitor to ensure that, again, not only is it responding, but is it responding in a timely fashion, and does that match the expectation you have for your business. Wonderful, now, the next thing I want to take a look at is something that people, I think, expect to be materially different, and actually isn't far off from here.
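To make the mechanics concrete, a TCP port monitor of this kind boils down to "can I open a connection, and how long did it take?" Here's a minimal Python sketch of that logic; the threshold defaults, function name, and status strings are illustrative stand-ins, not SAM's actual values or API.

```python
import socket
import time

def check_tcp_port(host, port, warn_ms=100.0, crit_ms=500.0, timeout=5.0):
    """Open a TCP connection and classify the response time.

    Returns "Up", "Warning", "Critical", or "Down", loosely mirroring
    the status a SAM TCP port monitor reports. warn_ms/crit_ms stand in
    for the monitor's warning and critical response-time thresholds.
    """
    start = time.monotonic()
    try:
        # Connection is closed immediately; we only care that it opened.
        with socket.create_connection((host, port), timeout=timeout):
            elapsed_ms = (time.monotonic() - start) * 1000.0
    except OSError:
        return "Down"        # refused, unreachable, or timed out
    if elapsed_ms >= crit_ms:
        return "Critical"    # port answers, but far too slowly
    if elapsed_ms >= warn_ms:
        return "Warning"
    return "Up"
```

The same function covers DNS (53), FTP (21), NNTP (119), or any application port, which is exactly the point made above: the pre-configured monitors only differ in the default port number.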
Now, I want to be clear that the components that we're looking at are not the user experience monitors. We actually are going to cover those in a future episode. This is the regular, old email monitors, the POP3, the IMAP, and so on, and I think it's going to be hauntingly familiar for anyone who's been watching the last three minutes. To that point, these are, again, port monitors. They're looking at pre-configured values, so for POP3, right, we're looking at port 110, and for SMTP, port 25. Unless you've configured it for something else, which, of course, you should. Exactly, so it's configured for 25 out of the box, right? But just like any of the other port monitors, if you need to configure that for a different port, all you have to do is change the port number value within the component monitor. IMAP, 143. Again, it has all the functionality of the rest of the component monitors for TCP port monitoring. Warning and critical for the response time value. You can use baselines, right? You can define for a single poll, X consecutive polls, X out of Y polls; all the standard functionality that you would expect out of the component monitor is there. It's just, again, a very simple TCP port monitor checking that particular value. You can change these however you need to, and ensure that it's responding and it's got the appropriate response time. Right, so just to be clear, the last two component sets that we talked about are all variations of the TCP port monitor. You could use just the TCP port monitor and input the port that you want. We've pre-configured them in case either you're not sure or you want the special naming, although they can be renamed. Whatever it is, but they're really all variations on the same thing. It's just purely to make it easier for you to use, in terms of defining a port or just pre-selecting one that's already existing. Exactly.
So next up are the file related monitors, the directory and file monitors, and believe it or not, these get a lot of conversation on THWACK. I see about once a week somebody saying, hey, I need to know if this file has changed, or is the file I'm looking for there, or I use that as a trigger, or whatever it is. It's a fairly simple concept, so it's surprising how much people need to leverage it. Right, and I know that a lot of home-grown applications especially use the idea of a file lock or a file trigger, and it's a really useful thing. These monitors are really useful for knowing whether to trigger an alert, by having "does this file exist?" as one of the components. Either does the lock file exist, meaning that I'm in maintenance mode and so I shouldn't cut an alarm, or am I missing a file, meaning that I'm out of sync or something like that. Again, a lot of conversation on THWACK about it, but they're really simple to do. We're starting off with, I think, the most basic one, the File Existence Monitor. Why don't we dig into that? So it's very, very simple. It is just purely to determine whether a file is there or not, right? At its very core, it's really easy to configure. Add it to the template, then you're simply determining what is the file name, what is the directory structure, right? However you can point to that, and then you set the File Existence Setting, so that's either File Must Exist or File Must Not Exist. Right. Very basic check. Right, note the double backslash after the drive letter. That trips up some people sometimes. I do want to point out, I need to show here for a second: on my target machine, I actually do have a directory called filedir, that's why I have that in there, and there is a file called, very technically named, blahblah.txt, and it's 27 bytes. That'll come up, that'll be important later on, just so we're all aware of that. So I put that in there, and-- So let's test this one.
Really easy test, just hit Test, it'll come back in just a second. So what this is going... We've set this to determine File Must Exist. It goes and checks on the node and determines, is that file actually present? And it'll determine whether that's successful or not. And if we change it to File Must Not Exist and hit the Test again, this time we'll get a fail, because it must not exist, but it does. There we go, so it has, technically, a down status, because it wasn't expecting it to be there. So this is really important, right? If I have a system that's looking for the existence of a file, or a file must have been consumed in some way and removed... I can't count the number of applications that I've worked with where there's an output that needs to be generated for production roles, right? If that goes out, the file either is there or isn't there. That means the application's having a problem, so that's why we really focus on that down concept, because if this fails, then that application could actually be down. Right, now, one of the questions that gets asked fairly often is, well, I only need this to run at a certain time of day. I need to know that this lock file, or this output file, is there between midnight and one o'clock, and the challenge is that the polling engine is running on a cycle, it's not running at a particular clock value, so what I always advise people to do in that case is, for this component, set the Polling Frequency fairly wide, in a number of hours, so that you're liable to catch it within that window. That's one of the ways to get around that. And the other thing, I know you guys will probably talk about it in a future episode, but if you need to be more complex than this... Again, this is a basic check to see if the file does or doesn't exist. If you need to be more complex, there are some scripting component monitors I think you're going to cover later. Right, exactly.
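The File Must Exist / File Must Not Exist logic above is simple enough to sketch in a few lines of Python. The function name and status strings here are made up for illustration; they just mirror the up/down outcomes shown in the test.

```python
import os

def file_existence_status(path, must_exist=True):
    """Mimic the File Existence Monitor's check.

    must_exist=True  -> "Up" only when the file is present
    must_exist=False -> "Up" only when the file is absent (e.g. a lock
                        file that signals maintenance mode was removed)
    """
    exists = os.path.exists(path)
    return "Up" if exists == must_exist else "Down"
```

With a lock-file pattern, you'd set must_exist=False so the component goes down (and alerts) when the lock unexpectedly appears, or invert it for an output file that must always be produced.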
Alright, so that is the basic "is the file there?" check. Moving on, we've got: what's the size of the file? Talk me through the new components that are here. Right, so the first one was a determination of does it exist, and this one is actually looking at the file size, so if you have a situation where you need to determine that a file can't grow beyond a certain size, this is a very easy way to do that. So again, same concept as before, we're going to specify the file path, right? Determine where that is, and now we have some statistic thresholds that we can define, right? If it's a very static concept, we can define that the size is a certain value, both from a warning and critical standpoint, or, if you just need to track how that's expanding over time, you can use the baseline methodology to make that happen. Right, it's really good for log files that tend to grow too large, and they take up too much disk space, or whatever; there's a bunch of applications for it. I don't think anyone's ever experienced log files growing immensely. No, log files growing and filling the disk or overrunning it? Yeah, no, never happened, never ever. Let's go ahead and run that test, just to see how that one works out, and there we go, it's successful with an up status. It did not exceed the greater-than threshold. Moving to the next one, file age. So this one's important because you're trying to determine when the file was last modified. If there's a certain situation within an application where this file is regularly updated and you need to ensure that it has been done in a certain time frame, this is the component monitor that you would leverage. Again, you're defining a file path, and then you're defining the statistic threshold. From the value, what do you think the value is that you're trying to define from a time frame? Well, only because I was reading, it tells you the amount of time since the last update.
Right, so in this situation, it's, by default, hours, but of course, you could convert the value if you need to. Simply select yes to convert, and then you could convert that to calculate day values or whatever it is that you need. Or minutes. Right, but by default, it's set from an hour standpoint, and then that will allow you to determine: has this been done within the last hour? If you need to convert it: was this done in the last polling cycle, or really, the last 5 minutes? Something of that nature. But again, for the purpose of determining did the file get modified within a specified time frame, this is the one that you would use. Now, the other thing that I want to just direct everyone's attention to is the use of the variables. We haven't seen that before here, and this catches people, these two dollar signs. The first one is a variable, the IP address of the node that it's been assigned to. And more specifically, an Orion variable. Right. Right? Not necessarily a Windows variable or something like that, but specifically the Orion variables. This is where you could leverage that to define that particular node that you're monitoring, and ensure that you're pointing to that without having to specify a static path. Yeah, a static... So that way, if you apply this to multiple machines, it's still going to have the correct value there. But then the C$ isn't a variable. That is the normal Windows administrative share, a drive mapping. I just want to make sure that we're clear on that one. And if you needed to leverage an NFS share without that drive letter share, the same thing works. Exactly. Next up we have file change. Now, you might initially think that file change and file age are very related, and they are, but they work in radically different ways. Right, the previous one is just simply determining when it was last updated, right? But this one's a little bit different.
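The file size and file age checks just described reduce to reading the file's size and modification time and comparing them against thresholds. Here's a hedged Python sketch of that idea; the classify helper is made up to stand in for SAM's warning/critical evaluation, and age is reported in hours to match the monitor's default unit.

```python
import os
import time

def file_size_bytes(path):
    """Current size of the file, to compare against warning/critical
    size thresholds (static values or a baseline)."""
    return os.path.getsize(path)

def file_age_hours(path):
    """Hours since the file was last modified; the File Age Monitor
    uses hours by default and lets you convert to minutes or days."""
    return (time.time() - os.path.getmtime(path)) / 3600.0

def classify(value, warn, crit):
    """Illustrative helper: map a statistic onto a status."""
    if value >= crit:
        return "Critical"
    if value >= warn:
        return "Warning"
    return "Up"
```

So for a log that must be written at least hourly, you'd check something like classify(file_age_hours(path), warn=1, crit=2).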
Right, and actually, I want to run the test, because it actually is going to fail, and I had it fail on purpose. It's down. Now, what did I do wrong? Let me see, I have the file path, and it's the same as the other one, so one would presume that it's supposed to still work, and I can't see anything wrong except that my eye is automatically drawn to this checksum. Tell me a little bit about what this checksum is doing. So this is a little bit different than what we saw on the previous component monitors. Every file has a checksum value, but you have to actually specify the file that's there, because it needs to grab that actual value. You can't just go... On the fly. Yeah, it's not going to work, so the important part here is to be able to choose the file, update the checksum, which gives you a function right there on the screen. So let's say that I had the file handy, let's say that it was that one. Of course it's not, but let's say it was. I'd open that, I'd say update the checksum on that file. Gives you a notification-- That it is. It also tells me that's the file that's being updated, just so that we're clear on that one. So that means that you need to be able to get from the polling engine, which is really where we are here, you have to be able to get from the polling engine to the file, or have a copy of that file. Either one will work. Right, but it has to be that file, because again, checksum, it needs to determine what is that value, and ensure that it hasn't changed, and to that point, again, just like the previous component monitor, it's leveraging a determination of the past value, the past amount of hours. You can also, again, convert that if you need to change that to minutes, or days, or something. Right. Okay, so that is the file change. Uncheck that, and next we have a completely different thing. Up until now we've been talking all about specific files. Now we're talking about larger groupings. Here we've got the size of the directory. 
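The file change monitor's idea, store a checksum baseline when you configure the component and report down once the file no longer matches it, can be sketched as below. MD5 is an illustrative choice; the episode doesn't say which algorithm SAM actually uses, only that you must have the real file on hand to compute the baseline.

```python
import hashlib

def file_checksum(path):
    """Checksum of the file's contents, read in chunks so large files
    don't have to fit in memory. Stands in for the value SAM stores
    when you click "update the checksum" in the component editor."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def file_changed(path, baseline_checksum):
    """True (i.e. component down) if the file no longer matches the
    baseline captured at configuration time."""
    return file_checksum(path) != baseline_checksum
```

This also shows why the test in the episode failed: without a valid baseline checksum captured from the actual file, the comparison can't succeed.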
This kind of comes back to the use case we were mentioning a moment ago: log files, maybe an entire directory of log files, where you need to keep it within a certain range, or you need to understand if it's growing wildly outside of what's expected. This is a perfect component monitor to do so. And you have some sort of mechanism that is creating multiple log files. Let's say that you have a system that automatically rolls your log file, starts a new file after it gets to be 50 gig or whatever. Very common use case. So a file monitor isn't going to work, because the file name's going to keep changing, but the directory monitor will tell you the size of the whole directory, and it also lets you specify a file extension filter, so you can say all of them, you can say only my log files, only my txt files, things like that. Yeah, it's a perfect way to understand what's going on. If there are multiple things in the directory and you care about the .log files for a particular application, or if the application has different log file extensions but you need to check for a certain type, this is the easiest way: filter on those extensions and ensure that the growth you're measuring is associated with those particular files within that directory. Right, so you're going to have other things in there that will never count and will never matter. So let me run the test on that one, just because it's fun. I think that's an important aspect, right? You can continue to test as you're building out your templates using these component monitors. You can go validate: is this actually functioning? Very important for the checksum, to ensure the value's there, but even more important for determining am I getting the output I was looking for with this component monitor when I configured it, or am I not?
If you don't configure your monitoring correctly, if you don't get these component monitors configured correctly, you're not going to be effectively monitoring your environment. Right, and that is good advice for anyone building or modifying templates. Across multiple components, testing, testing again, testing one more time, waiting five minutes, and testing again is really the normal process for this, because you want to make sure that you've got it nailed down, that it's going to work in multiple situations. I mean, this is not as rigorous as, I would say, programming, where you want to have QA tests and unit tests and things, but you want to have that same level of rigor to it. And you can see that the test came back and the directory says it's 27. Remember I said that the file size being 27 was going to matter? So we know that it is actually reporting the true size of the directory, with all of its many massive files in it, and that takes us to the last of the file related components, which is the file count monitor, so talk us through this one. Yeah, so just like the rest of them, you come through, you configure a directory path. In this situation, just like the previous one that we saw here, you can do a file extension filter, but then you also have file attributes, so you can specify am I looking for a certain amount of read-only files, if that's an expectation, or hidden files, compressed, etc. There are several options there that you can define, and that's ensuring that you're only looking at a certain set of files if that is a requirement from your monitoring standpoint. Excellent, alright, and just to do that test, just because it's fun, and we get to see some results here. And as you can see, there is exactly one amazing, incredible file in that directory.
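Directory size and file count with the extension filter discussed above come down to a walk over the directory's entries. A sketch, assuming only top-level files matter and leaving out SAM's attribute filters (read-only, hidden, compressed):

```python
import os

def directory_size_bytes(path, extension=None):
    """Total size of the files directly inside `path`, optionally
    filtered by extension (e.g. ".log"), like the directory size
    monitor with its file extension filter."""
    total = 0
    for entry in os.scandir(path):
        if entry.is_file() and (extension is None or entry.name.endswith(extension)):
            total += entry.stat().st_size
    return total

def file_count(path, extension=None):
    """Number of matching files, like the file count monitor (minus
    the attribute filtering, which this sketch omits)."""
    return sum(
        1 for entry in os.scandir(path)
        if entry.is_file() and (extension is None or entry.name.endswith(extension))
    )
```

This is why the rolled-log-file case works: the individual file names keep changing, but the directory total and count stay meaningful.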
So what I'd like to do is transition to something completely different now, but also a topic of much discussion on THWACK, which is the database related monitors. This is something that I think people want to do a lot, but when they first get into it, it seems a little bit daunting, so I was really excited to get to this part today and show off some of the tricks. Well, I think part of the reason why people struggle with this concept is that not many systems administrators are also equally adept at database administration, so there are some aspects of it that might escape some people, especially when you're trying to configure these component monitors specifically, because there are certain nuances to doing so, which let's walk through. Right, and you have to have pieces of information that often you're not privy to, so we're going to start off with an ODBC based user experience monitor. You can see that there, and the part that I want to drive your attention to right away is the connection string. The provider is MSDASQL, and the driver is MySQL, so this is a MySQL database we're connecting to. MySQL ODBC driver, 5.3 Unicode, etc., etc. So, this is a really important part, right? Because you have to have the driver set up on the polling engine. On the polling engine or engines. Or engines, exactly the point. We'll talk about that in a second. We'll get to that. So I want to show that we've got over here, on my target system... Now, I have spent many, many, many hours not realizing that all it needed was a driver, and I've spent time setting up a user DSN, you notice that we have none, a system DSN, also none here, a file DSN, uh-uh, don't got those. All you need to have is the driver set up, but you have to know exactly what it is and what it's called. You didn't open a support ticket for an easy question, did you? I certainly did not. I struggled with it for far longer than I should have.
You can see that this particular MySQL ODBC driver actually comes from Oracle Corporation, which is fine. It can be any of the drivers, but you have to know what it is on the target system. Most importantly, you need to make sure that the driver supports the database that you're trying to connect to on the other side. That's a key component, and that's what I see a lot of you struggle with oftentimes: making sure that that driver is the right driver. An easy way to do that is to check by loading whatever the client software is, leveraging that driver, connecting to your database, and validating that the driver is allowing you to actually connect effectively, then leveraging that driver within this configuration. Again, on the polling engine. Correct. And this is where I do want to stop being spoiler-y and just say that if you have multiple polling engines, you'll need to do this on every single one. It's very important, because the calls are actually happening from the polling engine itself, so if you're making a call to a database from a particular polling engine, it's going to leverage that driver, so you need to load those drivers. You can do your test on one polling engine, and then make sure you load the drivers on the rest. You don't have to go through the whole process on every single polling engine. The key component is making sure that you load the driver, like you've shown here. Right, now, some of you have multiple products, you have let's say NPM, and SAM, and SRM, and maybe you have one poller specifically designated for your database servers, and you're very structured in that way.
In that case, the driver would only need to be on the poller that the database servers are being polled from for SAM components and all of that stuff, since they're assigned to that poller. I don't want to imply that you have to install drivers on the pollers that are across your DMZ if you don't use this component on the devices that are assigned to those DMZ pollers. Right, but for those of you that are leveraging your pollers across the entire board, trying to connect to everything and poll everything, make sure that you do load these drivers on every single one of your pollers. Or you're load balancing continually, or whatever. Alright, so back to the actual component. Here we are. So again, you've got your provider, the MSDASQL, the driver name, which, again, is exactly what you found in the ODBC screen itself, and here we have that variable again. Just remind me, this variable is for what now? So this is for whatever I'm connecting to. In the situation before, we were connecting to a directory, right? And we were leveraging the IP address of whatever node I'm monitoring. Same situation here: we're leveraging that Orion variable that's going to give us the actual IP address of the target machine that we're trying to monitor. So in this case, you would apply this template to a server that's running MySQL, and then the IP address of that MySQL server would get populated for this one. So, again, you can create this for a lot of different MySQL servers and ensure that you don't have to statically specify the IP address. You can leverage the Orion variable for whatever node you're actually monitoring this on. Right, and as long as we're on the topic of SolarWinds variables, down here, user ID and password are not something specific to the database server. They're specific to the credential for monitoring. That's where it's polling it from, so I've set up my credential for whether it is Orion, or Linux, or MySQL, or Oracle.
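As a rough illustration of how an Orion variable keeps the connection string portable across nodes, here is what the substitution might look like if you sketched it yourself. The template mirrors the episode's MSDASQL/MySQL example; the `${IP}` style placeholder, the expansion function, and the sample credentials are all illustrative, not Orion's actual template engine or variable names.

```python
def expand_connection_string(template, node):
    """Replace ${NAME} placeholders with per-node values, roughly the
    way Orion fills in its variables when a template is applied to a
    node. Purely illustrative."""
    result = template
    for name, value in node.items():
        result = result.replace("${" + name + "}", value)
    return result

# Layout follows the ODBC connection string shown in the episode;
# the placeholder names are made up for this sketch.
TEMPLATE = (
    "Provider=MSDASQL;"
    "Driver={MySQL ODBC 5.3 Unicode Driver};"
    "Server=${IP};"
    "Database=performance_schema;"
    "Uid=${USER};Pwd=${PASSWORD};"
)
```

Apply the same template to ten MySQL servers and each node's own IP address lands in `Server=`, which is exactly why you don't hard-code it.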
That's the username and the password that it's now passing to the driver on the poller. So I know we've talked about credentials for monitoring before, but I want to make sure that we highlight it again here specifically, because this is important. It's being used for the connection to the database, right? So very, very important. You do need to define those credentials. It can be inherited from the node if the node credentials are the same as the credentials for the database server. And let me say that with even more words, because I love all the words: if the credentials that SolarWinds is using to connect to the server in question and poll statistic data, the WMI or the Windows login, are the same as the credentials to get from your polling engine to the database server, which, quite honestly, should actually never be true. It should never be the case, unless you're talking about SQL Server, where you could be leveraging Windows credentials, which is a common practice for Microsoft, but in just about every other database server instance out there, you're going to have a separate set of credentials, which is really important for this. Yeah, and Destiny, our security geek, is going to swoop down upon us and rain holy fire if we keep on suggesting that that's even... You should really, honestly, follow best practices. That should not be the case, but if for some reason they are the same, you could use those. You could inherit from the node, but for most of the use cases, you're going to set up a separate set of credentials. Those are available in the SAM settings in your credential store. It's different from your node credential, so make sure that you're setting that up properly. Like you have here: you have a particular MySQL service Linux account that you're leveraging, that is intended to be used for communication with your MySQL server. Right, so the last variable, the last element, is the database, which is performance_schema.
That's one of the pieces that you have to know when you're setting this up. Your DBA may tell you, or maybe you do know it, maybe you are one of the DBA Illuminati and you are aware of it, that's fine, but you need to know what that database is that you're querying here in a second. Or it could be possible that you've just connected to the database and fumbled through it, trying to figure out what exactly it is that you need to monitor. I know a lot of us have done that, but that's the important part: make sure that you have the database specified within the connection string. Right, and once you have all that, believe it or not, the next part, which you'd think is the hard part, is the easy part: the query that you're going to run. So here we have a very simple query, just selecting the variable value from global status where the variable name is bytes received, so really we're just getting the bytes received from the database statistics. Right, and this is the common use case that we see for any of the database servers. We're trying to get system information from the database server itself, in this case, bytes received, and that will allow you to determine: is the performance what is expected? You can do other queries and find something very specific within the database itself. This is also a user experience monitor, so if there's a certain query that a user application runs that expects a certain output, that's a very effective use case, and all you have to do is specify that SQL query to be able to poll that information, and then you can use that value in defining your statistic threshold. Right. I think the last new statistic or element that we haven't seen before is the query timeout. Pretty straightforward: if this query, this one right here, doesn't come back in a certain amount of time, then we want to call it down.
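The pattern described here, run one query and treat the first value it returns as the statistic you threshold, can be illustrated with Python's built-in sqlite3 standing in for the real ODBC connection. The table name and layout loosely mimic MySQL's global status table from the episode's query; nothing here is SAM's actual implementation.

```python
import sqlite3

def poll_statistic(conn, query):
    """Run the monitor's query and return the first column of the
    first row as a float: the statistic a warning/critical threshold
    would then be applied to."""
    row = conn.execute(query).fetchone()
    if row is None:
        raise ValueError("query returned no rows")
    return float(row[0])

# Demo with sqlite3 standing in for the MySQL/ODBC connection:
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE global_status (variable_name TEXT, variable_value TEXT)"
)
conn.execute(
    "INSERT INTO global_status VALUES ('Bytes_received', '1048576')"
)
bytes_received = poll_statistic(
    conn,
    "SELECT variable_value FROM global_status "
    "WHERE variable_name = 'Bytes_received'",
)
```

The same shape covers the user experience case too: the query is whatever the client application would run, and the returned value is what you alert on.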
Kind of important. Hopefully this query is very simple in terms of what you're trying to get from it, but there is the scenario where the database may be very loaded with activity, so you want to ensure that you're not waiting a very, very long time for this to come back, because many of us know that a database server can take a long time to respond. In that case, make sure that you have a timeout for that situation. That is a very common user experience scenario: the query just took too long, and therefore the users got tired of waiting and they're opening up help desk tickets. So in that situation, you want to make sure that if this is taking too long, go ahead and time it out, and usually you're going to set this based on an expectation of when an application should respond to this query. You'll want to mimic that value, so set it there. If it goes beyond that, it's down, you have a problem, and you need to get somebody involved to fix it. Right, which goes back to the rule when you're setting up queries for alerts, or reports, or any of those: you want to use your database tools to create a query plan, to find out how long it takes to run. You want to optimize this query right here as much as you would optimize any other query anywhere else. Again, take your DBA friends out to lunch, get them to help you, have them give you some pointers, because that's still due diligence. Otherwise you could create what I call observer bias, which is, simply by the act of monitoring something, you shove it over and you crash it, and it would've been fine without it, so don't be that person. Alright, and you can see that in this particular template, we have a lot of these monitors here, so what this looks like along the way is this kind of template.
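The query timeout behavior, give up and call the component down rather than hang on a slow database, can be approximated with a worker thread. This is a sketch of the concept only, not how SAM implements it; note that the abandoned worker keeps running in the background after a timeout, which a real implementation would have to clean up.

```python
import concurrent.futures

def run_query_with_timeout(run_query, timeout_seconds):
    """Run the query callable, but report "Down" if it doesn't return
    within the timeout, mirroring the component's query timeout.
    `run_query` is any zero-argument callable that executes the SQL."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(run_query)
    try:
        return "Up", future.result(timeout=timeout_seconds)
    except concurrent.futures.TimeoutError:
        return "Down", None   # query exceeded the user-experience budget
    finally:
        # Don't block waiting for a stuck query thread to finish.
        pool.shutdown(wait=False)
```

As the discussion suggests, you'd set timeout_seconds to the response time the client application actually expects, so a breach of that budget surfaces as a down status rather than a pile of help desk tickets.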
This is the same template where it's running and, to look at just one of the statistics, again, we're just going to look at the kilobytes received; there are the statistics that you can get along the way. Here we've got this collection of kilobytes received that we've been polling, and you can threshold it, and alert on it, and all the glorious, wonderful monitoring things you can do. I think that's important, so you've seen how you can take a simple query from a database, grab a value associated with that, right? See a whole set of them. You can string a whole bunch of those queries together, have several component monitors that are doing those queries against your SQL server, and those multiple queries could be an indication of exactly what a client application is supposed to be doing. Now you can drill down into it and see each of those individual statistic values and understand how my application is behaving from a user experience standpoint, as it's expected to poll information from a database. Exactly. Alright, so that is the ODBC driver. Another option that we have is Oracle, so here we're looking at a component for the Oracle User Experience Monitor, and you'll notice that there isn't a place for the ODBC driver information. In fact, there's not a lot of information here at all, so what's going on here? What am I missing? So this is a little bit different, right? The ODBC query is supposed to be database communication at its most basic, leveraging the built-in driver that's there, making that communication to the database. This is more specific to an Oracle database, so in that situation, typically what you're going to do is install the Oracle client on the polling engine. You're going to load the respective drivers associated with that, and again, this is very important.
It needs to be the one that's supported by the Oracle database that you're trying to monitor, so if you have a mismatch there, if you're loading a client and a driver that is not supported by the Oracle database, you're going to have issues with it. Support is often going to say that this is the problem, you need to make that change, so when you're setting this up, make sure that you're actually configuring the Oracle client on the polling engine. Make sure you've got the right driver that supports that Oracle database. Right, you can get a SQL prompt so that you can actually make the connection manually, so that you can run the query right from the polling engine, and again, polling engine or polling engines, depending on what you're doing. And I think that's an important part, right? You should be able to do the query that you're going to do within the component monitor. You should be able to do that directly from the Oracle client that you have installed on the polling engine, and then from that point in time, put your query within the component monitor itself. Right. That said, there are still a few things that we need to put in here. First of all, you still need the credential for monitoring, which we're going to have as the Oracle Service Account. There we go, again, you need to know what the service account is, you've got to use it, etc. The port number for the database. Go ahead. So this is really important, because you can have Oracle configured to use any particular port that is required from an Oracle standpoint, you can vary it, so you have different applications connecting in to your Oracle server. Because of that, we have the component monitor set up to allow you to configure that port number and connect very specifically to that Oracle instance, and get to that particular database that you're looking to communicate with. Mhmm, exactly.
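For illustration, an Oracle counterpart of the bytes-received idea would typically read from the v$ views. This is a sketch, and reading v$sysstat requires the appropriate privileges on the monitoring account:

```sql
-- Hypothetical Oracle version of a bytes-received statistic,
-- read from the instance-wide system statistics view.
SELECT value
FROM v$sysstat
WHERE name = 'bytes received via SQL*Net from client';
```

Running this manually through the Oracle client on the polling engine first, as described above, confirms both the driver and the credential before the component ever polls.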
You've got the query, again, you'd think it was the hardest part, it's actually the easiest part of this whole process, and then you've got the destination type. Are you connecting to a service or a SID? Which is very specific to Oracle, and if you don't know the answer to this, check with your DBA. If you need some more help, you can easily do a search on Google, but it's just important to understand how you're supposed to connect to this database. That information is what's going to populate here. Right. And then also you need to know the destination port name, if it is a named port. Point name. Sorry, sorry, destination point name. Destination point name, thank you. Ports, points. Yeah, yeah, I know. And then, finally, the driver type. This takes us back to: what did you install on the polling engine? Correct. What is the driver that you're using? You want to make sure that you specify that particular driver that you have installed on the polling engine. You need to have it installed, and then you need to specify it in the component monitor itself. Excellent, so we've covered the ODBC based database monitors, we've covered the Oracle client based monitors, that leaves us with... SQL. Microsoft SQL. So here's a template with the SQL Server User Experience Monitor. Once again, the ODBC driver isn't there, so we know that it's not using that. I also see some values here, but how am I making a connection in this case? So this is the benefit of Orion running on a Windows environment, right? We're leveraging the same mechanisms that Windows uses to make the connection to the SQL server, so when you go through the configuration in your component monitor, there's some basic stuff from a credential standpoint. You can define your port: you can either use the default port, or you can use a static port. If you use a static port, then you can define that actual port.
Most default instances of Microsoft SQL Server are going to be using 1433, but for those of you that have that configured differently, again, check with your DBA if you have one, or go log in to that server and check the SQL Server configuration itself. You can find that port configuration there. If it's using something other than 1433, you can change the port type and define that in here. I'm going to skip down first past the query, because, again, that's the simple part. You should specify your SQL Server instance, so there's an instance name associated with every SQL Server. You can find that in a number of different ways, and then the initial catalog, which is basically the initial database that you're connecting to when you make that query. So all of this is just configuring, effectively, what you would configure on the server itself if you were trying to make a connection to the database server through some type of client software. Right, now, I love this one. Use Windows authentication first and then... Does that mean it's going to try one or the other, or? Yeah, so this is going to leverage first the credentials associated with the Windows server itself. The Windows server itself, right. Right, and then it's going to fall back to SQL authentication, so it's important that you're configuring your credentials for monitoring effectively to be able to leverage those. Right, so you want to make sure, again, there's our SQL service credentials. If that's the one that you use to connect to the database directly, which, honestly, is recommended. It's what you should be doing. What you should be doing, then that's the way you want to connect. But we recognize not everyone always follows the standard practice, so you can configure this as you need to, to ensure that you can connect to whatever your instance is configured for.
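A couple of T-SQL sketches along these lines, purely illustrative; note that counter names in the performance counter DMV are padded with trailing spaces, hence the RTRIM:

```sql
-- 1) Confirm which TCP port this instance is actually listening on,
--    useful when it isn't the default 1433. (Returns NULL for
--    shared-memory connections, so run it over a TCP connection.)
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;

-- 2) A sample statistic a SQL Server component might poll.
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE RTRIM(counter_name) = 'Batch Requests/sec';
```

As with the ODBC and Oracle examples, verifying these by hand from the polling engine first saves a lot of troubleshooting later.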
Right, so if you're using regular domain credentials to connect to the database, that's when you would use this check mark, to say: try my Windows credentials, and if that doesn't work, now fall back to... Otherwise, we presume that you're using the SQL server credentials initially, because that's technically the right way, the best way to do it. And then you have the query timeout again. We talked about that previously, ensuring that it's responding within an acceptable time frame, and then the rest of the configuration really is effectively the same, so you need to define your SQL query, same experience that we were looking at before with the ODBC and the Oracle. You need to understand what data you're trying to get back from the database, and create that query. Again, you can often get this from your DBA, you can get this from an application owner, or, if you need to fumble around within SQL Management Studio and query your SQL server, you can do that. Right, and when we talk about databases, I know that database monitoring is a big topic, and obviously there are a number of different ways to crack that nut. There are some built in templates that come with SAM, there's AppInsight for SQL that is available also. Of course, there's a product like DPA. This gives you one more option where you can work very closely with your database administrators to say: what are the things that are really concerning you? How do you find them? And I've said this before, when I'm working with somebody on creating some monitoring, a template, the first thing I say is, how do you know when something is wrong? I walk up to the machine, I type the grfrnkle command, and then I know, or whatever it is, because that's what I'm going to replicate. I know, the grfrnkle command, it's very technical.
So if a database administrator comes to me and says, well, I run these T-SQL scripts and that's how I know from the output what's going on, that's my clue that I can start to use these components to give them that same information automatically every five minutes, 10 minutes, whatever, and then create thresholds, and reports, and alerts off of that data, so this is one more tool in your database monitoring toolbox. Now, one thing I want to point out is they may not necessarily know the most effective way to gather information that says, hey, this is working right from a user standpoint. Then you may need to work with them, define a plan, and understand how the client application works with the database server. What is the expectation there? So there may be some trial and error associated with that, but ultimately the result of that information, defining what that query is that an application would actually be calling against the database server, is what you want to configure, and ensure that that's the query that you're inputting for each of these component monitors. Absolutely. If we go any longer, people are going to say we're trying to run into THWACKcamp. But there's still so much left to cover. Looks like we'll have to plan for a round three of this showdown. Right, and also a round four where we only cover scripting. Knock yourself out. For SolarWinds Lab, I'm Steven Hunt. And Leon Adato. Thanks for watching.