There’s a big difference between the MySQL database powering your internet of things (IoT) lightbulb and the one powering your website. There’s a chasm between the SQL Server Express database in your lightly used application’s VM and the monster multi-region SQL cluster you have running on Microsoft Azure. This kind of database diversity is everywhere today. But what are the differences between these databases, and how can administrators justify database spend?
Many IT teams live in fear that one day their boss will walk in after reading about “free” databases and start asking questions. How can IT teams answer challenges to their budget with facts and objective analysis drawn from their own applications?
To begin quantifying the challenge database diversity represents, let’s look at the most popular databases currently available. In general, databases break into three categories: relational (SQL), non-relational (NoSQL), and end-user-targeted applications (which are beyond our scope here).
NoSQL databases aren’t typically the target of database performance analysis software. They tend to be deployed on a per-application basis, with the specific architectural choices underlying their deployment closely tied to the development of the applications intended to run on them. This is something of an artifact of history.
SQL databases have been used for enterprise applications for a long time. As a result, standards and technologies have emerged (such as ODBC) to help application designers write more database-agnostic applications. ODBC drivers do exist for NoSQL databases, but these are new—the ODBC driver for the popular MongoDB was released in mid-2018—with little evidence yet of widespread adoption.
Though they’re a favorite of DevOps teams, NoSQL databases have only recently become available as commercial off-the-shelf (COTS) software. This, combined with the inherent scale-out nature of NoSQL databases, means there hasn’t been much demand for traditional database performance analysis, though this is likely to change soon.
Relational databases fall into three broad categories: commercial, open-source, and public cloud. Oracle, Microsoft SQL Server, IBM DB2, and SAP Adaptive Server Enterprise are the traditional commercial databases. Historically, they’ve been the “heavy lifters” of enterprise databases, but this has been changing.
Open-source and public cloud databases are perfectly capable of handling most organizations’ workloads. As a result, the commercial database engines have begun to differentiate themselves on features and functionality, and they are increasingly proving able to handle workloads once considered “NoSQL-only.” This level of flexibility makes these databases challenging to profile and troubleshoot, but it has cemented their position as critical to virtually every large organization in the world.
The most popular open-source databases are the MySQL family (including MariaDB) and the PostgreSQL family. Once thought of as “lightweight” databases suitable only for small websites, these databases have evolved. Early cloud-native applications favored them, driving innovation in their ability to scale up and out.
Whereas the traditional enterprise databases tend toward small clusters of extremely powerful and resource-rich database servers, the MySQL family is most frequently deployed in large numbers of small instances, while PostgreSQL deployments tend toward a more balanced approach.
Azure SQL Database and Amazon Aurora represent public cloud database offerings from the two most popular public cloud providers. Scaling is handled by the public cloud provider, but they otherwise behave like typical relational databases. Each of these database engines has different under-the-hood design choices influencing how they respond to different resource constraints. Each can be significantly affected by table design, configuration, and other optimizations.
Picking the Right Database
The right database for your application starts with "the one that works." Many applications are tied to specific databases. For these applications, asking which database best supports your use case isn’t a particularly helpful discussion. Other applications—especially modern ones using NoSQL databases—are more frequently capable of using multiple different databases.
When administrators do get to work with (at least partially) database-agnostic applications, database performance analysis tools can be extremely useful. These tools are designed to assess database performance in depth and provide the information required to make evidence-based decisions.
Many applications capable of using multiple databases weren’t necessarily designed to do so. They may have been designed to use a specific database engine but then had support for other databases added later. This support can come in the form of direct support for additional database engines, but it frequently manifests in the form of support for standards-based interfaces, such as ODBC.
Database engine independence comes with trade-offs. Either the application needs to restrict itself to data types common across all the databases the developers intend to support, or performance penalties can creep in as queries are translated by an intermediary driver. Some of this can be mitigated with database tuning, and some of it can’t, but picking the right database requires balancing database performance against licensing and other nontechnical considerations.
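The “standards-based interface” idea can be sketched in a few lines. The example below is an illustration, not taken from any particular product: it uses Python’s DB-API 2.0 standard (the Python analogue of ODBC) with the built-in SQLite driver standing in for a production engine. The `fetch_order_totals` function and the `orders` table are hypothetical; the point is that the function only depends on the standard cursor interface, so swapping the driver (e.g., for a PostgreSQL or ODBC driver) would not require changing it.

```python
import sqlite3  # stand-in driver; psycopg2, pyodbc, etc. expose the same DB-API

def fetch_order_totals(conn):
    """Runs against any DB-API 2.0 connection, not one vendor's client library."""
    cur = conn.cursor()
    cur.execute("SELECT customer, SUM(amount) FROM orders GROUP BY customer")
    return dict(cur.fetchall())

# Demo with an in-memory SQLite database standing in for a "real" engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("acme", 10.0), ("acme", 5.0), ("globex", 7.5)],
)
totals = fetch_order_totals(conn)
print(totals)  # {'acme': 15.0, 'globex': 7.5}
```

The trade-off described above shows up here, too: to stay portable, the query sticks to types and SQL constructs every target engine supports, giving up engine-specific features that might be faster.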
Getting this level of detail about an application’s interaction with a database offers multiple benefits. It often allows administrators to realize additional performance from existing investments, eliminating (or at least forestalling) the need for change. If database change is—after analysis—still a serious consideration, the details of the analysis are the evidence needed to make the correct decision.
Understanding how applications perform on different databases provides necessary data for making evidence-based decisions about database selection. If there are performance issues, where do they exist? Is performance hardware-bound, or can it be improved with configuration changes to the database engine? Can changes to tables or indexes offer improvements?
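The “can changes to tables or indexes offer improvements?” question is one every major engine can help answer directly, via its query-plan facility (`EXPLAIN` in most engines). A minimal sketch, again using SQLite as a stand-in and a hypothetical `orders` table: the same query goes from a full table scan to an index search once a suitable index exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")

def plan(sql):
    # Ask the engine how it intends to execute the query; the last column
    # of each EXPLAIN QUERY PLAN row is a human-readable description.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][-1]

query = "SELECT SUM(amount) FROM orders WHERE customer = 'acme'"

before = plan(query)  # no usable index yet: a full scan of orders
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
after = plan(query)   # now an index search on idx_orders_customer

print(before)  # e.g. "SCAN orders"
print(after)   # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer=?)"
```

Performance analysis tools automate this kind of comparison across an entire workload rather than one query at a time, which is what makes the resulting decisions evidence-based rather than anecdotal.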
Database Performance Analyzer (DPA) provides the level of insight IT teams need. Instead of a simplistic examination of averaged speeds, DPA allows you to optimize all candidate databases for the specific workloads in question and analyze where performance issues develop in each one. This lets IT teams not only determine what will best meet their needs today but make evidence-driven estimates of how increases in demand and scaling will affect the performance of their applications tomorrow.
Level up your database game with SolarWinds: try DPA free for 14 days