What is the cost of database performance for businesses today? The shocking reality is that most owners have no idea. In fact, I would go so far as to say most business owners can't even begin to grasp everything this question encompasses.
As a database consultant for the last twelve years, I would like to think I've seen it all, but I know from experience there is always something on the horizon that will make you stop and think. I've never had a prospective client contact me because a system was just too fast for the work they wanted to do. However, I have been accused of breaking a system after removing a significant bottleneck with some index tuning and simple code changes (making a process that had run for over 24 hours finish in under a minute).
Many systems I consult on have huge amounts of low-hanging fruit just waiting to be picked. The problem is the business doesn't know what it doesn't know about SQL Server and the other tools causing slow performance. So, what is the actual cost of poor database performance?
When speaking in terms of cost, I have no doubt monetary impact is the first thing to come to mind. However, the cost of database performance issues can go well beyond the financial aspects alone.
What if the database design limits performance and, as a result, the flexibility to implement new features? How about the effects of poor data quality and duplication of data? Are ongoing performance issues putting the business at risk of losing one of its most important assets: the business knowledge held by staff who are simply fed up with the constant fire drills? Database performance goes well beyond the time it takes for a query to execute, and the cost of the inefficiency inherent in poorly designed and architected solutions is staggering.
Calculating the cost of poor database performance
I've only once in my life worked at a company that had a quantifiable cost associated with system availability. However, at the time, I disagreed with how the cost was determined, because it usually pointed to a problem in the billing system for which I was the lead business analyst. For that company, a natural gas provider, any performance degradation was counted as full downtime, tracked meticulously, and reported on by the company's QA manager.
The basis of the cost calculations was to measure the impact on the customer service call center. Understandably, downtime resulted in higher call volumes, because self-service through the automated systems became unavailable, and it inhibited the ability to assist customers with everyday requests.
The issue I had with this way of calculating is it didn't consider the time involved in solving the issue or the time lost to maintaining manual processes. The overall cost of system fragility was never considered.
There also wasn't a true database administrator for this system. At best, there was an "Accidental DBA" who held the responsibility because he was the most senior person on the team. New positions were created in the company's IT development department solely to balance out the manual workload, ensuring multiple heavy-hitting processes never stepped on each other and brought the system to a halt.
Identifying the causes of poor database performance
So, what was the problem there? Was it underpowered hardware? Was it storage? What about bad code and database design? Depending on the way you look at it, yes.
I jokingly say I am a multi-faceted threat to any application: I understand how to build specifications from my experience as a business analyst years ago, how to read and write code in multiple languages from later becoming an accidental developer, and how to architect and performance tune schemas from being a DBA for several years before becoming a consultant. I am equally comfortable filling any of these roles, and I understand how most analysts and developers think and how to communicate with them to explain why databases differ.
It was this same system where I learned a valuable lesson in SQL Server performance tuning early on: indexes are important. This was back on SQL Server 6.5 and later SQL Server 2000, where the database engine didn't track things like missing index suggestions. It took only one nonclustered index to make the account billing process run in minutes, where it previously took as much as 24 hours with the server maxed out on CPU the entire time (and there was barely a bump in CPU usage when it ran after the change).
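On modern versions of SQL Server (2005 and later), the engine does record missing index suggestions as queries run. As a rough, hypothetical sketch (none of the names below come from the gas company's schema), a query against the missing index DMVs like this one is how I would hunt for a candidate similar to the index that saved that billing process:

```sql
-- A minimal sketch: list the missing index suggestions SQL Server has recorded
-- since the last instance restart, roughly ordered by estimated impact.
-- These suggestions are hints to investigate, not commands to follow blindly.
SELECT TOP (25)
    mid.statement                                   AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks + migs.user_scans               AS times_requested,
    migs.avg_total_user_cost * migs.avg_user_impact AS rough_impact_score
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY rough_impact_score DESC;
```

Suggestions like these still need a human to weigh them against write overhead and the indexes that already exist, but they are exactly the kind of evidence we didn't have back then.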
Can we say the problem was entirely the database design? Yes, and no. How can it be both at the same time? Well, the same database with no indexes would probably have had no problem on today's hardware, at least until it grew to the point where it became a problem again.
The cost of scanning entire tables without indexes was the biggest problem with system performance. A few additional indexes on the largest tables in the database, such as the billing line item, accounting ledger, invoice history, and gas usage history tables, essentially eliminated all the headaches associated with the system. Nobody had to manually monitor the running system processes to make sure nothing overlapped and caused a problem. The hardware at the time was a significant limitation to performance, as were the existing 32-bit platform limitations. Depending on which way you looked at the system, the performance problem could have been any of the aforementioned possibilities.
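To make that concrete, here is a minimal sketch of the kind of change involved. The table and column names are hypothetical stand-ins for the billing line item table rather than the actual schema, but a single narrow nonclustered index like this is what turns repeated full table scans into cheap seeks:

```sql
-- Hypothetical example: let the billing process seek on account and billing period
-- instead of scanning the entire billing line item table on every run.
CREATE NONCLUSTERED INDEX IX_BillingLineItem_Account_Period
    ON dbo.BillingLineItem (AccountID, BillingPeriodID)
    INCLUDE (UsageAmount, ChargeAmount);  -- covers the columns the process reads
```

The INCLUDE columns keep the index covering for the billing query, so the process never has to go back to the base table, which is where all that CPU was being burned.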
Understanding the true business impact of poor database performance
In my case, two interesting things happened as a result of those indexes being created:
- A new standard was created for system performance. Every future change was going to be benchmarked against this new standard for performance impact, including the cost per minute for anything the QA manager deemed to be an "outage."
- Developers could actually develop instead of babysitting a system straining under the high cost of scanning large tables over and over. Suddenly they could focus on new features instead of filling the accidental DBA role and fighting fires.
What did this mean to the business? Well, it meant opening new gas service markets as they became deregulated and expanding the customer base. It also meant opening electrical services in deregulated markets a year ahead of what was projected on the original project plans. Imagine the cost of not solving those performance problems over the long term and keeping people busy trying to make things work instead of focusing on growth.
This is one example of database performance having a high cost to the business beyond the financial one. The turnover rate for developers at one point in the company was extremely high, primarily because newer developers would get stuck monitoring the system and troubleshooting problems rather than writing code. Their only recourse was to call on the accidental DBA, who was also beyond ready to leave but continued to stay on. The ongoing performance issues went on so long the business was at risk of losing one of its most important assets: the business knowledge held by staff who had been with the company since its beginning. That posed yet another high cost of poor database performance.
Business knowledge: your most valuable cost
You can replace a person in a seat, but not what they know about your business and processes. I too am replaceable. I have no doubt SQLskills can hire another consultant to solve client problems, but they can't replace all the knowledge I have from twelve years working with some of our oldest and largest clients. When I left my job at the gas company, it created a huge loss of business knowledge. I had written the specifications for probably three quarters of the billing system, along with later rewriting a massive amount of the code behind it as a developer. I had so much business knowledge of why things were the way they were that the company hired me back as a 1099 consultant to transfer knowledge, provide further documentation, and fix coding issues.
I have absolutely no doubt someone reading this article can relate to the burnout caused by constant firefighting in a system where performance is always a battle. Nobody is immune to it, not even consultants, though by the very nature of our job we are typically like smokejumpers: jumping in to hit the hot spots and knock things back enough to create some breathing room for others to continue the fight.
One of the most frustrating statements I hear as a consultant when scoping a new client engagement is that the business can't afford, or won't approve, the costs associated with a basic health check of the environment or, worse, with training the staff responsible for maintaining it. These are minuscule costs compared to the longer-term cost of performance problems for the company.
For example, SQLskills has offered an accidental DBA course for under $2,000 focused on teaching a server administrator with IT experience the essentials of configuring, maintaining, monitoring, and troubleshooting SQL Server. Many students would later come back to attend one or more of the week-long, deep-dive courses. I like to say a business can't afford not to invest in the education of the people responsible for its data, because if it doesn't, eventually another company will lure those people away.
What is poor database performance costing you?
Do you know what database performance problems are costing your business? Have you lost employees due to the high rate of firefighting and burnout associated with database performance issues? Wouldn't it be nice to have some insight into how database performance limits your business's growth, incrementally increasing the costs of licensing and hardware over time?
You can learn more about how SolarWinds® database performance monitoring and optimization solutions are designed to provide deeper insight into database health and help you pinpoint performance issues, typically in minutes, wherever your databases are located, from on-premises to the cloud.