The Best Way to Make Your Application Faster
There are two easy ways to make your cloud application faster, and they're homophones: cash being the first and cache being the second. (Sorry, but not sorry, for the dad joke.) The cloud makes it very easy to increase the amount of compute power, and therefore the performance, of your cloud resources. I was once on a cloud project at a major enterprise that we jokingly called "the Tinder project," because the customer just swiped right (or slid the performance slider bar) to the max size on all their resources. As a result, their monthly cloud bill had a lot of zeroes in it.
The other, more scalable approach is to use a caching tier for your applications. Caches are a basic principle used throughout computer science: not every call to every system needs to happen in real time with the latest value. Remote calls tend to be expensive, and if we can reuse a result repeatedly, its performance cost is amortized. In cloud economics, many smaller things (services) are cheaper than one giant thing. This means it's typically a better design to build a system to scale out rather than requiring scaled-up hardware (a fundamental idea of distributed systems architecture).
The hard part about building systems reliant on stateful data (a fancy way of saying databases) is scaling out the write workload across multiple nodes. It's possible in a few different ways, but it typically requires the application to be designed that way from the start and isn't easy to retrofit after the initial design. However, most database workloads, even on busy online transaction processing systems, do at least 50% of their I/O (reading or writing to storage) as read activity. How does this help us?
Reads are often repeated, which works because most data isn't updated frequently. A round trip from your application's front end to the back-end datastore is expensive: it's a remote call, and potentially one that must reach all the way through to the storage underneath the database engine. Sure, the database engine will cache records in its buffer pool, but it also has limited resources. The most common solution is a Redis cache, an open-source solution that stores data in memory in a key-value format (it falls under the category of NoSQL solutions).
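The idea above, checking a fast key-value store before paying for the expensive round trip, is often called the cache-aside pattern. Here's a minimal sketch of it; the `slow_database_read` function and the plain dict standing in for Redis are illustrative assumptions (a real application would use a client library such as redis-py against a running Redis server):

```python
call_count = 0  # tracks how many times we hit the "database"

def slow_database_read(key):
    """Stand-in for an expensive remote call to the datastore."""
    global call_count
    call_count += 1
    return f"value-for-{key}"

cache = {}  # stand-in for Redis: an in-memory key-value store

def get_with_cache(key):
    # 1. Look in the cache first.
    if key in cache:
        return cache[key]
    # 2. Cache miss: pay for the remote call, then store the result.
    value = slow_database_read(key)
    cache[key] = value
    return value

first = get_with_cache("user:42")   # miss: goes to the database
second = get_with_cache("user:42")  # hit: served from the cache
```

The second lookup never touches the database at all, which is exactly how a cache absorbs the repeated-read portion of your workload.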
Redis requires some changes to your application's code: look in the cache first, before going to the database. However, Redis has broad application and language support. In the case of our WordPress site, it was a matter of installing a plugin, and then we were off and running. Redis also offers the benefit of being able to scale out horizontally, so you can add capacity to your application as your workload increases. You can control how long your key values live (just like DNS TTLs), and you can persist data to disk if your application requires it.
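The "control how long your key values live" part is a time-to-live (TTL). Below is a hedged sketch of TTL expiry using a dict of (value, deadline) pairs as a stand-in; this is not the Redis API itself (with redis-py you would typically call something like `r.setex(key, ttl_seconds, value)` and let the server handle expiry):

```python
import time

cache = {}  # stand-in for Redis: maps key -> (value, expiry deadline)

def set_with_ttl(key, value, ttl_seconds):
    """Store a value that expires after ttl_seconds."""
    cache[key] = (value, time.monotonic() + ttl_seconds)

def get(key):
    """Return the value if present and unexpired, else None."""
    entry = cache.get(key)
    if entry is None:
        return None
    value, deadline = entry
    if time.monotonic() >= deadline:
        del cache[key]  # lazily evict the expired entry
        return None
    return value

set_with_ttl("session:abc", "alice", ttl_seconds=0.05)
fresh = get("session:abc")    # within the TTL: returns the value
time.sleep(0.06)
expired = get("session:abc")  # past the TTL: entry is gone
```

Short TTLs keep cached data close to fresh at the cost of more database reads; longer TTLs do the opposite, so the right value depends on how stale your application can tolerate its reads being.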
Caching data is vitally important to all manner of software. In a modern distributed systems architecture, it makes sense to include a key-value cache like Redis in your application stack. It can dramatically reduce database workloads, save you expensive cloud database costs (or even database licenses from Microsoft or Oracle, if you're running in a virtual machine), and help your application scale as your workload grows.