
Parsing the Redis TCP Protocol

When we decided to go beyond just MySQL monitoring, we had a couple of natural next choices. The decision hinged on engineering effort, the likelihood we'd find MySQL-specific assumptions baked into our system that would slip our schedule, alignment with existing customers, and the commercial opportunity. We thought Redis monitoring would be a relatively small sales opportunity at present (although the community is very large and active), but simple to support because of its simple wire protocol and Redis's straightforward nature: single-threaded, no query execution plans, and so on. It turns out we were wrong about the ease of implementation. Redis's protocol is hard to capture on the wire and inspect precisely because of its simplicity, and there are some interesting lessons in why.

To set the context: we build SolarWinds® Database Performance Monitor (DPM). Most of our customers buy our product because we go beyond status counters and actually measure queries. We sniff the queries off the network with libpcap (the same library tcpdump uses) and reassemble the TCP stream. Then we decode the network protocol, extract the queries and their responses, correlate them together, and categorize them by the abstracted query text. From this we can generate lots of interesting observations: query timing, frequency, errors, protocol flags, and so on.
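For a sense of what that capture pipeline looks like, here is a minimal sketch in Go using the gopacket library (Go's libpcap bindings). This is not our agent's code: the interface name and the Redis port are assumptions, and a real sniffer would feed packets through gopacket's tcpassembly package to rebuild the byte streams before parsing anything.

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
	"github.com/google/gopacket/pcap"
)

func main() {
	// Open the interface in promiscuous mode. "eth0" and port 6379
	// (Redis's default) are assumptions for this sketch.
	handle, err := pcap.OpenLive("eth0", 65535, true, pcap.BlockForever)
	if err != nil {
		log.Fatal(err)
	}
	defer handle.Close()

	// Let the kernel discard everything that isn't Redis traffic.
	if err := handle.SetBPFFilter("tcp port 6379"); err != nil {
		log.Fatal(err)
	}

	// A production sniffer would run these packets through gopacket's
	// tcpassembly package to reconstruct each direction of the stream;
	// here we only show that the TCP payloads are available to inspect.
	src := gopacket.NewPacketSource(handle, handle.LinkType())
	for pkt := range src.Packets() {
		tcp, ok := pkt.Layer(layers.LayerTypeTCP).(*layers.TCP)
		if !ok {
			continue
		}
		fmt.Printf("%v: %d payload bytes\n",
			pkt.Metadata().Timestamp, len(tcp.Payload))
	}
}
```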

Why Redis’s TCP Traffic Is Hard To Sniff

The hard part with Redis is correlating the queries (commands) with the responses from the server. This is hard because Redis's protocol allows pipelining. A client can send many commands without waiting for the server to reply to previous ones. When the server does reply, the responses come back in the same order the commands were sent, but they are not otherwise labeled as belonging to any specific command. The client figures out which response belongs to which command by keeping a FIFO queue of the commands it sent: when a response comes back, it belongs to the oldest pending command.

This is kind of a nightmare for TCP sniffing. There are at least two obvious cases where we will be unable to figure out the correlation between commands and responses:
  1. We start observing in the middle of a conversation. Commands have been sent but we didn’t see them.
  2. We don’t see some packets in the conversation. This happens when the packet rate is high and packets are dropped from buffers before libpcap can observe them.
This is made even more challenging by the fact that Redis commands are typically very fast, so the packet and command rates are usually very high. Whatever we do has to be extremely efficient.
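To make the correlation problem concrete: on the wire, a client sends each command as a RESP array (for example, GET foo travels as `*2\r\n$3\r\nGET\r\n$3\r\nfoo\r\n`), and the server frames its replies the same way. Below is a minimal sketch of a RESP reply reader, just enough to count complete replies on the server's side of the stream. It's illustrative, not our production parser.

```go
package resp

import (
	"bufio"
	"fmt"
	"io"
	"strconv"
	"strings"
)

// ReadReply consumes exactly one complete RESP reply from r and
// returns its type byte. Counting complete replies is all a sniffer
// needs in order to pop commands off its FIFO queue; we never
// materialize the reply values themselves.
func ReadReply(r *bufio.Reader) (byte, error) {
	line, err := r.ReadString('\n')
	if err != nil {
		return 0, err // stream ended mid-reply
	}
	line = strings.TrimSuffix(line, "\r\n")
	if len(line) == 0 {
		return 0, fmt.Errorf("empty RESP line")
	}
	switch line[0] {
	case '+', '-', ':':
		// Simple string, error, or integer: the whole reply is one line.
		return line[0], nil
	case '$':
		// Bulk string: $<len>\r\n<payload>\r\n; $-1 is a nil reply.
		n, err := strconv.Atoi(line[1:])
		if err != nil {
			return 0, err
		}
		if n >= 0 {
			// Skip the payload plus its trailing \r\n.
			if _, err := io.CopyN(io.Discard, r, int64(n)+2); err != nil {
				return 0, err
			}
		}
		return '$', nil
	case '*':
		// Array: *<n>\r\n followed by n nested replies (e.g. from EXEC).
		n, err := strconv.Atoi(line[1:])
		if err != nil {
			return 0, err
		}
		for i := 0; i < n; i++ {
			if _, err := ReadReply(r); err != nil {
				return 0, err
			}
		}
		return '*', nil
	}
	return 0, fmt.Errorf("unknown RESP type %q", line[0])
}
```

Notice that nothing in a reply identifies which command it answers. That is exactly why a sniffer has to reconstruct the same FIFO queue the client keeps.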

How We Do It

In this environment, how do we observe the Redis protocol and measure command latencies? Best effort and approximation. In more detail, some cases can be handled, others can't, and some are a gray area:
  1. No pipelining in use. If we see a command and its response, we subtract the two timestamps and we'll be correct, more or less.
  2. We miss part of the conversation, because we started observing mid-stream or packets were dropped. We might not be able to measure timings for some commands as a result.
  3. Pipelining in use. We cannot be sure we've seen all the requests and responses, so we take a middle road and apply a heuristic that represents our best effort as to which responses go with which commands, and their timings (a code sketch of this queue follows the list).
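Here is a sketch of what that best-effort matching can look like, with the failure modes above built in: when a reply arrives and nothing is pending, we must have joined mid-conversation (or lost the command's packets), so we discard the reply instead of guessing. All names and types here are invented for illustration.

```go
package sniff

import "time"

// observedCommand is one command we saw travel from client to server.
type observedCommand struct {
	Text string    // abstracted command text, e.g. "GET ?"
	Sent time.Time // packet capture timestamp when we saw it
}

// Conn tracks the pending-command FIFO for one client connection.
type Conn struct {
	queue []observedCommand
}

// OnCommand records a command we sniffed heading to the server.
func (c *Conn) OnCommand(text string, sent time.Time) {
	c.queue = append(c.queue, observedCommand{Text: text, Sent: sent})
}

// OnReply matches one complete server reply, captured at time got,
// against the oldest pending command and returns the approximate
// latency. ok is false when nothing is pending, meaning we started
// observing mid-conversation or dropped the command's packets; in
// that case we discard the reply rather than guess.
func (c *Conn) OnReply(got time.Time) (cmd observedCommand, latency time.Duration, ok bool) {
	if len(c.queue) == 0 {
		return observedCommand{}, 0, false
	}
	cmd = c.queue[0]
	c.queue = c.queue[1:]
	return cmd, got.Sub(cmd.Sent), true
}
```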
There’s another factor at play, too. Timings seen from network packet capture are approximate anyway, because there are layers of buffers and delays in between where we’re observing and what a client or a server daemon sees. There are delays between when packets arrive on the interface and when the OS delivers them to Redis, for example. Likewise, when Redis writes out a response, the OS might delay sending it over the network. At very high packet rates and very low request latencies, these uncertainties add up to a greater portion of the real server response times, so the relative noisiness is greater.
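To put rough (assumed) numbers on that: if the capture path introduces around 100 microseconds of uncertainty, a GET that truly took 200 microseconds can be measured with 50 percent error, while a 50-millisecond operation over a huge sorted set is off by only 0.2 percent.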

Results

As a result, the timings of our Redis queries have some wiggle room. The error is usually skewed towards the short end: if there's a timing error, we measure queries as being faster than they might have been. Still, this imprecision is worlds better than having no visibility at all into Redis queries and their latencies. And for longer commands (perhaps a big operation over a very large list, set, or hash) the latencies will have a smaller fractional error. That, in the end, is what we really want to find out.

There's an assumption that "Redis is fast, for every operation, all the time." But what if it isn't? My entire career has followed this recipe:
  1. Notice an assumption about something unmeasured, one that is never questioned.
  2. Find a way to see if it’s true or false. Apply that method.
  3. Unsurprisingly, the assumption will turn out to be false. Every time.
  4. Rinse and repeat.
And that's what we've done with Redis. It turns out that, sometimes, Redis commands can be slow. Who would have thought? Witness the usual behavior of Redis on our own systems:

[Figure: typical Redis command latencies on our systems]

And the occasional outliers:

[Figure: occasional Redis latency outliers]

If you'd like to see how your own systems stack up, SolarWinds DPM integrates Redis queries right into the overall Top Queries reports we generate, so you can see your entire application's workload across all of your servers of all types, and then drill into them quickly. For example, here's Top Queries by count on our production systems:

[Figure: Top Queries by count on our production systems]

Notice how heavily we depend on Redis relative to MySQL. I wrote about this in a recent High Scalability blog post discussing how we make our backend metrics storage and processing scale. That's the other reason we added support for Redis monitoring, by the way: it's really important to us. Before we built this functionality, we were flying just as blind as everyone else, without visibility into our Redis servers' query traffic and workload.

You Can Do This Too

If you’d like to take a look at your own Redis server performance and workload, you can sign up for a free trial of SolarWinds DPM.
Baron Schwartz