The In-depth Guide to Caching ActiveRecord SQL Queries in Rails

 
Cover image: flytraps, representing slow SQL queries caught by a cache. Photo by NADExRioTic from Pexels

Caching might seem like a perfect solution to “speed up” slow database queries. However, caching in Rails apps can be easily misused, leading to poor maintainability or even slower performance than without it. In this blog post, I’ll discuss the common pitfalls of caching SQL queries in Rails apps. I’ll also describe my toolkit for assessing the cacheability of database queries and techniques for reducing the cost of caching infrastructure.

Rails.cache 101

Ruby on Rails offers a rich toolkit for caching different web application layers: from simple instance variables, through an external in-memory cache shared between processes, up to HTTP caching that relies on headers and status codes. This blog post will focus on caching SQL-related data in separate in-memory storage.

I won’t elaborate on choosing the best in-memory database for your app’s cache. For projects that are just starting, Redis is usually a decent choice. Once the app’s scale increases, it might be a good idea to move the cache into a separate Memcached instance (using the Dalli gem) so that it does not clash with background worker data. Alternatively, you can use two separate Redis instances.
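As a hedged sketch, wiring up an external store happens in config/environments/production.rb. The env variable names below are assumptions for illustration:

# config/environments/production.rb (inside the configure block)
config.cache_store = :redis_cache_store, { url: ENV["REDIS_CACHE_URL"] }

# or, for Memcached via the Dalli gem:
# config.cache_store = :mem_cache_store, ENV["MEMCACHED_SERVERS"]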

You can check your currently configured cache storage by running:

Rails.cache

For external in-memory storage, you should see:

<ActiveSupport::Cache::RedisCacheStore...

or

<ActiveSupport::Cache::MemCacheStore...

Please consult the documentation of the respective gems for configuration details. You can also read the official Rails guide on caching for a detailed comparison between the different cache storages supported in Rails. For production applications with non-trivial traffic, an external in-memory store should be the best choice in most cases. Its advantage is that it can be shared between different Ruby processes, e.g., multiple Heroku dynos. It means that a time-consuming cache write will only be performed once, and all the other dynos can reuse the result.

Rails cache under the hood

Before we discuss how and when we should cache SQL queries, let’s first take a closer look at what it means to “cache” data using the Rails framework. In this blog post, we’ll focus on the Rails.cache module. It abstracts away the interface to the underlying data store and exposes simple methods for storing and retrieving cached data.

You use it like that:

important_value = 42

Rails.cache.write("important_value_key", important_value)
Rails.cache.read("important_value_key") # => 42

An often-used feature is the auto-expiry of cached values:

Rails.cache.write("important_value_key", important_value, expires_in: 5.seconds)
Rails.cache.read("important_value_key") # => 42
sleep 6
Rails.cache.read("important_value_key") # => nil

A handy shortcut for automatically refreshing expired data is the fetch method:

Rails.cache.fetch("important_value_key", expires_in: 5.seconds) do
  important_value
end

It reads the cached value if present, and refreshes it with the result of executing the provided block if the selected key is missing or expired.

A useful trick is to leverage the cache_key method of an ActiveRecord query to ensure that the same query always maps to the same cache entry.
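Here’s a minimal sketch, assuming Rails 5+, where an ActiveRecord::Relation responds to cache_key (depending on the Rails version, computing the key may itself run a cheap COUNT/MAX query):

query = User.order(:created_at).limit(10)

@users = Rails.cache.fetch(query.cache_key, expires_in: 1.minute) do
  query.to_a
end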

You can also cache more complex objects:

user = User.find(42)
Rails.cache.write("important_user", user)
cached_user = Rails.cache.read("important_user")
user == cached_user # => true

Here’s where things start to get more interesting. User is not a primitive value, but an ActiveRecord object built from the results of an SQL query.

So how can Rails.cache reconstruct the object without talking to the database? Introducing the Marshal module:

user = User.find(42)
dumped_user = Marshal.dump(user)
puts dumped_user
# "\x04\bo:\tUser\x11:\x10@new_recordF:\x10@attr..."

This is a serialized representation of our User object, containing all the info necessary to restore it from its binary form back into program memory. You can do it like that:

restored_user = Marshal.load(dumped_user)
user == restored_user # => true
user.object_id == restored_user.object_id # => false

Remember to never call Marshal.load on user-provided input because it can result in a remote code execution vulnerability.


You can see that our restored object equals our original user, but they are now different objects in memory since their object_id attributes don’t match.

Let’s now dig a bit deeper and peek into how the cached objects’ binary blobs are stored. If you’re using Redis as your cache database, you can read the raw entries using a similar code snippet:

require 'redis'

redis_client = Redis.new(url: ENV.fetch("REDIS_URL"))
puts redis_client.get("important_user")
# "\u0004\bo: ActiveSupport::Cache::Entry\n:\v@value\"\u0002\xFC\u0003x\x9C\x9DU\u007Fo..."

You can notice that the resulting entry is different from the output of calling Marshal.dump on the same user object. One notable difference is that Rails.cache automatically compresses larger objects using the Zlib::Deflate stdlib module, but it still uses Marshal.dump under the hood.
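You can reproduce this mechanism yourself. The snippet below is only an illustrative sketch of the compression step, not the exact cache entry format used by your Rails version:

require "zlib"

dumped = Marshal.dump(User.find(42))
compressed = Zlib::Deflate.deflate(dumped)

dumped.bytesize     # size of the plain Marshal dump
compressed.bytesize # smaller; roughly what ends up in the cache store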

Now that we’ve covered the basics, let’s move on to speeding up our queries.

How to find SQL queries worth caching?

In theory, you could “cache all the things” and Rails would scale…

Nope :(

When I work on speeding up a Rails application, I usually treat applying any backend caching technique as a last resort. The most significant downside of caching is the additional complexity it introduces. By adding a caching layer, you instantly lose the single-source-of-truth property of your SQL database. Any debugging effort now requires analyzing whether a potential stale cache issue could have affected it.

I could keep on ranting about the downsides of a cache layer… My approach is to avoid caching unless the potential gain is worth the cost. That’s why in this tutorial, I focus on caching SQL queries. I think it is one of the least complex backend caching techniques, and one that can significantly speed up an application without all the usual downsides.

The best part about SQL queries is that you can quickly find the ones worth caching and measure the expected gain before you deploy the change to production. One tool for finding queries that consume a significant amount of database resources is my rails-pg-extras gem with its calls and outliers methods. Check out my other blog post for more details on using rails-pg-extras to improve PostgreSQL performance.
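As a quick sketch (both methods rely on the pg_stat_statements PostgreSQL extension being enabled):

RailsPgExtras.calls    # the most frequently invoked queries
RailsPgExtras.outliers # queries consuming the most total execution time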

Now that we know which SQL queries to potentially cache, let’s finally discuss how to do it.

Two Hard Things

The challenging part of leveraging caching is not finding the bottleneck queries, but rather deciding which queries can be cached without breaking the business logic. There are no straightforward answers since it differs per project. The best candidates for caching are queries shared between multiple users, e.g., the results of a popular or default search query.

I’m not a fan of “smart” cache expiry policies since they tend to add hard-to-maintain complexity. In theory, you could base your cache expiry keys on the updated_at attribute of an object. But in Rails, it’s straightforward to modify objects bypassing all the callbacks magic, leading to hard-to-debug stale cache issues. And what about the updated_at of the object’s relations? Do you now have to keep all the related objects in sync with every state update?… Even if the initial version of a “smart” cache expiry policy looks straightforward, keeping its complexity at bay as new features are requested might not be possible.

If you’re just starting with caching, the simplest way to implement it is usually to add a fixed expiry threshold. The longer it is, the greater the performance benefit, but the more outdated the data displayed to users. Unfortunately, there are also no simple answers to configuring the correct cache expiry threshold. You can check out my other blog post for tips on using dynamic config in Rails to easily tweak the values in production and observe the results.
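A minimal sketch of that idea; the CACHE_TTL_SECONDS env variable and the expensive_query_results helper are hypothetical names used for illustration:

cache_ttl = Integer(ENV.fetch("CACHE_TTL_SECONDS", "60")).seconds

results = Rails.cache.fetch("popular_search_results", expires_in: cache_ttl) do
  expensive_query_results # placeholder for your materialized query results
end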

But, as an example, let’s assume you’re optimizing an endpoint with traffic of 10 RPS. If all the requests generate the same slow SQL query, caching its results for just 1 second means that roughly one request per second recomputes the query while the other ~9 hit the cache, speeding up ~90% of all traffic. Depending on your app’s traffic, even a small caching threshold could translate to huge performance and scalability benefits.

It’s impossible to give universal advice on configuring the “perfect” caching policies. So let’s instead move on to describing how to store cached data correctly.

Caching SQL queries in Rails

In the following examples, we’ll be working with this sample ActiveRecord model:

class User < ApplicationRecord
  scope :slow, -> {
    where("SELECT true FROM pg_sleep(1)").order(:created_at).limit(10)
  }

  def to_json
    {
      id: id,
      email: email,
      nickname: nickname,
      created_at: created_at
    }
  end
end

The slow scope is supposed to return 10 User model objects after an artificial 1-second delay.

Let’s now consider the following code snippet:

@users = Rails.cache.fetch("my_slow_query", expires_in: 1.minute) do
  User.slow
end

The big question: will this caching technique improve performance?

Nope :(

The previously mentioned trick of inspecting raw cache values can help us understand why this is a bug:

puts redis_client.get("my_slow_query")
# => "ActiveSupport::Cache::Entry\t:\v@valueo: User::ActiveRecord_Relation...

If the cache entry is unreadable, you can save it with compress: false to simplify debugging.


We’ve accidentally cached an ActiveRecord::Relation object representing an SQL query that has not yet been executed. You can read my blog post about the load_async API in Rails for more in-depth info on how and when ActiveRecord queries are triggered. The query object would only be executed after being instantiated from the cache, meaning that each request would still suffer the additional 1-second slowdown. It would not be possible to debug this issue without inspecting the raw cache entry, because any attempt to display an ActiveRecord::Relation executes it, obfuscating the original stored format.

A better way to cache this query would look like that:

@users = Rails.cache.fetch("my_slow_query", expires_in: 1.minute) do
  User.slow.to_a
end

Appending to_a forces query execution, and the resulting User objects are serialized and stored in the cache database. Now reading the contents of the cache would not execute any SQL queries.

How to spend less money on an in-memory cache?

But here comes the tricky part: do you need to store full-blown ActiveRecord objects? Let’s compare the size of ten cached full User ActiveRecord objects with the size of their JSON representation:

Rails.cache.write("user_ar_objects", User.slow.to_a)
Rails.cache.write("user_json_objects", User.slow.map(&:to_json))

redis_client.get("user_ar_objects").size # => 1721
redis_client.get("user_json_objects").size # => 1064

As you can see, storing the JSON representation takes almost half the space. Another benefit is the smaller memory usage required to instantiate JSON data compared to full-blown AR objects.

Optimizing the size of your cached data is critical. Compared to SSD disk space, memory storage is expensive. If you’re using Heroku, 1GB of Memcachier storage costs $70/month, and 100GB would incur a monthly cost of $4000 (!!). You could reduce the price by spinning up custom infrastructure, but this is just an example. The takeaway is that in-memory storage is expensive.

How to minimize cache size by storing only IDs

There’s a way to significantly limit the amount of data you have to cache while maintaining good enough performance. The slowdown in SQL queries is usually caused by complex search criteria spanning multiple joined tables. But the outcome of the query is often a set of objects from a single table. It means that in many cases, we can store just the IDs of the original query results and later reuse them to fetch the objects without all the search logic overhead. Based on my tests, saving only integer IDs takes ~10% of the in-memory space needed to store a JSON representation of an object with just a few attributes.

Let’s see it in action:

users_ids = Rails.cache.fetch("my_slow_query_ids", expires_in: 1.minute) do
  User.slow.ids
end

@users = User.where(id: users_ids)

We’re caching only the user IDs and later using them to fetch the objects efficiently.

Unfortunately, this implementation has a sneaky bug. Maybe you’ve also learned the hard way that PostgreSQL does not preserve the order of IDs passed as a search param… (╯°□°)╯︵ ┻━┻

It means that while the returned objects would be correct, without an explicit ORDER BY their ordering would be arbitrary. This is critical if you’re paginating results, because subsequent pages could return duplicates and omit some of the rows.

One way to fix it is by reordering the objects with Ruby:

users_ids = Rails.cache.fetch("my_slow_query_ids", expires_in: 1.minute) do
  User.slow.ids
end

@users = User.where(id: users_ids).sort_by { |u| users_ids.index(u.id) }

Based on my benchmarks, fetching by IDs and sorting 1000 ActiveRecord objects in Ruby has an overhead of ~50ms. That’s usually acceptable performance for fetching SQL data, compared to an unoptimized database query. If you have to work with more objects, you should probably consider adding pagination.

But this technique forces us to execute the AR query object, which might not always be optimal. There are scenarios where you need the raw query to add eager loading or merge it with other queries. In this case, you can use an ugly custom sorting trick supported by PostgreSQL (copy-paste source):

class User
  # Fetches records by IDs, preserving the order in which the IDs were
  # passed, using a PostgreSQL VALUES list joined on the primary key.
  def self.find_as_sorted(ids)
    values = []
    ids.each_with_index do |id, index|
      # IDs are interpolated directly into SQL, so only pass trusted
      # integers (e.g., produced by the .ids method), never user input.
      values << "(#{id}, #{index + 1})"
    end
    relation = self.joins("JOIN (VALUES #{values.join(",")}) as x (id, ordering) ON #{table_name}.id = x.id")
    relation = relation.order('x.ordering')
    relation
  end
end

users_ids = Rails.cache.fetch("my_slow_query_ids", expires_in: 1.minute) do
  User.slow.ids
end

@users = User.find_as_sorted(users_ids)

This method has a comparable overhead of ~50ms for 1000 objects. But it produces a huge SQL query with the custom sorting order hardcoded. Always measure the performance impact if you’re about to merge it with other queries.
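A quick way to measure it is Ruby’s stdlib Benchmark module. A rough sketch:

require "benchmark"

users_ids = User.slow.ids
puts Benchmark.measure { User.find_as_sorted(users_ids).to_a }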

Now our cached results are correctly sorted, taking up only a fraction of the original space. This technique allows you to aggressively cache more data without bloating the costs of your in-memory database.

┳━┳ ヽ(ಠل͜ಠ)ノ

That’s another advantage of caching just the critical data, i.e., bottleneck SQL results, instead of entire HTML views: the storage space needed is an order of magnitude smaller.

Summary

The best way to implement caching is to avoid it. So, please always double-check whether adding a database index could save you from developing a complex cache expiration strategy. But if you have to do it, I hope some of the above tips will prove helpful.


