Event-driven cache service using Hasura Triggers and Redis

sujesh thekkepatt · Published in Geek Culture · Aug 24, 2021

According to Wikipedia, a cache is,

“Cache is a software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.”

In this article, we are building a caching solution using Node.js, Redis, and Hasura (GraphQL). We will create a simple API endpoint that returns results by joining two or more tables.

Here we are using Hasura as our backend server. Although Hasura is the backend, we will not be using the Hasura GraphQL endpoint here. Instead, we will create a separate API endpoint using Express, which serves the data that is going to be cached. GraphQL caching is out of scope for this article.

Here we are exploring how to extend the functionality of Hasura Triggers to create a cache service.

The Node.js server contains the API endpoint that we are going to cache. The API response is built by joining two tables, and these two tables are modified independently by customers.

We are not using a TTL-based caching strategy here, since we need an event-driven solution to invalidate the cache. That said, adding a TTL to your cache keys is always recommended as a safety net: it makes sure keys eventually expire.
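For reference, this is roughly how a TTL could be attached to a key with ioredis; the key name, value shape, and one-hour expiry below are placeholders of my own, not part of this article's implementation:

const Redis = require("ioredis");
const redis = new Redis(); // defaults to localhost:6379

async function cacheWithTtl(key, value) {
  // "EX", 3600 expires the key after one hour even if no invalidation event ever arrives.
  await redis.set(key, JSON.stringify(value), "EX", 3600);
}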

So whenever a table in the join expression changes, the cache needs to be invalidated; otherwise, we will serve stale data. Our invalidation therefore depends on the database tables. We could write the invalidation code as part of every CRUD operation, but that scatters the logic across the codebase and makes it almost impossible to debug. So here we offload that task to Hasura. That means Hasura will notify us whenever a CRUD operation happens at the database table level. Hasura has a feature called Triggers, which can extend your business logic by listening to insert, update, and delete operations on a table.

This is great because we can create a separate route/service to invalidate the cache and keep all cache-related logic there. A clean solution that can be scaled.

So let’s get started. I’m assuming you have Hasura and Postgres set up according to the following schema.

Given above are screenshots of the schema that we are using. It is pretty basic; don’t worry about the indexes and triggers, which are created automatically when the table is generated through the Hasura console.

The API endpoint that we are going to cache executes the following query, which joins the authors, books, and publisher tables. Basic stuff.

SELECT athrs.id AS authorid, athrs.name AS authorname, athrs.email AS authoremail,
       bks.name AS bookname, pbrs.publish_id AS publisherid, pbrs.email AS publisheremail
FROM authors athrs
INNER JOIN books bks ON athrs.id = bks.authorid
INNER JOIN publisher pbrs ON bks.publisherid = pbrs.id
WHERE athrs.id = :authorId;
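As a rough sketch, this query could be executed from Node.js with node-postgres; the getAuthorWithBooks helper name, the default pool configuration, and the switch from the named :authorId parameter to pg’s positional $1 placeholder are my assumptions, not the article’s exact code:

// db.js — runs the join for a single author (sketch)
const { Pool } = require("pg");
const pool = new Pool(); // connection details come from the standard PG* environment variables

async function getAuthorWithBooks(authorId) {
  const { rows } = await pool.query(
    `SELECT athrs.id AS authorid, athrs.name AS authorname, athrs.email AS authoremail,
            bks.name AS bookname, pbrs.publish_id AS publisherid, pbrs.email AS publisheremail
       FROM authors athrs
       INNER JOIN books bks ON athrs.id = bks.authorid
       INNER JOIN publisher pbrs ON bks.publisherid = pbrs.id
      WHERE athrs.id = $1`,
    [authorId]
  );
  return rows;
}

module.exports = { getAuthorWithBooks };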

Now let’s write a middleware that checks the cache on each request; when there is a miss, it forwards the request to the middleware that fetches the data from the database.

In this implementation, we are creating the cache layer as part of our application server, not as a separate service. But we can easily pluck it out and deploy it as a separate service.

Given below is the middleware code,
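A minimal sketch of what such a middleware can look like, assuming ioredis, an Express route at /authors/:authorId/books, an author:<id> key scheme, and the getAuthorWithBooks helper sketched above (all of these names are my assumptions rather than the article’s exact code):

// server.js — cache-check middleware plus the database fallback (sketch)
const express = require("express");
const Redis = require("ioredis");
const { getAuthorWithBooks } = require("./db"); // the query helper sketched above

const app = express();
const redis = new Redis();

// Try Redis first; on a hit, answer immediately and mark the response with x-cache: hit.
async function checkCache(req, res, next) {
  const key = `author:${req.params.authorId}`;
  const cached = await redis.get(key);
  if (cached) {
    res.set("x-cache", "hit");
    return res.json(JSON.parse(cached));
  }
  // Miss: remember the key so the next middleware can populate it.
  res.set("x-cache", "miss");
  res.locals.cacheKey = key;
  next();
}

// On a miss, fetch from Postgres, store the result in Redis, and respond.
async function fetchFromDb(req, res) {
  const rows = await getAuthorWithBooks(req.params.authorId);
  await redis.set(res.locals.cacheKey, JSON.stringify(rows));
  res.json(rows);
}

app.get("/authors/:authorId/books", checkCache, fetchFromDb);

app.listen(3000, () => console.log("API listening on 3000"));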

Now create triggers for the tables that we are going to listen to. Go to the Hasura console, click on the Events tab, and create the trigger. We can create a trigger for insert, update, and delete operations on a table. Choose the events that your business logic relies on. In the screenshot below I have ticked the update and delete trigger operations. In this example, I have only added an author_update trigger on the author table.
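The endpoint that the trigger calls can then delete the affected key. Below is a sketch of such a webhook handler; it relies on the standard Hasura event payload shape (event.op, event.data.old/new, table.name), while the /hasura-events route path and the author:<id> key scheme are my own choices:

// invalidate.js — webhook that Hasura calls whenever the trigger fires (sketch)
const express = require("express");
const Redis = require("ioredis");

const app = express();
const redis = new Redis();
app.use(express.json());

app.post("/hasura-events", async (req, res) => {
  const { table, event } = req.body;
  // For UPDATE/DELETE the previous row is in event.data.old; for INSERT it is in event.data.new.
  const row = event.data.old || event.data.new;

  if (table.name === "authors") {
    // Drop the cached response for this author so the next request rebuilds it from Postgres.
    await redis.del(`author:${row.id}`);
  }

  res.json({ ok: true });
});

app.listen(4000, () => console.log("Invalidation webhook listening on 4000"));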

Head over to Chrome, open the endpoint URL and the developer console. In the Network tab, check the response headers. On the initial load, we can see a cache miss by inspecting the x-cache: miss header, as shown in the figure below.

Now reload again and you can see the header indicating a cache hit, x-cache: hit. When the cache miss occurred, the data was fetched from the database and added to Redis. Given below is the screenshot.

Now our cache is working, so let’s check the invalidation strategy. Update an entry in the author table, either with a SQL UPDATE statement or using the edit feature in the Hasura console. After updating the table, head over to the Hasura console, click the Events tab, and select the trigger that you created. Under that, you can see the Invocation Logs tab, as shown below in the figure.

Click that and you can see all the delivery attempts to our configured endpoint. Now head over to the browser and reload. Inspecting the headers, we can see that the cache has been invalidated, and upon reloading again we see a cache hit.

Our cache invalidation works perfectly. The implementation looks clean and efficient, and you can adapt it to your business logic. For example, you can run a complex query whenever a new entry is created and pre-populate the cache. You can also move all the invalidation logic into an entirely separate cache invalidation service.

This tutorial is a bit verbose and requires some knowledge of Hasura. Hasura is a great platform that can improve your productivity and cut down your development cost by half. If you haven’t checked out Hasura yet, please take a look here.

If you like my work and want to support it, buy me a cup of coffee! The code repo can be found here.
