The Bolg


Using a cache so I don't rate limit myself when using Notion

You can access the site here: cloud.shaikzhafir.com

It's currently hosted on a Singapore-based Linode server. And honestly, it's worth the 5 bucks a month just to have your own managed server. That's what I tell myself all the time.

Why are you still using Notion? Why not <insert other thing>?

With all the hype around other note-taking software (like Obsidian), I still enjoy the convenience of using Notion for my non-work stuff. I can edit my blog posts or book reviews from anywhere with my phone, and displaying them is super simple. Until I figure out how to render Obsidian markdown from the sync DB, this will remain my option.

Design

I went with Redis, mostly because I wanted to get some experience with the Redis API.

In this design, my primary goal was to serve old cached content even when the cache has expired. I don't want users to wait super long just to see shit, so I couldn't use a cache with an expiry (TTL). Instead, the cached keys never expire; a separate timestamp decides when to refresh them in the background.

So this design is basically the grug way of thinking: me want fast content.

Code walkthrough

The request flow starts when a user visits the homepage, which triggers GetSlugEntries in the service layer.

func (c *cache) GetSlugEntries(ctx context.Context, key string) ([]notion.SlugEntry, error) {
	cachedJSON, err := c.redisClient.Get(ctx, key).Bytes()
	if err != nil && err != redis.Nil {
		// a real redis error, not just a cache miss, so bail out
		return nil, fmt.Errorf("error reading from cache: %v", err)
	}
	// if cache miss
	if err == redis.Nil {
		// fetch notion block
		cachedJSON, err = c.UpdateSlugEntriesCache(ctx, key)
		if err != nil {
			return nil, fmt.Errorf("error adding to cache: %v", err)
		}
	}

	var slugEntries []notion.SlugEntry
	if err := json.Unmarshal(cachedJSON, &slugEntries); err != nil {
		return nil, fmt.Errorf("failed to deserialize: %v", err)
	}
	go func() {
		ctx := context.Background()
		shouldUpdate := c.ShouldUpdateCache(ctx, key)
		if shouldUpdate {
			_, err := c.UpdateSlugEntriesCache(ctx, key)
			if err != nil {
				log.Error("error updating cache: %v", err)
			}
		}
	}()
	return slugEntries, nil
}
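For context, wiring this into an HTTP handler might look something like the sketch below. The SlugService interface, the route, the fake service, and the SlugEntry fields are all my assumptions for illustration, not the actual code:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// SlugEntry mirrors notion.SlugEntry from the post; the fields are assumed.
type SlugEntry struct {
	Slug  string `json:"slug"`
	Title string `json:"title"`
}

// SlugService is an assumed interface that the cache type would satisfy.
type SlugService interface {
	GetSlugEntries(ctx context.Context, key string) ([]SlugEntry, error)
}

// slugHandler returns the cached entries as JSON.
func slugHandler(svc SlugService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		entries, err := svc.GetSlugEntries(r.Context(), "blog")
		if err != nil {
			http.Error(w, "failed to load entries", http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(entries)
	}
}

// fakeService stands in for the redis-backed cache so the sketch runs anywhere.
type fakeService struct{}

func (fakeService) GetSlugEntries(ctx context.Context, key string) ([]SlugEntry, error) {
	return []SlugEntry{{Slug: "hello-world", Title: "Hello World"}}, nil
}

// demo exercises the handler in-memory and returns the status code and body.
func demo() (int, string) {
	rec := httptest.NewRecorder()
	req := httptest.NewRequest("GET", "/", nil)
	slugHandler(fakeService{})(rec, req)
	return rec.Code, rec.Body.String()
}

func main() {
	code, body := demo()
	fmt.Println(code, body)
}
```

The nice part of this shape is that the handler only sees the interface, so the serve-stale logic stays entirely inside the cache layer.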

If the cache read returns redis.Nil, which I think will never happen unless the cache has been emptied, it makes a call to the Notion client to get all the content, and then updates the cache:

func (c *cache) UpdateSlugEntriesCache(ctx context.Context, key string) ([]byte, error) {
	// fetch notion block
	rawBlocks, err := c.notionClient.GetSlugEntries(key)
	if err != nil {
		return nil, fmt.Errorf("error getting slug entries: %v", err)
	}
	// write to redis cache
	// Serialize the slice of json.RawMessage
	cachedJSON, err := json.Marshal(rawBlocks)
	if err != nil {
		return nil, fmt.Errorf("error marshalling rawblocks: %v", err)
	}
	err = c.UpdateCache(ctx, key, cachedJSON)
	if err != nil {
		return nil, fmt.Errorf("error updating cache: %v", err)
	}
	return cachedJSON, nil
}

After getting the content at this stage, we return to the user as quickly as possible, while at the same time kicking off a background goroutine to update the cache as necessary.

Here, I'm using a timestamp key to determine whether or not the goroutine should refresh the cache:

func (c *cache) ShouldUpdateCache(ctx context.Context, key string) bool {
	// handle timestamp to check whether to update cache
	timestamp, err := c.redisClient.Get(ctx, key+"-timestamp").Time()
	// if error is that the key doesn't exist, we should add it
	currentTime := CurrentTime()
	if err == redis.Nil {
		c.redisClient.Set(ctx, key+"-timestamp", currentTime, 0)
		return false
	}
	if err != nil {
		log.Error("error getting timestamp: %v", err)
		return false
	}
	// refresh if the cache was last updated more than an hour ago
	return time.Since(timestamp) > time.Hour
}