Redis - Reload content before it expires

In Akamai we can configure content to be reloaded from origin once 90% of its expiration time has been consumed. In that case, Akamai keeps serving the cached content while it goes back to origin to fetch the new version.
Is there a similar feature in Redis?
For example, I put a piece of content in the cache for 5 hours, but I want to reload it if someone accesses it when 30 minutes or less are left. If a user accesses it during that period, I will serve the cached content, but in the background we will be reloading the new content.
Is it possible?
Thanks.

Redis is not an active component that fetches data itself; it is a data store. It keeps your data and expires/evicts keys based on their TTL.
You/your application are in charge of populating Redis with the data you want to keep stored.
However, you can use Redis primitives to achieve part of what is needed to serve your requirement:
Redis TTL/Expiry
Keyspace notifications
Pub/Sub
Keyspace notifications publish a notification on certain events such as key creation or expiry. You could store two keys in Redis: a key representing your payload with the appropriate TTL, and a phantom key that acts as a marker with a slightly shorter TTL (say 90% of the original TTL).
As soon as the phantom key expires, you capture that notification. Then you can fetch the content of the cache you want to update, update the cache key, and write a phantom key again for the next cache update iteration.
The steps above are strongly abbreviated but should guide you towards a feasible approach.
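A minimal sketch of that approach with Node's ioredis client follows; the key names, TTLs and the fetchFromOrigin helper are illustrative, and expired-key notifications have to be enabled (here via CONFIG SET, which could also go in redis.conf):

import Redis from "ioredis";

const redis = new Redis();      // connection for regular commands
const listener = new Redis();   // dedicated connection for pub/sub

const TTL = 5 * 60 * 60;                   // payload lives 5 hours
const PHANTOM_TTL = Math.floor(TTL * 0.9); // marker expires at 90% of that

// Hypothetical loader that rebuilds the content from the origin.
async function fetchFromOrigin(id: string): Promise<string> {
  return JSON.stringify({ id, refreshedAt: Date.now() });
}

async function writeCacheEntry(id: string): Promise<void> {
  const payload = await fetchFromOrigin(id);
  await redis.set(`content:${id}`, payload, "EX", TTL);
  await redis.set(`phantom:content:${id}`, "", "EX", PHANTOM_TTL);
}

async function main(): Promise<void> {
  // Enable expired-key events.
  await redis.config("SET", "notify-keyspace-events", "Ex");

  // Capture expirations in database 0.
  await listener.psubscribe("__keyevent@0__:expired");
  listener.on("pmessage", async (_pattern, _channel, expiredKey) => {
    if (!expiredKey.startsWith("phantom:content:")) return;
    const id = expiredKey.slice("phantom:content:".length);
    // The real key is still cached for the remaining 10% of its TTL; rebuild it
    // in the background and arm a new phantom key for the next iteration.
    await writeCacheEntry(id);
  });

  await writeCacheEntry("42");
}

main().catch(console.error);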

Purge cache in Cloudflare using API

What would be the best practice to refresh content that is already cached by CF?
We have a few APIs that generate JSON, and we cache them.
Once in a while the JSON needs to be updated, and what we do right now is purge it via the API:
https://api.cloudflare.com/client/v4/zones/dcbcd3e49376566e2a194827c689802d/purge_cache
Later on, when a user hits the page with the required JSON, it will be cached again.
But in our case we have 100+ JSON files that we purge at once, and we want to push the new content to CF ourselves instead of waiting for users (to avoid a bad experience for them).
Right now I'm considering pinging (via HTTP request) the needed JSON endpoints just after we have purged the cache.
My question is whether that is the right way, and whether CF already has an API to do what we need.
Thanks.
Currently, the purge API is the recommended way to invalidate cached content on-demand.
Another approach for your scenario could be to look at Workers and Workers KV, and combine it with the Cloudflare API. You could have:
1. A Worker reading the JSON from the KV and returning it to the user.
2. When you have a new version of the JSON, you could use the API to create/update the JSON stored in the KV.
This setup could perform very well, since the Worker code in (1) runs in every Cloudflare data center and returns quickly to users. It is also important to note that KV is an "eventually consistent" store, so whether it fits depends on your specific application.
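A rough sketch of the Worker part in TypeScript, assuming the module Worker syntax and a KV namespace bound as CONTENT (the binding name and the URL-to-key mapping are assumptions):

export interface Env {
  CONTENT: KVNamespace; // KV binding; the name is an assumption
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Use the request path (e.g. /prices.json) as the KV key.
    const key = new URL(request.url).pathname.slice(1);
    const body = await env.CONTENT.get(key, "text");
    if (body === null) {
      return new Response("Not found", { status: 404 });
    }
    return new Response(body, {
      headers: { "content-type": "application/json" },
    });
  },
};

The publishing side would then write the fresh JSON into the namespace through Cloudflare's KV API (or wrangler) whenever it changes, so users never have to wait for a cache miss.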

Phantom Key - Spring Data Redis - Goal/Best Practices

I'm working with Spring Data Redis and I'm a little bit confused about the purpose of the phantom key. Below are some questions about that:
What's the goal of Phantom Key for Spring Data Redis?
When should I save it or not? What are the impacts?
Are there best practices on this subject?
Thanks in advance for your feedback.
When the expiration is set to a positive value, the corresponding EXPIRE command is run. In addition to persisting the original, a phantom copy is persisted in Redis and set to expire five minutes after the original one. This is done to enable the Repository support to publish RedisKeyExpiredEvent, holding the expired value in Spring's ApplicationEventPublisher whenever a key expires, even though the original values have already been removed. Expiry events are received on all connected applications that use Spring Data Redis repositories.
By default, the key expiry listener is disabled when initializing the application. The startup mode can be adjusted in @EnableRedisRepositories or RedisKeyValueAdapter to start the listener with the application or upon the first insert of an entity with a TTL. See EnableKeyspaceEvents for possible values.
The RedisKeyExpiredEvent holds a copy of the expired domain object as well as the key.
For more information see https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#redis.repositories.expirations
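In plain Redis terms, what the repository support does is roughly the following (sketched here with ioredis; the person keyspace and the field names are only examples):

import Redis from "ioredis";

const redis = new Redis();

// Roughly what saving an entity with a TTL results in:
async function save(id: string, ttl: number): Promise<void> {
  const key = `person:${id}`;
  // the original entry, stored as a hash and expiring after its TTL
  await redis.hset(key, "firstname", "Ada", "lastname", "Lovelace");
  await redis.expire(key, ttl);
  // the phantom copy, expiring five minutes later
  await redis.hset(`${key}:phantom`, "firstname", "Ada", "lastname", "Lovelace");
  await redis.expire(`${key}:phantom`, ttl + 300);
}

// When `person:<id>` expires, the phantom copy is still present, which is what
// allows Spring to publish a RedisKeyExpiredEvent carrying the expired value.
save("123", 60).catch(console.error);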

Is it possible to hook Redis before a key expires

I have set an expiration value on a key in Redis, and I want the opportunity to run a piece of code before the key is deleted by Redis. Is it possible, and if so, how...?
Thanks
My solution was to create a new key, with the same name as the one I wanted to hook, only I added a prefix to it indicating it's a key used for timeouts ("TO") - something like:
set key1 data1
set TO_key1 ""
expire TO_key1 20
In the example above, as soon as "TO_key1" expires, it notifies my program and I get the opportunity to run my code before I manually delete "key1".
I found this link very useful for creating the listener for redis: Redis Key expire notification with Jedis
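If you are not tied to Jedis, the listener side can be sketched in a few lines with Node's ioredis; expired-key notifications have to be enabled, and the "TO_" prefix follows the convention above:

import Redis from "ioredis";

const redis = new Redis();
const listener = new Redis(); // subscriber connections cannot run other commands

async function main(): Promise<void> {
  await redis.config("SET", "notify-keyspace-events", "Ex");
  await listener.psubscribe("__keyevent@0__:expired");

  listener.on("pmessage", async (_pattern, _channel, expiredKey) => {
    if (!expiredKey.startsWith("TO_")) return;
    const realKey = expiredKey.slice("TO_".length);

    // Run whatever pre-deletion logic is needed while key1 still exists...
    const value = await redis.get(realKey);
    console.log(`about to delete ${realKey}, last value:`, value);

    // ...and only then remove the real key manually.
    await redis.del(realKey);
  });

  // The pattern from the answer: the real key plus a timeout marker.
  await redis.set("key1", "data1");
  await redis.set("TO_key1", "", "EX", 20);
}

main().catch(console.error);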
This isn't possible with standard open-source Redis... yet. There is a way, however, to do something similar without too much hassle. If you stop using Redis' expiry (at least for those keys that you're interested in "hooking") and manage expiry "manually" in your code, you can do anything you want before/during/after the expiry event.
Since Redis offers key-level expiry out of the box, people are usually content with it. In some cases, e.g. expiring members in a Set, the manual approach is the only way to go, but it is still valid for regular keys when you need finer control.
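One way that manual approach can look, sketched with ioredis: keep each key's deadline in a sorted set and sweep it periodically (the expiry:deadlines key and the one-second sweep interval are illustrative):

import Redis from "ioredis";

const redis = new Redis();

// Store a value without a Redis TTL and record its deadline ourselves.
async function setWithManualExpiry(key: string, value: string, ttlSeconds: number): Promise<void> {
  await redis.set(key, value);
  await redis.zadd("expiry:deadlines", Date.now() + ttlSeconds * 1000, key);
}

// Periodic sweep: find keys whose deadline has passed, run the hook, then delete.
async function sweep(): Promise<void> {
  const due = await redis.zrangebyscore("expiry:deadlines", 0, Date.now());
  for (const key of due) {
    const value = await redis.get(key);
    // Any before/during/after-expiry logic goes here.
    console.log(`expiring ${key}, last value:`, value);
    await redis.del(key);
    await redis.zrem("expiry:deadlines", key);
  }
}

setInterval(() => sweep().catch(console.error), 1000);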

Reclaiming expired keys in Redis

I'm trying to solve the following problem in Redis.
I have a list that contains various available keys:
List MASTER:
111A
222B
333C
444D
555E
I'd like to be able to pop an element off the list and use it as a key with an expiry.
After the expiry is up, I'd like to be able to push this number back onto MASTER for future use. I don't see an obvious way to do this, so I'm soliciting a creative one.
The best method would be to get called back by Redis when the key expires and then take action.
However, callback support is still to be added (http://code.google.com/p/redis/issues/detail?id=360).
You can either use a Redis version that contains a custom/community modification to support this feature (like the last one in the link I've posted), or, worse :), start tracking keys and timeouts in your client app.
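The "track keys and timeouts in your client app" option could look roughly like this with ioredis (the MASTER:leases sorted set and the sweep interval are assumptions):

import Redis from "ioredis";

const redis = new Redis();

// Pop a key off MASTER and remember when it should be returned.
async function borrow(ttlSeconds: number): Promise<string | null> {
  const id = await redis.lpop("MASTER");
  if (id === null) return null; // nothing available right now
  await redis.zadd("MASTER:leases", Date.now() + ttlSeconds * 1000, id);
  return id;
}

// Periodically push keys whose lease has run out back onto MASTER.
async function reclaim(): Promise<void> {
  const expired = await redis.zrangebyscore("MASTER:leases", 0, Date.now());
  for (const id of expired) {
    await redis.zrem("MASTER:leases", id);
    await redis.rpush("MASTER", id);
  }
}

setInterval(() => reclaim().catch(console.error), 1000);

On newer Redis versions, the keyspace notifications used in the answers above can replace the sweep: Redis publishes an expired event for the leased key and the handler pushes its id back onto MASTER.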

How is Redis used in Trello?

I understand that, roughly speaking, Trello uses Redis for a transient data store.
Is anyone able to elaborate further on the part it plays in the application?
We use Redis on Trello for ephemeral data that we would be okay with losing. We do not persist the data in Redis to disk, and we run it with allkeys-lru, so we only store things there that can be kicked out at any time with only very minor inconvenience to users (e.g., momentarily seeing an incorrect user status). That being said, we give it more than 5x the space it needs to store its actual working set and have it choose from 10 keys for eviction, so we really never see anything we're using get kicked out.
It's our pubsub server. When a user does something to a board or a card, we want to send a message with that delta to all websocket-connected clients that are subscribed to the object that changed, so all of our Node processes are subscribed to a pubsub channel that propagates those messages, and they propagate that out to the appropriately permissioned and subscribed websockets.
We SORT OF use it to back socket.io, but since we only use the websockets, and since socket.io is too chatty to scale like we need it to at the moment, we have a patch that disables all but the one channel that is necessary to us.
For our users who don't have websockets, we have to keep a list of the actions that have happened on each object channel since the user's last poll request. For that we use a list, which we cap at the most recent 100 elements, and an auxiliary counter of how many elements have been added to the list since it was created. So when we're answering a poll request from such a browser, we can check the last element it reports that it has seen and only send down the messages that have been added to the queue since then (see the sketch after this answer). That gets a poll request down to just a permissions check and a single Redis key check in most cases, which is very fast.
We store some ephemeral data about the active status of connected users in Redis, because that data changes frequently and it is not necessary to persist it to disk.
We store short-lived keys to support OAuth logins in Redis.
We love Redis; once you have an instance of it up and running, you want to use it for all kinds of things. The only real trouble we have had with it is with slow-consuming clients eating up the available space.
We use MongoDB for our more traditional database needs.
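A rough sketch of that polling queue (the point above about users without websockets) with ioredis; the key names and the reload fallback for clients that fall more than 100 messages behind are assumptions:

import Redis from "ioredis";

const redis = new Redis();
const MAX = 100; // keep only the most recent 100 messages per object channel

// Record a delta message on an object's channel.
async function publishDelta(channel: string, message: string): Promise<void> {
  await redis
    .multi()
    .lpush(`poll:${channel}`, message)
    .ltrim(`poll:${channel}`, 0, MAX - 1)
    .incr(`poll:${channel}:count`)
    .exec();
}

// Answer a poll: return only the messages added since the client's last-seen count.
async function poll(channel: string, lastSeenCount: number) {
  const total = Number(await redis.get(`poll:${channel}:count`)) || 0;
  const missed = total - lastSeenCount;
  if (missed <= 0) return { total, messages: [] as string[] };
  if (missed > MAX) return { total, reload: true }; // too far behind; full refetch
  const messages = await redis.lrange(`poll:${channel}`, 0, missed - 1);
  return { total, messages: messages.reverse() };   // oldest first
}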
Trello uses Redis with Socket.IO (RedisStore) for scaling, with the following two features:
key-value store, to set and get values for a connected client
as a pub-sub service
Resources:
Look at the code for RedisStore in Socket.IO here: https://github.com/LearnBoost/socket.io/blob/master/lib/stores/redis.js
Example of Socket.IO with RedisStore: http://www.ranu.com.ar/2011/11/redisstore-and-rooms-with-socketio.html