Using Redis as Cache and C# client - redis

I'm new to Redis and trying to figure out a simple way to use Redis as a local cache for my C# app. I've downloaded and run redis-server from https://github.com/MSOpenTech/redis/releases
I can successfully store a key value and retrieve it as follows:
var redisManager = new PooledRedisClientManager("localhost:6379");
using (var redis = redisManager.GetClient())
{
    redis.Set("mykey_1", 15, TimeSpan.FromSeconds(3600));
    // get typed value from cache
    int valueFromCache = redis.Get<int>("mykey_1"); // must be 15
}
I want to limit the amount of memory Redis uses on my server, and I also want Redis to automatically purge values when memory fills. I tried the maxmemory command, but in the redis-cli program maxmemory is not found.
Will Redis automatically purge old values for me? (I assume not.) If not, is there a way to make that the default behavior of Redis with the Set method I'm using above?
If I'm heading down the wrong path, please let me know.

The answer to your question is described here: What does Redis do when it runs out of memory?
Basically, maxmemory is a configuration directive, not a command, so you set it in the config file (or at runtime with CONFIG SET) rather than typing it on its own in redis-cli. You can also specify a maxmemory-policy, which tells Redis how to evict keys when it runs out of the specified memory. According to that config file, there are a total of 6 policies Redis can apply when it runs out of memory:
volatile-lru -> remove the key with an expire set using an LRU algorithm
allkeys-lru -> remove any key according to the LRU algorithm
volatile-random -> remove a random key with an expire set
allkeys-random -> remove a random key, any key
volatile-ttl -> remove the key with the nearest expire time (minor TTL)
noeviction -> don't expire at all, just return an error on write operations
You can set those behaviours using the maxmemory-policy directive, which you'll find in the LIMITS section of the redis.conf file (just below the maxmemory directive).
So, you can set an expire time on every key that you store in Redis (a large expire time) and also set the volatile-ttl policy. That way, when Redis runs out of memory, the key with the smallest remaining TTL (which, if every key gets the same expire time, is also the oldest one) is removed, according to the policy you've set.
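For reference, a minimal sketch of those settings in redis.conf (the 100mb limit is just an example value):
# LIMITS section of redis.conf
maxmemory 100mb                # cap the memory Redis may use for data
maxmemory-policy volatile-ttl  # when the limit is hit, evict the key with the nearest expiry
Both directives can also be changed at runtime from redis-cli with CONFIG SET maxmemory 100mb and CONFIG SET maxmemory-policy volatile-ttl; maxmemory on its own is not a command, which is why redis-cli reports it as not found.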

Related

Can django-redis use dbsize?

django-redis source: https://github.com/jazzband/django-redis/tree/master/django_redis
My problem is that I cannot find a method to get the number of keys in the Redis database (the Redis command is DBSIZE). The methods that are available are: set, get, add, delete, delete_pattern, delete_many, clear, get_many, set_many, incr, decr, has_keys, keys, iter_keys, ttl, pttl, persist, expire, expire_at, pexpire, pexpire_at, lock, close, touch.
How can I use the DBSIZE Redis command with the django-redis library?
environment:
django version : 3.2.10
django-redis: 5.2.0
I found the solution to my question:
from django_redis import get_redis_connection

REDIS = get_redis_connection("default")  # "default" is the cache alias from CACHES
REDIS.dbsize()  # get the number of keys in the currently-selected database
This approach exposes the native Redis commands on the raw connection, but the methods of the django-redis plugin are not available on it.
WARNING: Not all pluggable clients support this feature.
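For reference, dbsize() on that raw connection just issues the plain DBSIZE command, so you can cross-check it from redis-cli (SELECT is only needed if your cache LOCATION points at a non-default database):
SELECT 1   # only if the django-redis LOCATION uses db 1; otherwise skip this
DBSIZE     # number of keys in the currently selected database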

Can redis Lua script contain key determined at runtime?

Look at this Lua script:
local clientIds = redis.call('ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1], 'LIMIT', '0', ARGV[2]);
local prefix = 'lock:';
local lockedClientIds = {};
for _, value in ipairs(clientIds) do
    local key = prefix .. tostring(value)
    if redis.call('EXISTS', key) == 0 then
        -- SET needs a value before the PX option; the client id is stored here
        redis.call('SET', key, value, 'PX', ARGV[3]);
        table.insert(lockedClientIds, value)
    end
end
redis.pcall('ZREM', KEYS[1], unpack(lockedClientIds));
return lockedClientIds;
It takes some values from the sorted set and uses them to create keys (after some simple concatenation). I'm not sure if this is OK, because according to the Redis Lua documentation all keys should be provided in the KEYS array, so they should be known when the script is invoked, not computed at runtime.
All Redis commands must be analyzed before execution to determine
which keys the command will operate on. In order for this to be true
for EVAL, keys must be passed explicitly. This is useful in many ways,
but especially to make sure Redis Cluster can forward your request to
the appropriate cluster node. Note this rule is not enforced in order
to provide the user with opportunities to abuse the Redis single
instance configuration, at the cost of writing scripts not compatible
with Redis Cluster.
So does that mean there is a risk that this will only work on a single node, and that it won't work when Redis is distributed across many nodes?
YES, it is (highly) possible that the script will not work in cluster mode.
It will continue to work in cluster mode only if all the keys it touches map to the same hash slot. Hash tags can be used for this purpose.
Note: I'm assuming that by "redis is distributed across many nodes" you mean Redis Cluster mode.
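To make this concrete, a sketch using hypothetical key names (the {jobs} hash tag is an assumption, and the script's prefix would need to include it): if the sorted set passed as KEYS[1] and the lock keys built at runtime share the same hash tag, only the part between { and } is hashed, so all the keys map to one slot and the script stays cluster-safe.
CLUSTER KEYSLOT pending:{jobs}         # the sorted set passed as KEYS[1]
CLUSTER KEYSLOT lock:{jobs}:client42   # a key the script would build at runtime
# both commands return the same slot number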

Why does Redis RENAME execute an implicit DEL rather than UNLINK?

As the docs of RENAME says:
Renames key to newkey. It returns an error when key does not exist. If newkey already exists it is overwritten, when this happens RENAME executes an implicit DEL operation, so if the deleted key contains a very big value it may cause high latency even if RENAME itself is usually a constant-time operation.
As we know, DEL is blocking while UNLINK is non-blocking.
So I have two questions:
If the deleted key contains a very big value, it seems that executing an implicit UNLINK would be better. Why does Redis use DEL?
If I manually execute UNLINK and then RENAME in a transaction, will the high latency be avoided?
The "implicit DEL operation" is not the same as a DEL command called by a user.
You can configure it to use an async or a sync delete. The reason is probably to give the user more control.
In the redis config file, on the part of LAZY FREEING, it says
DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
It's up to the design of the application to understand when it is a good
idea to use one or the other. However the Redis server sometimes has to
delete keys or flush the whole database as a side effect of other operations.
Specifically Redis deletes objects independently of a user call in the
following scenarios:
....
For example the RENAME command may delete the old key content when it is replaced with another one.
....
In all the above cases the default is to delete objects in a blocking way,
like if DEL was called. However you can configure each case specifically
in order to instead release memory in a non-blocking way like if UNLINK
was called, using the following configuration directives.
Then there's the config directive
lazyfree-lazy-server-del no
Just switch it to yes and the implicit delete will behave like UNLINK.
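The lazy-freeing options are ordinary configuration parameters, so on recent Redis versions they should also be changeable at runtime instead of editing redis.conf (a sketch, assuming your Redis version allows it):
CONFIG SET lazyfree-lazy-server-del yes
CONFIG GET lazyfree-lazy-server-del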
I checked the source code. In Redis 5.0, this function is called when you run the RENAME command:
void renameGenericCommand(client *c, int nx) {
    // some code....

    // When source and dest key is the same, no operation is performed,
    // if the key exists, however we still return an error on unexisting key.
    if (sdscmp(c->argv[1]->ptr,c->argv[2]->ptr) == 0) samekey = 1;

    // some code ...

    if (samekey) {
        addReply(c,nx ? shared.czero : shared.ok);
        return;
    }

    ...

    /* Overwrite: delete the old key before creating the new one
     * with the same name. */
    dbDelete(c->db,c->argv[2]);
}
This is the dbDelete function it calls:
int dbDelete(redisDb *db, robj *key) {
    return server.lazyfree_lazy_server_del ? dbAsyncDelete(db,key) :
                                             dbSyncDelete(db,key);
}
As you can see, it does check the lazyfree-lazy-server-del config.
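As for the second question, the manual workaround would look roughly like this (a sketch; key and newkey are the placeholder names from the RENAME docs quoted above):
MULTI
UNLINK newkey
RENAME key newkey
EXEC
Since newkey is unlinked first, its big value is freed in the background and the implicit delete inside RENAME finds nothing large to free, so the blocking latency should be avoided.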

Possible to use pipelining with Redis cluster?

Currently, our Redis setup involves Jedis + sharding. Scaling up and down involves adding/removing shards manually, which is a lot of operational work. We are also heavily dependent on pipelining since we are doing a lot of writes per second.
Hence, we are looking into Redis cluster to automate the sharding process. However, one deal-breaker for us is that Jedis doesn't support pipelining with Redis cluster:
https://groups.google.com/forum/#!msg/redis-db/4I0ELYnf3bk/Lrctk0ULm6AJ
We are aware of Codis which supports pipelining + automatic sharding, but it requires heavy operational work to maintain due to its dependency on Zookeeper. It is also a fork of Redis so it may not be updated with upstream changes. Most likely we will be using it if there are no good solutions to use pipelining with the official Redis cluster implementation.
Just wondering if pipelining is at all possible with the official Redis cluster? Maybe in the form of an alternative Redis client?
Cluster pipelining is not supported by a Jedis release version yet, but there is a contribution waiting to be merged; see https://github.com/xetorthio/jedis/pull/1455.
You can also write your own implementation based on that. The basic idea is to capture all commands sent through the pipeline and replay them when the cluster redirects; as long as all the keys belong to the same slot, pipelining in a cluster works well.
Yes, you can use pipelining in cluster mode as long as the keys hash to the same key slot (not merely the same node).
To get keys hashing to the same slot you can use hash tags. TL;DR: if your key contains curly brackets {}, the hash function is only applied to that part: https://redis.io/docs/reference/cluster-spec/#hash-tags
In my particular case I needed to put countries and cities by id into Redis, which totaled 150k records. This is how I did it (Node.js + ioredis):
if (countries.length && cities.length) {
    // spread writes across 21 pipelines, one per hash-tag bucket
    const pipelines = []
    for (let i = 0; i < 21; i++) {
        pipelines.push(redis.pipeline())
    }
    await Promise.all([
        // Promise.map is not a native Promise method (presumably bluebird)
        Promise.map(countries, (country) => {
            const randomPipeline = pipelines[country.id % 21]
            randomPipeline.set(`country_${country.id}_{${country.id % 21}}`, JSON.stringify(country))
        }),
        Promise.map(cities, (city) => {
            const randomPipeline = pipelines[city.id % 21]
            randomPipeline.set(`city_${city.id}_{${city.id % 21}}`, JSON.stringify(city))
        })
    ])
    for (let i = 0; i < 21; i++) {
        await pipelines[i].exec()
    }
    console.log('Countries and cities in cache')
}
Just an update. We decided to use Lettuce as the Redis client. It's currently deployed to production and it's working great with ElastiCache Redis cluster mode so far. The key insight is that Lettuce enables automatic pipelining of commands via async I/O. We think this is a better approach since we don't have to write custom code to work around the Jedis limitation.

servicestack redis, when using SetEntry, it will automatically generate a set with key "ids:" + objectName in the redis db, how can I disable it?

When using SetEntry, it will automatically generate a set with the key "ids:" + objectName in the Redis db.
For example:
typedClient.SetEntry("famyly:username:jhon",new Family {FatherName="Jhon",...});
a set with the key name "ids:Family" and a member like "2343443" will be automatically created in the Redis db,
and each time I update or modify the same key with SetEntry, the "ids:Family" set gains a new auto-generated member. This set will grow extremely large if I update the key frequently.
How can I disable the auto-generated set? It seems useless in the current circumstances.
Thanks
I ran into this same problem - I discovered that our database contained a couple dozen of these "ids:XXX" sets, each containing tens of millions of items, which were consuming significant amounts of memory.
The solution is to switch to untyped clients. You can still use typed methods on the client, so you're really not giving up any type safety or automatic serialization at all. There are a couple of ways to create clients; we tend to use the get-in-get-out Exec shortcuts on RedisClientsManager. You should be able to adapt this to the way you do it.
Typed client - creates "ids" sets:
// set:
redis.ExecAs<T>(c => c.SetEntry(key, value));
// get:
T value = redis.ExecAs<T>(c => c.GetValue(key));
Untyped client - no "ids" sets created:
// set:
redis.Exec(c => c.Set(key, value));
// get:
using (var cli = _redis.GetClient())
{
    T value = cli.Get<T>(key);
}
The inferred auto-generated ids are created when you use the high-level Redis Typed Client. Use IRedisClient.SetEntry on the string-based RedisClient API instead.
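If these tracking sets have already grown large, you can inspect and drop them from redis-cli (ids:Family is the set name from the question; UNLINK needs Redis 4.0+, use DEL on older servers):
SCARD ids:Family     # how many auto-generated members the set holds
UNLINK ids:Family    # remove the set without blocking the server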