Currently, our Redis setup involves Jedis + sharding. Scaling up and down means adding or removing shards manually, which is a lot of operational work. We also depend heavily on pipelining since we do a lot of writes per second.
Hence, we are looking into Redis Cluster to automate the sharding process. However, one deal-breaker for us is that Jedis doesn't support pipelining with Redis Cluster:
https://groups.google.com/forum/#!msg/redis-db/4I0ELYnf3bk/Lrctk0ULm6AJ
We are aware of Codis, which supports pipelining + automatic sharding, but it requires heavy operational work to maintain due to its dependency on ZooKeeper. It is also a fork of Redis, so it may not keep up with upstream changes. Most likely we will use it if there is no good way to use pipelining with the official Redis Cluster implementation.
Just wondering: is pipelining at all possible with the official Redis Cluster? Maybe in the form of an alternative Redis client?
Cluster pipelining is not supported by a Jedis release version yet, but there is a contribution waiting to be merged; see https://github.com/xetorthio/jedis/pull/1455.
You can also write your own implementation by referring to that PR. The basic idea is to capture all commands sent through the pipeline and replay them when the cluster redirects; as long as all keys belong to the same slot, pipelining works fine in a cluster.
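For illustration, here is a minimal sketch of that same-slot idea using a plain Jedis connection rather than the PR's cluster pipeline (the host/port and key names are made up): the {user:42} hash tag forces both keys into one slot, so a pipeline against the node that owns that slot works even on a sharded cluster.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class SameSlotPipelineSketch {
    public static void main(String[] args) {
        // Assumes 127.0.0.1:7000 is the cluster node serving the slot of "user:42".
        try (Jedis jedis = new Jedis("127.0.0.1", 7000)) {
            Pipeline pipeline = jedis.pipelined();
            pipeline.set("{user:42}:name", "Alice");
            pipeline.set("{user:42}:email", "alice@example.com");
            pipeline.sync(); // flush all queued commands in one round trip
        }
    }
}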
Yes, you can use a pipeline in cluster mode as long as the keys hash to the same key slot (not merely the same node).
To make keys hash to the same slot you can use hash tags. TL;DR: if your key contains curly brackets {}, the hash function is applied only to the part inside them: https://redis.io/docs/reference/cluster-spec/#hash-tags
In my particular case I needed to put countries and cities by id into Redis, which totaled 150k records. This is how I did it (Node.js + ioredis):
// `redis` is an ioredis Cluster instance; Promise.map comes from Bluebird.
if (countries.length && cities.length) {
  const pipelines = []
  for (let i = 0; i < 21; i++) {
    pipelines.push(redis.pipeline())
  }
  await Promise.all([
    Promise.map(countries, (country) => {
      // the {id % 21} hash tag keeps every key in a given pipeline in the same slot
      const randomPipeline = pipelines[country.id % 21]
      randomPipeline.set(`country_${country.id}_{${country.id % 21}}`, JSON.stringify(country))
    }),
    Promise.map(cities, (city) => {
      const randomPipeline = pipelines[city.id % 21]
      randomPipeline.set(`city_${city.id}_{${city.id % 21}}`, JSON.stringify(city))
    })
  ])
  for (let i = 0; i < 21; i++) {
    await pipelines[i].exec()
  }
  console.log('Countries and cities in cache')
}
Just an update. We decided to use Lettuce as the Redis client. It's currently deployed to production and it's working great with ElastiCache Redis cluster mode so far. The key insight is that Lettuce enables automatic pipelining of commands via async I/O. We think this is a better approach since we don't have to write custom code to work around the Jedis limitation.
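For reference, a minimal sketch of what that looks like with Lettuce's async API (the endpoint, loop count, and key names below are illustrative, not from our deployment): commands issued on the async interface are written out without waiting for each reply, so they are effectively pipelined.

import io.lettuce.core.LettuceFutures;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class LettuceAsyncSketch {
    public static void main(String[] args) {
        RedisClusterClient client = RedisClusterClient.create("redis://127.0.0.1:7000");
        try (StatefulRedisClusterConnection<String, String> connection = client.connect()) {
            RedisAdvancedClusterAsyncCommands<String, String> async = connection.async();

            // Each set() returns immediately with a future; replies stream back asynchronously,
            // so thousands of writes share the connection without an explicit pipeline object.
            List<RedisFuture<String>> futures = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) {
                futures.add(async.set("key:" + i, "value:" + i));
            }

            // Block until all replies have arrived (or the timeout expires).
            LettuceFutures.awaitAll(1, TimeUnit.MINUTES, futures.toArray(new RedisFuture[0]));
        } finally {
            client.shutdown();
        }
    }
}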
Related
django-redis source: https://github.com/jazzband/django-redis/tree/master/django_redis
My problem is that I cannot find a method to get the number of keys in the Redis database; the Redis command for this is called DBSIZE. The methods that are available are: set, get, add, delete, delete_pattern, delete_many, clear, get_many, set_many, incr, decr, has_keys, keys, iter_keys, ttl, pttl, persist, expire, expire_at, pexpire, pexpire_at, lock, close, touch.
How can I use the DBSIZE Redis command through the django-redis library?
environment:
django version: 3.2.10
django-redis: 5.2.0
I found the solution to the question:
from django_redis import get_redis_connection
REDIS = get_redis_connection("default")  # "default" is the cache alias configured in settings
REDIS.dbsize() # get number of keys in the currently-selected database
This solution uses the native Redis command directly, but it cannot use the wrapper methods of the django-redis plugin.
WARNING: Not all pluggable clients support this feature.
I've worked through the documentation on Akka.NET PersistenceQuery here, but I'm struggling to figure out how I would hook up any of those queries inside an ASP.NET 6 Blazor Server startup pipeline using the new Akka.NET Hosting model.
What I have in mind is to Sink such a query out to a SignalR hub so that views refresh their data based on the output of a ReadJournalFor stream.
Has anyone done this, and if so, please can you provide me with some guidance in this regard?
I have not done this before, much less am I an expert at this, but I can try to point you in the right direction! :)
If you want to run a local actor, you can spawn the ProjectionBehavior as any other Behavior. This can be useful for testing or when running a local ActorSystem without Akka Cluster.
SourceProvider<Offset, EventEnvelope<ShoppingCart.Event>> sourceProvider(String tag) {
    return EventSourcedProvider.eventsByTag(system, CassandraReadJournal.Identifier(), tag);
}

Projection<EventEnvelope<ShoppingCart.Event>> projection(String tag) {
    return CassandraProjection.atLeastOnce(
        ProjectionId.of("shopping-carts", tag), sourceProvider(tag), ShoppingCartHandler::new);
}

Projection<EventEnvelope<ShoppingCart.Event>> projection1 = projection("carts-1");

ActorRef<ProjectionBehavior.Command> projection1Ref =
    context.spawn(ProjectionBehavior.create(projection1), projection1.projectionId().id());
You can combine this with your predefined query, e.g.:
var queries = PersistenceQuery.Get(actorSystem)
    .ReadJournalFor<SqlReadJournal>("akka.persistence.query.my-read-journal");
var mat = ActorMaterializer.Create(actorSystem);
Source<string, NotUsed> src = queries.AllPersistenceIds();
So I was thinking that your queries could perhaps be linked to your ProjectionBehavior so that the Akka hosting model can host it.
Related sources:
getakka.net Persistence Query
Akka Projection docs
Akka Hosting
Look at this Lua script:
local clientIds = redis.call('ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1], 'LIMIT', '0', ARGV[2]);
local prefix = 'lock:';
local lockedClientIds = {};
for _, value in ipairs(clientIds) do
    local key = prefix .. tostring(value)
    if redis.call('EXISTS', key) == 0 then
        -- SET needs a value before the PX option; the client id is stored as the lock value here
        redis.call('SET', key, value, 'PX', ARGV[3]);
        table.insert(lockedClientIds, value)
    end
end
redis.pcall('ZREM', KEYS[1], unpack(lockedClientIds));
return lockedClientIds;
It takes some values from the sorted set and uses them to create keys (after some simple concatenation). I'm not sure if this is OK, because according to the Redis Lua tutorials all keys should be provided in the KEYS array, so they should be known up front, not computed at runtime.
All Redis commands must be analyzed before execution to determine which keys the command will operate on. In order for this to be true for EVAL, keys must be passed explicitly. This is useful in many ways, but especially to make sure Redis Cluster can forward your request to the appropriate cluster node. Note this rule is not enforced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster.
So does that mean there is a risk that this will only work with a single node, and that it won't work when Redis is distributed across many nodes?
YES, it is (highly) possible that the script will not work in cluster mode.
It will keep working in cluster mode only if all the keys the script touches are in the same hash slot. Hash tags can be used for this purpose: for example, if the sorted set key were {jobs}:clients and the lock prefix were {jobs}:lock:, every derived key would hash to the same slot (the {jobs} naming here is only illustrative).
Note: I'm assuming that by "redis is distributed across many nodes" you mean Redis Cluster mode.
My aim is to add only slave URIs, because the master is not available in my case. But the Lettuce library returns:
io.lettuce.core.RedisException: Master is currently unknown: [RedisMasterSlaveNode [redisURI=RedisURI [host='127.0.0.1', port=6382], role=SLAVE], RedisMasterSlaveNode [redisURI=RedisURI [host='127.0.0.1', port=6381], role=SLAVE]]
So the question is: is it possible to avoid this exception somehow? Maybe via configuration? Thank you in advance.
UPDATE: I forgot to say that after borrowing an object from the pool I set connection.readFrom(ReadFrom.SLAVE) before running commands.
GenericObjectPoolConfig config = fromRedisConfig(properties);
List<RedisURI> nodes = new ArrayList<>(properties.getUrl().length);
for (String url : properties.getUrl()) {
    nodes.add(RedisURI.create(url));
}
return ConnectionPoolSupport.createGenericObjectPool(
        () -> MasterSlave.connect(redisClient, new ByteArrayCodec(), nodes), config);
The problem was that I tried to set data, which is possible only on the master node. So there is no problem with MasterSlave itself; getting data works perfectly.
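For anyone hitting the same error, here is a hedged sketch of the read path that works with replica-only URIs (the ports match the question; the key name is made up). Reads routed with ReadFrom.SLAVE are served by the replicas; the commented-out write is the kind of call that fails with "Master is currently unknown".

import io.lettuce.core.ReadFrom;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.masterslave.MasterSlave;
import io.lettuce.core.masterslave.StatefulRedisMasterSlaveConnection;

import java.util.Arrays;
import java.util.List;

public class ReplicaOnlyReadsSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create();
        List<RedisURI> nodes = Arrays.asList(
                RedisURI.create("redis://127.0.0.1:6381"),
                RedisURI.create("redis://127.0.0.1:6382"));
        try (StatefulRedisMasterSlaveConnection<String, String> connection =
                     MasterSlave.connect(client, StringCodec.UTF8, nodes)) {
            connection.setReadFrom(ReadFrom.SLAVE); // route reads to the replicas
            RedisCommands<String, String> commands = connection.sync();

            String value = commands.get("some-key"); // OK: answered by a replica
            // commands.set("some-key", "x");        // fails: writes need the master
        } finally {
            client.shutdown();
        }
    }
}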
I'm new to Redis and trying to figure out a simple way to use Redis as a local cache for my C# app. I've downloaded and run the redis-server from https://github.com/MSOpenTech/redis/releases
I can successfully store a key value and retrieve it as follows:
var redisManager = new PooledRedisClientManager("localhost:6379");
using (var redis = redisManager.GetClient())
{
    redis.Set("mykey_1", 15, TimeSpan.FromSeconds(3600));
    // get typed value from cache
    int valueFromCache = redis.Get<int>("mykey_1"); // must be 15
}
I want to limit the amount of memory Redis uses on my server, and I also want Redis to automatically purge values when memory fills up. I tried the maxmemory command, but in the redis-cli program maxmemory is not found.
Will Redis automatically purge old values for me? (I assume not.) And if not, is there a way I can make Redis do that by default with the Set method I'm using above?
If I'm heading down the wrong path, please let me know.
The answer to your question is described here: What does Redis do when it runs out of memory?
Basically, maxmemory is not a standalone command: you set it in the config file (or at runtime with CONFIG SET maxmemory), rather than typing maxmemory by itself in redis-cli. You can also specify a maxmemory-policy, which tells Redis what to evict once it hits the configured limit. According to that config file, there are a total of 6 policies that Redis can apply when it runs out of memory:
volatile-lru -> remove the key with an expire set using an LRU algorithm
allkeys-lru -> remove any key according to the LRU algorithm
volatile-random -> remove a random key with an expire set
allkeys-random -> remove a random key, any key
volatile-ttl -> remove the key with the nearest expire time (minor TTL)
noeviction -> don't expire at all, just return an error on write operations
You can set those behaviours using the maxmemory-policy directive that you find in the LIMITS section of redis.conf file (above the maxmemory directive).
So, you can set an expire time on every key that you store in Redis (a large expire time) and also set the volatile-ttl policy. That way, when Redis runs out of memory, the key with the smallest remaining TTL (which, since every key got the same large expire time, is also the oldest one) is removed, according to the policy you've set.
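For reference, that combination looks something like this in redis.conf (256mb is just an example limit):

# redis.conf -- LIMITS section (example values)
maxmemory 256mb
maxmemory-policy volatile-ttl

The same settings can also be applied to a running server with CONFIG SET maxmemory 256mb and CONFIG SET maxmemory-policy volatile-ttl from redis-cli.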