In RedisTemplate, I understand that executing get multiple times ends up making multiple network calls to the Redis cluster, one per key, before the results are retrieved. Does the same happen with multiGet, or does multiGet pass all the keys to the Redis cluster at once, execute them pipeline-style, and then return the results?
I have tried googling this but could not find any reference on it.
So it looks like multiGet is not like executing multiple get calls in a loop; rather, multiGet passes the whole operation to the Redis cluster, does the calculation on the cluster side, and then passes the data back to the client.
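For illustration, here is a minimal sketch of the difference, assuming a Spring Data Redis StringRedisTemplate bean (the class and method names here are my own). As far as I know, multiGet issues a single MGET command carrying all keys; note that on a cluster the client may still split the command across hash slots under the hood.

```java
import java.util.ArrayList;
import java.util.List;
import org.springframework.data.redis.core.StringRedisTemplate;

public class MultiGetSketch {

    private final StringRedisTemplate redisTemplate;

    public MultiGetSketch(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // One GET command (one network round trip) per key.
    public List<String> getInLoop(List<String> keys) {
        List<String> values = new ArrayList<>();
        for (String key : keys) {
            values.add(redisTemplate.opsForValue().get(key));
        }
        return values;
    }

    // A single MGET command carrying all keys at once.
    public List<String> getAllAtOnce(List<String> keys) {
        return redisTemplate.opsForValue().multiGet(keys);
    }
}
```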
I am using the Region.getAll() method to get values for all keys. What I found is that for keys present in Apache Geode it gets the data quickly, but for keys that are not present it calls the CacheLoader one by one. Is there any mechanism by which the CacheLoader calls can be made parallel?
I don't think there's a way of achieving this out of the box, at least not while using a single Region.getAll() operation. If I recall correctly, the servers just iterate through the keys and perform a get on every single one, which ends up triggering the CacheLoader execution.
You could, however, achieve some degree of parallelism by splitting the set of keys into multiple smaller sets and launching different threads to execute the Region.getAll() operation on each of them. The actual size of each set and the number of threads will depend on the cache hit/miss ratio you expect, and on your SLA requirements, of course.
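A rough sketch of that idea (this is not Geode-provided functionality; the round-robin partitioning and the plain fixed thread pool are my own assumptions):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.geode.cache.Region;

public class ParallelGetAll {

    // Splits the key set into batches and runs Region.getAll() on each
    // batch from its own thread, so CacheLoader misses resolve in parallel.
    public static <K, V> Map<K, V> getAllParallel(Region<K, V> region,
                                                  Set<K> keys,
                                                  int batches) throws Exception {
        List<List<K>> partitions = new ArrayList<>();
        for (int p = 0; p < batches; p++) {
            partitions.add(new ArrayList<>());
        }
        int i = 0;
        for (K key : keys) {
            partitions.get(i++ % batches).add(key);
        }

        ExecutorService pool = Executors.newFixedThreadPool(batches);
        try {
            List<Future<Map<K, V>>> futures = new ArrayList<>();
            for (List<K> part : partitions) {
                futures.add(pool.submit(() -> region.getAll(part)));
            }
            Map<K, V> result = new HashMap<>();
            for (Future<Map<K, V>> f : futures) {
                result.putAll(f.get());
            }
            return result;
        } finally {
            pool.shutdown();
        }
    }
}
```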
New to Ignite. I have a use case where I need to run a cleanup job. I have Ignite embedded in our Spring Boot application, running as multiple instances. I am thinking of having the job run on each instance, then just querying the local data and cleaning up those entries. Do you see any issue with this? I am not sure how often Ignite reshuffles data.
You can surely do that.
With regards to data reshuffling, it will only happen when a node is added to or removed from the cluster. However, the ignite.compute().affinityRun() family of calls guarantees that the code runs near the data.
Otherwise, you could do ignite.compute().broadcast() and only iterate over each affected cache's local entries. You don't have the aforementioned guarantee then, though.
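A minimal sketch of the broadcast approach (the cache name and the expiry-timestamp values are assumptions for illustration; since the affinity guarantee doesn't apply here, entries can still move during rebalancing):

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

public class LocalCleanupJob {

    public static void runCleanup(Ignite ignite) {
        // Run the closure once on every node in the cluster.
        ignite.compute().broadcast(() -> {
            Ignite local = Ignition.localIgnite();
            IgniteCache<String, Long> cache = local.cache("expiryByKey");

            // Iterate only over entries this node currently holds as primary,
            // so each instance cleans up just its own share of the data.
            for (Cache.Entry<String, Long> entry
                    : cache.localEntries(CachePeekMode.PRIMARY)) {
                if (entry.getValue() < System.currentTimeMillis()) {
                    cache.remove(entry.getKey());
                }
            }
        });
    }
}
```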
I am using Jedis to connect to Redis and push data into a list. I am using rpush for the JSON data.
These are the steps I do:
Fetch data from RabbitMQ
Collect info from the JSON data and prepare a key/value pair
Push the data into Redis using the key and the value.
I don't see my code scaling beyond 3000 requests per second.
Note:
I am not using pipelining; every message results in getting a Jedis resource, adding the data to Redis, and closing the resource.
Options for persisting faster in Redis:
1. Pipelining
2. Jedis connection pooling
And one thing to avoid:
3. Frequent opening/closing of resources; instead, open a resource once and reuse it (see the sketch below).
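A minimal sketch combining all three, assuming a local Redis instance and a single shared JedisPool (the host, port, and batching scheme are illustrative):

```java
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.Pipeline;

public class BatchedPush {

    // A shared pool: connections are reused instead of opened per message.
    private static final JedisPool POOL =
            new JedisPool(new JedisPoolConfig(), "localhost", 6379);

    // Pushes a batch of JSON payloads in a single round trip per batch.
    public static void pushBatch(String key, List<String> jsonMessages) {
        try (Jedis jedis = POOL.getResource()) {
            Pipeline pipeline = jedis.pipelined();
            for (String json : jsonMessages) {
                pipeline.rpush(key, json);
            }
            pipeline.sync(); // flush all queued commands and read the replies
        } // try-with-resources returns the connection to the pool
    }
}
```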
Good link:
https://tech.trivago.com/2017/01/25/learn-redis-the-hard-way-in-production/
How I solved my problem:
My design was perfectly fine, but I was pushing data into the same key for all my tests. When I started pushing data into different keys, performance increased hugely.
I'm trying to implement a cuckoo filter in Redis. What I have so far works fine, except that it just inserts all the values on a single node even when working on a cluster.
In order to implement it on multiple nodes, I'm thinking of directing different elements to different nodes using some hash function. Is there any command or function call in Redis that allows forcing elements onto a particular node using its key or number, or even into a particular slot?
For reference, this is the implementation of the cuckoo filter I have so far.
As an aside, is there any existing implementation of a Cuckoo Filter or Bloom Filter on distributed nodes in Redis that I can refer to?
This page explains how Redis Cluster works and how redis-cli behaves when used in cluster mode. Other clients handle operations in cluster mode better, but the basic functionality of redis-cli should work for simple tests.
If you check the code of other data structures (for example, hash or set) that come with Redis, you'll notice that they do not contain code to deal with cluster mode. This is handled by the code in cluster.c and should be orthogonal to your implementation. Are you sure you have correctly configured the cluster and redis-cli?
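On the routing question itself: Redis Cluster assigns every key to one of 16384 slots via CRC16(key) mod 16384, and a hash tag (the part of the key between { and }) forces keys into the same slot. A small sketch using the CRC16 helper that ships with recent Jedis versions (the package location of this class varies across Jedis releases, and the key names are made up):

```java
import redis.clients.jedis.util.JedisClusterCRC16;

public class SlotRouting {

    public static void main(String[] args) {
        // slot = CRC16(key) mod 16384; the node owning that slot stores the key.
        System.out.println(JedisClusterCRC16.getSlot("filter:bucket:42"));

        // With a hash tag, only the tagged part counts toward the slot,
        // so these two keys are guaranteed to land in the same slot/node.
        System.out.println(JedisClusterCRC16.getSlot("{filter}:bucket:1"));
        System.out.println(JedisClusterCRC16.getSlot("{filter}:bucket:2"));
    }
}
```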
Problem:
I am getting junk values like "OK" from Redis GET calls.
This issue reproduces often over a particular period of time, irrespective of which keys are being fetched through the GET command.
I'm using:
Redis version 2.8
Jedis client 2.5.1 to connect to Redis
Please suggest a solution to resolve this issue.
The problem is outlined on this page. From the article:
I learned a hard lesson when enabling Redis transactions in the Spring RedisTemplate class (redisTemplate.setEnableTransactionSupport(true)): Redis started returning junk data after running for a few days, causing serious data corruption. A similar case was reported on StackOverflow.
By running a monitor command, my team discovered that after a Redis operation or RedisCallback, Spring doesn't close the Redis connection automatically, as it should. Reusing an unclosed connection may return junk data from an unexpected key in Redis. Interestingly, this issue doesn't show up when transaction support is set to false in RedisTemplate.
We discovered that we could make Spring close Redis connections automatically by configuring a PlatformTransactionManager (such as DataSourceTransactionManager) in the Spring context, then using the @Transactional annotation to declare the scope of Redis transactions.
Based on this experience, we believe it's good practice to configure two separate RedisTemplates in the Spring context: one with transaction support set to false, used for most Redis operations; the other with transaction support enabled, applied only to Redis transactions. Of course, PlatformTransactionManager and @Transactional must be declared to prevent junk values from being returned.
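A sketch of that two-template setup, pieced together from the description above (the bean names and the choice of StringRedisTemplate are my own assumptions):

```java
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class RedisConfig {

    // Default template: no transaction support, used for most Redis operations.
    @Bean
    public StringRedisTemplate redisTemplate(RedisConnectionFactory factory) {
        return new StringRedisTemplate(factory);
    }

    // Transactional template: injected only where Redis transactions are
    // needed, inside methods annotated with @Transactional.
    @Bean
    public StringRedisTemplate transactionalRedisTemplate(RedisConnectionFactory factory) {
        StringRedisTemplate template = new StringRedisTemplate(factory);
        template.setEnableTransactionSupport(true);
        return template;
    }

    // A PlatformTransactionManager must be present so Spring binds and
    // releases the Redis connection along with the surrounding transaction.
    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}
```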
Moreover, we learned the downside of mixing a Redis transaction with a relational database transaction, in this case JDBC. Mixed transactions do not behave as you would expect.