I have a database of around 40 GB in memory (I have allotted 42 GB of RAM to the Redis server) on a server containing 40,000,000 Redis hashes, and I now want to perform HGETALL on these hashes using a Redis client. What would be the fastest possible way to retrieve the data from the hashes? I am running code which calls the methods listed below in a loop which runs 50,000 times.
I have tried:
redisReply *reply;
redisContext *redis = redisConnect(IP, port);
(In Loop)->
reply = redisCommand(redis, "HGETALL %s", key);
Also I have tried:
(In Loop)->
redisAppendCommand(redis, "HGETALL %s", key);
redisGetReply(redis, (void **)&reply);
(Both methods seem to take 60-90 microseconds per fetch across the 50,000 fetches I am performing.)
Is there any faster way to do the same? (I am also open to using a different Redis client, as long as it is in C.)
I am kind of new to Redis. I am trying to store values in Redis using the HashSet method available in the StackExchange.Redis.StrongName assembly (let's say I have 4000 items). Is it better to store the 4000 items individually (using HSET), or shall I pass a dictionary (using HMSET) so that only one Redis server call is required, but with a large amount of data? Which one is better?
Thanks in Advance
HMSET has been deprecated as of Redis 4.0.0 in favor of using HSET with multiple field/value pairs:
https://redis.io/commands/hset
https://redis.io/commands/hmset
Performance will be O(N) in the number of fields set, either way.
TL;DR A single call is "better" in terms of performance.
Taking into consideration #dizzyf's answer about HMSET's deprecation, the question becomes "should I use a lot of small calls instead of one big one?". Because there is overhead in processing every command, it is usually preferable to "batch" calls together to reduce the cost.
Some commands in Redis are variadic, a.k.a. dynamic arity, meaning they accept one or more values, eliminating the need for multiple calls. That said, overloading a single call with a huge number of arguments is also not best practice - it typically leads to massive memory allocations and blocks the server from serving other requests while it is processed.
I would approach this by dividing the values into constant-sized "batches" - 100 is a good start, but you can always tune it afterwards - and sending each such "batch" in a single HSET key field1 value1 ... field100 value100 call.
Pro tip: if your Redis client supports it, you can use pipelining to make everything more responsive ("better"?).
We're planning to use MGET for one of our systems. During benchmarking, we were able to retrieve values for 1 million keys in one MGET call in Lettuce, and were quite surprised.
What I've been trying to find are the limitations of MGET. Specifically,
Is there any limit to number of Keys that can be retrieved in one MGET call?
Is there any limit to size of data that gets returned by a single MGET call?
Is there any limit to number of Keys that can be retrieved in one MGET call?
Theoretically, the limit is the max value of a signed int: 0x7FFFFFFF. However, in practice you CANNOT have that many keys in a single Redis instance (it would cost too much memory).
Is there any limit to size of data that gets returned by a single MGET call?
Theoretically, there is NO limit. However, in practice, Redis buffers the returned values in memory before sending them to the client, so if you try to MGET too many keys, you'll run into OOM problems.
In short, it's a bad idea to MGET too many keys from Redis: it costs too much memory, and blocks Redis for a long time.
There are two systems sharing a Redis database: one system only reads from Redis, the other updates it.
The reading system is so busy that Redis can't handle the load. To reduce the number of requests to Redis, I found "mget", but I also found "multi".
I'm sure MGET will reduce the number of requests, but will MULTI do the same? I think MULTI forces the Redis server to keep some state about the transaction and collect the requests in the transaction from the client one by one, so the total number of requests sent stays the same, but the results returned are combined together - right?
So if I just read keyA, keyB, and keyC in MULTI while the other (writing) system changes keyB's value, what will happen?
Short Answer: You should use MGET
MULTI is used for transactions, and it won't reduce the number of requests. Also, the MULTI command MIGHT be deprecated in the future, since there's a better choice: Lua scripting.
So if I just read keyA, keyB, and keyC in MULTI while the other (writing) system changes keyB's value, what will happen?
Since the MULTI (with EXEC) command ensures a transaction, all three GET commands (read operations) execute atomically. If the update happens before the read operations, you'll get the old value. Otherwise, you'll get the new value.
By the way, there's another option to reduce RTT: pipelining. However, in your case, MGET should be the best option.
I'm trying to convert data that is in a SQL DB to Redis, in order to gain much higher throughput, since this is a very high-throughput workload. I'm aware of the downsides: persistence, storage costs, etc.
So, I have a table called "Users" with a few columns. Let's assume: ID, Name, Phone, Gender.
Around 90% of the requests are writes, each updating a single row.
Around 10% of the requests are reads, each fetching 20 rows.
I'm trying to get my head around the right modeling of this in order to get the max out of it.
If there were only updates, I would use hashes.
But because of the 10% of reads, I'm afraid it won't be efficient.
Any suggestions?
Actually, the real question is whether you need to support partial updates.
Supposing a partial update is not required, you can store your record in a blob associated with a key (i.e. the string datatype). All write operations can be done in one roundtrip, since the record is always written at once. Several read operations can be done in one roundtrip as well, using the MGET command.
Now, supposing partial updates are required, you can store your record in a dictionary associated with a key (i.e. the hash datatype). All write operations can be done in one roundtrip (even partial ones). Several read operations can also be done in one roundtrip, provided the HGETALL commands are pipelined.
Pipelining several HGETALL commands is a bit more CPU-consuming than using MGET, but not by much. In terms of latency, it should not be significantly different, except if you execute hundreds of thousands of them per second on the Redis instance.
I am using the ServiceStack.Redis client on C#.
I added about 5 million records of type1 using the following pattern a::name::1 and 11 million records of type2 using the pattern b::RecId::1.
Now I am using redis typed client as client = redis.As<String>. I want to retrieve all the keys of type2. I am using the following pattern:
var keys = client.SearchKeys("b::RecID::*");
But it takes forever (approximately 3-5 minutes) to retrieve the keys.
Is there any faster and more efficient way to do this?
You should work hard to avoid needing to scan the keyspace. KEYS is literally a server-stopper, and even if you have SCAN available: don't do that. You could instead keep the keys of the things you have in a set somewhere, but there is no SRANGE etc. - on Redis 2.x you'd have to use SMEMBERS, which will still need to return a few million records, but at least they will all be available. On later server versions you have access to SCAN (think: KEYS) and SSCAN (think: SMEMBERS), but ultimately you simply have the problem of wanting millions of rows, which is never free.
If possible, you could mitigate the impact by using a master/slave pair, and running the expensive operations on the slave. At least other clients will be able to do something while you're killing the server.
The KEYS command in Redis is slow (well, not slow so much as time-consuming). It also blocks your server from accepting any other commands while it's running.
If you really want to iterate over all of your keys, take a look at the SCAN command instead - although I have no idea how ServiceStack exposes it.
You can use the SCAN command to perform an iterative search, where each call is restricted to a smaller number of keys. For a complete example, refer to this article: http://blog.bossma.cn/csharp/nservicekit-redis-support-scan-solution/