Is there a limit to how many queries one could send when using multi/exec from a node-redis application, or is it only a question of available memory on the client and server to buffer requests and replies?
It's only a question of available memory.
Firstly on the client side, as node-redis queues up the commands you issue on the multi and does not send any of them to Redis until exec is called.
And secondly on the Redis server, which needs to be able to hold all the queued commands and their replies at once, since the transaction executes as a single atomic operation.
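A minimal sketch of that client-side queuing, assuming the node-redis v4 API (key names and the batch size are placeholders):

```js
import { createClient } from 'redis'; // node-redis, v4 API assumed

const client = createClient();
await client.connect();

// Nothing is sent to the server yet: node-redis buffers every queued
// command in client memory until exec() is called.
const multi = client.multi();
for (let i = 0; i < 10000; i++) {
  multi.set(`key:${i}`, `value:${i}`); // queued client-side
}

// Only here does the whole MULTI ... EXEC block go over the wire, so the
// server must hold all 10000 commands and their replies at the same time.
const replies = await multi.exec();
```

The practical limit is therefore whatever both sides can afford to buffer for the duration of the exec.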
I am trying to use Redis for my application. Multiple users want to read data stored in the Redis cache at the same time. Since Redis is single-threaded and we can't run operations concurrently against a single server, can we use a connection pool to execute multiple commands at once and achieve higher throughput?
I have read some articles on open forums saying that connection pooling only helps when you expect to use blocking operations such as BLPOP. But if we are sure we will never use blocking operations, and only use normal operations like SET, MSET, GET and MGET, does connection pooling have any advantage over a single connection?
Also, does anybody have a recommendation for the maximum number of keys to pass to a single MGET command when fetching the values of many keys at once?
It will be very helpful if I get answers to this. Thanks in advance.
I was wondering: when does pipelining become more efficient? E.g. if I have to query the server once, a pipeline would be less efficient than just using the redis instance.
If I have to query the server twice, e.g. check something exists and, if it does, grab it, is a pipeline more efficient than using the redis instance?
If I have to query something three times, etc.
When does pipelining become more efficient than using the redis instance?
Thank you
If you only run one command, it's the same.
Pipelining is a batch-processing mechanism. If you have three commands to execute the normal way, the client sends three TCP packets and receives three TCP packets (every command needs one send and one receive); but when using pipelining, the client packages the three commands together, so it sends one TCP packet and receives one.
You can see the official document: Using pipelining to speedup Redis queries
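To make the trade-off concrete, here is a sketch assuming node-redis v4, which automatically batches commands issued in the same event-loop tick onto one socket write (key names are placeholders):

```js
// Sequential awaits: one full round trip per command.
await client.set('user:1:name', 'alice');     // round trip 1
const name = await client.get('user:1:name'); // round trip 2

// Fired together: node-redis v4 writes both commands in one batch,
// and both replies come back after a single round trip.
const [exists, value] = await Promise.all([
  client.exists('user:1:name'),
  client.get('user:1:name'),
]);
```

Note that if the second command's arguments depend on the first command's reply, the two cannot share a pipeline, because the client has to wait for the first reply before it can build the second command.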
Two issues
Do lua scripts really solve all cases for redis transactions?
What are best practices for asynchronous transactions from one client?
Let me explain, starting with the first issue.
Redis transactions are limited: there is no way to unwatch specific keys, all keys are unwatched upon exec, and we are limited to a single ongoing transaction on a given client.
I've seen threads where many Redis users claim that Lua scripts are all they need. Even the official Redis docs state they may remove transactions in favour of Lua scripts. However, there are cases where this is insufficient, such as the most standard case: using Redis as a cache.
Let's say we want to cache some data from a persistent data store, in redis. Here's a quick process:
Check cache -> miss
Load data from database
Store in redis
However, what if, between step 2 (loading data), and step 3 (storing in redis) the data is updated by another client?
The data stored in Redis would be stale. So... we use a Redis transaction, right? We watch the key before loading from the DB, and if the key is updated somewhere else before storage, the storage step fails. Great! However, within an atomic Lua script we cannot load data from an external database, so Lua cannot be used here. Hopefully I'm simply missing something, or there is something wrong with our process.
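For reference, the pattern in question looks roughly like this, assuming the node-redis v4 API and a dedicated connection (loadFromDb is a hypothetical accessor for the persistent store):

```js
import { createClient, WatchError } from 'redis'; // node-redis v4 assumed

async function cacheFill(client, key, loadFromDb) {
  try {
    await client.watch(key);              // EXEC aborts if key changes
    const cached = await client.get(key); // step 1: check cache
    if (cached !== null) {
      await client.unwatch();
      return cached;
    }
    const fresh = await loadFromDb(key);  // step 2: load from the database
    await client.multi()                  // step 3: store in Redis,
      .set(key, fresh)                    // guarded by the WATCH above
      .exec();                            // throws WatchError on conflict
    return fresh;
  } catch (err) {
    if (err instanceof WatchError) {
      return cacheFill(client, key, loadFromDb); // retry from the top
    }
    throw err;
  }
}
```

This is exactly the shape Lua cannot replace, since step 2 leaves Redis entirely.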
Moving on to the 2nd issue (asynchronous transactions)
Let's say we have a socket.io cluster which processes various messages and requests for a game, for high-speed communication between server and client. This cluster is written in node.js, with appropriate use of promises and asynchronous concepts.
Say two requests hit a server in our cluster, which require data to be loaded and cached in redis. Using our transaction from above, multiple keys could be watched, and multiple multi->exec transactions would run in overlapping order on one redis connection. Once the first exec is run, all watched keys will be unwatched, even if the other transaction is still running. This may allow the second transaction to succeed when it should have failed.
These overlaps could happen in totally separate requests happening on the same server, or even sometimes in the same request if multiple data types need to load at the same time.
What is best practice here? Do we need to create a separate Redis connection for every individual transaction? It seems like we would lose a lot of speed, and we would see many connections created from just one server if that is the case.
As an alternative we could use redlock / mutex locking instead of redis transactions, but this is slow by comparison.
Any help appreciated!
I have received the following after my query was escalated to Redis engineers:
Hi Jeremy,
Your method using multiple backend connections would be the expected way to handle the problem. We do not see anything wrong with multiple backend connections, each using an optimistic Redis transaction (WATCH/MULTI/EXEC) - there is no chance that the “second transaction will succeed where it should have failed”.
Using LUA is not a good fit for this problem.
Best Regards,
The Redis Labs Team
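Following that advice, a dedicated connection per transaction can be expressed with node-redis v4's isolation pool; this is a sketch under that assumption (the key name and loadFromDb are placeholders carried over from the example above):

```js
// executeIsolated checks a connection out of node-redis's internal pool,
// so one request's EXEC cannot unwatch keys belonging to another
// in-flight transaction on a shared connection.
await client.executeIsolated(async (isolated) => {
  await isolated.watch('profile:42');
  const fresh = await loadFromDb('profile:42'); // hypothetical DB loader
  await isolated.multi().set('profile:42', fresh).exec();
});
```

The pool keeps the total connection count bounded, which addresses the concern about one server spawning a connection per transaction.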
The server needs to push data to 100K clients, which cannot be connected to directly since the machines are inside a private network. I am currently thinking of using RabbitMQ: each client subscribes to a separate queue, and when the server has data to push to a client, it publishes the data to the corresponding queue. Are there any issues with the above approach? The number of clients may go up to 100K. From a spike test, I expect the memory footprint to be around 20 GB for maintaining the connections. We can still go ahead with this approach as long as memory does not grow beyond 30 GB.
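For illustration, the intended publish path might look like this minimal amqplib sketch (the broker URL and queue-naming scheme are assumptions):

```js
import amqp from 'amqplib'; // RabbitMQ client for node.js

const conn = await amqp.connect('amqp://broker.internal'); // placeholder URL
const ch = await conn.createChannel();

// One queue per client; the queue name encodes the client id.
async function pushToClient(clientId, payload) {
  const queue = `client.${clientId}`;
  await ch.assertQueue(queue, { durable: true });
  ch.sendToQueue(queue, Buffer.from(JSON.stringify(payload)));
}
```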
The question is too generic.
I suggest reading this: RabbitMQ - How many queues RabbitMQ can handle on a single server?
Then you should consider using a cluster to scale the number of queues.
Let me explain what information I need for this:
I have several concurrent users hitting the same record at once. This means there will be some queuing and locking going on at the DB end. How big is this buffer? Can the queuing and locking hold up for 200+ concurrent users?
How can I determine the size of this buffer in our setup? Is there a default setting?
There is no query queue ("buffer") in the database.
Each concurrent connection to the database can have one query in flight. Other queries cannot be queued up behind it*.
Since your application is Rails, it probably uses an internal connection pool, so you can have as many queries waiting as you have slots in the connection pool.
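As a rough illustration of that app-side queue, here is a minimal sketch using Node's pg driver rather than Rails (the pool size, table, and connection string are assumptions; ActiveRecord's pool behaves analogously):

```js
import pg from 'pg'; // node-postgres

// max caps the number of real connections; queries beyond that wait in
// the pool's internal queue, which is the closest thing to a "query
// buffer" on the application side.
const pool = new pg.Pool({
  max: 20,
  connectionString: process.env.DATABASE_URL, // placeholder
});

// 200 concurrent requests: 20 run on real connections at any moment,
// while the other 180 queue client-side until a slot frees up.
const results = await Promise.all(
  Array.from({ length: 200 }, (_, i) =>
    pool.query('SELECT * FROM records WHERE id = $1', [i])
  )
);
```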
If you have an external connection pool like PgBouncer proxying between your app and PostgreSQL, then you can have more queries queued, because you can use a much larger pool size in the app when connecting to PgBouncer, since PgBouncer connections are so lightweight. PgBouncer will service those requests on a smaller number of real connections to PostgreSQL. That effectively makes PgBouncer a query queue (though not necessarily a FIFO queue) when used this way. HOWEVER, because those queries don't actually hit Pg when they're issued, they don't take locks while waiting in PgBouncer. This could be important for some concurrency designs.
* OK, so you can send multiple semicolon-separated queries at once, but not in series like a queue.