Redis pipeline vs MGET

I'm looking into using either MGET or a pipeline, but I can't seem to find the information on MGET that I'm looking for.
My use case is to replace 50 GET calls with either MGET or a pipeline.
What I found so far is:
When we use a pipeline, the commands are not guaranteed to be executed one after another; other clients' commands can be executed in between. This means that when we group GET commands with a pipeline, Redis won't be blocked for O(50), and other clients will get a chance to execute their commands (50 being the number of GET calls I'm grouping).
On the other hand, I was not able to find information on how MGET works: when we call MGET with 50 keys, will the command block the Redis instance until it has fetched all the keys? How does MGET work?

Because Redis is single-threaded, any single command blocks the server until it finishes, and that includes MGET: an MGET of 50 keys blocks other clients for the whole 50-key lookup.
Pipelines are just a way of batching commands; between the individual commands of a pipeline, other clients' commands can still be served.
So: MGET will block other clients for the duration of the whole command, and a pipeline won't.
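The difference is also visible at the protocol level: a pipeline is many command frames written to the socket in one batch, while MGET is a single command frame. A minimal sketch of the RESP (Redis Serialization Protocol) encoding, in pure Python with made-up key names and no server involved:

```python
def encode_command(*args):
    """Encode one command as a RESP array of bulk strings."""
    out = f"*{len(args)}\r\n"
    for a in args:
        out += f"${len(a)}\r\n{a}\r\n"
    return out.encode()

keys = [f"key:{i}" for i in range(50)]

# A pipeline of 50 GETs: 50 separate command frames, sent in one
# network write, but parsed and executed by the server as 50 commands.
pipeline_payload = b"".join(encode_command("GET", k) for k in keys)

# MGET: one command frame with 51 arguments (MGET + 50 keys), executed
# by the server as a single atomic command with a single array reply.
mget_payload = encode_command("MGET", *keys)
```

Because the server parses the pipeline back into 50 independent commands, it is free to interleave other clients' work between them; the single MGET frame gives it no such opportunity.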


Redis guarantees on XREAD return value?

Can XREAD (or perhaps another command) be used to atomically detect whether data was written to a Redis stream?
More specifically:
Suppose you added some data to a Redis stream in one process and saw that the data was added successfully with some auto generated key.
XADD somestream foo bar
After this XADD completes, you immediately run the following read in another process.
XREAD COUNT 1000 STREAMS somestream 0-0
Is this XREAD guaranteed to return data? The documentation is not clear about whether a successful XADD guarantees that readers will immediately see the added data, or whether there might be some small delay.
Redis's famously single-threaded architecture answers that question: when one client executes XADD and afterwards another client executes XREAD, the server executes them consecutively, which guarantees that the data is there before XREAD runs.
The following quotes are from The Little Redis Book:
Every Redis command is atomic, including the ones that do multiple things. Additionally, Redis has support for transactions when using multiple commands.
You might not know it, but Redis is actually single-threaded, which is how every command is guaranteed to be atomic.
While one command is executing, no other command will run. (We’ll briefly talk about scaling in a later chapter.) This is particularly useful when you consider that some commands do multiple things.

why we need the eval command in redis, if redis is single-threaded?

One way to execute commands in Redis is via EVAL scripts.
Redis uses the same Lua interpreter to run all the commands. Also:
Redis guarantees that a script is executed in an atomic way: no other
script or Redis command will be executed while a script is being
executed.
Since Redis is single-threaded, why do we need EVAL to offer atomicity? I would expect that to be implied by the single running thread.
Am I missing something? Apologies if my question is quite basic; I am new to Redis.
Every (data path) command in Redis is indeed atomic. EVAL allows you to compose an "atomic" command from a script that can include many Redis commands, not to mention control structures and some other utilities that are helpful for implementing server-side logic. To achieve similar "atomicity" across multiple commands, you can also use MULTI/EXEC blocks (i.e. transactions), by the way.
Without an EVAL or a MULTI/EXEC block, your commands will run one after another, but other clients' commands may interleave between them. Using a script or transaction eliminates that.
Redis uses a single thread to execute commands from many different clients. So if you want a group of commands from one client to be executed in sequence, you need a way to direct Redis to do that. That's what EVAL is for. Without it, Redis could interleave the execution of commands from other clients in with yours.
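For instance, to increment two keys with no other client's command landing between the increments, a MULTI/EXEC block in redis-cli looks like this (key names invented, and both counters assumed to start unset):

```
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> INCR counter:a
QUEUED
127.0.0.1:6379> INCR counter:b
QUEUED
127.0.0.1:6379> EXEC
1) (integer) 1
2) (integer) 1
```

The queued commands execute as one atomic unit at EXEC; an EVAL script achieves the same grouping, with the added ability to use the result of one command in the next.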

What happens if ElastiCache decides to reshard while my script is running?

I have some scripts that touch a handful of keys. What happens if ElastiCache decides to reshard while my script is running? Will it wait for my script to complete before it moves the underlying keys? Or should I assume that is not the case and design my application with this edge case in mind?
One example would be a script that increments 2 keys at once. I could receive a "cluster error", meaning something went wrong, and I would have to execute my script again (potentially ending up with one key incremented twice and the other only once).
Assuming you are talking about a Lua script: as long as you pass the keys as arguments (rather than hardcoding them in the script), you should be good. It will be all or nothing. If you are not using a Lua script, consider doing so.
From EVAL command:
All Redis commands must be analyzed before execution to determine
which keys the command will operate on. In order for this to be true
for EVAL, keys must be passed explicitly. This is useful in many ways,
but especially to make sure Redis Cluster can forward your request to
the appropriate cluster node.
From AWS ElastiCache - Best Practices: Online Cluster Resizing:
During resharding, we recommend the following:
Avoid expensive commands – Avoid running any computationally and I/O
intensive operations, such as the KEYS and SMEMBERS commands. We
suggest this approach because these operations increase the load on
the cluster and have an impact on the performance of the cluster.
Instead, use the SCAN and SSCAN commands.
Follow Lua best practices – Avoid long running Lua scripts, and always
declare keys used in Lua scripts up front. We recommend this approach
to determine that the Lua script is not using cross slot commands.
Ensure that the keys used in Lua scripts belong to the same slot.
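One way to meet that same-slot requirement is Redis Cluster hash tags: when a key name contains a `{...}` section, only the part inside the braces is hashed, so keys sharing the tag are guaranteed to map to the same slot. A sketch with invented key names:

```
EVAL "redis.call('INCR', KEYS[1]) return redis.call('INCR', KEYS[2])" 2 {order:42}:paid {order:42}:shipped
```

Both keys hash on `order:42`, so the script touches a single slot, the keys are declared up front via KEYS, and the cluster can either route the whole script to the right node or reject it cleanly during a slot migration.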

Should I always use pipelining when there are more than 1 command in Redis?

I am new to Redis and a little confused: when should I use pipelining, and should I use it every time there is more than one command to send?
For example, if I want to send 10 SET commands to Redis server at a time, should I simply run the 10 commands one by one or should I pipeline them?
Are there any disadvantage to pipeline 10 SET commands instead of sending them one by one?
when I should use pipelining
Pipelining is used to reduce round-trip time (RTT), so that you can improve performance when you need to send many commands to Redis.
should I use it all the time when there are more than 1 command to be sent?
It depends; you should evaluate it case by case.
if I want to send 10 SET commands to redis server at a time, should I simply run the 10 commands one by one or should I pipeline them?
Pipelining these commands will be much faster than sending the 10 commands one by one. However, in this particular case, the best choice is the MSET command.
Are there any disadvantage to pipeline 10 SET commands instead of sending them one by one?
With pipelining, Redis needs to consume more memory to hold the results of all the piped commands before sending them to the client. So if you pipe too many commands, that might be a problem.
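The RTT argument can be made concrete with a toy latency model: sending commands one by one pays one network round trip per command, while a pipeline pays roughly one round trip for the whole batch. The numbers below are invented for illustration, not benchmarks:

```python
def total_time_ms(n_commands, rtt_ms, server_time_per_cmd_ms, pipelined):
    """Toy model: pipelining collapses n round trips into one."""
    server_time = n_commands * server_time_per_cmd_ms
    round_trips = 1 if pipelined else n_commands
    return round_trips * rtt_ms + server_time

# 10 SETs, 1 ms RTT, 0.01 ms of server work per command:
one_by_one = total_time_ms(10, 1.0, 0.01, pipelined=False)  # 10 round trips
pipelined = total_time_ms(10, 1.0, 0.01, pipelined=True)    # 1 round trip
# one_by_one is about 10.1 ms, pipelined about 1.1 ms
```

The model makes the trade-off visible: the server-side work is identical either way; what pipelining removes is the per-command network latency, which usually dominates for small commands.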

Redis - How to use async MULTI?

We are using hiredis from our C++ application via the redisAsyncCommandArgv interface. What we cannot figure out is how to execute a group of commands in a MULTI-EXEC transaction. redisAsyncCommandArgv encodes only one command at a time. Can it be used to send all the commands of a transaction in one go? The synchronous API is straightforward, but we cannot use it.
Any help?
It is impossible to use MULTI-EXEC over Redis Asynchronous API. You can only choose one.
MULTI-EXEC transactions MUST always execute sequentially. The Redis asynchronous API, on the other hand, allows commands to be delivered out of order. Hence it does not make sense to run a MULTI-EXEC transaction if the commands are not in the proper sequence, or worse, if the MULTI and EXEC commands themselves arrive out of order.