Redis multiple calls vs Lua script

I have the below use case:
Set the key with a value.
Get the key if it already exists, otherwise set it with an expiry.
Basically, I am trying to do a SET with NX and GET. Here is the Lua script I came up with:
-- ARGV[1] = TTL in seconds, ARGV[2] = value to set if the key does not exist
local v = redis.call('GET', KEYS[1])
if v then
  return v
end
redis.call('SETEX', KEYS[1], ARGV[1], ARGV[2])
I am slightly confused about whether I should use the above Lua script or execute two separate commands: a GET first and then a SET.
What are the pros and cons of using the Lua script? Or would two separate commands be better?

Yes, you should use the script.
If you use two separate Redis commands then you'll end up with a race condition: another process might set the value after your GET and before your SETEX, causing you to overwrite it. Your logic requires this sequence of commands to be atomic, and the best way to do that in Redis is with a Lua script.
It would be possible to achieve this without the script, by using MULTI and WATCH, but the Lua script is much more straightforward.
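For reference, here is a minimal sketch of calling such a script atomically from Java with Jedis; the key name, TTL and value are made-up placeholders, and the script gets an extra return so the caller always receives a value:

import redis.clients.jedis.Jedis;
import java.util.Arrays;
import java.util.Collections;

public class GetOrSetExample {
    // Same get-or-set script as above, with a trailing return added for convenience.
    private static final String GET_OR_SET =
        "local v = redis.call('GET', KEYS[1]) " +
        "if v then return v end " +
        "redis.call('SETEX', KEYS[1], ARGV[1], ARGV[2]) " +
        "return ARGV[2]";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // KEYS[1] = key name, ARGV[1] = TTL in seconds, ARGV[2] = value to set if absent
            Object result = jedis.eval(GET_OR_SET,
                    Collections.singletonList("mykey"),
                    Arrays.asList("60", "myvalue"));
            System.out.println(result); // existing value, or "myvalue" if it was just set
        }
    }
}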


Is redis EVAL really atomic and crash safe?

The Redis docs seem to affirm that EVAL scripts are similar to MULTI/EXEC transactions.
In my own words, this means a Lua script guarantees two things:
sequential: the Lua script runs as if it were alone on the server; that's fine with me
atomic / one-shot writes: this is what I don't understand with Lua scripts. When is the "EXEC-like" step performed for Lua scripts? With scripts you can do conditional writes based on reads (or even on other writes, since some writes return values, like the NX variants). So how can Redis guarantee that either all or nothing is executed with scripts? What happens if the server crashes in the middle of a script? Rollback is not possible with Redis.
(I don't have this concern with MULTI/EXEC on the second point, because with MULTI/EXEC you can't do writes based on the results of previous commands.)
(Sorry for my basic English, I am French.)
Just tested it using this very slow script:
eval "redis.call('set', 'hello', 10); for i = 1, 1000000000 do redis.call('set', 'test', i) end" 0
^ This sets the hello key to 10, then sets the test key to a number in a very long loop.
While executing the script, Redis logs this warning:
# Lua slow script detected: still in execution after 5194 milliseconds. You can try killing the script using the SCRIPT KILL command. Script SHA1 is: ...
So I then shut down the container entirely while the script was executing, to simulate a crash.
After the restart, the hello and test keys are nil, meaning that none of the called commands were actually executed. Thus, scripts are indeed atomic and crash safe, as the docs state.
My belief is that Redis wraps Lua scripts within a MULTI/EXEC to make them atomic, or at least something with the same effect.
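As a side note on the SCRIPT KILL suggestion in that warning: below is a rough Jedis sketch of issuing it from a second connection. It only succeeds while the script has not performed any write yet, which is why the test above had to stop the container instead of killing the script.

import redis.clients.jedis.Jedis;

public class KillSlowScript {
    public static void main(String[] args) {
        // Run this from a second connection while the slow EVAL above is still executing.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            try {
                System.out.println(jedis.scriptKill());
            } catch (Exception e) {
                // If the script has already performed a write (as the one above does immediately),
                // Redis refuses to kill it; the only way out is shutting the server down without saving.
                System.out.println("Could not kill script: " + e.getMessage());
            }
        }
    }
}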

Using the SCAN command inside a Lua script

I'm trying to implement 2 behaviors in my system using Hiredis and Redis:
1) Fetch all keys matching a pattern and report each one by publishing an event, rather than via the array returned by the SCAN command.
(My system works only with publish events, even for gets, so I need to stick to this behavior.)
2) Delete all keys matching a pattern.
After reading the manuals I understand that the SCAN command is my friend.
I have 2 approaches, and I am not sure of the pros/cons:
1) Use a Lua script that calls SCAN until the cursor comes back as 0, and publish an event / delete the key for each entry found.
2) Use a Lua script but return the cursor as the return value, and call the Lua script from the Hiredis client with the new cursor until it returns 0.
Or maybe other ideas would be nice.
My database is not huge at all: no more than 500k entries, with keys and values of less than 100 bytes.
Thank you!
Option 1 is probably not ideal to run inside a Lua script, since it blocks all other requests from being executed. SCAN works best when you run it from your application, so Redis can still process other requests.
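A rough sketch of that application-side SCAN loop, written here in Java with Jedis rather than hiredis for brevity; the "user:*" pattern and the "keys-found" channel are placeholders, getCursor assumes a recent Jedis 3.x, and against a Redis Cluster you would run the same loop on each master node:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class ScanPublishDelete {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            ScanParams params = new ScanParams().match("user:*").count(500); // placeholder pattern
            String cursor = ScanParams.SCAN_POINTER_START; // "0"
            do {
                ScanResult<String> page = jedis.scan(cursor, params);
                for (String key : page.getResult()) {
                    jedis.publish("keys-found", key); // behavior 1: report each key via a publish event
                    jedis.del(key);                   // behavior 2: delete the key
                }
                cursor = page.getCursor();
            } while (!"0".equals(cursor)); // the iteration is complete when the cursor comes back as 0
        }
    }
}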

PostgreSQL: Execute queries in loop - performance issues

I need to copy data from a file into a PostgreSQL database. For that purpose I parse the file with bash in a loop and generate the corresponding INSERT queries. The trouble is that this loop takes a lot of time to run.
1) What can I do to speed up the loop? Should I open some kind of connection before the loop and close it after?
2) Should I use a temporary text file inside the loop to write the unique values to and search it with a text utility, instead of writing them to the database and performing the search there?
Does whatever programming language you use commit after every insert? If so, the easiest thing you can do is commit after inserting all rows rather than after every row.
You might also be able to batch inserts, but using the PostgreSQL COPY command is less work and also very fast.
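If you do stay with individual INSERTs from a client, here is a rough sketch of the "one transaction plus batched inserts" idea in Java with JDBC; the connection details, table, columns and the comma-separated file format are all made up for illustration:

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BulkInsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             BufferedReader reader = Files.newBufferedReader(Paths.get("data.txt"));
             PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO my_table (col_a, col_b) VALUES (?, ?)")) {

            conn.setAutoCommit(false);     // one transaction instead of one commit per row
            String line;
            int batched = 0;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split(","); // assumed "a,b" line format
                ps.setString(1, parts[0]);
                ps.setString(2, parts[1]);
                ps.addBatch();
                if (++batched % 1000 == 0) {
                    ps.executeBatch();     // send rows to the server in chunks
                }
            }
            ps.executeBatch();
            conn.commit();                 // single commit at the end
        }
    }
}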
If you insist on using bash you could split the file by a defined number of rows and then execute parallel commands by putting & at the end of each line.
I strongly suggest you try a different approach or programming language since, as Bill said, bash doesn't talk to Postgres directly. You could also use the pg_dump functionality if your file's source is another Postgres database.

Redis increment several fields in several hsets

I have data for several users in Redis, e.g.
hset - user111; field - dayssincelogin .....
I want to periodically update dayssincelogin for all users. One way to do it is:
KEYS user*
HINCRBY ${key from above} dayssincelogin 1
Is it possible to do this in a single call? If not, what's the most optimal way? I'm using Redis Cluster and a Java client.
You can't do multiple increments in one command, but you can batch your commands together for performance gains.
Use Redis pipelining or scripting.
In Jedis I don't think Lua is supported (if someone could confirm that :) ).
As @mp911de suggested, use EXEC or Lua scripting, and you can also use pipelining to execute your bulk methods faster.
Have a read up on pipelining here for more information.
And here is sample code using Jedis pipelining:
Pipeline p = jedis.pipelined();
p.multi();                                   // open a MULTI/EXEC transaction inside the pipeline
Response<Long> r1 = p.hincrBy("a", "f1", -1);
Response<Long> r2 = p.hincrBy("a", "f1", -2);
Response<List<Object>> r3 = p.exec();        // queue EXEC; the responses are filled in once the pipeline is synced
List<Object> result = p.syncAndReturnAll();  // flush everything to the server and collect all replies
Edit: Redis Cluster allows multi-key operations only when all the keys hash to the same slot. You should arrange your keys with hash tags to ensure data affinity, e.g. key1.{foo} and key5678.{foo} will reside on the same node.
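Tying it back to the original question, below is a rough single-node sketch that SCANs for the user hashes (instead of blocking the server with KEYS) and pipelines one HINCRBY per hash; the "user*" pattern and field name come from the question, and with Redis Cluster you would need to repeat the SCAN loop on every master node:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class BumpDaysSinceLogin {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // SCAN instead of KEYS so the server is not blocked while collecting the user hashes.
            ScanParams params = new ScanParams().match("user*").count(500);
            String cursor = ScanParams.SCAN_POINTER_START;
            do {
                ScanResult<String> page = jedis.scan(cursor, params);
                Pipeline p = jedis.pipelined();
                for (String key : page.getResult()) {
                    p.hincrBy(key, "dayssincelogin", 1); // one HINCRBY per user hash
                }
                p.sync(); // flush the whole batch in a single round trip
                cursor = page.getCursor();
            } while (!"0".equals(cursor));
        }
    }
}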

Redis delete all keys which are in one list

I have a list A in Redis with the values
K1, K2, K3
I want to delete all keys from Redis matching the values in that list.
Is there a way to do this in one command, or with pipelining?
You can fetch your list on the client side and then pipeline some delete commands to the server. There is no other way to accomplish your task, as the Lua scripting feature is missing for the moment. With it, you could execute your task on the server without needing to fetch the whole list on the client.
Yes, you can do that using EVAL and Lua (since Redis 2.6):
eval "redis.call('del', unpack(redis.call('smembers', 'A')))" 0
(This assumes A is a set; if A is actually a Redis list, use redis.call('lrange', 'A', 0, -1) instead of smembers.)
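Since the question also mentions pipelining, here is a small Jedis sketch of the client-side variant; it assumes A really is a Redis list (swap lrange for smembers if it is a set):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import java.util.List;

public class DeleteKeysListedInA {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Fetch the key names stored in list A (use smembers("A") if A is a set).
            List<String> keys = jedis.lrange("A", 0, -1);
            Pipeline p = jedis.pipelined();
            for (String key : keys) {
                p.del(key); // queue one DEL per key
            }
            p.sync();       // send them all in a single round trip
        }
    }
}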