Is Redis EVAL really atomic and crash-safe?

The Redis docs seem to say that EVAL scripts are similar to MULTI/EXEC transactions.
In my own words, this means a Lua script guarantees two things:
sequential: the Lua script runs as if it were alone on the server; that's fine with me.
atomic / one-shot writes: this is what I don't understand with Lua scripts. When is the "EXEC-like" step executed for Lua scripts? Because with scripts you can do conditional writes based on reads (or even on other writes, since some writes return values, like the NX variants). So how can Redis guarantee that either all or nothing is executed with scripts? What happens if the server crashes in the middle of a script? Rollback is not possible with Redis.
(I don't have this concern with MULTI/EXEC on the second point, because with MULTI/EXEC you can't base writes on previous commands.)
(Sorry for my basic English, I am French.)

Just tested it using this very slow script:
eval "redis.call('set', 'hello', 10); for i = 1, 1000000000 do redis.call('set', 'test', i) end" 0
^ This sets the hello key to 10, then sets the test key to an incrementing number a billion times, which takes effectively forever.
While executing the script, Redis logs this warning:
# Lua slow script detected: still in execution after 5194 milliseconds. You can try killing the script using the SCRIPT KILL command. Script SHA1 is: ...
So I then shut down the container entirely while the script was executing, to simulate a crash.
After the restart, the hello and test keys are nil, meaning that none of the called commands were actually persisted. So scripts are indeed atomic and crash-safe, as the docs state.
My understanding is that Redis wraps the script's effects in a MULTI/EXEC block when propagating them to the AOF and to replicas, and only emits them once the script has completed, so a crash mid-script leaves no partial writes; or at least it has the same effect.

Related

Is there any way Redis can check conditions before expiring keys via TTL?

Consider that I have a very large number of records (key-values) in Redis, whose TTLs are set according to some business rules (also stored in Redis). Let's say a business rule is changed; because of that, a record should no longer expire at the time that was set previously, but according to the new time.
I cannot simply change the TTL of millions of records each time a rule is updated.
How can I achieve this? Is there a way in Redis to provide a script that runs when it deletes a record because its TTL is met?
Redis supports Lua scripting; maybe you should check it.
https://redis.io/commands/eval/
EVAL: executes a Lua script.
EVALSHA: executes a cached Lua script by its SHA1 digest.
SCRIPT EXISTS: checks the script cache by hash.
SCRIPT FLUSH: clears the script cache.
SCRIPT KILL: kills the currently running script.
SCRIPT LOAD: loads the specified Lua script into the script cache.
redis 127.0.0.1:6379> EVAL "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second
1) "key1"
2) "key2"
3) "first"
4) "second"
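One concrete way to apply the scripting suggestion is "lazy expiration": drop the per-key TTL, store each record's write time next to its value, and keep the current max age under a single rule key. Every read then checks the rule and deletes stale records on the spot, so updating the rule key instantly affects millions of records. In practice the read-check-delete would live in a Lua script so it is atomic; below is a plain-Python sketch of the logic against a minimal in-memory stand-in (the key names and layout are hypothetical):

```python
import time

class FakeRedis:
    """Tiny in-memory stand-in with just the commands the sketch needs."""
    def __init__(self):
        self.data = {}
    def hset(self, key, mapping):
        self.data[key] = dict(mapping)
    def hgetall(self, key):
        return self.data.get(key)
    def set(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data.get(key)
    def delete(self, key):
        self.data.pop(key, None)

RULE_KEY = "rule:max_age_seconds"  # hypothetical: one shared rule key

def write_record(client, key, value, now=None):
    """Store the value together with its write time; no per-key TTL."""
    now = time.time() if now is None else now
    client.hset(key, mapping={"value": value, "written_at": now})

def read_record(client, key, now=None):
    """Return the value, or delete it and return None if the *current*
    rule says the record is too old. Changing the rule key instantly
    affects all records without touching millions of TTLs."""
    now = time.time() if now is None else now
    record = client.hgetall(key)
    if record is None:
        return None
    max_age = float(client.get(RULE_KEY))
    if now - float(record["written_at"]) > max_age:
        client.delete(key)
        return None
    return record["value"]
```

Keys that are never read again would linger, so a periodic SCAN-based sweeper would be needed to clean those up.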

Redis multiple calls vs lua script

I have the below use case.
Set the key with a value.
Get the key if it already exists; otherwise set it with an expiry.
Basically, I am trying to do a SET with NX and a GET. Here is the Lua script I came up with:
local v = redis.call('GET', KEYS[1])
if v then
return v
end
redis.call('SETEX', KEYS[1], ARGV[1], ARGV[2])
I am slightly confused about whether I should use the above Lua script or execute two separate commands: a GET first and then a SET.
Any pros or cons of using the Lua script? Or would two separate commands be better?
Yes, you should use the script.
If you use two separate Redis commands then you'll end up with a race condition: another process might set the value after your GET and before your SETEX, causing you to overwrite it. Your logic requires this sequence of commands to be atomic, and the best way to do that in Redis is with a Lua script.
It would be possible to achieve this without the script, by using MULTI and WATCH, but the Lua script is much more straightforward.
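A side note, depending on your server version: Redis 6.2 added a GET option to SET, and Redis 7.0 allows combining it with NX, so on a new enough server the whole read-or-set-with-TTL collapses into one native, atomic command with no script at all:

```
SET key1 somevalue NX GET EX 60
```

If the key exists, it is left untouched and its current value is returned; if not, it is set with a 60-second expiry and nil is returned, which matches the script's semantics.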

Using scan command inside lua script

I'm trying to implement 2 behaviors in my system using hiredis and Redis:
1) fetch all keys matching a pattern, delivered via publish events rather than via the array returned by the SCAN command
(my system works only with publish events, even for gets, so I need to stick to this behavior)
2) delete all keys matching a pattern
After reading the docs I understand that the SCAN command is my friend.
I have 2 approaches and am not sure of the pros/cons:
1) Use a Lua script that calls SCAN until the cursor comes back as 0, and publish an event / delete the key for each entry found.
2) Use a Lua script but return the cursor as the return value, and call the Lua script from the hiredis client with the new cursor until it gets 0.
Or maybe other ideas would be nice.
My database is not huge at all: no more than 500k entries, with keys/values of less than 100 bytes.
Thank you!
Option 1 is probably not ideal to run inside of a Lua script, since it blocks all other requests from being executed. SCAN works best when you are running it in your application so Redis can still process other requests.
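For option 2, the client-side half is just a loop that feeds the returned cursor back into SCAN until it comes back as 0; the same loop applies with hiredis in C. A sketch in Python, using a minimal in-memory stand-in for the client (with redis-py, redis.Redis().scan() is assumed to have the same shape):

```python
import fnmatch

class FakeRedis:
    """In-memory stand-in that pages through keys like SCAN does."""
    def __init__(self, keys):
        self._keys = list(keys)
    def scan(self, cursor=0, match="*", count=10):
        # Real SCAN cursors are opaque tokens, not offsets; this fake
        # only mimics the contract: batches of keys, cursor 0 at the end.
        batch = self._keys[cursor:cursor + count]
        next_cursor = cursor + count
        if next_cursor >= len(self._keys):
            next_cursor = 0
        return next_cursor, [k for k in batch if fnmatch.fnmatch(k, match)]

def scan_matching(client, pattern, count=10):
    """Drive the SCAN cursor to completion and collect matching keys."""
    cursor, keys = 0, []
    while True:
        cursor, batch = client.scan(cursor=cursor, match=pattern, count=count)
        keys.extend(batch)
        if cursor == 0:  # Redis signals a complete iteration with cursor 0
            return keys
```

Between SCAN calls the application can PUBLISH each key and DEL/UNLINK it, so other clients are never blocked for long.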

Sequential Teradata Queries

I have a collection of SQL queries that need to run in a specific order using Teradata. How can this be done?
I've considered writing an application in some other language (like Python or C++) to sequentially call each query, but am unsure how to get live data there from Teradata. I also want to keep the queries as separate SQL files (like it is currently).
The goal is to minimize the need for human interaction, i.e. I want to hit "Run" and let it take care of the rest.
BTEQ scripts are your go-to solution.
Put each query, or at least logical blocks of several statements, into a single BTEQ script.
Then create a wrapper script that calls BTEQ with the needed settings (i.e. the TD logon command), and have that wrapper called from a batch file with parameters like this:
start /wait C:\Teradata\BTEQ.bat Script_1.txt
start /wait C:\Teradata\BTEQ.bat Script_2.txt
start /wait C:\Teradata\BTEQ.bat Script_3.txt
pause
Then you can create several batch files, split into logical blocks, and have them executed at will or on a schedule.
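If a plain batch file isn't enough, for example if you want to stop on the first failure, the same sequential behavior is a few lines of Python. The BTEQ.bat invocation in the comment mirrors the batch file above and is an assumption about your environment:

```python
import subprocess
import sys

# Ordered list of BTEQ script files; names are from the answer above.
SCRIPTS = ["Script_1.txt", "Script_2.txt", "Script_3.txt"]

def run_sequentially(commands):
    """Run each command to completion, in order; stop at the first
    failure and return its exit code (0 if everything succeeded)."""
    for cmd in commands:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"failed ({result.returncode}): {cmd}", file=sys.stderr)
            return result.returncode
    return 0

# Hypothetical invocation, mirroring the batch file:
# run_sequentially([[r"C:\Teradata\BTEQ.bat", s] for s in SCRIPTS])
```

Because each subprocess.run waits for the command to finish, the scripts run strictly one after another, like the start /wait calls.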

PostgreSQL and queue commands

I would like to know if there is a way to queue my queries. I am doing some basic text matching in psql, and each query (each saved in a different script) takes about 6 hours to run. I was wondering if there is a way to queue my scripts?
For example;
my database is called "data"
my scripts are called cancer, heart, death
and I am doing the following:
data=# \i cancer
data=# \i heart
data=# \i death
But I have to come back every so often to check whether it is still running, which doesn't seem very efficient.
I am new to postgresql so appreciate any help.
This is the easiest/fastest solution I can think of, and it should work for your case ;)
When using psql from the command line, you can start it with
-f filename
where filename is an SQL script. It will run the queries and send the output to stdout; you can also redirect this to a file. Just put your queries into that SQL file and you have your own queue.
Assuming you run Linux, you could use screen as a simple way to leave your session open when logging off for the night.
The easiest solution was to create a separate sql file which runs through the commands sequentially.
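Concretely (file names taken from the question; the nohup line assumes a Linux shell): put the includes into one master file, e.g. run_all.sql:

```
\set ON_ERROR_STOP on
\i cancer
\i heart
\i death
```

The ON_ERROR_STOP variable makes psql abort the queue on the first failing script instead of carrying on. Then launch it detached so it keeps running after you log off: `nohup psql -d data -f run_all.sql > run_all.log 2>&1 &`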