Is there a command to move Redis keys from one database to another, or is it possible only with Lua scripting?
This type of question has been asked previously ("redis move all keys"), but the answers there are not clear or convincing for a beginner like me.
You can use MOVE to move one key to another Redis database.
The text below is from redis.io:
MOVE key db
Move key from the currently selected database (see SELECT) to the specified destination database. When key already exists in the destination database, or it does not exist in the source database, it does nothing. It is possible to use MOVE as a locking primitive because of this.
Return value
Integer reply, specifically:
1 if key was moved.
0 if key was not moved.
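For example, here is a minimal redis-cli session illustrating this (the key name is made up):
redis-cli -n 0 set user:42 "hello"   # create a key in database 0
redis-cli -n 0 move user:42 1        # returns (integer) 1 if the key was moved
redis-cli -n 1 exists user:42        # returns (integer) 1: the key now lives in database 1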
I think this will do the job:
redis-cli keys '*' | xargs -I % redis-cli move % 1 > /dev/null
(1 is the new database number, and the redirection to /dev/null is there to avoid getting millions of '1' lines, since it moves the keys one by one and returns 1 each time)
Beware that Redis might run out of connections and then display tons of errors like this:
Could not connect to Redis at 127.0.0.1:6379: Cannot assign requested address
So it could be better (and much faster) to just dump the database and then import it into the new one.
If you have a big database with millions of keys, you can use the SCAN command to select all the keys without blocking, unlike the dangerous KEYS command, which even the Redis authors do not recommend.
SCAN gives you the keys in "pages", one page at a time; the idea is to start at page 0 (formally, CURSOR 0) and then continue with the next page/cursor until you hit the end (the stop signal is getting CURSOR 0 back again).
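To make the cursor mechanics concrete, here is what the loop looks like when driven by hand (the cursor value and the output notes below are just illustrative):
redis-cli -n 0 scan 0 match '*' count 100
# first line of the reply: the next cursor, e.g. "3932"
# remaining lines: the keys of this page
redis-cli -n 0 scan 3932 match '*' count 100
# ...repeat, feeding back each returned cursor, until the reply cursor is "0" again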
You can use any popular language for this, such as Ruby or Scala. Here is a draft using Bash scripting:
#!/bin/bash -e
REDIS_HOST=localhost
PAGE_SIZE=10000
KEYS_TO_QUERY="*"
SOURCE_DB=0
TARGET_DB=1
TOTAL=0
while [[ "$CURSOR" != "0" ]]; do
CURSOR=${CURSOR:-0}
>&2 echo $TOTAL:$CURSOR
KEYS=$(redis-cli -h $REDIS_HOST -n $SOURCE_DB scan $CURSOR match "$KEYS_TO_QUERY" count $PAGE_SIZE)
unset CURSOR
for KEY in $KEYS; do
if [[ -z $CURSOR ]]; then
CURSOR=$KEY
else
TOTAL=$(($TOTAL + 1))
redis-cli -h $REDIS_HOST -n $SOURCE_DB move $KEY $TARGET_DB
fi
done
done
IMPORTANT: As usual, please do not copy and paste scripts without understanding what they do, so here are some details:
The while loop selects the keys page by page with the SCAN command and then runs the MOVE command for every key.
The SCAN command returns the next cursor on its first line, and the rest of the lines are the keys found. The while loop starts with the variable CURSOR undefined and then defines it in the first pass of the inner loop (this is a little trick so the loop stops at the next CURSOR 0, which signals the end of the scan).
PAGE_SIZE controls how big each SCAN query is. Lower values put very little load on the server but are slow; bigger values make the server "sweat" but are faster. The network is also a factor here, so try to find a sweet spot around 10000 or even 50000 (ironically, values of 1 or 2 may also stress the server, due to the network overhead wrapping each query).
KEYS_TO_QUERY: a pattern for the keys you want to query. For example, "*balance*" selects the keys whose names include "balance" (don't forget the quotes to avoid syntax errors). Alternatively, you can do the filtering on the script side: query all keys with "*" and add a Bash if conditional. This is slower, but it helps if you cannot find a pattern for your key selection.
REDIS_HOST: localhost by default; change it to any server you like (if you are using a custom port other than the default 6379, pass it to redis-cli with the -p option).
SOURCE_DB: the database ID you want to move the keys from (0 by default)
TARGET_DB: the database ID you want to move the keys to (1 by default)
You can also use this script to execute other commands or checks on the keys; just replace the MOVE command call with anything you may need.
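For instance (my own example, not part of the original script), printing each key's TTL instead of moving it would just mean swapping the MOVE line inside the for loop for something like:
redis-cli -h $REDIS_HOST -n $SOURCE_DB ttl $KEY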
NOTE: To move keys from one Redis server to another Redis server (not just between internal databases), you can use redis-utils-cli from NPM: https://www.npmjs.com/package/redis-utils-cli
I have hundreds of thousands of keys that start with html: that I need to clean up quickly. Instead of writing a Python script to do this, I was hoping for a simple command via redis-cli. I was surprised not to find one. I am assuming I can do something with xargs from the Linux terminal, but I also need to pass the auth password to Redis.
After some munging and experimentation, I found the following combination works perfectly to remove all the keys matching that prefix. It ran very fast, which is no surprise since Redis is in memory.
redis-cli -a <mypassword> --raw keys "html:*" | xargs redis-cli -a <mypassword> del
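A variant of the same idea that avoids the blocking KEYS command (my own addition, not part of the answer above) is to let redis-cli iterate with SCAN instead:
redis-cli -a <mypassword> --scan --pattern "html:*" | xargs redis-cli -a <mypassword> del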
I store the results of a database query (a list) in Redis to speed up data loading on the site. When a CMS user performs any operation (creates, deletes or edits an article), the keys in Redis are dropped and fresh data is loaded from the database.
But sometimes it happens that one or two of these operations do not drop their keys, and old data remains in Redis. The network was available, nothing was turned off. Why does this happen; are there any typical reasons I need to know about?
Do I need to lock the database so that there are no multiple connections? But Redis seems to be single-threaded. What am I doing wrong? The function for dropping keys is very simple:
function articlesRedisDrop($prefix)
{
    $keys = $redis->keys($prefix."*");
    if(count($keys) > 0)
    {
        foreach($keys as $key)
        {
            $redis->del($key);
        }
    }
}
My guess is that it is an atomicity problem: after $redis->keys($prefix."*") and before $redis->del($key), another connection added fresh data to Redis.
You can try combining the keys and del operations into a single Lua script.
-- KEYS[1] is the prefix; "%s*" builds the same glob pattern as $prefix."*" in the PHP code above
local keys = redis.call("keys", string.format("%s*", KEYS[1]))
for _, key in pairs(keys) do
    redis.call("del", key)
end
Then run the script with the EVAL command, passing the prefix as a parameter. If you run into performance problems with the KEYS command, you can try SCAN instead, or store all the prefixed keys in a set and then fetch and delete them from there.
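For example, assuming the script above is saved as drop_prefix.lua and the prefix is "article:" (both names are just placeholders), the invocation could look like this:
redis-cli EVAL "$(cat drop_prefix.lua)" 1 "article:"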
I'm trying to implement 2 behaviors in my system using hiredis and Redis:
1) Fetch all keys matching a pattern, delivered via a publish event rather than via the array returned by the SCAN command.
(my system works only with publish events, even for gets, so I need to stick to this behavior)
2) Delete all keys matching a pattern.
After reading the manuals, I understand that the SCAN command is my friend.
I have 2 approaches and I'm not sure of their pros/cons:
1) Use a Lua script that calls SCAN until the cursor comes back as 0, and publish an event / delete the key for each entry found.
2) Use a Lua script but return the cursor as the return value, and call the Lua script from the hiredis client with the new cursor until it returns 0.
Other ideas would also be welcome.
My database is not huge at all: no more than 500k entries, with keys/values of less than 100 bytes.
Thank you!
Option 1 is probably not ideal to run inside a Lua script, since it blocks all other requests from being executed. SCAN works best when you run it from your application, so Redis can still process other requests between iterations.
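As a rough sketch of the second, client-driven approach (using redis-cli here instead of hiredis purely for illustration; the pattern and channel name are made up):
cursor=0
while :; do
    reply=$(redis-cli scan "$cursor" match 'myprefix:*' count 1000)
    cursor=$(echo "$reply" | head -n 1)          # first line of the reply is the next cursor
    for key in $(echo "$reply" | tail -n +2); do # remaining lines are the keys of this page
        redis-cli publish keys-found "$key" > /dev/null
        redis-cli del "$key" > /dev/null
    done
    [ "$cursor" = "0" ] && break                 # cursor 0 means the scan is complete
done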
Situation:
I have a PostgreSQL database that is logging data from sensors in a field-deployed unit (let's call this the source database). The unit has very limited hard-disk space, meaning that, if left untouched, the data logging will fill up the disk where the database resides within a week. I have a (very limited) network link to the database (so I want to compress the dump file), and on the other side of said link I have another PostgreSQL database (let's call that the destination database) that has a lot of free space (let's just say, for argument's sake, that the source is very limited with regard to space and the destination is unlimited).
I need to take incremental backups of the source database, append the rows that have been added since the last backup to the destination database, and then clean out the added rows from the source database.
Now the source database might or might not have been cleaned out since the last backup was taken, so the destination database needs to import only the new rows in an automated (scripted) process, but pg_restore fails miserably when trying to restore from a dump that contains the same primary key numbers as the destination database.
So the question is:
What is the best way to restore only the rows from a source that are not already in the destination database?
The only solution I've come up with so far is to pg_dump the database and restore the dump into a new secondary database on the destination side with pg_restore, then use simple SQL to sort out which rows already exist in my main destination database. But it seems like there should be a better way...
(extra question: Am I completely wrong in using PostgreSQL in such an application? I'm open to suggestions for other data-collection alternatives...)
A good way to start would probably be to use the --inserts option to pg_dump. From the documentation (emphasis mine):
Dump data as INSERT commands (rather than COPY). This will make
restoration very slow; it is mainly useful for making dumps that can
be loaded into non-PostgreSQL databases. However, since this option
generates a separate command for each row, an error in reloading a row
causes only that row to be lost rather than the entire table contents.
Note that the restore might fail altogether if you have rearranged
column order. The --column-inserts option is safe against column order
changes, though even slower.
I don't have the means to test it right now with pg_restore, but this might be enough for your case.
You could also use the fact that since version 9.5, PostgreSQL provides ON CONFLICT DO ... for INSERT. Use a simple scripting language to add these to the dump and you should be fine. Unfortunately, I haven't found an option for pg_dump to add them automatically.
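A minimal sketch of that scripting step, assuming a table named topic and databases named source_db/destination_db (all placeholders), and assuming no text values contain embedded newlines ending in ");":
pg_dump --data-only --inserts -t topic source_db \
    | sed 's/);$/) ON CONFLICT DO NOTHING;/' \
    | psql -h destination_host destination_db
Note that ON CONFLICT DO NOTHING requires PostgreSQL 9.5 or later on the destination side.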
You might google "sporadically connected database synchronization" to see related solutions.
It's not a neatly solved problem as far as I know - there are some common work-arounds, but I am not aware of a database-centric out-of-the-box solution.
The most common way of dealing with this is to use a message bus to move events between your machines. For instance, if your "source database" is just a data store, with no other logic, you might get rid of it, and use a message bus to say "event x has occurred", and point the endpoint of that message bus at your "destination machine", which then writes that to your database.
You might consider Apache ActiveMQ or read "Patterns of enterprise integration".
#!/bin/sh
PSQL=/opt/postgres-9.5/bin/psql
TARGET_HOST=localhost
TARGET_DB=mystuff
TARGET_SCHEMA_IMPORT=copied
TARGET_SCHEMA_FINAL=final
SOURCE_HOST=192.168.0.101
SOURCE_DB=slurpert
SOURCE_SCHEMA=public
########
create_local_stuff()
{
${PSQL} -h ${TARGET_HOST} -U postgres ${TARGET_DB} <<OMG0
CREATE SCHEMA IF NOT EXISTS ${TARGET_SCHEMA_IMPORT};
CREATE SCHEMA IF NOT EXISTS ${TARGET_SCHEMA_FINAL};
CREATE TABLE IF NOT EXISTS ${TARGET_SCHEMA_FINAL}.topic
( topic_id INTEGER NOT NULL PRIMARY KEY
, topic_date TIMESTAMP WITH TIME ZONE
, topic_body text
);
CREATE TABLE IF NOT EXISTS ${TARGET_SCHEMA_IMPORT}.tmp_topic
( topic_id INTEGER NOT NULL PRIMARY KEY
, topic_date TIMESTAMP WITH TIME ZONE
, topic_body text
);
OMG0
}
########
find_highest()
{
${PSQL} -q -t -h ${TARGET_HOST} -U postgres ${TARGET_DB} <<OMG1
SELECT MAX(topic_id) FROM ${TARGET_SCHEMA_IMPORT}.tmp_topic;
OMG1
}
########
fetch_new_data()
{
watermark=${1-0}
echo ${watermark}
${PSQL} -h ${SOURCE_HOST} -U postgres ${SOURCE_DB} <<OMG2
\COPY (SELECT topic_id, topic_date, topic_body FROM ${SOURCE_SCHEMA}.topic WHERE topic_id >${watermark}) TO '/tmp/topic.dat';
OMG2
}
########
insert_new_data()
{
${PSQL} -h ${TARGET_HOST} -U postgres ${TARGET_DB} <<OMG3
DELETE FROM ${TARGET_SCHEMA_IMPORT}.tmp_topic WHERE 1=1;
COPY ${TARGET_SCHEMA_IMPORT}.tmp_topic(topic_id, topic_date, topic_body) FROM '/tmp/topic.dat';
INSERT INTO ${TARGET_SCHEMA_FINAL}.topic(topic_id, topic_date, topic_body)
SELECT topic_id, topic_date, topic_body
FROM ${TARGET_SCHEMA_IMPORT}.tmp_topic src
WHERE NOT EXISTS (
SELECT *
FROM ${TARGET_SCHEMA_FINAL}.topic nx
WHERE nx.topic_id = src.topic_id
);
OMG3
}
########
delete_below_watermark()
{
watermark=${1-0}
echo ${watermark}
${PSQL} -h ${SOURCE_HOST} -U postgres ${SOURCE_DB} <<OMG4
-- delete not yet activated; COUNT(*) instead
-- DELETE
SELECT COUNT(*)
FROM ${SOURCE_SCHEMA}.topic WHERE topic_id <= ${watermark}
;
OMG4
}
######## Main
#create_local_stuff
watermark="`find_highest`"
echo 'Highest:' ${watermark}
fetch_new_data ${watermark}
insert_new_data
echo 'Delete below:' ${watermark}
delete_below_watermark ${watermark}
# Eof
This is just an example. Some notes:
I assume a non-decreasing serial PK for the table; in most cases it could also be a timestamp
for simplicity, all the queries are run as user postgres, you might need to change this
the watermark method will guarantee that only new records will be transmitted, minimising bandwidth usage
the method is atomic: if the script crashes, nothing is lost
only one table is fetched here, but you could add more
because I'm paranoid, I use a different name for the staging table and put it into a separate schema
The whole script does two queries on the remote machine (one for the fetch, one for the delete); you could combine these,
but there is only one script (executed from the local = target machine) involved.
The DELETE is not yet active; it only does a count(*)
Is there any way to delete a set from a namespace (Aerospike) from aql or the CLI?
My set also contains LDTs.
Please suggest a way to delete the whole set, including the LDTs.
You can delete a set by using:
asinfo -v "set-config:context=namespace;id=namespace_name;set=set_name;set-delete=true;"
This link explains more about how the set is deleted:
http://www.aerospike.com/docs/operations/manage/sets/#deleting-a-set-in-a-namespace
There is a new and better way to do this as of Aerospike Server version 3.12.0, released in March 2017:
asinfo -v "truncate:namespace=namespace_name;set=set_name"
This previous command has been DEPRECATED, and works only up to Aerospike 3.12.1, released in April 2017:
asinfo -v "set-config:context=namespace;id=namespace_name;set=set_name;set-delete=true;"
The new command is better in several ways:
Can be issued during migrations
Can be issued while data is being written to the set
It is sufficient to run it on just one node of the cluster
I used it under those conditions (during migration, while data was being written, on 1 node) and it ran very quickly. A set with 30 million records was reduced to 1000 records in about 6 seconds. (Those 1000 records were presumably the ones written during those 6 seconds)
Details here
As of Aerospike 3.12, which was released in March 2017, the feature of deleting all data in a set or namespace is now supported in the database.
Please see the following documentation:
http://www.aerospike.com/docs/reference/info#truncate
It documents a command-line command that looks like this (with your own values in place of the angle-bracket placeholders):
asinfo -v "truncate:namespace=<namespace>;set=<set>;lut=<lut>"
It can also be truncated from the client APIs. Here is the Java documentation:
http://www.aerospike.com/apidocs/java/com/aerospike/client/AerospikeClient.html
Scroll down to the "truncate" method.
Please note that this feature can, optionally, take a time specification. This allows you to delete records, then start inserting new records. As timestamps are used, please make certain your server clocks are well synchronized.
You can also delete a set with the Java client as follows:
(1) Use the client "execute" method, which applies a UDF to all queried rows:
AerospikeClient.execute(WritePolicy policy, Statement statement, String packageName, String functionName, Value... functionArgs) throws AerospikeException
(2) Define the statement to include all rows of the given set:
Statement statement = new Statement();
statement.setNamespace("my_namespace");
statement.setSetName("my_set");
(3) Specify a UDF that deletes the given record:
function delete_rec(rec)
aerospike:remove(rec)
end
(4) Call the method:
ExecuteTask task = AerospikeClient.execute(null, statement, "myUdf", "delete_rec");
task.waitTillComplete(timeout);
Is it performant? Unclear, but my guess is that asinfo is better. Still, this approach is very convenient for testing/debugging/setup.
You can't delete a set, but you can delete all records that exist in the set by scanning all the records and deleting them one by one. Sample C# code that will do the trick:
AerospikeClient.ScanAll(null, AerospikeNameSpace, category, DeleteAllRecordsCallBack);
Here DeleteAllRecordsCallBack is a callback function in which you can delete records one by one. This callback gets called for every record.
private void DeleteAllRecordsCallBack(Key key, Record record)
{
    AerospikeClient.Delete(null, key);
}
Using AQL:
TRUNCATE namespace_name.set_name
https://www.aerospike.com/docs/tools/aql/aql-help.html
Take care that you will need to restart your nodes (one by one, to avoid downtime), because otherwise the bin names used by the deleted set are not freed.
What I mean is that the maximum number of bin names in Aerospike is 32,767. If you delete and recreate your set several times, and it creates for example 10,000 bin names each time, you will not be able to create more than 2,767 bins the 4th time, because the bin-name counter is kept in RAM.
If you restart your cluster, it will be released.
You can't dynamically delete a set from a namespace like "DROP TABLE" in an RDBMS.
The following command using asinfo only lazily deletes the data inside a set:
asinfo -v "set-config:context=namespace;id=namespace_name;set=set_name;set-delete=true;"
There is a post about it on the Aerospike discuss site, and I haven't seen progress on this issue yet.
In our production experience, the special Java deletion utility works quite well, even without the recently introduced durable deletes. You build it from source, put it somewhere near the cluster, and run it this way:
java -jar delete-set-1.0.0-jar-with-dependencies.jar -h <aerospike_host> -p 3000 -s <set_to_delete> -n <namespace_name>
In our prod environment cold restarts are quite rare events, basically only when Aerospike crashes. And the data flow is quite high, so defragmentation kicks in early and we don't even have the zombie-record issue.
BTW, the asinfo way mentioned earlier didn't work for us. The records stayed there for a couple of days, so we use the delete-set utility, which worked right away.