Redis Cluster too many slots info

Recently we hit a strange error in production. We have a Redis cluster with 3 masters and 3 slaves; 77:7000, 99:7001, and 13:7002 are the masters. When I use redis-cli to connect to 77:7000 and run the "cluster nodes" command, the output looks like the following:
It seems that 77:7000 is importing lots of slots outside its range. The output of "cluster info" looks like:
From 99:7001 and 13:7002, however, the output of "cluster info" is fine. When we try to query keys from 77:7000, it tells us the cluster is down, while keys on 99:7001 and 13:7002 can still be queried.
My solution was to fail over 77:7000 to its slave manually, after which everything was fine. I then failed back to 77:7000, ran "cluster info" again, and the output reported the cluster state as ok.
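The failover itself was done with the usual command on the slave, something along these lines (the addresses are placeholders):
# run on the slave of 77:7000; it promotes itself to master after agreeing with the cluster
redis-cli -h <slave-host> -p <slave-port> CLUSTER FAILOVER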
Does anyone have any idea what caused this weird problem? Thanks a lot!

Related

Redis - Total Keys in Cluster

I want to check the total number of keys in a Redis Cluster.
Is there a direct command to get this, or do I have to check with the INFO command on each instance/node?
There is no direct way.
You can do the following with the CLI, though:
redis-cli --cluster call one-cluster-node-ip-address:the-port DBSIZE
And then sum the results.
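For example, a small loop over the master nodes only (the addresses are illustrative; running DBSIZE against the replicas as well would double-count keys):
total=0
for node in master1:7000 master2:7001 master3:7002; do  # list your cluster's master nodes here
  keys=$(redis-cli -h "${node%:*}" -p "${node#*:}" DBSIZE)
  total=$((total + keys))
done
echo "total keys: $total"
Each master reports only the keys it owns, so summing DBSIZE over the masters gives the cluster total.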
Alternatively, there's RedisGears with which you can do the following to get the same result:
redis> RG.PYEXECUTE "GB().count().run()"

Moving data from one node to other node in same cluster in Apache Ignite

In a baseline cluster of 8 nodes, we have data in the partitioned template without backups. Assume I have on average 28K entries on each of the 8 nodes in SampleTable (a cache). Total data = 28K * 8 = 224K entries.
CREATE TABLE SampleTable(....) WITH "template=partitioned"
Now I want to shut down one node, and before shutting it down I want to move the data from the 8th node to the other nodes, so approximately 32K entries per node (32K * 7 = 224K) across the remaining 7 nodes. Can I move data from any node to the other nodes?
How can I move all the data from one node to the other nodes in the cluster before shutting that node down, keeping the data balanced and distributed across the remaining 7 nodes?
I created the table (SampleTable) with a CREATE statement and inserted data with INSERT statements over a JDBC connection.
Persistence is enabled.
I think that the most straightforward way is using backups.
In any case, if you need to avoid data loss, using backups (and/or persistence) is a must.
As a simple workaround you can try the following steps:
Scan the local data on the node you want to shut down using a ScanQuery, and store that data in an external database.
After that, shut down the node and exclude it from the baseline.
Upload the data from the external database back into the cluster.
The approach described below will only work if backups are configured in the cluster (> 0).
To remove a node from the baseline topology and rebalance data across the remaining 7 nodes, you can use the Cluster Activation Tool (control.sh):
Stop the node you want to remove from the topology.
Wait until the node has stopped. The message "Ignite node stopped OK" should appear in the logs.
Check that the node is offline:
$IGNITE_HOME/bin/control.sh --baseline
Cluster state: active
Current topology version: 8
Baseline nodes:
ConsistentID=<node1_id>, STATE=ONLINE
ConsistentID=<node2_id>, STATE=ONLINE
...
ConsistentID=<node8_id>, STATE=OFFLINE
--------------------------------------------------------------------------------
Number of baseline nodes: 8
Other nodes not found.
Remove the node from baseline topology:
$IGNITE_HOME/bin/control.sh --baseline remove <node8_id>

redis keys not getting deleted after expire time

Keys set with an expire are not getting cleared after the expire time. For example, in redis-cli:
> set hi bye
>expire hi 10
>ttl hi #=> 9
#(after 10 seconds)
>ttl hi #=> 0
>get hi #=> bye
The Redis version is 2.8.4. This is a master node in a Sentinel setup with a single slave. Persistence is turned off. Kindly help me debug this issue.
If there is any large data in Redis, there can be problems while the slave node syncs from the master, which can lead to the TTLs of that data not being synced to the slave, so the data on the slave never gets deleted.
You can use a script to delete the specific data on the master node; the slave nodes will then delete the data whose keys can no longer be found on the master.
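For example, a minimal cleanup sketch run against the master, assuming the affected keys share a pattern such as stale:* (both the host and the pattern are illustrative):
# SCAN-based deletion avoids blocking the master the way KEYS would on a large dataset
redis-cli -h <master-host> --scan --pattern 'stale:*' \
  | xargs -L 100 redis-cli -h <master-host> DEL
The DEL commands are propagated through replication, so the slave removes the same keys.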
Another suggestion: update the redis.conf file to set notify-keyspace-events Ex, then restart the Redis server with redis-server /usr/local/etc/redis.conf.
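Once keyspace notifications are enabled, you can confirm that expirations are actually firing by listening for expired events (database 0 assumed):
# enable expired-key events at runtime (same effect as the redis.conf change above)
redis-cli config set notify-keyspace-events Ex
# prints a message every time a key in database 0 expires
redis-cli psubscribe '__keyevent@0__:expired'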

Aerospike Data Recovery Other Cluster Or Local HDD

I am using a cluster with the storage-engine device configuration.
When I restart one node, is its data recovered from the other nodes in the cluster or from the local HDD?
When I restart the whole cluster, where is the data restored from?
I want to understand the whole process.
Version: Community Edition. I have 3 nodes.
storage-engine device {
file /opt/aerospike/datafile
filesize 1G
data-in-memory true
}
This is my configuration.
I stop node1 -> the cluster has 2 nodes -> I modify data (including data that was previously on node1).
I stop node2 and node3; after the whole cluster is stopped, I start node1 -> node2 -> node3.
Will this result in dirty (stale) data?
Can I assume node3 has all the data?
Let me try to answer based on what I could understand from your question. Correct me if my understanding is wrong.
You have a file-backed namespace in Aerospike. The data is persisted to the file and is also kept in memory (because of the 'data-in-memory true' setting). The default replication factor is 2, so in a stable state your data resides on 2 nodes.
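For reference, replication-factor is a namespace-level setting; wrapping the storage-engine block from the question in a namespace stanza would look roughly like this (the namespace name here is illustrative):
namespace test {
    replication-factor 2
    storage-engine device {
        file /opt/aerospike/datafile
        filesize 1G
        data-in-memory true
    }
}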
When you shut down the 3 nodes one by one, the unchanged data remains in the persistence files, so when the nodes are restarted that data comes back from the files.
The data that changed while node1 was down (but node2 & node3 were up) is the tricky part. When node1 is down, a copy of its data is still on one of node2 & node3 (because of replication factor 2). So when you update a record, Aerospike does something called duplicate resolution, which fetches the latest version of the record and updates it on the new master node, where it is then persisted.

will mysql dump break replication?

I have 2 databases: X ("production") and Y ("testing").
Database X should be identical to Y in structure. However, they are not, because I made many changes to production.
Now I need to somehow export X and import it into Y without breaking any replication.
I am thinking of doing a mysqldump, but I don't want to cause any issues with replication, which is why I am asking this question to confirm.
Here are the steps that I want to follow:
Back up production (i.e. mysqldump -u root -p --triggers --routines X > c:/Y.sql).
Restore it (i.e. mysql -u root -p Y < c:\Y.sql).
Will this cause any issues to the replication?
I believe the restore will execute everything and write it to the binary log, so the slave will be able to see it and replicate it with no problems.
Is what I am trying to do correct? Will it cause any replication issues?
Thanks.
Yes, backing up from X and restoring to Y is a normal operation. We often call this "reinitializing the replica."
This does interrupt replication. There's no reliable way to restore the data at the same time as letting the replica continue to apply changes, because the changes the replica is processing are not in sync with the snapshot of data represented by the backup. You could overwrite changed data, or miss changes, and this would make the replica totally out of sync.
So you have to stop replication on the replica while you restore.
Here are the steps for a typical replica reinitialization (a command-level sketch follows this list):
mysqldump from the master, with the --master-data option so the dump includes the binary log position that was current at the moment of the dump.
Stop replication on the replica.
Restore the dump on the replica.
Use CHANGE MASTER to alter what binary log coordinates the replica starts at. Use the coordinates that were saved in the dump.
Start replication on the replica.
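A rough command-level sketch of those steps, assuming the production database is X; the dump file name and the binary log coordinates in CHANGE MASTER are placeholders you would take from the head of the dump:
# 1. on the master: dump with the binlog coordinates recorded in the dump file
mysqldump -u root -p --master-data=2 --triggers --routines X > X.sql
# 2-3. on the replica: stop replication, then restore the dump
mysql -u root -p -e "STOP SLAVE;"
mysql -u root -p X < X.sql
# 4-5. point the replica at the coordinates saved near the top of X.sql, then restart replication
mysql -u root -p -e "CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=120; START SLAVE;"
With --master-data=2 the CHANGE MASTER statement is written into the dump as a comment, so you can copy the coordinates from there.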
Re your comment:
Okay, I understand what you need better now.
Yes, there's an option for mysqldump --no-data so the output only includes the CREATE TABLE and other DDL, but no INSERT statements with data. Then you can import that to a separate database on the same server. And you're right, by default DDL statements are added to the binary log, so any replication replicas will automatically run the same statements.
You can even do the export & import in just two steps like this:
$ mysqladmin create newdatabase
$ mysqldump --no-data olddatabase | mysql newdatabase