Redis keys not getting deleted after expire time

Keys set with an expire are not getting cleared after the expire time. For example, in redis-cli:
> set hi bye
> expire hi 10
> ttl hi #=> 9
# (after 10 seconds)
> ttl hi #=> 0
> get hi #=> bye
The Redis version is 2.8.4. This is the master node in a Sentinel setup with a single slave. Persistence is turned off. Kindly help me with debugging this issue.
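(A quick check, not from the original post: confirm which node you are actually talking to and compare the TTL on the master and the slave; hosts and ports below are placeholders.)
redis-cli -h <master-host> -p 6379 info replication | grep role
redis-cli -h <master-host> -p 6379 ttl hi
redis-cli -h <slave-host> -p 6380 ttl hi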

If some of the data in Redis is very large, the slave can run into problems while syncing from the master; the TTLs of those keys may not be propagated to the slave, so the keys on the slave never get deleted.
You can use a script to delete the affected keys on the master, and the slave will then delete the keys that can no longer be found on the master.
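A minimal sketch of such a cleanup script, assuming the affected keys share a pattern like session:* (the pattern, host and port are placeholders); run it against the master only, and the resulting DELs are replicated to the slave:
# scan the master for matching keys and delete them in batches of 100
redis-cli -h <master-host> -p 6379 --scan --pattern 'session:*' \
  | xargs -r -L 100 redis-cli -h <master-host> -p 6379 del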

Update the redis.conf file to set notify-keyspace-events Ex, then restart the Redis server with redis-server /usr/local/etc/redis.conf
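If you would rather not restart, the same setting can also be applied at runtime and verified by listening for expiration events (database 0 assumed here):
redis-cli config set notify-keyspace-events Ex
redis-cli psubscribe '__keyevent@0__:expired'
# in another terminal:
redis-cli set hi bye ex 10
# after ~10 seconds the subscriber should print the expired key "hi"
Note that CONFIG SET is not persisted across a restart, which is why the answer above edits redis.conf.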

Related

Redis Cluster too many slots info

Recently, we hit a strange error in production. We have a Redis cluster with 3 masters and 3 slaves; 77:7000, 99:7001 and 13:7002 are the masters. When I use redis-cli to connect to 77:7000 and run the "cluster nodes" command, the output looks like below:
It seems that 77:7000 is importing lots of slots outside its range. The output of "cluster info" looks like:
From 99:7001 and 13:7002, the output of "cluster info" is all ok. When we try to query keys from 77:7000, it tells us that the cluster is down, while keys can still be queried from 99:7001 and 13:7002.
My solution was to fail 77:7000 over to its slave manually, after which everything was ok. I then failed back to 77:7000 and ran "cluster info" again, and the output was cluster ok.
Any idea what caused this weird problem? Thanks a lot!
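For reference, a manual failover like the one described is triggered from the slave of 77:7000 (host and port are placeholders), and the cluster state can be re-checked afterwards:
redis-cli -h <slave-of-7000> -p <slave-port> cluster failover
redis-cli -h 77 -p 7000 cluster info | grep cluster_state
redis-cli -h 77 -p 7000 cluster nodes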

Moving data from one node to another node in the same cluster in Apache Ignite

In a baseline cluster of 8 nodes, we have data in the partitioned template without backups. Assume I have on average 28K entries per node in SampleTable (a cache) across all 8 nodes, so total data = 28K * 8 = 224K entries.
CREATE TABLE SampleTable(....) WITH "template=partitioned"
Now I want to shut down one node, and before shutting it down I want to move the data from the 8th node to the other nodes, so approximately 32K entries per node (32K * 7 = 224K) across 7 nodes. Can I move data from any node to the other nodes?
How can I move all the data from one node to the other nodes in the cluster before shutting that node down, keeping the data balanced and distributed across the remaining 7 nodes?
I created the table (SampleTable) using a CREATE statement and inserted data using INSERT statements (over a JDBC connection).
Persistence is enabled.
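(For context, a table like this can be created and loaded over the JDBC thin client, e.g. with the sqlline shell shipped with Ignite; the column list and host below are purely illustrative:)
$IGNITE_HOME/bin/sqlline.sh -u jdbc:ignite:thin://127.0.0.1/
> CREATE TABLE SampleTable(id INT PRIMARY KEY, val VARCHAR) WITH "template=partitioned";
> INSERT INTO SampleTable(id, val) VALUES (1, 'example');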
I think the most straightforward way is to use backups. In any case, if you need to avoid data loss, using backups (and/or persistence) is a must.
As a simple workaround you can try the following steps:
Scan the local data on the node you want to shut down using a ScanQuery and store it in a database.
After that, shut down the node and exclude it from the baseline.
Upload the data from the database back into the cluster.
The approach described below will work only if backups are configured in the cluster (> 0).
To remove a node from the Baseline Topology and rebalance data across the remaining 7 nodes, you can use the Cluster Activation Tool:
Stop the node you want to remove from the topology.
Wait until the node is stopped; the message "Ignite node stopped OK" should appear in the logs.
Check that the node is offline:
$IGNITE_HOME/bin/control.sh --baseline
Cluster state: active
Current topology version: 8
Baseline nodes:
ConsistentID=<node1_id>, STATE=ONLINE
ConsistentID=<node2_id>, STATE=ONLINE
...
ConsistentID=<node8_id>, STATE=OFFLINE
--------------------------------------------------------------------------------
Number of baseline nodes: 8
Other nodes not found.
Remove the node from baseline topology:
$IGNITE_HOME/bin/control.sh --baseline remove <node8_id>
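With persistence enabled and backups > 0, removing the node from the baseline should trigger rebalancing onto the remaining 7 nodes; the result can be verified with the same tool:
$IGNITE_HOME/bin/control.sh --baseline
...
Number of baseline nodes: 7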

Redis doesn't update dump.rdb any more

I've been using Redis on a Windows server for the last 10 months without any issue, but this morning I checked my website and saw that it was completely empty!
After a few minutes of investigation I realised that the Redis database was empty.
Luckily I use Redis as a caching solution, so I still have all the data in an MS SQL database and I've managed to recover the content of my website.
But I realised that Redis has stopped saving data into dump.rdb. The file was last updated on 20.11.2015 at 11:35.
The Redis config file has:
save 900 1
save 300 10
save 60 10000
and just by reloading everything from MS SQL this morning I had more than 15,000 writes. So the file should have been updated, right?
I ran redis-check-dump dump.rdb and got:
Processed 7924 valid opcodes
I even ran the SAVE command manually and got:
OK <2.12>
But the file size and modification date of dump.rdb are still the same: 20.11.2015.
I just want to highlight that between 20.11.2015 and today I haven't changed anything in the Redis configuration or restarted the server.
Any idea?
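One way to narrow this down (not from the original post) is to ask Redis directly whether background saves are failing and where it thinks the dump file lives; these are all standard commands:
redis-cli config get dir
redis-cli config get dbfilename
redis-cli lastsave
redis-cli info persistence
In the INFO output, rdb_last_bgsave_status and rdb_changes_since_last_save show whether the last background save failed and how many writes are pending.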
It's not a real answer, but at least I've managed to make Redis start dumping data to disk again.
Using the console I set a new dbfilename, and now Redis is dumping data to disk again.
It would be great if someone had a clue why it stopped dumping data to the original dump file.
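For completeness, the workaround described above looks roughly like this (the new filename is arbitrary); note that CONFIG SET does not survive a restart unless redis.conf is updated as well:
redis-cli config set dbfilename dump2.rdb
redis-cli bgsave
redis-cli lastsave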

Aerospike data recovery: other cluster nodes or local HDD

I use a cluster
configured with storage-engine device.
When I restart one node, will its data be recovered from the other nodes in the cluster or from the local HDD?
When I restart the whole cluster, where is the data restored from?
I want to understand the whole process.
version: Community Edition
I have 3 nodes;
storage-engine device {
file /opt/aerospike/datafile
filesize 1G
data-in-memory true
}
This is my config.
I stop node1 ---> the cluster has 2 nodes ---> I modify data (data that was previously on node1).
Then I stop node2 and node3; after the whole cluster is stopped, I start node1 ---> node2 ---> node3.
Will this produce stale (dirty) data?
Can I assume node3 has all the data?
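(One practical check, not from the original post: before stopping the next node, confirm that migrations have finished, e.g. with asadm from the Aerospike tools package:)
asadm -e "info"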
Let me try to answer from what I could get from your question; correct me if my understanding is wrong.
You have a file-backed namespace in Aerospike. The data is persisted to the file and is also kept in memory (because of the 'data-in-memory true' setting). The default replication factor is 2, so in a stable state your data resides on 2 nodes.
When you shut down the 3 nodes one by one, the unchanged data will be in the persistent files, so when the nodes are restarted the data will come back from those files.
The data that changed during the shutdown (node1 down but node2 & node3 up) is the tricky part. When node1 is down, a copy of its data is on one of node2 & node3 (because of replication factor 2). So when you update a record, Aerospike performs what is called duplicate resolution, which fetches the latest version of the record and writes the update on the new master node, where it is persisted.
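For illustration, the namespace stanza with the replication factor made explicit might look like this (the namespace name is a placeholder; replication-factor defaults to 2):
namespace test {
    replication-factor 2
    storage-engine device {
        file /opt/aerospike/datafile
        filesize 1G
        data-in-memory true
    }
}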

Why does my solr slave index keep growing?

I have a 5-core solr 1.4 master that is replicated to another 5-core solr using solr replication as described here. All writes are done against the master and replicated to the slave intermittently. This is done using the following sequence:
Commit on each master core
Replicate on each slave core
Optimize on each slave core
Commit on each slave core
The problem I am having is that the slave seems to be keeping around old index files and taking up ever more disk space. For example, after 3 replications, the master core data directory looks like this:
$ du -sh *
145M index
But the data directory on the slave of the same core looks like this:
$ du -sh *
300M index
144M index.20100621042048
145M index.20100629035801
4.0K index.properties
4.0K replication.properties
Here's the contents of index.properties:
#index properties
#Tue Jun 29 15:58:13 CDT 2010
index=index.20100629035801
And replication.properties:
#Replication details
#Tue Jun 29 15:58:13 CDT 2010
replicationFailedAtList=1277155032914
previousCycleTimeInSeconds=12
timesFailed=1
indexReplicatedAtList=1277845093709,1277155253911,1277155032914
indexReplicatedAt=1277845093709
replicationFailedAt=1277155032914
lastCycleBytesDownloaded=150616512
timesIndexReplicated=3
The solrconfig.xml for this slave contains the default deletion policy:
[...]
<mainIndex>
<unlockOnStartup>false</unlockOnStartup>
<reopenReaders>true</reopenReaders>
<deletionPolicy class="solr.SolrDeletionPolicy">
<str name="maxCommitsToKeep">1</str>
<str name="maxOptimizedCommitsToKeep">0</str>
</deletionPolicy>
</mainIndex>
[...]
What am I missing?
It is useless to commit and optimize on the slaves. Since all the write operations are done on the master, it is the only place where those operations should occur.
This may be the cause of the problem: since you do an additional commit and optimize on the slaves, more commit points are kept on the slaves. But this is only a guess; it would be easier to understand what is happening with your full solrconfig.xml from both the master and the slaves.
The optimize that is done on the slave is causing the index to double in size: on optimize, separate index segments are created to rewrite the original index into the number of segments specified for the optimize (default is 1).
Best practice is to optimize only once in a while rather than on every event (run a cron job or something), and to optimize only on the master, not on the slave; the slaves will receive the new segments through replication.
You shouldn't commit on the slave either; the index reload after replication takes care of making new documents available on the slave.
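In practice that means issuing commit and optimize only against the master and letting the slave pull the result, e.g. with the Solr 1.4 HTTP APIs (host and core name are placeholders):
curl 'http://master:8983/solr/core0/update?commit=true'
curl 'http://master:8983/solr/core0/update?optimize=true'
curl 'http://slave:8983/solr/core0/replication?command=fetchindex'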
I determined that the extra index.* directories seem to be left behind when I replicate after completely reloading the master. What I mean by "completely reloading" is stopping the master, deleting everything under [core]/data/*, restarting (at which point solr creates a new index), indexing all of our docs, then replicating.
Based on some additional testing, I have found that it seems to be safe to remove the other index* directories (other than the one specified in [core]/data/index.properties). If I'm not comfortable with that workaround I may decide to empty the slave index (stop; delete data/*; start) before replicating the first time after completely reloading the master.
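A rough sketch of that cleanup on the slave (paths are illustrative); it keeps only the directory named in index.properties and removes the rest:
cd /path/to/solr/core0/data
current=$(grep '^index=' index.properties | cut -d= -f2)
# remove every index* directory except the one currently in use
for d in index*/; do
  [ "${d%/}" != "$current" ] && rm -rf "$d"
done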