Is there any way to check, from an active session, whether a Redis server has persistence (e.g. RDB persistence) enabled? The INFO command does contain a section on persistence, but it is not clear to me whether the values indicate that persistence is turned on.
There are two types of persistence: RDB and AOF.
To check whether RDB persistence is enabled:
redis-cli CONFIG GET save
RDB persistence is enabled if the command returns something like this:
1) "save"
2) "900 1 300 10 60 10000"
RDB persistence is disabled if you get an empty result:
1) "save"
2) ""
To check whether AOF persistence is enabled, invoke:
redis-cli CONFIG GET appendonly
If you get yes, it's enabled; if you get no, it's disabled.
INFO is one way, but you can also use CONFIG GET for save and appendonly to check if persistence is enabled.
As for using INFO's output to understand your persistence settings, this is a little trickier. For AOF, simply check the value of aof_enabled under the Persistence section of INFO's output: 0 means that it's disabled. RDB files, on the other hand, are used both for snapshotting and for backups, so INFO is less helpful in that context. If you know that no SAVE/BGSAVE commands have been issued to your instances, periodic changes to the value of rdb_last_save_time will indicate that the save configuration directive is in use.
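As a quick illustration (a sketch; both field names appear in the Persistence section of INFO), you can pull just the relevant fields from a shell:
redis-cli INFO persistence | grep -E 'aof_enabled|rdb_last_save_time'
# aof_enabled:0 means AOF is off; watch rdb_last_save_time across runs to spot automatic RDB saves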
On Ubuntu 16.04, interacting with a local Redis instance via redis-cli, working with a Node Hubot script which uses Redis as its primary data store.
When I type KEYS * I get a single key, hubot:storage.
So I run FLUSHALL and get an OK response. But if Hubot is running, or as soon as it starts up, it restores the value of that key immediately, so I can never delete it.
I've used the INFO command to try to see if it is persisting to some other Redis instance, and I've cleared all backup files from /var/redis. Basically, I can't figure out where this data is being stored so that it keeps getting restored.
Any advice regarding how I could clear this out or where Hubot may be caching this?
It seems to be related to this code: https://github.com/hubotio/hubot-redis-brain/blob/master/src/redis-brain.js; specifically, the chunk at line 49 is what gets called before each restore.
Steps
Stop hubot
Flush redis (important that this is done while hubot is NOT running)
Start hubot
The reasoning is that hubot keeps an in-memory representation of the brain and writes it out to redis at intervals. A nicer solution, which would help during script development, would be a command that can empty the brain and save that state, but I can't see an obvious API for that in either robot.brain or hubot-redis-brain.
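As a concrete sketch of those steps (assuming a Hubot project with the standard generated layout, where bin/hubot is the start script; adapt the start/stop commands to however you actually run and supervise your bot):
# 1. Stop hubot first, e.g. kill the process or stop its service unit
# 2. Flush redis while hubot is down, so it cannot write its in-memory brain back
redis-cli FLUSHALL
# 3. Start hubot again; it will now load an empty brain from redis
bin/hubot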
Recently we decided to add a cache layer to our Mule APIs, and Redis came into scope.
We are on Mule 3.8.0 and Redis connector 4.0.0, and we ran into the following issues while configuring it:
How do we separate our keys by Redis DB? This is not mentioned in the documentation. The only setting that seems close is 'Default Partition Name', but whatever value we put there seems to have no effect: db0 always ends up containing all the keys, so we can't really have "dev", "qa" and "test" key sets in the same Redis cluster.
The Redis connector documentation has an example like this:
<redis:sorted-set-select-range-by-index config-ref="Redis_configuration" key="my_key" start="0" end="-1" />
However, when we tried the same thing it complains that the 'end' value should be >= 0, so the example is not usable as shown.
How do we configure a connection pool properly with the Redis connector configuration? This is not mentioned in the documentation either. The only attribute is 'Pool Config Reference'; I tried to put a Spring bean reference to my own JedisPoolConfig there, but it seems to have no effect, and the number of connections remains the same no matter what values I put in that bean.
Thanks in advance if someone could help with these issues.
James
How to separate our keys by Redis DB?
You can use Redis in cluster mode with data sharding (http://redis.io/topics/cluster-tutorial).
I don't think you need special configuration in Mule.
I think you are mixing up the partition term in Mule with the partition concept in Redis.
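One note on that terminology, since it may explain what you're seeing: Redis logical databases (selected with SELECT, or redis-cli -n) only exist on a standalone, non-clustered server; in cluster mode only db0 is available, which is why separating "dev", "qa" and "test" inside one cluster usually comes down to key prefixes. A minimal illustration from the CLI (key names are just examples):
# On a standalone Redis you can target a logical database with -n:
redis-cli -n 1 SET mykey "value"
redis-cli -n 1 KEYS '*'
# In cluster mode only db0 exists, so environments are typically separated by key
# prefixes instead, e.g. dev:mykey, qa:mykey, test:mykey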
Regards,
In Knox config file in Ambari we have defined:
<url>http://{{namenode_host}}:{{namenode_http_port}}/webhdfs</url>
The problem is we have 2 namenodes, one active and one passive for high availability. Our active namenode01 failed so namenode02 became active.
This caused problems for a lot of scripts, as they were hardcoded to point to namenode01. So we used a terminal command (not Ambari) to fail over from namenode02 back to namenode01.
Now, the macro {{namenode_host}} is defined as namenode02 and not namenode01.
So, where is {{namenode_host}} defined?
Or do we need to fail over from namenode01 to namenode02, then fail over again to namenode01 using Ambari, so that the macro gets updated?
If we need to fail over the namenode using Ambari, I'm assuming we need to select the "Restart" option? There isn't a direct failover command.
See issue here:
https://issues.apache.org/jira/browse/AMBARI-12763
This was committed to Ambari to support HA mode for Knox. However, if you're still looking for the location, take a look at the file that's edited in the patch. That file is where the macros are set; you'll have to find it on your local machine, though.
It should be something like params_linux.py.
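If it helps, you can usually locate that file by grepping the Ambari stack scripts for the macro name (the paths below are typical for an Ambari-managed HDP install and may differ on your machine):
# On the Ambari server host:
grep -rl "namenode_host" /var/lib/ambari-server/resources/ 2>/dev/null
# On an Ambari agent host (cached copies of the stack scripts):
grep -rl "namenode_host" /var/lib/ambari-agent/cache/ 2>/dev/null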
I changed domain.xml while the server was running, and it seems the old configuration was later restored. Is it not allowed to change the file directly while the server is running? Does one need to use the admin console in this case?
With dynamic configuration, changes take effect while the DAS or instance is running. The following configuration changes do not require restart:
1) Adding or deleting add-on components
2) Adding or removing JDBC, JMS, and connector resources and pools (Exception: Some connection pool properties affect applications.)
3) Changing a system property that is not referenced by a JVM option or a port
4) Adding file realm users
5) Changing logging levels
6) Enabling and disabling monitoring
7) Changing monitoring levels for modules
8) Enabling and disabling resources and applications
9) Deploying, undeploying, and redeploying applications
One is not allowed to edit this configuration file manually at all:
http://docs.oracle.com/cd/E19798-01/821-1751/gjjrl/index.html
Note – Changes are automatically applied to the appropriate configuration file. Do not edit the configuration files directly. Manual editing is prone to error and can have unexpected results.
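In practice that means going through the admin console or the asadmin CLI, which updates domain.xml for you. For example, items 8 and 9 in the list above correspond to asadmin commands like the following (myapp and myapp.war are placeholder names):
# Deploy or redeploy an application without restarting the server:
asadmin deploy --force=true myapp.war
# Disable and re-enable a deployed application dynamically:
asadmin disable myapp
asadmin enable myapp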
When I get my ElasticSearch server settings via
curl -XGET localhost:9200/_cluster/settings
I see persistent and transient settings.
{
"persistent": {
"cluster.routing.allocation.cluster_concurrent_rebalance": "0",
"threadpool.index.size": "20",
"threadpool.search.size": "30",
"cluster.routing.allocation.disable_allocation": "false",
"threadpool.bulk.size": "40"
},
"transient": {}
}
If I set a persistent setting, it doesn't save it to my config/elasticsearch.yml file. So my question is: when my server restarts, how does it know what my persistent settings are?
Don't tell me not to worry about it, because I almost lost my entire cluster's worth of data when it picked up all the settings in my config file after it restarted, NOT the persistent settings shown above :)
Persistent settings are stored on each master-eligible node in the global cluster state file, which can be found in the Elasticsearch data directory: data/CLUSTER_NAME/nodes/N/_state, where CLUSTER_NAME is the name of the cluster and N is the node number (0 if this is the only node on this machine). The file name has the following format: global-NNN where NNN is the version of the cluster state.
Besides persistent settings this file may contain other global metadata such as index templates. By default the global cluster state file is stored in the binary SMILE format. For debugging purposes, if you want to see what's actually stored in this file, you can change the format of this file to JSON by adding the following line to the elasticsearch.yml file:
format: json
Every time cluster state changes, all master-eligible nodes store the new version of the file, so during cluster restart the node that starts first and elects itself as a master will have the newest version of the cluster state. What you are describing could be possible if you updated the settings when one of your master-eligible nodes was not part of the cluster (and therefore couldn't store the latest version with your settings) and after the restart this node became the cluster master and propagated its obsolete settings to all other nodes.
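For completeness, persistent settings are written through the cluster settings API rather than elasticsearch.yml. A minimal example (the setting and value are only illustrative; on a threadpool-era version like this one, no Content-Type header is needed):
curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": "2"
  }
}'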