Where does ElasticSearch store persistent settings?

When I get my ElasticSearch server settings via
curl -XGET localhost:9200/_cluster/settings
I see persistent and transient settings.
{
  "persistent": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": "0",
    "threadpool.index.size": "20",
    "threadpool.search.size": "30",
    "cluster.routing.allocation.disable_allocation": "false",
    "threadpool.bulk.size": "40"
  },
  "transient": {}
}
When I set a persistent setting, it isn't saved to my config/elasticsearch.yml file. So my question is: when my server restarts, how does it know what my persistent settings are?
Please don't tell me not to worry about it; I almost lost my entire cluster's worth of data because after a restart it picked up all the settings in my config file, NOT the persistent settings shown above :)
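For reference, a persistent setting is applied through the cluster settings API, roughly like this (the setting and value here are just illustrative, and this curl form omits the Content-Type header that newer Elasticsearch versions require):

curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": "2"
  }
}'
# the response echoes back the accepted persistent/transient settings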

Persistent settings are stored on each master-eligible node in the global cluster state file, which can be found in the Elasticsearch data directory: data/CLUSTER_NAME/nodes/N/_state, where CLUSTER_NAME is the name of the cluster and N is the node number (0 if this is the only node on this machine). The file name has the following format: global-NNN where NNN is the version of the cluster state.
Besides persistent settings this file may contain other global metadata such as index templates. By default the global cluster state file is stored in the binary SMILE format. For debugging purposes, if you want to see what's actually stored in this file, you can change the format of this file to JSON by adding the following line to the elasticsearch.yml file:
format: json
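As a quick sanity check you can list the state directory and see the current global state file. A sketch, assuming a default single-node layout (the cluster name and version number below are made up):

ls data/my_cluster/nodes/0/_state/
# global-137   <- global cluster state: persistent settings, index templates, ...
# after switching format to json, the file becomes human-readable:
cat data/my_cluster/nodes/0/_state/global-137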
Every time cluster state changes, all master-eligible nodes store the new version of the file, so during cluster restart the node that starts first and elects itself as a master will have the newest version of the cluster state. What you are describing could be possible if you updated the settings when one of your master-eligible nodes was not part of the cluster (and therefore couldn't store the latest version with your settings) and after the restart this node became the cluster master and propagated its obsolete settings to all other nodes.
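If you want to see which cluster state version the elected master is currently serving, the cluster state API can be filtered down to just the version (the numbers in the sketched output are illustrative):

curl -XGET 'localhost:9200/_cluster/state/version?pretty'
# {
#   "cluster_name" : "my_cluster",
#   "version" : 137
# }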

Related

unable to clear logs stored by RabbitMQ

I have RabbitMQ running on my test server. The log file has grown to 20 GB and I would like to clear it. I even have a scheduled job to delete it periodically, but it is not working due to the issue below.
Issue:
If I delete the log file, either manually or via a scheduled script, the file automatically gets restored. How do I get this fixed?
My rabbitmq.config file looks like this:
[
  {rabbit, [
    {ssl_listeners, [1111]},
    {ssl_options, [{cacertfile,"D:\\RabbitMQ Server\\Certs\\certname.cer"},
                   {certfile,"D:\\RabbitMQ Server\\Certs\\cer_cername_host.cer"},
                   {keyfile,"D:\\RabbitMQ Server\\Certs\\cer_cername_host.pfx"},
                   {verify,verify_peer},
                   {fail_if_no_peer_cert,false}]}
  ]}
].
Fixed this issue.
Actually it was an issue of the config file not having the correct name. Someone had saved the config file as rabbitmq copy.config rather than rabbitmq.config.
I can now see under the Overview section that it is referring to the correct RabbitMQ config; previously it was not picking up a config file at all.
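To confirm from the command line which config file the broker actually loaded, a sketch using rabbitmq-diagnostics (available on reasonably recent RabbitMQ releases; on Windows use the .bat wrappers in the sbin directory, and on older brokers the same information appears on the management UI's Overview page, which is where the fix above was confirmed):

rabbitmq-diagnostics status
# the "Config files" section lists the file(s) that were actually picked up
rabbitmq-diagnostics environment
# dumps the full effective configuration the broker is running with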

Naming rabbitmq node with a preconfigured name

I am setting up a single-node RabbitMQ container built from a Docker image. The image is configured to persist to an NFS-mounted disk.
I ran into an issue when the container is restarted: every time the node restarts it gets a unique name, and the restarted node then searches for the old nodes that it reads from the cluster_nodes.config file.
Error dump shows:
Error during startup: {error,
{failed_to_cluster_with,
[rabbit@9c3bfb851ba3],
"Mnesia could not connect to any nodes."}}
How can I configure my image to use the same node name each time it is restarted, instead of the arbitrary node name assigned in the Kubernetes cluster?
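One common way to pin the node name is to give the container a stable hostname and/or set RABBITMQ_NODENAME explicitly, since the Erlang node name is derived from the hostname by default. A hedged sketch with plain Docker (the hostname, mount path, and image tag are placeholders, not taken from the question):

docker run -d \
  --hostname rabbitmq-0 \
  -e RABBITMQ_NODENAME=rabbit@rabbitmq-0 \
  -v /mnt/nfs/rabbitmq:/var/lib/rabbitmq \
  rabbitmq:3-management
# in Kubernetes, a StatefulSet gives each pod a stable hostname in the same spirit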

ECS Fargate - No Space left on Device

I deployed my ASP.NET Core application on AWS Fargate and everything was working fine. I am using the awslogs driver and logs were correctly sent to CloudWatch. But after a few days of working correctly, I now see only one kind of log: a "no space left on device" error.
So no application logs are showing up because there is no space left. If I update the ECS service, logging starts working again, which suggests the disk gets cleaned up.
This link suggests that the awslogs driver does not take up space and sends logs to CloudWatch instead:
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task_cannot_pull_image.html
Has anyone else faced this issue and knows how to resolve it?
You need to set the "LibraryLogFileName" parameter in your AWS Logging configuration to null.
So in the appsettings.json file of a .NET Core application, it would look like this:
"AWS.Logging": {
"Region": "eu-west-1",
"LogGroup": "my-log-group",
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
},
"LibraryLogFileName": null
}
It depends on how you have logging configured in your application. The awslogs driver just grabs all the output sent to the console and saves it to CloudWatch; .NET doesn't necessarily know about this and will keep writing logs like it would have otherwise.
Most likely .NET is still writing log files to whatever location it otherwise would.
Advice for how to troubleshoot and resolve:
First, run the application locally and check if log files are being saved anywhere.
Second, optionally run a container test to see if log files are being saved there too (see the command sketch after this list):
Make sure you have Docker installed on your machine.
Download the container image from ECR which Fargate is running:
docker pull {Image URI from ECR}
Run this locally.
Do some task you know will generate some logs.
Use docker exec -it to connect to your container.
Check if log files are being written to the location you identified when you did the first test.
Finally, once you have identified that logs are being written to files somewhere, pick one of these options:
Add some flag which can optionally be specified to disable logging to a file. Use this when running your application inside the container.
Implement some logic to clean up log files periodically or once they reach a certain size. (Keep in mind ECS containers have up to 20 GB of local storage.)
Disable all file logging (not a good option in my opinion).
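Putting the container test together, a hedged sketch (the image URI variable, container name, and log locations are placeholders, not taken from the question):

docker pull "$IMAGE_URI"                          # the image URI from ECR
docker run -d --name app-under-test "$IMAGE_URI"
# exercise the app so it generates some log output, then look inside the container:
docker exec -it app-under-test /bin/sh
# inside the container, look for growing log files, for example:
find / -name '*.log' -size +1M 2>/dev/null
du -sh /app/logs 2>/dev/null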
Best of luck!

Mule Redis connector configuration is not consistent with its document

Recently we decided to add a cache layer to our Mule APIs, and Redis came into scope.
We are on Mule 3.8.0 and Redis connector 4.0.0, and we ran into the following issues while configuring it:
How do we separate our keys by Redis DB? This is not mentioned in the documentation. There is a 'Default Partition Name' in the configuration that seems close, but whatever value we put there seems to have no effect: it is always db0 that contains all the keys, so we can't really have "dev", "qa" and "test" key sets in the same Redis cluster.
The Redis connector documentation has an example like this:
<redis:sorted-set-select-range-by-index config-ref="Redis_configuration" key="my_key" start="0" end="-1" />
However, when we tried the same thing it complained that the 'end' value should be >= 0, so it is not usable.
How do we configure a connection pool properly with the Redis connector configuration? This is not mentioned in the documentation either. The only attribute is 'Pool Config Reference'; I tried putting a Spring bean reference to my own JedisPoolConfig there, but it seems to have no effect, and the number of connections remains the same no matter what values I put in that bean.
Thanks in advance if someone could help with these issues.
James
How to separate our keys by Redis DB?
You can use Redis in cluster mode with data sharding (http://redis.io/topics/cluster-tutorial).
I don't think you need any special configuration in Mule.
I think you are mixing up the partition term in Mule with the partition term in Redis.
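To illustrate the distinction, a small sketch of what sharding in cluster mode looks like from redis-cli (port 7000 is just the port used in the cluster tutorial, and the key name is made up). Cluster mode spreads keys across hash slots and nodes and only exposes database 0, so numbered DBs are not the separation mechanism there:

redis-cli -c -p 7000 CLUSTER KEYSLOT dev:greeting
# (integer) 1234   <- the hash slot the key maps to (value illustrative)
redis-cli -c -p 7000 SET dev:greeting "hello"
# the -c flag lets redis-cli follow the redirect to whichever node owns that slot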
Regards,

Is Redis Persistence Enabled?

Is there any way to check, from an active session, whether a Redis server has persistence (e.g. RDB persistence) enabled? The INFO command does contain a section on persistence, but it is not clear to me whether the values indicate that persistence is turned on.
There are two types of persistence, RDB and AOF.
To check whether RDB persistence is enabled:
redis-cli CONFIG GET save
RDB persistence is enabled if it returns something like this:
1) "save"
2) "900 1 300 10 60 10000"
RDB persistence is disabled if you get an empty result:
1) "save"
2) ""
To check whether AOF persistence is enabled, invoke:
redis-cli CONFIG GET appendonly
If you get yes, it's enabled; if no, it's disabled.
INFO is one way, but you can also use CONFIG GET for save and appendonly to check whether persistence is enabled.
As for using INFO's output to understand your persistence settings, this is a little trickier. For AOF, simply check the value of aof_enabled under the Persistence section of INFO's output: 0 means it's disabled. RDB files, on the other hand, are used both for snapshotting and for backups, so INFO is less helpful in that context. If you know that no SAVE/BGSAVE commands have been issued to your instances, periodic changes to the value of rdb_last_save_time will indicate that the save configuration directive is in use.
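Putting both answers together, a quick check from the shell (standard redis-cli commands; the grep is just for convenience and the field names come from the INFO persistence section):

redis-cli CONFIG GET save          # non-empty value -> RDB snapshots configured
redis-cli CONFIG GET appendonly    # yes -> AOF enabled
redis-cli INFO persistence | grep -E 'aof_enabled|rdb_last_save_time|rdb_last_bgsave_status'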