what is "default" in a config file (redis-server in this case) - redis

This may sound like a very silly or basic question, but here it is.
In this redis server config file
https://github.com/antirez/redis/blob/2.8/redis.conf#L14
Consider the directive loglevel notice.
What if I don't use this line:
loglevel notice
in my config file? What will its default value be?
Or are all the settings in this example config file defaults by themselves inside Redis?

You can check the Redis log level with config get loglevel. If you comment out the loglevel property and restart the Redis service, you will see that the default log level is notice.
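A minimal sketch of that check from the shell (assuming redis-cli can reach the server on its default port; the value shown is the documented default for loglevel):
$ redis-cli config get loglevel
1) "loglevel"
2) "notice"
In general, every directive has a built-in default that applies when the line is omitted; the shipped example redis.conf mostly restates those defaults, with comments explaining them.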


Log file specified by gemfire.log-file never gets created

I have a web application hosted in Tomcat that uses a Geode cache, but I cannot get Geode to produce a log file. The cache and properties, including the log-file property, are created programmatically. I see some Geode logging in the Tomcat stdout, and it seems to confirm that the log-file property has been set:
....
13:30:43,149 | INFO | [LoggingSession] | Startup Configuration:
### GemFire Properties defined with system property ###
conserve-sockets=false
### GemFire Properties defined with api ###
....
log-disk-space-limit=0
log-file=/local/install/user1/config/Lev1/Web/WebAppServer/Server1_1/logs/gemfire.log
log-file-size-limit=0
log-level=config
....
However, the file specified never gets created.
I have tried setting the file permissions to 777 on that directory, as well as setting the log-level to 'fine' but neither made a difference. The Geode output only shows up in the stdout, which I believe is the default.
Why isn't the log file specified by the log-file property getting created?
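One quick sanity check (just a sketch; the tomcat user name here is an assumption, adjust the user and path to your environment) is to confirm that the account running Tomcat can actually create the file:
$ sudo -u tomcat touch /local/install/user1/config/Lev1/Web/WebAppServer/Server1_1/logs/gemfire.log
$ ls -l /local/install/user1/config/Lev1/Web/WebAppServer/Server1_1/logs/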

Is there a way to add additional configurable settings in OpsCenter 6.0.2 Lifecycle Manager config profiles?

I would really like to add the following settings to our spark-defaults.conf using OpsCenter 6.0.2 in order to avoid configuration drift. Is there a way to add these config items to the config profile template?
spark.cores.max 4
spark.driver.memory 2g
spark.executor.memory 4g
spark.python.worker.memory 2g
NOTE: As Mike Lococo has pointed out in the comments for this answer -- this answer may work to update the config profile values but will not result in those values being written to spark-defaults.conf.
The following is not a solution!
You can; you have to update the config profile via the LCM Config Profile API (https://docs.datastax.com/en/opscenter/6.0/api/docs/lcm_config_profile.html#lcm-config-profile).
First, identify the config profile that needs updating:
$ curl http://localhost:8888/api/v1/lcm/config_profiles
Get the href for the specific config profile that needs updating, request it, and save the response body to a file:
$ curl http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 > profile.json
Now, in the profile.json file you just saved, add or edit the key at json > spark-defaults-conf to include the following keys:
"spark-defaults-conf": {
"spark-cores-max": 4,
"spark-python-worker-memory": "2g",
"spark-ssl-enabled": false,
"spark-drivers-memory": "2g",
"spark-executor-memory": "4g"
}
Save the updated profile.json. Finally, execute an HTTP PUT to the same config profile URL, using the edited file as the request data:
$ curl -X PUT http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 -d @profile.json
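Putting the steps together, a rough end-to-end sketch (assuming jq is available and that the profile body nests these settings under json > spark-defaults-conf as described above; the profile ID is the example from this answer):
$ PROFILE_URL=http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375
$ curl -s "$PROFILE_URL" | jq '.json."spark-defaults-conf" += {"spark-cores-max": 4, "spark-executor-memory": "4g"}' > profile.json
$ curl -X PUT "$PROFILE_URL" -d @profile.json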

JBAS010153: Node identifier property is set to the default value. Please make sure it is unique

I am getting the following WARN message when I start my host, which is one of the Host Controllers (HC) attached to the Domain Controller (DC).
[Server:server-two] 14:06:13,822 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 33) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
And my host-slave.xml has the following config:
<server-identities>
    <!-- Replace this with either a base64 password of your own, or use a vault with a vault expression -->
    <secret value="c2xhdmVfdXNlcl9wYXNzd29yZA=="/>
</server-identities>
I suspect this config is the reason, though maybe I misunderstood; I couldn't find a node identifier property, only this default secret value, which I thought could be the cause of this WARN message.
However, I didn't tell the HC to look up host-slave.xml. The command I ran to start my HC is:
$ ./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
nnn.nn.nn.88 is my DC
Otherwise, please advise what the cause of the WARN message is.
Please also let me know the implications of this WARN message, and advise on the configuration required to resolve it and avoid any consequences that might follow from it.
I'm new to WildFly, and noticed this warning when I started it standalone from Eclipse (I'm doing the following tutorial: https://wwu-pi.github.io/tutorials/lectures/eai/020_tutorial_jboss_project.html).
The fix was to add a node-identifier to the core-environment in the subsystem:
<subsystem xmlns="urn:jboss:domain:transactions:2.0">
<core-environment node-identifier="meindertwillemhoving">
<process-id>
<uuid/>
</process-id>
</core-environment>
<recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
</subsystem>
This is in file [wildfly]\standalone\configuration\standalone.xml.
This is the same answer as https://developer.jboss.org/message/880136#880136
According to WFLY-10541 if you are using WildFly v14.0.0 or newer you can pass the following to the startup script to set the transaction node identifier:
-Djboss.tx.node.id=<some-unique-id>
Setting the node identifier to a unique value is only required for proper handling of XA transactions.
You can set it as follows in your XML configuration:
<subsystem xmlns="urn:jboss:domain:transactions:6.0">
<core-environment node-identifier="${jboss.tx.node.id}">
It needs to be a unique value up to 23 bytes long.
More about this here: http://www.mastertheboss.com/jboss-server/jboss-configuration/configuring-transactions-jta-using-jboss-as7-wildfly
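For example, a minimal way to pass it on startup (node-01 is just a placeholder; use any value that is unique per server and at most 23 bytes):
$ ./bin/standalone.sh -Djboss.tx.node.id=node-01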
Building on #kaptan's answer I added the following to the bottom of
bin/standalone.conf:
JAVA_OPTS="$JAVA_OPTS -Djboss.tx.node.id=`hostname -f`"
This way I don't have to remember to add "-Djboss.tx.node.id=" when starting WildFly by hand.
For this, <server-identities> is not the issue. In fact, it shouldn't be touched at all.
When JBoss is started in domain mode by domain.sh, by default there are three servers: server-one, server-two, and server-three. When you run one more HC attached to the DC, the default servers that are in auto-start mode will clash when the HC is started and attached to the DC, either by the following command:
$ ./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
or by the host configuration at the HC (the default host.xml, unless we choose a different one):
<domain-controller>
    <remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
To solve this, we need to set auto-start to false and create a new server group. To that group we add the server created on the DC and the server created on the HC, choosing the same appropriate profile (either full-ha or full) for both servers across the DC and HC.
So when we start the group, after configuring the required heap size including permgen space, you can start both the DC and HC, and on the DC you will see that both of the servers you created are started in the new server group (see the sketch below).
DC - Domain Controller
HC - Host Controller
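A rough sketch of the auto-start and server-group changes described above, done through jboss-cli.sh against the DC (the group, profile, host, and server names are only examples; adjust them to your domain):
$ ./bin/jboss-cli.sh --connect --controller=nnn.nn.nn.88:9999
[domain@nnn.nn.nn.88:9999 /] /server-group=my-server-group:add(profile=full, socket-binding-group=full-sockets)
[domain@nnn.nn.nn.88:9999 /] /host=master/server-config=server-one:write-attribute(name=auto-start, value=false)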
To deploy, you need to upload the .ear or web archive in the Application Console. You cannot place it in the deployments folder with a .dodeploy file as you would in standalone mode.
If you upload the next version of the same .ear, use the Replace option instead of the Remove & Add option in the upload process.

RabbitMQ configuration problems (doesn't read config file)

I put rabbitmq.conf and rabbitmq-env.conf both in /etc/rabbitmq/. In rabbitmq.conf:
[{rabbit, [{loopback_users, []}]}].
and in rabbitmq-env.conf:
CONFIG_FILE=/etc/rabbitmq/rabbitmq (also tried including the .conf extension)
but the log shows (after a rabbit restart):
config file(s) : (none)
and of course no configuration is actually loaded. Any help? Ideas?
I got an answer in the RabbitMQ Google group, so I'll share it here:
rabbitmq.conf should be named rabbitmq.config.
That fixes the problem.
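In other words, with the classic Erlang-term format the file needs to be /etc/rabbitmq/rabbitmq.config, and CONFIG_FILE (if set) typically points at the path without the extension. A minimal sketch:
$ sudo tee /etc/rabbitmq/rabbitmq.config <<'EOF'
[
  {rabbit, [{loopback_users, []}]}
].
EOF
$ sudo service rabbitmq-server restart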

How do I use Nagios to monitor a log file

We are using Nagios to monitor our network with great success. However, we have a syslog for critical application errors, and while I set up check_log, it doesn't seem to work as well as monitoring a device.
The issues are:
It only shows the last entry
There doesn't seem to be a way to acknowledge the critical error and
return the monitor to a good state
Is Nagios the wrong tool, or are we just not setting up the service monitoring right?
Here are my entries
# log file
define command{
    command_name    check_log
    command_line    $USER1$/check_log -F /var/log/applications/appcrit.log -O /tmp/appcrit.log -q ?
}
# Define the log monitoring service
define service{
    name                    logfile-check ;
    use                     generic-service ;
    check_period            24x7 ;
    max_check_attempts      1 ;
    normal_check_interval   5 ;
    retry_check_interval    1 ;
    contact_groups          admins ;
    notification_options    w,u,c,r ;
    notification_period     24x7 ;
    register                0 ;
}
define service{
    use                 logfile-check
    host_name           localhost
    service_description CritLogFile
    check_command       check_log
}
For monitoring logs with Nagios, typically the log checker will return a warning only for newly discovered error messages each time it is invoked (so it must retain some state in order to know to ignore them on subsequent runs). Therefore I usually set:
max_check_attempts 1
is_volatile 1
This causes Nagios to send out the alert immediately, but only once, and then go back to normal.
My favorite log checker is logwarn, but I'm biased because I wrote it myself after not finding any existing ones that I liked. The logwarn package includes a Nagios plugin.
Nothing in your config jumps out at me as being misconfigured.
By design, check_log will only show either an OK message, or the last log entry that triggered an alert. If you need to see multiple entries, you'll need to modify the plugin.
However, I find the fact that you're not getting recoveries somewhat odd. The way check_log works (by comparing the current log to the previous version), you should get a recovery on the very next service check. Except of course, when there have been additional matching entries added to the log since the last check.
Does forcing another service check (or several) cause it to recover?
Also, I don't intend this in a mean way, but make sure it's really malfunctioning.
Is your log getting additional matching entries in between checks, causing it not to recover? Your check is matching "?" which will match anything new in the log. Is something else (a non-error) being added to the log and inadvertently causing a match?
If none of the above are the issue, I would suggest narrowing it down by taking Nagios out of the equation. Try running check_log manually (from the command line, but as the same user as nagios), and with a different oldlog. It should go something like this -
run check with a new "oldlog" - get initialization message
run check - check OK
make change to log
run check - check fails
run check - check OK
If this doesn't work, then you know to focus on the log, the oldlog, and how the check_log is doing the check.
If it works, then it points more towards a problem with your nagios configuration.
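Concretely, that manual test could look something like the following (a sketch; the plugin path is an assumption based on a default Nagios install, and the oldlog is a throwaway file):
$ OLDLOG=/tmp/appcrit.test.oldlog
$ sudo -u nagios /usr/local/nagios/libexec/check_log -F /var/log/applications/appcrit.log -O $OLDLOG -q ?   # first run: initializes the oldlog
$ sudo -u nagios /usr/local/nagios/libexec/check_log -F /var/log/applications/appcrit.log -O $OLDLOG -q ?   # should report OK
$ echo "test critical entry" | sudo tee -a /var/log/applications/appcrit.log
$ sudo -u nagios /usr/local/nagios/libexec/check_log -F /var/log/applications/appcrit.log -O $OLDLOG -q ?   # should flag the new entry
$ sudo -u nagios /usr/local/nagios/libexec/check_log -F /var/log/applications/appcrit.log -O $OLDLOG -q ?   # should be back to OK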
There is a Nagios plugin that you can use to check log files: it's called check_logfiles, and it scans the lines of a file for matches against regular expressions.
The following link shows how to install and configure check_logfiles for Nagios and Opsview:
https://www.opsview.com/resources/nagios-alternative/blog/syslog-monitoring-nagios-opsview
As there are many ways to achieve a goal, there is also a nice plugin from Consol available:
https://labs.consol.de/lang/en/nagios/check_logfiles/
It supports regexes and log rotation.
To use it, you need a cfg file; this is an example for Oracle databases:
@searches = ({
tag => 'oraalerts',
options => 'sticky=28800',
logfile => '/u01/app/oracle/diag/rdbms/davmdkp/DAVMDKP1/trace/alert_DAVMDKP1.log',
criticalpatterns => [
'ORA\-0*204[^\d]', # error in reading control file
'ORA\-0*206[^\d]', # error in writing control file
'ORA\-0*210[^\d]', # cannot open control file
'ORA\-0*257[^\d]', # archiver is stuck
'ORA\-0*333[^\d]', # redo log read error
'ORA\-0*345[^\d]', # redo log write error
'ORA\-0*4[4-7][0-9][^\d]',# ORA-0440 - ORA-0485 background process failure
'ORA\-0*48[0-5][^\d]',
'ORA\-0*6[0-3][0-9][^\d]',# ORA-6000 - ORA-0639 internal errors
'ORA\-0*1114[^\d]', # datafile I/O write error
'ORA\-0*1115[^\d]', # datafile I/O read error
'ORA\-0*1116[^\d]', # cannot open datafile
'ORA\-0*1118[^\d]', # cannot add a data file
'ORA\-0*1122[^\d]', # database file 16 failed verification check
'ORA\-0*1171[^\d]', # datafile 16 going offline due to error advancing checkpoint
'ORA\-0*1201[^\d]', # file 16 header failed to write correctly
'ORA\-0*1208[^\d]', # data file is an old version - not accessing current version
'ORA\-0*1578[^\d]', # data block corruption
'ORA\-0*1135[^\d]', # file accessed for query is offline
'ORA\-0*1547[^\d]', # tablespace is full
'ORA\-0*1555[^\d]', # snapshot too old
'ORA\-0*1562[^\d]', # failed to extend rollback segment
'ORA\-0*162[89][^\d]', # ORA-1628 - ORA-1632 maximum extents exceeded
'ORA\-0*163[0-2][^\d]',
'ORA\-0*165[0-6][^\d]', # ORA-1650 - ORA-1656 tablespace is full
'ORA\-16014[^\d]', # log cannot be archived, no available destinations
'ORA\-16038[^\d]', # log cannot be archived
'ORA\-19502[^\d]', # write error on datafile
'ORA\-27063[^\d]', # number of bytes read/written is incorrect
'ORA\-0*4031[^\d]', # out of shared memory.
'No space left on device',
'Archival Error',
],
warningpatterns => [
'ORA\-0*3113[^\d]', # end of file on communication channel
'ORA\-0*6501[^\d]', # PL/SQL internal error
'ORA\-0*1140[^\d]', # follows WARNING: datafile #20 was not in online backup mode
'Archival stopped, error occurred. Will continue retrying',
]
});
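With a cfg file like that saved somewhere readable by the Nagios user, the plugin is pointed at it with -f, for example (the path is just an example):
$ /usr/local/nagios/libexec/check_logfiles -f /etc/nagios/check_oracle_alerts.cfg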
I believe there's now a real Nagios plugin that monitors logs effectively.
http://support.nagios.com/forum/viewtopic.php?f=6&t=8851&p=42088&hilit=unixautomation#p42088
The home page of the Nagios plugin on that page is Nagios Log Monitor
Your commands.cfg file will contain:
define command {
    command_name NagiosLogMonitor
    command_line $USER1$/NagiosLogMonitor $HOSTNAME$ $ARG1$ $ARG2$ $ARG3$ $ARG4$ '$ARG5$' '$ARG6$' $ARG7$ $ARG8$ $ARG9$ $ARG10$
}
OR
define command {
    command_name NagiosLogMonitor
    command_line $USER1$/NagiosLogMonitor $HOSTADDRESS$ $ARG1$ $ARG2$ $ARG3$ $ARG4$ '$ARG5$' '$ARG6$' $ARG7$ $ARG8$ $ARG9$ $ARG10$
}
Your services.cfg file will look similar to:
define service {
    check_command       NagiosLogMonitor!logrobot!autofig!/var/log/proteus.log!15!500.html!500 Internal Server Error!1!2!-foundn
    max_check_attempts  1
    service_description 500_ERRORS_LOGCHECK
    host_name           sky.blat-01.net,sky.blat-02.net,sky.blat-03.net
    use                 fifteen-minute-interval
}
Nagios now has a solution that integrates tightly with Nagios Core, XI, etc.: Nagios Log Server, which can alert on any query on any log file on any system in your infrastructure.