How to direct Unicorn stdout and stderr to log4r logs? - ruby-on-rails-3

I have set up log4r as outlined here: "How to configure Log4r with Rails 3.0.x?"
However, Unicorn is not sending its output to the log4r logs. How do you direct it there?

Does this comment help?
in your config, Rails isn't overriding the "file" config value of the
outputter - the argument is "filename" :) It defaults to the name of
the file/command that started the process. Note that any custom file
name you specify must have a file extension, or it will fail to stick
the date in the file name! – Nevir Aug 24 '11 at 22:05
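One possible approach, offered as a sketch rather than a verified answer: Unicorn's configurator exposes stdout_path and stderr_path, so you can at least point Unicorn's own stdout/stderr at files alongside your log4r logs instead of trying to make Unicorn call log4r directly. The paths and the after_fork block below are assumptions for illustration:

# config/unicorn.rb -- minimal sketch; log paths are assumptions, adjust them
# to match wherever your log4r FileOutputter writes.
stdout_path "log/unicorn.stdout.log"
stderr_path "log/unicorn.stderr.log"

# Optionally reopen the streams in each worker so rotated files are picked up.
after_fork do |server, worker|
  $stdout.reopen("log/unicorn.stdout.log", "a")
  $stderr.reopen("log/unicorn.stderr.log", "a")
  $stdout.sync = $stderr.sync = true
end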

Related

SVN Log file command

When I run the svn log -q command from the SVN CLI, I get the error below:
r102892 | neeraja.gudiwada_xyz.com | 2017-05-09 12:40:05 +0530 (Tue, 09 May 2017)
------------------------------------------------------------------------
r102891 | neeraja.gudiwada_xyz.com | 2017-05-09 12:36:17 +0530 (Tue, 09 May 2017)
svn: E175009: The XML response contains invalid XML
svn: E130003: Malformed XML: no element found at line 3507
Any idea why we are getting this error?
@phd I have already seen all the related posts, but they are not helping to resolve the issue.
A fresh hint (external). In short: you have to use a URL that works without any redirects on the Apache side. Web browsers follow redirects; SVN does not.
Possible reasons:
- Misconfiguration of the <Location> section in Apache's httpd.conf (I can't recall whether the directory has to have the trailing / or not because of this problem, and I don't have an SVN server at hand now). BTW, the form recommended by the SVN book is without the trailing slash: <Location /svn>
- You used a bad URL in svn log, one that has to be redirected to the real one (again, the trailing "/" problem). How to test it: use a web browser to access the URL and identify the URL that actually gives you a result (after, or without, any redirects). If this URL differs from the one used in svn log, use the correct form.
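A quick way to test this from the command line (a sketch; the repository URL is a placeholder):

# show the URL the working copy is actually bound to
svn info | grep URL
# try the log against the explicit URL without a trailing slash ...
svn log -q https://svn.example.com/svn/myrepo
# ... and with one, to see which form the server answers without redirecting
svn log -q https://svn.example.com/svn/myrepo/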

Setting date.timezone in php.ini does not work

(Snapshot of the phpinfo() output omitted; the relevant values are quoted in the EDIT below.)
As I am trying to install Roundcube, which requires date.timezone to be set, I would like to set it by filling in the date.timezone field in php.ini.
Server Configuration
Debian 9 (Stretch)
PHP 7.0.19-1 (cli) (built: May 11 2017 14:04:47) ( NTS )
Apache/2.4.25 (Debian)
What I tried
According to http://php.net/manual/en/timezones.php, I modified the date.timezone field in both /etc/php/7.0/apache2/php.ini and /etc/php/7.0/cli/php.ini, trying different syntaxes:
date.timezone = Europe/Paris
date.timezone = 'Europe/Paris'
date.timezone = "Europe/Paris"
And I always restarted Apache after any change.
Nothing changes in phpinfo(): the date.timezone field is still set to "no value", and in the first step of the Roundcube installation, date.timezone is NOT OK.
But when I try:
php -i
I get:
date/time support => enabled
"Olson" Timezone Database Version => 0.system
Timezone Database => internal
Default timezone => Europe/Paris
Directive => Local Value => Master Value
date.default_latitude => 31.7667 => 31.7667
date.default_longitude => 35.2333 => 35.2333
date.sunrise_zenith => 90.583333 => 90.583333
date.sunset_zenith => 90.583333 => 90.583333
date.timezone => Europe/Paris => Europe/Paris
I read many pretty old posts about this type of issue but the different solutions did not work for me. Any idea?
EDIT
According to phpinfo():
Configuration File (php.ini) Path
/etc/php/7.0/apache2
Loaded Configuration File
/etc/php/7.0/apache2/php.ini
Scan this dir for additional .ini files
/etc/php/7.0/apache2/conf.d
Additional .ini files parsed
/etc/php/7.0/apache2/conf.d/10-mysqlnd.ini, /etc/php/7.0/apache2/conf.d/10-opcache.ini, /etc/php/7.0/apache2/conf.d/10-pdo.ini, /etc/php/7.0/apache2/conf.d/15-xml.ini, /etc/php/7.0/apache2/conf.d/20-calendar.ini, /etc/php/7.0/apache2/conf.d/20-ctype.ini, /etc/php/7.0/apache2/conf.d/20-curl.ini, /etc/php/7.0/apache2/conf.d/20-dom.ini, /etc/php/7.0/apache2/conf.d/20-exif.ini, /etc/php/7.0/apache2/conf.d/20-fileinfo.ini, /etc/php/7.0/apache2/conf.d/20-ftp.ini, /etc/php/7.0/apache2/conf.d/20-gd.ini, /etc/php/7.0/apache2/conf.d/20-gettext.ini, /etc/php/7.0/apache2/conf.d/20-iconv.ini, /etc/php/7.0/apache2/conf.d/20-imagick.ini, /etc/php/7.0/apache2/conf.d/20-imap.ini, /etc/php/7.0/apache2/conf.d/20-intl.ini, /etc/php/7.0/apache2/conf.d/20-json.ini, /etc/php/7.0/apache2/conf.d/20-mbstring.ini, /etc/php/7.0/apache2/conf.d/20-mcrypt.ini, /etc/php/7.0/apache2/conf.d/20-memcache.ini, /etc/php/7.0/apache2/conf.d/20-mysqli.ini, /etc/php/7.0/apache2/conf.d/20-pdo_mysql.ini, /etc/php/7.0/apache2/conf.d/20-pdo_sqlite.ini, /etc/php/7.0/apache2/conf.d/20-phar.ini, /etc/php/7.0/apache2/conf.d/20-posix.ini, /etc/php/7.0/apache2/conf.d/20-pspell.ini, /etc/php/7.0/apache2/conf.d/20-readline.ini, /etc/php/7.0/apache2/conf.d/20-recode.ini, /etc/php/7.0/apache2/conf.d/20-shmop.ini, /etc/php/7.0/apache2/conf.d/20-simplexml.ini, /etc/php/7.0/apache2/conf.d/20-snmp.ini, /etc/php/7.0/apache2/conf.d/20-sockets.ini, /etc/php/7.0/apache2/conf.d/20-sqlite3.ini, /etc/php/7.0/apache2/conf.d/20-sysvmsg.ini, /etc/php/7.0/apache2/conf.d/20-sysvsem.ini, /etc/php/7.0/apache2/conf.d/20-sysvshm.ini, /etc/php/7.0/apache2/conf.d/20-tidy.ini, /etc/php/7.0/apache2/conf.d/20-tokenizer.ini, /etc/php/7.0/apache2/conf.d/20-wddx.ini, /etc/php/7.0/apache2/conf.d/20-xmlreader.ini, /etc/php/7.0/apache2/conf.d/20-xmlrpc.ini, /etc/php/7.0/apache2/conf.d/20-xmlwriter.ini, /etc/php/7.0/apache2/conf.d/20-xsl.ini
I faced a similar problem.
date.timezone in php.ini was showing as 'not set' / 'no value' when running PHP under Apache (checked through phpinfo()). The issue was not solved by restarting Apache; it was solved by restarting the php-fpm service (the FastCGI process manager):
systemctl restart php-fpm
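For completeness, on a Debian 9 box like the one in the question, the two restarts would look roughly like this (unit names are assumptions and vary by distribution and PHP version):

# mod_php only re-reads php.ini when Apache restarts
sudo systemctl restart apache2
# PHP-FPM runs its own PHP processes and must be restarted separately
sudo systemctl restart php7.0-fpm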
I have found the problem. It was a syntax error in my php.ini file in the error_reporting area.
Let me contribute something: I was installing SilverStripe on a VPS and had the exact same problem. I had been searching for the answer the whole afternoon without success. After all other attempts, I removed the ";" in front of:
[Date]
; Defines the default timezone used by the date functions
;date.timezone = Europe/London
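After removing the semicolon (and putting in your own zone; Europe/London is just the stock example value), the block reads:

[Date]
; Defines the default timezone used by the date functions
date.timezone = Europe/London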
Here is where I got the answer:
https://www.silverstripe.org/community/forums/installing-silverstripe/show/15398?start=8
Hope everyone avoids this trouble
I noticed date.timezone was set in two different places in my php.ini file for XAMPP. Here:
[Date]
; Defines the default timezone used by the date functions
; http://php.net/date.timezone
date.timezone = "America/Phoenix"```
and here:
; List of headers files to preload, wildcard patterns allowed.
;ffi.preload=
[Syslog]
define_syslog_variables=Off
[Session]
define_syslog_variables=Off
[Date]
date.timezone=Europe/Berlin
Note they are different; I have no idea how "Europe/Berlin" got into my php.ini. I noticed it showing up in my PHP error logs and searched through the whole XAMPP folder.
phpinfo() will always indicate the path of the php.ini that is being used; make sure this is correct and that it matches the ini file you are editing.
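A quick, throwaway way to verify both at once through the web server (the file name is arbitrary; remove the script afterwards):

<?php
// check.php -- open via the web server, not the CLI, since each SAPI can load a different php.ini
echo 'Loaded php.ini: ', php_ini_loaded_file(), PHP_EOL;
echo 'date.timezone:  ', ini_get('date.timezone') ?: '(no value)', PHP_EOL;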

what is "default" in a config file (redis-server in this case)

This may sound like a very silly or basic question, but here it is.
In this redis server config file
https://github.com/antirez/redis/blob/2.8/redis.conf#L14
Consider the setting loglevel notice.
What if I don't include this line:
loglevel notice
in my config file? What will its default value be?
Or are all the settings shown in this example config file already the defaults built into Redis itself?
You can check the Redis log level with CONFIG GET loglevel. If you comment out the loglevel line and restart the Redis service, you will see that the default log level is notice.
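For example, with the loglevel line commented out, the check looks roughly like this (output format as printed by redis-cli):

redis-cli config get loglevel
1) "loglevel"
2) "notice"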

Rabbitmq configuration problems (Doesn't read config file)

I created rabbitmq.conf and rabbitmq-env.conf, both in /etc/rabbitmq/. In rabbitmq.conf:
[{rabbit, [{loopback_users, []}]}].
and in rabbitmq-env.conf:
CONFIG_FILE=/etc/rabbitmq/rabbitmq (I also tried it with the .conf extension)
But in the log it shows (after a rabbit restart):
config file(s) : (none)
and of course no configuration is actually loaded. Any help? Ideas?
I got an answer in the RabbitMQ Google group, so I'll share it here:
rabbitmq.conf should be named rabbitmq.config.
That fixes the problem.
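For reference, on the older RabbitMQ releases that use the Erlang-term format (as in the question), the two files would look roughly like this; note the .config extension, and that CONFIG_FILE is given without any extension:

/etc/rabbitmq/rabbitmq.config:
[{rabbit, [{loopback_users, []}]}].

/etc/rabbitmq/rabbitmq-env.conf:
CONFIG_FILE=/etc/rabbitmq/rabbitmq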

How do I use Nagios to monitor a log file

We are using Nagios to monitor our network with great success. However, we have a syslog for critical application errors, and while I set up check_log, it doesn't seem to work as well as monitoring a device.
The issues are:
It only shows the last entry
There doesn't seem to be a way to acknowledge the critical error and return the monitor to a good state
Is Nagios the wrong tool, or are we just not setting up the service monitoring right?
Here are my entries
# log file
define command{
        command_name    check_log
        command_line    $USER1$/check_log -F /var/log/applications/appcrit.log -O /tmp/appcrit.log -q ?
        }
# Define the log monitoring service
define service{
        name                    logfile-check ;
        use                     generic-service ;
        check_period            24x7 ;
        max_check_attempts      1 ;
        normal_check_interval   5 ;
        retry_check_interval    1 ;
        contact_groups          admins ;
        notification_options    w,u,c,r ;
        notification_period     24x7 ;
        register                0 ;
        }
define service{
        use                     logfile-check
        host_name               localhost
        service_description     CritLogFile
        check_command           check_log
        }
For monitoring logs with Nagios, typically the log checker will return a warning only for newly discovered error messages each time it is invoked (so it must retain some state in order to know to ignore them on subsequent runs). Therefore I usually set:
max_check_attempts 1
is_volatile 1
This causes Nagios to send out the alert immediately, but only once, and then go back to normal.
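Applied to the service template from the question, that would look roughly like this (only max_check_attempts and is_volatile are the point; the rest is unchanged):

define service{
        name                    logfile-check
        use                     generic-service
        check_period            24x7
        max_check_attempts      1
        is_volatile             1
        normal_check_interval   5
        retry_check_interval    1
        contact_groups          admins
        notification_options    w,u,c,r
        notification_period     24x7
        register                0
        }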
My favorite log checker is logwarn, but I'm biased because I wrote it myself after not finding any existing ones that I liked. The logwarn package includes a Nagios plugin.
Nothing in your config jumps out at me as being misconfigured.
By design, check_log will only show either an OK message, or the last log entry that triggered an alert. If you need to see multiple entries, you'll need to modify the plugin.
However, I find the fact that you're not getting recoveries somewhat odd. The way check_log works (by comparing the current log to the previous version), you should get a recovery on the very next service check. Except of course, when there have been additional matching entries added to the log since the last check.
Does forcing another service check (or several) cause it to recover?
Also, I don't intend this in a mean way, but make sure it's really malfunctioning.
Is your log getting additional matching entries in between checks, causing it not to recover? Your check is matching "?" which will match anything new in the log. Is something else (a non-error) being added to the log and inadvertently causing a match?
If none of the above are the issue, I would suggest narrowing it down by taking Nagios out of the equation. Try running check_log manually (from the command line, but as the same user as nagios), and with a different oldlog. It should go something like this -
run check with a new "oldlog" - get initialization message
run check - check OK
make change to log
run check - check fails
run check - check OK
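A rough sketch of that manual test, assuming a standard plugin path and a throwaway oldlog (adjust paths for your install):

# first run against a fresh oldlog: the plugin prints its initialization message
sudo -u nagios /usr/local/nagios/libexec/check_log -F /var/log/applications/appcrit.log -O /tmp/appcrit.test.old -q '?'
# second run with nothing new in the log: should return OK
sudo -u nagios /usr/local/nagios/libexec/check_log -F /var/log/applications/appcrit.log -O /tmp/appcrit.test.old -q '?'
# append a line and run again: the new entry should be flagged; one more run should be OK again
echo "test critical entry" | sudo tee -a /var/log/applications/appcrit.log
sudo -u nagios /usr/local/nagios/libexec/check_log -F /var/log/applications/appcrit.log -O /tmp/appcrit.test.old -q '?'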
If this doesn't work, then you know to focus on the log, the oldlog, and how the check_log is doing the check.
If it works, then it points more towards a problem with your Nagios configuration.
There is a Nagios plugin that you can use to check the log files: it's called check_logfiles and it's used to scan the lines of a file for regular expressions.
The following link shows how to install and configure check_logfiles for Nagios and Opsview:
https://www.opsview.com/resources/nagios-alternative/blog/syslog-monitoring-nagios-opsview
As there are many ways to achieve a goal, there is also a nice plugin from Consol available:
https://labs.consol.de/lang/en/nagios/check_logfiles/
It supports regexes and log rotation.
To use it, you need a cfg file; this is an example for Oracle databases:
@searches = ({
  tag => 'oraalerts',
  options => 'sticky=28800',
  logfile => '/u01/app/oracle/diag/rdbms/davmdkp/DAVMDKP1/trace/alert_DAVMDKP1.log',
  criticalpatterns => [
      'ORA\-0*204[^\d]',          # error in reading control file
      'ORA\-0*206[^\d]',          # error in writing control file
      'ORA\-0*210[^\d]',          # cannot open control file
      'ORA\-0*257[^\d]',          # archiver is stuck
      'ORA\-0*333[^\d]',          # redo log read error
      'ORA\-0*345[^\d]',          # redo log write error
      'ORA\-0*4[4-7][0-9][^\d]',  # ORA-0440 - ORA-0485 background process failure
      'ORA\-0*48[0-5][^\d]',
      'ORA\-0*6[0-3][0-9][^\d]',  # ORA-0600 - ORA-0639 internal errors
      'ORA\-0*1114[^\d]',         # datafile I/O write error
      'ORA\-0*1115[^\d]',         # datafile I/O read error
      'ORA\-0*1116[^\d]',         # cannot open datafile
      'ORA\-0*1118[^\d]',         # cannot add a data file
      'ORA\-0*1122[^\d]',         # database file 16 failed verification check
      'ORA\-0*1171[^\d]',         # datafile 16 going offline due to error advancing checkpoint
      'ORA\-0*1201[^\d]',         # file 16 header failed to write correctly
      'ORA\-0*1208[^\d]',         # data file is an old version - not accessing current version
      'ORA\-0*1578[^\d]',         # data block corruption
      'ORA\-0*1135[^\d]',         # file accessed for query is offline
      'ORA\-0*1547[^\d]',         # tablespace is full
      'ORA\-0*1555[^\d]',         # snapshot too old
      'ORA\-0*1562[^\d]',         # failed to extend rollback segment
      'ORA\-0*162[89][^\d]',      # ORA-1628 - ORA-1632 maximum extents exceeded
      'ORA\-0*163[0-2][^\d]',
      'ORA\-0*165[0-6][^\d]',     # ORA-1650 - ORA-1656 tablespace is full
      'ORA\-16014[^\d]',          # log cannot be archived, no available destinations
      'ORA\-16038[^\d]',          # log cannot be archived
      'ORA\-19502[^\d]',          # write error on datafile
      'ORA\-27063[^\d]',          # number of bytes read/written is incorrect
      'ORA\-0*4031[^\d]',         # out of shared memory
      'No space left on device',
      'Archival Error',
  ],
  warningpatterns => [
      'ORA\-0*3113[^\d]',         # end of file on communication channel
      'ORA\-0*6501[^\d]',         # PL/SQL internal error
      'ORA\-0*1140[^\d]',         # follows WARNING: datafile #20 was not in online backup mode
      'Archival stopped, error occurred. Will continue retrying',
  ]
});
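The cfg file is then passed to the plugin with --config; roughly (plugin and cfg paths are assumptions):

/usr/lib/nagios/plugins/check_logfiles --config /etc/nagios/check_oraalerts.cfg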
I believe there's now a real Nagios plugin that monitors logs effectively.
http://support.nagios.com/forum/viewtopic.php?f=6&t=8851&p=42088&hilit=unixautomation#p42088
The home page of the Nagios plugin on that page is Nagios Log Monitor
Your commands.cfg file will contain:
define command {
        command_name    NagiosLogMonitor
        command_line    $USER1$/NagiosLogMonitor $HOSTNAME$ $ARG1$ $ARG2$ $ARG3$ $ARG4$ '$ARG5$' '$ARG6$' $ARG7$ $ARG8$ $ARG9$ $ARG10$
        }
OR
define command {
        command_name    NagiosLogMonitor
        command_line    $USER1$/NagiosLogMonitor $HOSTADDRESS$ $ARG1$ $ARG2$ $ARG3$ $ARG4$ '$ARG5$' '$ARG6$' $ARG7$ $ARG8$ $ARG9$ $ARG10$
        }
Your services.cfg file will look similar to:
define service {
        check_command           NagiosLogMonitor!logrobot!autofig!/var/log/proteus.log!15!500.html!500 Internal Server Error!1!2!-foundn
        max_check_attempts      1
        service_description     500_ERRORS_LOGCHECK
        host_name               sky.blat-01.net,sky.blat-02.net,sky.blat-03.net
        use                     fifteen-minute-interval
        }
Nagios now has a solution that integrates tightly with Nagios Core, XI, etc.: Nagios Log Server, which can alert on any query on any log file on any system in your infrastructure.