Filebeat not logging to files, always only to syslog - filebeat

My filebeat (v7.6.0) config has the following:
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0755
It doesn't create the files, nor does it log to them; it just continues to log to syslog instead.
What have I missed or done wrong?

According to the docs:
When Filebeat is running on a Linux system with systemd, it uses by default the -e command line option, that makes it write all the logging output to stderr so it can be captured by journald. Other outputs are disabled.
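One way around this, as a sketch assuming the deb/rpm 7.x packages where the unit file injects -e via a BEAT_LOG_OPTS variable (check your own unit file), is to clear that variable with a systemd drop-in and restart:
sudo systemctl edit filebeat
# in the drop-in that opens, clear the option that forces logging to stderr:
[Service]
Environment="BEAT_LOG_OPTS="
# then reload units and restart
sudo systemctl daemon-reload
sudo systemctl restart filebeat
After that, the logging.files settings above should take effect and files should appear under /var/log/filebeat.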

Related

"Directory name is invalid." etc. with rabbitmq-plugins on Windows

I'm trying to get RabbitMQ going on Windows 10 by following these instructions.
However, when trying to enable the management plugin via powershell command:
./rabbitmq-plugins enable rabbitmq_management
I get the following:
The directory name is invalid.
The filename, directory name, or volume label syntax is incorrect.
Unsupported node name: hostname is invalid (possibly contains unsupported characters).
If using FQDN node names, use the -l / --longnames argument.
I've tried setting HOMEDRIVE=C: as the blog suggested.
What am I doing wrong?
EDIT
Per the comment below I did the following:
PS C:\program files\rabbitmq server\rabbitmq_server-3.7.15\sbin> ./rabbitmq-service.bat stop
The directory name is invalid.
The filename, directory name, or volume label syntax is incorrect.
The RabbitMQ service is stopping.
The RabbitMQ service was stopped successfully.
PS C:\program files\rabbitmq server\rabbitmq_server-3.7.15\sbin> ./rabbitmq-service.bat uninstall
The directory name is invalid.
The filename, directory name, or volume label syntax is incorrect.
*********************
Service control usage
*********************
rabbitmq-service help - Display this help
rabbitmq-service install - Install the RabbitMQ service
rabbitmq-service remove - Remove the RabbitMQ service
The following actions can also be accomplished by using
Windows Services Management Console (services.msc):
rabbitmq-service start - Start the RabbitMQ service
rabbitmq-service stop - Stop the RabbitMQ service
rabbitmq-service disable - Disable the RabbitMQ service
rabbitmq-service enable - Enable the RabbitMQ service
PS C:\program files\rabbitmq server\rabbitmq_server-3.7.15\sbin> set HOMEDRIVE=C:
PS C:\program files\rabbitmq server\rabbitmq_server-3.7.15\sbin> ./rabbitmq-service.bat install
The directory name is invalid.
The filename, directory name, or volume label syntax is incorrect.
RabbitMQ service is already present - only updating service parameters
"WARNING: Using RABBITMQ_ADVANCED_CONFIG_FILE: C:\Users\Mj\AppData\Roaming\RabbitMQ\advanced.config"
2019-06-14 10:55:09.630000
args: []
format: "Failed to create cookie file 'l:/.erlang.cookie': enoent"
label: {error_logger,error_msg}
2019-06-14 10:55:09.630000 crash_report #{label=>{proc_lib,crash},report=>[[{initial_call,{auth,init,['Argument__1']}},{pid,<0.57.0>},{registered_name,[]},{error_info,{error,"Failed to create cookie file 'l:/.erlang.cookie': enoent",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,374}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}},{ancestors,[net_sup,kernel_sup,<0.46.0>]},{message_queue_len,0},{messages,[]},{links,[<0.55.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,610},{stack_size,27},{reductions,1456}],[]]}
2019-06-14 10:55:09.635000 supervisor_report #{label=>{supervisor,start_error},report=>[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{"Failed to create cookie file 'l:/.erlang.cookie': enoent",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,374}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}},{offender,[{pid,undefined},{id,auth},{mfargs,{auth,start_link,[]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
2019-06-14 10:55:09.704000 supervisor_report #{label=>{supervisor,start_error},report=>[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,{shutdown,{failed_to_start_child,auth,{"Failed to create cookie file 'l:/.erlang.cookie': enoent",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,374}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}}}},{offender,[{pid,undefined},{id,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
2019-06-14 10:55:09.742000 crash_report #{label=>{proc_lib,crash},report=>[[{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.45.0>},{registered_name,[]},{error_info,{exit,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,auth,{"Failed to create cookie file 'l:/.erlang.cookie': enoent",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,374}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}}}}},{kernel,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,138}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}},{ancestors,[<0.44.0>]},{message_queue_len,1},{messages,[{'EXIT',<0.46.0>,normal}]},{links,[<0.44.0>,<0.43.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,27},{reductions,184}],[]]}
2019-06-14 10:55:09.789000 std_info #{label=>{application_controller,exit},report=>[{application,kernel},{exited,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,auth,{"Failed to create cookie file 'l:/.erlang.cookie': enoent",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,374}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,249}]}]}}}}},{kernel,start,[normal,[]]}}},{type,permanent}]}
{"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,auth,{\"Failed to create cookie file 'l:/.erlang.cookie': enoent\",[{auth,init_cookie,0,[{file,\"auth.erl\"},{line,286}]},{auth,init,1,[{file,\"auth.erl\"},{line,140}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,374}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,249}]}]}}}}},{kernel,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,auth,{"Failed to create cookie file 'l:/.er
Crash dump is being written to: C:\Users\Mj\AppData\Roaming\RabbitMQ\log\erl_crash.dump...done
It seems that the order in which the commands are run matters; reshuffling them to the following worked for me:
SET HOMEDRIVE=C:
rabbitmq-plugins.bat enable rabbitmq_management
rabbitmq-service.bat stop
rabbitmq-service.bat install
rabbitmq-service.bat start
Based on the comment @LukeBakken made on the original question, I was able to get around this by creating a local admin user and doing the install under that local admin user. This was after hours of battling with setting HOMEDRIVE, etc.
I solved this in the following way:
1. Opened a command prompt in administrator mode
2. Went to the sbin directory and executed "SET HOMEDRIVE=C:"
Based on the comment from Luke Bakken, this is what worked for me:
Almost the same as his comment, but instead of uninstall I had to use remove. uninstall was not recognized.
"Log in as the admin user you installed RMQ with; Open the "RabbitMQ Command Prompt (sbin dir)" terminal, run:"
.\rabbitmq-service.bat stop
.\rabbitmq-service.bat remove
set HOMEDRIVE=C:
.\rabbitmq-service.bat install
.\rabbitmq-plugins.bat enable rabbitmq_management
.\rabbitmq-service.bat start

How to see error logs when docker-compose fails

I have a docker image that starts an entrypoint.sh script.
This script checks whether the project is configured correctly. If everything is correct, the container starts; otherwise it prints this error and exits:
echo "Danger! bla bla bla"
exit 1000
Now if I start the container like this:
docker-compose up
I see the error correctly:
Danger! bla bla bla
but I need to launch the container in daemon mode:
docker-compose up -d
How can I show the log only in case of error?
The -d flag in docker-compose up -d stands for detached mode, not daemon mode.
In detached mode, your service(s) (i.e. container(s)) run in the background of your terminal, and you can't see their logs there.
To see the logs of all your services, run:
docker-compose logs -f
The -f flag stands for "Follow log output".
This will output all the logs for each running service you have in your docker-compose.yml
From my understanding, you want to fire up your service(s) with:
docker-compose up -d
so that they run in the background and keep the console output clean, and you want to print only the errors from the logs. To do so, pipe the logs into grep:
docker-compose logs | grep error
This will output all the error lines logged by your services.
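If you only want output when something actually failed, one possible approach (a sketch; it assumes a POSIX shell and inspects each container's exit code) is:
docker-compose up -d
# print logs only for containers that exited with a non-zero status
docker-compose ps -q | while read cid; do
  if [ "$(docker inspect -f '{{.State.ExitCode}}' "$cid")" -ne 0 ]; then
    docker logs "$cid"
  fi
done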
You'll find the official documentation related to the docker-compose up command here and to the logs command here. More info on logs-handling in this article.
Related answer here.

Docs for redis-server command line options

I've looked "everywhere." I cannot find documentation for all the supported command line options for redis-server. I'm using version 5.0.3.
I tried redis-server --help; it is no help.
The usage it prints doesn't even mention --port, --slaveof, --replicaof, --loglevel ... yet these options are shown in the help's examples.
Does someone know where I can find full and complete documentation for the server's command line?
Thanks.
Right at the top of the redis configuration documents it says the following:
"... it is possible to ... pass Redis configuration parameters using
the command line directly."
Therefore, every configuration file option is passable on the command line. I'm an idiot.
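For example (a hypothetical invocation; any directive from redis.conf can be passed the same way), this is equivalent to setting maxmemory 128mb and loglevel notice in the config file:
redis-server --maxmemory 128mb --loglevel notice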
Edit: Note that config file parameters that contain spaces will not work as command line parameters. For example, --save "600 1 30 10 6 100" will not be applied: running redis-cli followed by config get save will show "". It doesn't matter if the parameter is placed at the end of the command line, or whether it is enclosed in single quotes, double quotes, or no quotes.
redis-server command line does not parse params with spaces correctly. The issue is known and closed without being resolved:
https://github.com/redis/redis/issues/2366
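One workaround, as a sketch (the file path here is just an example), is to keep the space-containing directives in a config file and use the command line only for simple single-value overrides:
cat > /tmp/myredis.conf <<'EOF'
save 600 1
save 30 10
EOF
redis-server /tmp/myredis.conf --port 6380 --loglevel verbose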
The most useful information about configuring redis-server is at https://redis.io/docs/manual/config/
Passing arguments via the command line
You can also pass Redis configuration parameters using the command line directly. This is very useful for testing purposes. The following is an example that starts a new Redis instance using port 6380 as a replica of the instance running at 127.0.0.1 port 6379.
./redis-server --port 6380 --replicaof 127.0.0.1 6379
The format of the arguments passed via the command line is exactly the same as the one used in the redis.conf file, with the exception that the keyword is prefixed with --.
Note that internally this generates an in-memory temporary config file (possibly concatenating the config file passed by the user, if any) where arguments are translated into the format of redis.conf.
The .conf file with all the params has reasonably useful inline docs.
man redis-server and redis-server -h are basically useless.
man redis-server:
REDIS-SERVER(1) General Commands Manual REDIS-SERVER(1)
NAME
redis-server - Persistent key-value database
SYNOPSIS
redis-server configfile
DESCRIPTION
Redis is a key-value database. It is similar to memcached but the dataset is not volatile and other
datatypes (such as lists and sets) are natively supported.
OPTIONS
configfile
Read options from specified configuration file.
NOTES
On Debian GNU/Linux systems, redis-server is typically started via the /etc/init.d/redis-server initscript,
not manually. This defaults to using /etc/redis/redis.conf as a configuration file.
AUTHOR
redis-server was written by Salvatore Sanfilippo.
This manual page was written by Chris Lamb <lamby@debian.org> for the Debian project (but may be used by
others).
March 20, 2009 REDIS-SERVER(1)
`redis-server -h`:
Usage: ./redis-server [/path/to/redis.conf] [options] [-]
./redis-server - (read config from stdin)
./redis-server -v or --version
./redis-server -h or --help
./redis-server --test-memory <megabytes>
./redis-server --check-system
Examples:
./redis-server (run the server with default conf)
echo 'maxmemory 128mb' | ./redis-server -
./redis-server /etc/redis/6379.conf
./redis-server --port 7777
./redis-server --port 7777 --replicaof 127.0.0.1 8888
./redis-server /etc/myredis.conf --loglevel verbose -
./redis-server /etc/myredis.conf --loglevel verbose
Sentinel mode:
./redis-server /etc/sentinel.conf --sentinel

Redis can't write logs or backup but I need to backup whats currently in memory

Someone before me setup a redis instance (version 2.6).
But for some reason, whoever set this up had placed the config file at /etc/redis.conf, set the dir config to the working directory (dir ./), and run the instance as non-root, like this:
$ ps aux | grep "redis"
user /home/user/redis-stable/src/redis-server /etc/redis.conf
Logging is going to /dev/null, because daemonize yes and logfile stdout are set.
So it is unable to create backups in /etc/ because it doesn't have permissions (I'm guessing), and I can't even see what is going on because the logs are going to /dev/null.
I want to make a backup so I can turn redis off to fix all these things, without losing any data. Any ideas?
I've tried:
touch /etc/dump.rdb
chown user:users /etc/dump.rdb
But it is still not able to write. I'm guessing it might have a temp file it tries to write to before it moves it to /etc/dump.rdb
After looking at Redis source code, it does seem like there is a temp file: https://github.com/antirez/redis/blob/04542cff92147b9b686a2071c4c53574771f4f88/src/rdb.c#L986
snprintf(tmpfile,256,"temp-%d.rdb", (int) getpid());
Also tried
redis 127.0.0.1:6379> config set logfile /home/user/redis.log
(error) ERR Unsupported CONFIG parameter: logfile
Run:
config get dir
and you will see the directory where Redis is saving the RDB file.
Run:
config set dir /home/user/
to change the RDB dump directory to /home/user.
Then run:
redis-cli -p <port> bgsave
This will initiate an RDB dump.
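To confirm that the background save actually finished, one simple check (a sketch; 6379 stands in for your port) is to watch the LASTSAVE timestamp advance and then look for the new dump file:
redis-cli -p 6379 lastsave   # note the timestamp
redis-cli -p 6379 bgsave
redis-cli -p 6379 lastsave   # changes once the dump has completed
ls -l /home/user/dump.rdb    # default dump file name in the new dir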
Hope this helps.

Sidekiq not processing queue

What possible reasons can prevent Sidekiq from processing jobs in the queue? The queue is full. The log file sidekiq.log indicates no activity at all. So the queue is full but the log is empty, and Sidekiq does not seem to process items. There seems to be no worker processing jobs. Restarting Redis or flushing it with FLUSHALL or FLUSHDB has no effect. Sidekiq has been started with
bundle exec sidekiq -L log/sidekiq.log
and produces the following log file:
2013-05-30..Booting Sidekiq 2.12.0 using redis://localhost:6379/0 with options {}
2013-05-30..Running in ruby 1.9.3p374 (2013-01-15 revision 38858) [i686-linux]
2013-05-30..See LICENSE and the LGPL-3.0 for licensing details.
2013-05-30..Starting processing, hit Ctrl-C to stop
How can you find out what went wrong? Are there any hidden log files?
The reason was in our case: Sidekiq may look for the wrong queue. By default Sidekiq uses a queue named "default". We used two different queue names, and defined them in config/sidekiq.yml
# configuration file for Sidekiq
:queues:
  - queue_name_1
  - queue_name_2
The problem is that this config file is not automatically loaded by default in your development environment (unlike database.yml or thinking_sphinx.yml, for instance) by a simple bundle exec sidekiq command. So we wrote our jobs into two specific queues, while Sidekiq was waiting for jobs in a third queue (the default one). You have to pass the path to the config file as a parameter through the -C or --config option:
bundle exec sidekiq -C ./config/sidekiq.yml
or you can pass the queue names directly (no spaces allowed here after the comma):
bundle exec sidekiq -q queue_name_1,queue_name_2
To track the problem down it is helpful to pass the -v or --verbose option on the command line, too, or to use :verbose: true in the sidekiq.yml file. Everything defined in a config file is of course useless if the config file is not loaded, so make sure you are using the right config file first.
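For example, a verbose run that also loads the config file explicitly:
bundle exec sidekiq -C ./config/sidekiq.yml -v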
If you have a config/sidekiq.yml check that all the queues are defined there, check this sample file: https://github.com/mperham/sidekiq/blob/master/examples/config.yml
If you are passing queue names in the command line or Procfile, something similar to
bin/sidekiq -q queue1 -q queue2
bundle exec sidekiq -q queue1 -q queue2
check that all your queues are defined there.
In case you are not sure about the names of your queues, you can figure it out with the following script:
require "sidekiq/api"
stats = Sidekiq::Stats.new
stats.queues
# {"production_mailers"=>25, "production_default"=>1}
Then, you can do things with the queues:
queue = Sidekiq::Queue.new("production_mailers")
queue.count
queue.clear
It took me hours to find out that I had set config.active_job.queue_name_prefix = "xxxxx_#{Rails.env}". The queue names in the settings look the same, but Sidekiq looks for the queues with the prefix.
Wrong setting
app/jobs/my_job.rb
class MyJob < ApplicationJob
  queue_as :default
end
config/sidekiq.yml
:queues:
  - default
Correct setting
app/jobs/my_job.rb
class MyJob < ApplicationJob
  queue_as :default
end
config/sidekiq.yml
:queues:
  - xxxxx_development_default
  - xxxxx_production_default
My problem was that I had a configure_server but not a configure_client in my initialiser; you must have both:
Sidekiq.configure_server do |config|
  config.redis = { url: ENV.fetch('SIDEKIQ_REDIS_URL', 'redis://127.0.0.1:6379/1') }
end
Sidekiq.configure_client do |config|
  config.redis = { url: ENV.fetch('SIDEKIQ_REDIS_URL', 'redis://127.0.0.1:6379/1') }
end
In my case, Sidekiq was fine in development but stuck in staging. It was human error in the Capistrano deploy configuration: I had set the path for sidekiq.yml incorrectly in the Capfile (shared instead of current).
It failed silently:
# Capfile
# WRONG:
set :sidekiq_config, -> { File.join(shared_path, 'config', 'sidekiq.yml') }
^^^^^^^^^^^
# RIGHT:
set :sidekiq_config, -> { File.join(current_path, 'config', 'sidekiq.yml') }
Flushing Redis worked for me.
WARNING: THIS WILL REMOVE ALL DATA IN YOUR REDIS DATABASE.
redis-cli flushall
I was banging my head against a brick wall on this for a while; my issue was that Sidekiq required a newer version of redis-server. I ran "bundle exec sidekiq" and that revealed the error. Once I updated to a newer version of redis-server it was fine.
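If you suspect the same problem, a quick check (a sketch; adjust host and port to your setup) is to look at the server version Sidekiq is actually talking to:
redis-server --version
redis-cli -h 127.0.0.1 -p 6379 info server | grep redis_version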
I just had this issue. Turns out I had made a syntax error in my sidekiq.yml
Spent at least two hours on this as well because queues and configuration and web UI were all fine ... the jobs were just not processed.
My issue was that the Sidekiq server was not running in my docker-compose setup, even though it should have been started by the command section here:
sidekiq:
  depends_on:
    - 'proddb'
    - 'redis'
  build: rails-app
  command: bundle exec sidekiq --environment ${RAILS_ENV} -C config/sidekiq.yml   # <-- this line
  volumes:
    - './rails-app:/project'
    - '/project/tmp' # don't mount tmp directory
  environment:
    - REDIS_URL_SIDEKIQ=${REDIS_URL_SIDEKIQ}
  networks:
    - backend
My problem was that I had not configured my initializers/sidekiq.rb properly, but even with the correct config Sidekiq was still not running the enqueued jobs. I had to run spring stop on top of that and restart everything, and that solved my issue.
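In other words (a sketch; your project may invoke Spring via bin/spring instead):
spring stop
bundle exec sidekiq -C config/sidekiq.yml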
I encountered a similar problem wherein the logs would show entries such as INFO Rails : queueing TestWorker (TestWorker). However, the jobs would never get processed, and none of the answers in this question solved the issue.
The tl;dr to my solution is that Sidekiq's Testing Client was getting unexpectedly triggered.
I eventually deduced that there is some "magic" going on under the surface that makes it difficult to determine exactly where/when/how the above testing trigger was getting configured, based on the following anecdote...
Running bundle exec sidekiq -C config/sidekiq.yml -e development had the result that Sidekiq::Testing.fake? == true
However, running bundle exec sidekiq -C config/sidekiq.yml -e development_2 had the result that Sidekiq::Testing.fake? == false
^ The only difference between these 2 commands is that I renamed the development environment in sidekiq.yml to development_2, i.e. the same/equivalent environment was running with both commands (at least, presumably it would be the same environment if it wasn't for this inane "magic" under the hood).
I updated sidekiq.rb to explicitly toggle Sidekiq::Testing via the following:
sidekiq_testing_fake = false # set this using env var, etc.
if sidekiq_testing_fake
  Sidekiq::Testing.fake!
elsif Sidekiq.constants.include?(:Testing)
  Sidekiq::Testing.disable!
end
My issue was that I had both a standalone redis-server and Redis.app's redis-server running; I killed the standalone redis-server (and kept the Redis.app one).
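A quick way to spot this situation (a sketch, assuming the default port 6379) is to list the running redis-server processes and see which one actually owns the port:
ps aux | grep [r]edis-server
lsof -i :6379   # shows the process actually bound to the Redis port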