Logs not updating in Kibana automatically using Filebeat

I have installed ELK and Filebeat on two different machines (CentOS).
My log file is on the Filebeat machine.
Every time I want the logs to update in Kibana I have to run the command "systemctl restart filebeat".
How can I avoid this step, so that Filebeat reads the log file continuously and the entries show up in Kibana?

You're probably missing the scan_frequency option in the Filebeat config file.
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html
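For reference, a minimal filebeat.yml sketch (the log path here is hypothetical; scan_frequency is the option that controls how often Filebeat checks the configured paths for new or updated files, 10s by default):

filebeat:
  prospectors:
    -
      input_type: log
      paths:
        - /var/log/myapp/*.log   # hypothetical path to your log file
      scan_frequency: 5s         # look for new/changed files every 5 seconds

Note that files already being harvested are tailed continuously; scan_frequency mainly affects how quickly new files in the paths are picked up.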

Related

Apache Hama on Amazon Elastic MapReduce

I am trying to run Apache Hama on Amazon Elastic MapReduce using the https://github.com/awslabs/emr-bootstrap-actions/tree/master/hama script. However, when trying it out with one master node and two slave nodes, peer.getNumPeers() in the BSP code reports only 1 peer. I suspect Hama is running in local mode.
Moreover, looking at the configuration described at https://hama.apache.org/getting_started_with_hama.html, my understanding is that the list of all the servers should go in the hama-site.xml file for the property hama.zookeeper.quorum and also in the groomservers file. However, I wonder whether these are being configured properly by the install script. I would really appreciate it if anyone could point out whether it's a limitation of the script or whether I am doing something wrong.
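For reference, this is the kind of hama-site.xml entry I mean (hostnames are hypothetical):

<configuration>
  <property>
    <name>hama.zookeeper.quorum</name>
    <value>master-host,slave-host-1,slave-host-2</value>
  </property>
</configuration>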
@Madhura
Hama doesn't always need the groomservers file to run in fully distributed mode.
The groomservers file is only needed when you start the cluster with start-bspd.sh alone. The emr-bootstrap-actions script for Hama instead starts a groom server on each slave node using hama-daemon.sh. The command executed in the install script is as follows:
$ /bin/hama-daemon.sh --config ${HAMA_HOME}/conf start groom
I think you need to check the EMR logs to see whether they contain any errors.

How to read updated redis configuration file with running redis server

I made a change to redis.conf, but I don't see the change applied to the running instance. Do I need to restart Redis in order to pick up the changes?
Yes, you have to restart the server to pick up changes from the redis.conf file. Alternatively, you can change settings at run time using the CONFIG SET command.
Read more about this at the following links:
http://redis.io/topics/config
http://redis.io/commands/config-set
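For example (maxmemory is just an illustrative parameter here; CONFIG REWRITE exists from Redis 2.8 onwards and writes runtime changes back to redis.conf):

redis-cli CONFIG SET maxmemory 256mb   # apply a setting to the running instance
redis-cli CONFIG GET maxmemory         # verify the new value
redis-cli CONFIG REWRITE               # optionally persist runtime changes to redis.conf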

Redis clears its data by itself!

I have been using Redis for the last 12 months without any issue, but for the last 30 days the database has been getting emptied without explanation, and we couldn't find any logs about it. It even flushes all the data out randomly after a restore.
We tried the following steps to resolve this, with no result:
We checked the Redis logs.
We monitored Redis using the MONITOR command.
We tried renaming the critical commands through the config, but Redis goes down after the config change. Below is an example command:
rename-command FLUSHDB e0cc96ad2eab73c2c347011806a76b73
We are at a loss without knowing what's happening. Help is appreciated.
Redis Version : 2.8.17
Running under Debian Linux
Renaming the command through the config file will work in this case.
You have to place the same rename-command directive inside the config file:
rename-command FLUSHDB e0cc96ad2eab73c2c347011806a76b73
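A sketch of how the relevant redis.conf section could look (the hashes are arbitrary placeholders; renaming a command to an empty string disables it completely):

rename-command FLUSHDB e0cc96ad2eab73c2c347011806a76b73
rename-command FLUSHALL ""   # disable FLUSHALL entirely
rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52

Remember that the server has to be restarted for changes in redis.conf to take effect.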

Remotely create a vhost on a docker container running rabbitmq

I have a Vagrantfile that does two important things: first it pulls and runs dockerfile/rabbitmq, then it builds from a custom Dockerfile and runs an application which assumes a vhost exists on the RabbitMQ server, let's say "/foo".
The problem is the vhost is not there.
The container with RabbitMQ is running successfully, and the app is linked to it using --link when the built image is run. Using the environment variables Docker sets, I can reach the server. But somewhere in the middle of these operations I need to create the vhost, because my connection is refused, I assume since "/foo" does not exist.
How can I get the vhost onto the rabbit server?
Thanks
Note: using the web admin is not an option; this has to be done programmatically.
You can put default_vhost in /etc/rabbitmq/rabbitmq.config: http://www.rabbitmq.com/configure.html
It will then be created on the first run. (Stop RabbitMQ and delete the Mnesia directory if the server has already been started.)
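A minimal sketch of /etc/rabbitmq/rabbitmq.config in the classic Erlang-term format, using the vhost name from the question:

[
  {rabbit, [
    {default_vhost, <<"/foo">>}
  ]}
].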
There are a few ways to get the desired configuration (see the rabbitmqctl sketch after this list):
Export/import the whole configuration with rabbitmqadmin, the Management Plugin CLI tool,
or
use the HTTP API from the management plugin,
or
use the rabbitmqctl CLI tool to manage access control.
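For example, a rabbitmqctl sketch (the guest user and the ".*" permissions are just an illustration):

rabbitmqctl add_vhost /foo
rabbitmqctl set_permissions -p /foo guest ".*" ".*" ".*"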
By the way, according to the docs here: https://www.rabbitmq.com/vhosts.html
you can do this via curl by using:
curl -u username:pa$sw0rD -X PUT http://rabbitmq.local:15672/api/vhosts/vh1
So it probably doesn't matter whether you are doing this remotely or not.

Where does Apache Flume log its error messages?

I'm new to Apache Flume. I just want to know where Apache Flume logs its error messages and metadata information.
I searched the Apache Flume directory for captured error logs, but I didn't see any folder with the name log.
Could anyone help me with how to configure logging in Apache Flume?
Flume logs are in /var/log/flume-ng. This location is specified in the logging configuration file /etc/flume-ng/conf/log4j.properties.
Dmitry is right, the log file location is specified in FLUME_HOME/conf/log4j.properties.
I just wanted to add that, in Apache Flume 1.5, the default log location is:
FLUME_HOME/logs/flume.log
The log file may not be generated if Flume initialization fails; this usually means that Flume couldn't find Java, the configuration files, etc.
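For reference, the relevant defaults in FLUME_HOME/conf/log4j.properties look roughly like this (a sketch; adjust flume.log.dir as needed):

flume.root.logger=INFO,LOGFILE   # log level and appender
flume.log.dir=./logs             # directory the log file is written to
flume.log.file=flume.log         # log file name

You can also override these on the command line, e.g. add -Dflume.root.logger=INFO,console when starting the agent to log to the console instead.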