FileBeat configuration test with output - filebeat

I am trying to test my configuration using filebeat test output -e -c filebeat.yml, but I only see the help message with the list of commands.
I am actually trying to test the output to verify data delivery. Testing the configuration itself with filebeat test config -e -c filebeat.yml works and reports OK.

Assuming you're using filebeat 6.x (these tests were done with filebeat 6.5.0 on a CentOS 7.5 system).
To test your filebeat configuration (syntax), you can do:
[root@localhost ~]# filebeat test config
Config OK
If you just downloaded the tarball, by default it uses the filebeat.yml in the untarred filebeat directory. If you installed the RPM, it uses /etc/filebeat/filebeat.yml.
If you want to define a different configuration file, you can do:
[root@localhost ~]# filebeat test config -c /etc/filebeat/filebeat2.yml
Config OK
To test the output block (i.e., whether you have connectivity to your elasticsearch instance or kafka broker), you can do:
[root@localhost ~]# filebeat test output
elasticsearch: http://localhost:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: ::1, 127.0.0.1
dial up... ERROR dial tcp [::1]:9200: connect: connection refused
In this case my local elasticsearch is down, so filebeat throws an error saying it cannot connect to the host defined in my output block.
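The "dial up" step is just a TCP connection attempt, so you can reproduce it by hand. A minimal sketch in bash (this assumes bash's /dev/tcp support; the host and port are examples):

```shell
#!/usr/bin/env bash
# Minimal TCP reachability check, similar in spirit to the "dial up"
# step of `filebeat test output`. Usage: check_port HOST PORT
check_port() {
  host=$1
  port=$2
  # bash's /dev/tcp/HOST/PORT pseudo-file attempts a TCP connection
  if (echo > "/dev/tcp/$host/$port") 2>/dev/null; then
    echo "dial up... OK"
  else
    echo "dial up... ERROR: connection refused"
  fi
}

check_port 127.0.0.1 9200
```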
The same way as with syntax validation (test config), you can provide a different configuration file for the output connection test:
[root@localhost ~]# filebeat test output -c /etc/filebeat/filebeat2.yml
logstash: localhost:5044...
connection...
parse host... OK
dns lookup... OK
addresses: ::1, 127.0.0.1
dial up... ERROR dial tcp [::1]:5044: connect: connection refused
In this alternative configuration file, my output block also fails to connect to a logstash instance.
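For reference, the blocks these tests validate look something like this in filebeat.yml (a sketch with example hosts; filebeat allows only one output to be enabled at a time):

```yaml
# Ship events directly to Elasticsearch...
output.elasticsearch:
  hosts: ["localhost:9200"]

# ...or (exclusively) to Logstash instead:
#output.logstash:
#  hosts: ["localhost:5044"]
```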

The -e and -c flags must be placed before the "test" command:
filebeat -e -c filebeat.yml test output
-c, --c argList Configuration file, relative to path.config (default beat.yml)

Configuring Container Registry in gitlab over http

I'm trying to configure the Container Registry in GitLab installed on my Ubuntu machine.
I have Docker configured to work over HTTP (with the registry added as insecure) and it works.
Gitlab is installed on the host http://5.121.32.5
external_url 'http://5.121.32.5'
In the gitlab.rb file, I have enabled the following settings:
registry_external_url 'http://5.121.32.5'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "5.121.32.5"
gitlab_rails['registry_port'] = "5005"
gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
To make the daemon listen on the port, I created a systemd drop-in directory:
sudo mkdir -p /etc/systemd/system/docker.service.d/
and a drop-in file inside it with these contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
But when this command runs from the gitlab-ci.yaml file:
docker push ${MY_REGISTRY_PROJECT}:latest
I get this error:
Error response from daemon: Get "https://5.121.32.5:5005/v2/": dial tcp 5.121.32.5:5005: connect: connection refused
What is the problem? What did I miss?
And why is https specified here if I have http configured?
When you use docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}, the docker command defaults to HTTPS, causing the problem.
You need to tell your GitLab Runner to use insecure registry:
On the server where the GitLab Runner is running, add the following option to your Docker launch arguments (for me, I added it to DOCKER_OPTS in /etc/default/docker and restarted the Docker engine): --insecure-registry 172.30.100.15:5050, replacing the address with that of your own insecure registry.
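On newer Docker installations the same effect is usually achieved in /etc/docker/daemon.json rather than DOCKER_OPTS. A sketch, using the registry address from the question (restart the Docker daemon afterwards):

```json
{
  "insecure-registries": ["5.121.32.5:5005"]
}
```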

Run docker file but says port already taken

So I was able to build on my MacBook: docker build -t my-first-demo .
But then when I tried to run the app: docker run -p 80:80 my-first-demo
it gives me the error:
Error response from daemon: Ports are not available: port is already allocated.
ERRO[0000] error waiting for container: context canceled
I changed the port mapping to things like 81:80, but it's still not working; all I get is:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
Here is my Dockerfile:
FROM php:7.2-apache
COPY src/ /var/www/html/
EXPOSE 80
And a simple php file:
<?php
echo "Hello, World";
?>
Any help would be greatly appreciated!
Use the command
docker ps -a
to see which container has allocated the port. Also check whether any other process is using it with:
sudo lsof -i:80
To run another container on that same port, stop whichever container is occupying it with:
docker kill <container id or name>
Press the Tab key and docker will complete the names of the active containers.
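Put together, freeing the port and re-running might look like this (a sketch; the container name is a placeholder taken from the docker ps output):

```shell
docker ps -a                       # find which container holds port 80
sudo lsof -i:80                    # or find a non-docker process using it
docker kill <container id or name> # free the port
docker run -p 80:80 my-first-demo  # the mapping should now succeed
```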

AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000; Not able to connect to local node from aql

I have installed Aerospike on my Mac by following these installation steps.
All the validations work fine, and I am able to connect to the cluster using the Chrome browser. Below is the screenshot.
I have also installed the AQL tools following the instructions here.
But I'm unable to connect to local node from aql.
$ aql
2017-11-21 16:06:09 WARN Failed to connect to seed 127.0.0.1 3000.
AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000
Error -1: Failed to connect
$ asadm
Aerospike Interactive Shell, version 0.1.11
ERROR: Not able to connect any cluster.
Also, I have noticed the Java client gives an error:
AerospikeClient client = new AerospikeClient("localhost", 3000);
When I changed localhost to the actual IP returned by vagrant ssh -c "ip addr"|grep 'global eth1', it works fine.
How do I connect with aql using custom parameters? I want to pass the IP address and port as parameters to aql. Any suggestions?
$ aql --help
https://www.aerospike.com/docs/tools/aql/index.html discusses all the various command line options.
$ aql -h a.b.c.d -p 1234
Another possibility is that Aerospike is running on a custom port instead of the default 3000; in that case, pass the port explicitly, for example: aql -p 4000
Hope this helps.
In my case, it seems the port was not being freed even after exiting the vagrant console.
I tried closing all the terminal windows and starting again, but no luck.
Finally, restarting the system resolved the issue.

How do I "dockerize" a redis service using phusion/baseimage-docker

I am getting started with Docker, and I am trying it out by "dockerizing" a simple redis service using Phusion's baseimage. On its website, baseimage says:
You can add additional daemons (e.g. your own app) to the image by
creating runit entries.
Great, so I first started this image interactively with a cmd of /bin/bash, installed redis-server via apt-get, created a "redis-server" directory in /etc/service, and added a run file that reads as follows:
#!/bin/sh
exec /usr/bin/redis-server /etc/redis/redis.conf >> /var/log/redis.log 2>&1
I ensured that daemonize was set to "no" in the redis.conf file.
I committed my changes, and then started my newly created image with the following:
docker run -p 6379:6379 <MY_IMAGE>
I see this output:
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 98
I then run
boot2docker ip
It gives me back an IP address. But when I run, from my mac,
redis-cli -h <IP>
It cannot connect. The same happens with
telnet <IP> 6379
I ran a docker ps and see the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7bd2dXXXXXX myuser/redis:latest "/sbin/my_init" 11 hours ago Up 2 minutes 0.0.0.0:6379->6379/tcp random_name
Can anyone suggest what I have done wrong when attempting to dockerize a simple redis service using phusion's baseimage?
It was because I did not comment out the
bind 127.0.0.1
line in the redis.conf file.
Now it works!
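For reference, the relevant lines in redis.conf end up looking like this (a sketch; on newer Redis versions, 3.2 and later, you may also need to disable protected-mode to accept non-local connections without auth):

```
# bind 127.0.0.1    <- commented out so redis listens on all interfaces
daemonize no
```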

How to create a cloud9 SSH workspace with dreamhost VPS

I have already installed node.js (v0.10.30) and npm. I can establish an SSH connection between my Mac and the DreamHost VPS via terminal, but I can't do it in Cloud9. Can someone help me, please?
Run the Cloud9 server with:
./server.js -p 8080 -l 0.0.0.0 -a :
--settings Settings file to use
--help Show command line options.
-t Start in test mode
-k Kill tmux server in test mode
-b Start the bridge server - to receive commands from the cli [default: false]
-w Workspace directory
--port Port
--debug Turn debugging on
--listen IP address of the server
--readonly Run in read only mode
--packed Whether to use the packed version.
--auth Basic Auth username:password
--collab Whether to enable collab.
--no-cache Don't use the cached version of CSS
So you can use your own VPS; just change 0.0.0.0 to your server's IP.
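For example, to start the server on the VPS listening on its public interface with basic auth and a chosen workspace directory (a sketch; the credentials and path are placeholders):

```shell
./server.js -p 8080 -l 0.0.0.0 -a myuser:mypassword -w ~/my-workspace
```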