Icinga2: Run check on remote host instead of master - redis

I just updated to Icinga 2.8, which requires "the new" way of checking remote hosts, so I'm trying to get that to work.
On the master I added a folder in zones.d named after the remote host. I added a few checks, but they all seem to be executed on the master instead of the remote host.
For example: I need to monitor Redis. My redis.conf in /etc/icinga2/zones.d/remotehostname/redis.conf:
apply Service "Redis" {
  import "generic-service"
  check_command = "Redis"
  vars.notification["pushover"] = {
    groups = [ "ADMINS" ]
  }
  assign where host.name == "remotehostname"
}
A new service pops up in IcingaWeb but it errors out with:
execvpe(/usr/lib/nagios/nagios-plugins/check_redis_publish_subscribe.pl) failed: No such file or directory
Which makes sense, because that file does not exist on the master. It does exist on the remote host, however.
How do I get Icinga to execute this on the remote host and have that host return the output to the master?

You can add this to the service definition:
command_endpoint = host.name
Alternatively, you can create a zone for the remote host and add that zone to the Host object.
This may also help you:
NetWays Blog
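Combining the two, a minimal sketch of the service from the question with the check pinned to the agent. This assumes an Endpoint object named exactly like the host exists on the master, as is conventional for Icinga agent setups:

```
apply Service "Redis" {
  import "generic-service"
  check_command = "Redis"
  # Run the check on the agent instead of the master;
  # assumes an Endpoint with the same name as the host.
  command_endpoint = host.name
  assign where host.name == "remotehostname"
}
```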

Related

Unable to connect Steampipe tool with my personal postgresql database running locally on port 5432

I am trying to connect a locally installed Steampipe to my local PostgreSQL database container, which is running on port 5432, so that the output of my SQL queries is stored in my own PostgreSQL database. I don't want to use Steampipe's own database.
I have taken the following steps:
On my WSL Ubuntu terminal I ran:
export STEAMPIPE_WORKSPACE_DATABASE=postgresql://postgresUser:postgresPW@localhost:5432/postgresDB (following https://steampipe.io/docs/reference/env-vars/steampipe_workspace_database)
I checked the configuration inside Steampipe and made this change inside ~/.steampipe/config/default.spc:
options "database" {
  port = 5432        # any valid, open port number
  listen = "local"   # local, network
  # search_path = "" # comma-separated string
}
I added the line inside ~/.steampipe/db/14.2.0/data/postgresql.conf
listen_addresses = '*'
I added the line inside ~/.steampipe/db/14.2.0/data/pg_hba.conf
host all all 0.0.0.0/0
Still, I am unable to connect Steampipe to my local PostgreSQL database container running on port 5432.
On executing steampipe service start, I get the following result:
Error: cannot listen on port 5432
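As a side note, the # in the posted connection string looks like a mangled @; the URL form is user:password@host. A minimal sketch of the export, with the credentials from the question as placeholders, quoted so the shell doesn't interpret special characters in the password:

```shell
# Placeholder credentials from the question; single quotes keep the URL intact.
export STEAMPIPE_WORKSPACE_DATABASE='postgresql://postgresUser:postgresPW@localhost:5432/postgresDB'
echo "$STEAMPIPE_WORKSPACE_DATABASE"
```

Also note that steampipe service start launches Steampipe's embedded database, which will fail to bind port 5432 if your own PostgreSQL container already holds it; pick a different port in default.spc for the embedded service.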

Using Ansible on a GCP instance to connect to another instance: error

I have a server called master-instance-node and a server called slave-instance-node-1. On master-instance-node I have Ansible installed; I modified the /etc/ansible/hosts file and added the following:
[webservers]
slave-instance-node-1
Then I try the following command:
ansible webservers -a "w" -u USERNAME
but I get the following error:
slave-instance-node-1 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: POSSIBLE DNS SPOOFING DETECTED! #\r\n###########################################################\r\nThe ECDSA host key for slave-instance-node-1 has changed,\r\nand the key for the corresponding IP address XX.XXX.X.XX\r\nis unknown. This could either mean that\r\nDNS SPOOFING is happening or the IP address for the host\r\nand its host key have changed at the same time.\r\n###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #\r\n###########################################################\r\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\r\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\r\nIt is also possible that a host key has just been changed.\r\nThe fingerprint for the ECDSA key sent by the remote host is\nSHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.\r\nPlease contact your system administrator.\r\nAdd correct host key in /home/USERNAME/.ssh/known_hosts to get rid of this message.\r\nOffending ECDSA key in /home/USERNAME/.ssh/known_hosts:1\r\n remove with:\r\n ssh-keygen -f \"/home/USERNAME/.ssh/known_hosts\" -R \"slave-instance-node-1\"\r\nECDSA host key for slave-instance-node-1 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
}
I thought the known hosts file is updated automatically in GCP. What does this error mean and how do I fix it?
In addition to the other commenters, you may want to check which IPs your instances are using. If your DNS is configured for the external IPs, you may prefer a static external IP to avoid this error after an instance reboot. External addresses are ephemeral, so the issue can occur not only after redeployment but also after a reboot. You may be interested in this doc: https://cloud.google.com/compute/docs/ip-addresses#externaladdresses
Thanks to the comments on my question I was able to figure out the answer. First I had to remove the stale known-hosts entry with ssh-keygen -f "/home/USERNAME/.ssh/known_hosts" -R "slave-instance-node-1", and I also had to set export ANSIBLE_HOST_KEY_CHECKING=false.
Then, I had to add ansible_user=USERNAME next to the server name/IP in the /etc/ansible/hosts file. And finally I had to add private_key_file = /path/to/file inside the /etc/ansible/ansible.cfg file.
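The resulting files described above can be sketched as follows (USERNAME and the key path are placeholders from the question):

```ini
# /etc/ansible/hosts -- inventory with the connecting user
[webservers]
slave-instance-node-1 ansible_user=USERNAME

# /etc/ansible/ansible.cfg -- point Ansible at the private key
[defaults]
private_key_file = /path/to/file
```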

How can I know the current configuration of a running Redis instance?

I started a Redis instance using an rc.local script:
su - ec2-user -c redis-server /home/ec2-user/redis.conf
Even though in the configuration file I provided (/home/ec2-user/redis.conf) I specified
protected-mode no
connecting to the Redis instance still generates the following error message:
Error: Ready check failed: DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
What can I do to check the current configuration of a running Redis?
Connect locally to your Redis instance and run:
127.0.0.1:6379> CONFIG GET protected-mode
You'll get the current running value.
You can also run your server with more verbose logging:
redis-server /etc/myredis.conf --loglevel verbose

IBM Container, "Connection refused" when SSHing to the public IP

I'm using IBM Bluemix and Docker.
[My goal] I want to create a container. I found from the website that we can use SSH to log in as the "root" user, so I guess I could also install Maven and MySQL on this container. Though an IBM Container is a Docker-based file system, we should be able to treat the container just like a Linux virtual machine (please correct me if wrong).
I found a similar question here, where njleviere said that port 22 is closed. How do I determine if a port is open or closed? If it's closed, how do I open it? Also, I think that port 22 is actually open in my case.
[Problem Description] I mainly followed this website, but I'm using Ubuntu and SSH instead of PuTTY.
First, I created the key file with ssh-keygen. For the filename, I tried "cloud" and "cloud.key". Both failed, so I think the name of the key file does not matter (please correct me if wrong).
I opened the .pub key. There is a "yu@yu-VirtualBox" tag at the end of the key file. I am not sure if I should include this tag, so I tried several things:
ssh-rsa KeyString yu@yu-VirtualBox
ssh-rsa KeyString
KeyString
All failed.
Then I created the container. I chose "ibmliberty". Given the public IP I created before (already unbound from any container), I added 22 to the public ports and pasted "cloud.pub" into the SSH key field. After several minutes, the container started to run. The following two links are screenshots of the Bluemix console while creating the container.
Then I could see the default page for port 9080 in a browser at https://169.44.124.121:9080. It said "Welcome to Liberty" and "WebSphere Application Server V8.5.5.9".
Then I typed (cloud and cloud.pub are the key files):
ssh -i cloud root@169.44.124.121
Then I get:
ssh: connect to host 169.44.124.121 port 22: Connection refused
I used cf ic ps to check the port. It looks fine.
I see 169.44.124.121:22->22/tcp under the PORTS.
Also, I see many programmers use a Dockerfile to launch IBM Containers. Should I switch to a Dockerfile instead of this IBM console web interface?
The default ibmliberty image on Bluemix doesn't include sshd. You could add it: you'll need to add supervisord, sshd, and the appropriate configuration for both to your Dockerfile.
Alternatively, if what you really want is just a secure command-line connection into your container, you can use cf ic exec or docker exec (e.g. cf ic exec -ti mycontainername bash). That will give you a command line without the overhead (and security exposure) of a running sshd.
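If you do decide to bake sshd into the image, a rough Dockerfile sketch might look like this. Everything here is an assumption, not tested against Bluemix: the base image name, the Debian/Ubuntu package names, and supervisord.conf (a hypothetical file you would write to start both sshd and the Liberty server):

```dockerfile
# Sketch only -- assumes a Debian/Ubuntu-based ibmliberty image.
FROM ibmliberty
RUN apt-get update && apt-get install -y openssh-server supervisor && \
    mkdir -p /var/run/sshd
# Hypothetical supervisord config declaring programs for sshd and Liberty.
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord", "-n"]
```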

Error: Could not connect to Redis at redis:6379: Name or service not known

I am trying to connect to the container named redis, which is running right now, but I get the error Could not connect to Redis at redis:6379: Name or service not known. Can anyone please help me figure out the issue and fix it?
This is because the two containers are not on the same network. Add a networks property under the service name and make sure it is the same for both:
redis:
  networks:
    - redis-net
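A fuller sketch of the compose file (the service and image names other than redis are placeholders for whatever is connecting to it):

```yaml
services:
  app:              # placeholder: the container that connects to Redis
    image: myapp    # placeholder image name
    networks:
      - redis-net
  redis:
    image: redis
    networks:
      - redis-net
networks:
  redis-net:        # user-defined network; gives containers the DNS name "redis"
```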
Naming the container doesn't alter your hosts file or DNS, and depending on how you ran the container it may not be accessible via the standard port, as Docker does port translation.
Run docker inspect redis and examine the ports output; it will tell you which port it is accessible on, as well as the IP. Note, however, that it will only be connectable over that IP from that host. To access it from outside the host you will need to use the port from the above command and the host's IP address. That assumes your local firewall rules allow it, which is beyond the scope of this site.
Try the command below:
src/redis-cli -h localhost -p 6379