Is there a possibility of configuring via CLI and not by the web interface when installing Centreon? - centreon-api

I am trying to use only the CLI to install Centreon; I don't want to use the web interface. (I am trying to create an Ansible role that installs Centreon.)
Is there a method to do the web interface part via the CLI?

Centreon CLAPI aims to offer (almost) all the features that are available on the user web interface in terms of configuration, through a command-line interface.
The main features are:
Add/Delete/Update objects such as hosts, services, host templates, host groups, contacts, etc.
Generate configuration files
Test configuration files
Move configuration files to monitoring pollers
Restart monitoring pollers
Import and export objects
All actions in Centreon CLAPI will require authentication, so your commands will always start like this:
# cd /usr/share/centreon/bin
# ./centreon -u admin -p centreon [...]
Obviously, the -u option is for the username and the -p option is for the password. The password can be given in clear text or as the encrypted value stored in the database.
Here is an example for a HOST object (Object name: HOST)
In order to list available hosts, use the SHOW action:
[root@centreon ~]# ./centreon -u admin -p centreon -o HOST -a show
id;name;alias;address;activate
82;sri-dev1;dev1;192.168.2.1;1
83;sri-dev2;dev2;192.168.2.2;1
84;sri-dev3;dev3;192.168.2.3;0
In order to add a host, use the ADD action:
[root@centreon ~]# ./centreon -u admin -p centreon -o HOST -a ADD -v "test;Test host;127.0.0.1;generic-host;central;Linux"
Required parameters:
Order  Description
1      Host name
2      Host alias
3      Host IP address
4      Host templates; for multiple definitions, use the delimiter |
5      Instance name (poller)
6      Hostgroup; for multiple definitions, use the delimiter |
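For example, a host with two templates and two hostgroups might be added like this (the template and hostgroup names are just placeholders; note the | delimiter from the table above):
[root@centreon ~]# ./centreon -u admin -p centreon -o HOST -a ADD -v "test2;Test host 2;192.168.2.10;generic-host|Linux-SNMP;central;Linux-Servers|Databases"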
In order to delete a host, use the DEL action:
[root@centreon ~]# ./centreon -u admin -p centreon -o HOST -a DEL -v "test"
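The configuration features listed above (generate, test, move, restart) are also exposed as CLAPI actions; the action names below are the ones documented for CLAPI, but double-check them against your Centreon version. Here 1 is the poller ID:
[root@centreon ~]# ./centreon -u admin -p centreon -a POLLERGENERATE -v 1
[root@centreon ~]# ./centreon -u admin -p centreon -a POLLERTEST -v 1
[root@centreon ~]# ./centreon -u admin -p centreon -a CFGMOVE -v 1
[root@centreon ~]# ./centreon -u admin -p centreon -a POLLERRESTART -v 1
APPLYCFG is documented as doing all of the above in one step:
[root@centreon ~]# ./centreon -u admin -p centreon -a APPLYCFG -v 1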
You can find all the CLAPI instructions in the official documentation: https://documentation.centreon.com/docs/centreon/en/19.04/api/clapi/index.html
I also found a useful Ansible Centreon playbook on Github: https://github.com/centreon/centreon-iac-ansible
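Since the end goal is an Ansible role, here is a minimal sketch of driving CLAPI from Ansible with the command module (centreon_server is a placeholder group from your own inventory and the host values are made up; a real role would also set changed_when/failed_when):
ansible centreon_server -m command -a \
  "/usr/share/centreon/bin/centreon -u admin -p centreon -o HOST -a ADD -v 'db1;DB server;192.168.2.20;generic-host;central;Linux'"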

Related

Ansible unable to create folder on localhost with different user

I'm executing an Ansible playbook as appuser, whereas I wish to create a folder as user webuser on localhost.
SSH keys are set up for webuser on my localhost, so after logging in as appuser I can simply run ssh webuser@localhost to switch to webuser.
Note: I do not have sudo privileges, so I cannot use sudo to switch from appuser to webuser.
Below is my playbook, which runs as appuser but needs to create a folder 04May2020 on localhost as webuser:
- name: "Play 1"
hosts: localhost
remote_user: "webuser"
vars:
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
ansible_ssh_private_key_file: /app/misc_automation/ssh_keys_id_rsa
tasks:
- name: create folder for today's print
file:
path: "/webWeb/htdocs/print/04May2020"
state: directory
remote_user: webuser
However, the output shows that the folder is created as appuser instead of webuser. See the output below showing the connection established as appuser instead of webuser:
ansible-playbook /app/Ansible/playbook/print_oracle/print.yml -i /app/Ansible/playbook/print_oracle/allhosts.hosts -vvv
TASK [create folder for today] ***********************************
task path: /app/Ansible/playbook/print_oracle/print.yml:33
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
Can you please suggest whether this is possible without sudo?
Putting all my comments together in a comprehensive answer.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
This indicates that you are connecting to localhost through the local connection plugin, either because you explicitly re-declared the host as such or because you are using the implicit localhost. From the discussion, you are in the second situation.
When using the local connection plugin, as indicated in its documentation, the remote_user is ignored. Trying to change the user has no effect, as you can see in the test run below (uids changed):
# Check we are locally running as user1
$ id -a
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Running the same command through ansible returns the same result
$ ansible localhost -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Trying to change the remote user has no effect
$ ansible localhost -u whatever -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
Without changing your playbook and/or inventory, the only solution is to launch the playbook as the user who needs to create the directory.
Since you have ssh available, another solution is to declare a new host that you will use only for this purpose, which will target the local IP through ssh. (Note: you can explicitly declare localhost like this, but then all connections will go through ssh, which might not be what you want.)
Somewhere at the top of your inventory, add the line:
localssh ansible_host=127.0.0.1
And in your playbook, change the hosts line to:
hosts: localssh
Now the connection to your local machine will go through ssh and the remote_user will be obeyed correctly.
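You can check that the user switch works with a quick ad-hoc call through the new alias, reusing the key from your playbook; the id output should now show webuser instead of appuser:
ansible localssh -i /app/Ansible/playbook/print_oracle/allhosts.hosts \
  -u webuser --private-key /app/misc_automation/ssh_keys_id_rsa -a 'id -a'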
Another approach you can try is overriding the connection setting for localhost. To do this, in the directory from which you are running ansible commands, create a host_vars directory. In that sub-directory, create a file named localhost containing the line ansible_connection: smart.
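A minimal sketch of that layout, assuming you run your commands from the playbook directory:
mkdir -p host_vars
cat > host_vars/localhost <<'EOF'
ansible_connection: smart
EOF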

Kubespray with bastion and custom SSH port + agent forwarding

Is it possible to use Kubespray with a bastion but on a custom port and with agent forwarding? If it is not supported, what changes does one need to make?
Yes, always, since you can configure that at three separate levels: via the ~/.ssh/config of the user running Ansible, via the entire playbook with group_vars, or as inline config (that is, on the command line or in the inventory file).
The ssh config is hopefully straightforward:
Host 1.2.* *.example.com
    # ...or whatever pattern matches the target instances
    ProxyJump someuser@some-bastion:1234
    # and then the Agent should happen automatically, unless you mean
    # ForwardAgent yes
I'll speak to the inline config next, since it's a little simpler:
ansible-playbook -i whatever \
    -e '{"ansible_ssh_common_args": "-o ProxyJump=\"someuser@jump-host:1234\""}' \
    cluster.yaml
or via the inventory in the same way:
master-host-0 ansible_host=1.2.3.4 ansible_ssh_common_args="-o ProxyJump='someuser@jump-host:1234'"
Or via group_vars, which you can either add to an existing group_vars/all.yml or, if it doesn't exist, create that group_vars directory (containing the all.yml file) as a child of the directory containing your inventory file.
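For example, a minimal group_vars/all.yml carrying the same hypothetical jump host as in the inline examples could be created like this:
mkdir -p group_vars
cat > group_vars/all.yml <<'EOF'
ansible_ssh_common_args: "-o ProxyJump=someuser@jump-host:1234"
EOF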
If you have more complex ssh config than you wish to encode in the inventory/command-line/group_vars, you can also instruct the ansible-invoked ssh to use a dedicated config file via the ansible_ssh_extra_args variable:
ansible-playbook -e '{"ansible_ssh_extra_args": "-F /path/to/special/ssh_config"}' ...
In my case where I needed to access the hosts on particular ports, I just had to modify the host's ~/.ssh/config to be:
Host 10.40.45.102
    ForwardAgent yes
    User root
    ProxyCommand ssh -W %h:%p -p 44057 root@example.com
Host 10.40.45.104
    ForwardAgent yes
    User root
    ProxyCommand ssh -W %h:%p -p 44058 root@example.com
Here, 10.40.* are the internal IPs.
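With that config in place you can sanity-check the hop with verbose ssh, which prints the ProxyCommand it runs:
ssh -v 10.40.45.102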

Ansible connect to specific IP

I'm creating an integration tool that will rely on Ansible for some tasks.
One of them is to create users and change passwords on Linux servers.
I'm trying to tell Ansible to connect to a specific host IP and execute a command.
In a test, this command works just fine:
ansible all -i xx.xx.xx.xx, -m ping
Ansible connects to the given IP and runs "ping".
The problem is when I try to use the "user" module:
ansible all -i xx.xx.xx.xx, -m user "name=aaa update_password=always password='bbb'"
I get the error: "ERROR! Missing target hosts"
I've made a lot of attempts with variations, and it seems that the moment I put quotes in my command I always get this error... Putting the IP address between quotes changes nothing.
Any ideas on what is happening?
Thanks.
When specifying additional parameters for a module, use the -a flag.
Usage: ansible <host-pattern> [options]
Options:
-a MODULE_ARGS, --args=MODULE_ARGS
module arguments
Thus, change your command to:
ansible all -i xx.xx.xx.xx, -m user -a "name=aaa update_password=always password='bob'"
Note, I didn't specifically test this with the user module, but I did confirm the behavior with the debug module by using ansible all -i xx.xx.xx.xx, -m debug "msg=Hello" and it failed, then added the -a and it succeeded (I'm using version 2.0.2.0).
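One caveat: the user module's password parameter expects an already-hashed value rather than plain text, so you would normally generate a hash first and paste it in. A rough sketch (the -6 option assumes OpenSSL 1.1.1 or newer):
openssl passwd -6 'bbb'    # prints a SHA-512 crypt hash starting with $6$
ansible all -i xx.xx.xx.xx, -m user -a "name=aaa update_password=always password='<hash from above>'"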

Host-based ssh authentication failure with Chef

Using chef-12.1.2-1; all nodes are running CentOS 7.
I've set up host-based ssh authentication on my nodes and can successfully ssh between them without passwords.
I start up a Chef server by doing the following:
/opt/chef/bin/chef-zero -H <ip> -p 8889 -d
and try to bootstrap my nodes using knife bootstrap which takes me to a password prompt:
[root@node]# knife bootstrap <ip> -r <role>
Connecting to <ip>
Failed to authenticate root - trying password auth
Enter your password:
After doing some digging I found that knife uses the Ruby implementation of SSH, the net-ssh-multi gem. I can't find specifically why this wouldn't work with host-based authentication.
Why is it prompting me for a password and not using my host-based authentication?
Could you try HostbasedAuthentication as an ssh flag passed into knife?
knife bootstrap <ip> -r <role> -a HostbasedAuthentication

How do I set up passwordless ssh on AWS

How do I set up passwordless ssh between nodes on an AWS cluster?
The following steps to set up passwordless authentication have been tested thoroughly on CentOS and Ubuntu.
Assumptions:
You already have access to your EC2 machine, maybe using the pem key, or you have credentials for a unix user with root permissions.
You have already set up RSA keys on your local machine. The private key and public key are available at "~/.ssh/id_rsa" and "~/.ssh/id_rsa.pub" respectively.
Steps:
Log in to your EC2 machine as the root user.
Create a new user
useradd -m <yourname>
sudo su <yourname>
cd
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
Append the contents of the file ~/.ssh/id_rsa.pub on your local machine to ~/.ssh/authorized_keys on the EC2 machine.
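One way to do that from your local machine, assuming you still reach the instance with the pem key as root (the key file name and hostname are placeholders):
cat ~/.ssh/id_rsa.pub | ssh -i your-key.pem root@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com \
  "cat >> /home/<yourname>/.ssh/authorized_keys"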
chmod -R 700 ~/.ssh
chmod 600 ~/.ssh/*
Make sure SSH login is permitted by the machine. In /etc/ssh/sshd_config, make sure the line containing "PasswordAuthentication yes" is uncommented. Restart the sshd service if you make any change to this file:
service sshd restart # On Centos
service ssh restart # On Ubuntu
Your passwordless login should work now. Try the following on your local machine:
ssh -A <yourname>@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com
Making yourself a superuser: open /etc/sudoers and make sure the following two lines are uncommented:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
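A safer way to open /etc/sudoers is through visudo, which syntax-checks the file before saving:
visudo    # edits /etc/sudoers and refuses to save a file with syntax errors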
Add yourself to the wheel group:
usermod -aG wheel <yourname>
This may help someone.
Copy the pem file onto the machine, then copy the content of the pem file to the ~/.ssh/id_rsa file. You can use the command below or your own:
cat my.pem > ~/.ssh/id_rsa
Try ssh localhost; it should work, and the same with the other machines in the cluster.
How I made passwordless ssh work between two instances is the following:
Create EC2 instances – they should be in the same subnet and have the same security group.
Open ports between them – make sure the instances can communicate with each other. Use the default security group, which has one rule relevant to this case:
Type: All Traffic
Source: Custom – id of the security group
Log in to the instance from which you want to connect to the other instance.
Run:
ssh-keygen -t rsa -N "" -f /home/ubuntu/.ssh/id_rsa
to generate a new rsa key.
Copy your private AWS key as ~/.ssh/my.key (or whatever name you want to use)
Make sure you change the permissions to 600:
chmod 600 .ssh/my.key
Copy the public key to the instance you wish to connect to without a password:
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my.key ubuntu@10.0.0.X "cat >> ~/.ssh/authorized_keys"
If you now test passwordless ssh to the other machine, it should work:
ssh 10.0.0.X
You can use ssh keys as described here:
http://pkeck.myweb.uga.edu/ssh/