I am brand new to learning Ansible. Here is a pretty easy example.
I have computer A, where I will be running playbooks from.
And 10 other host machines that need to be configured. My question is: do I just need to put the public SSH key of computer A on each of the 10 hosts in ~/.ssh/authorized_keys?
I guess my understanding of how to efficiently set up SSH connections between my main computer and all the clients is a little fuzzy. Any help would be appreciated here.
You create a file called hosts with this content:
[test-vms]
10.0.0.100 ansible_ssh_pass='password' ansible_ssh_user='username'
In the above hosts file, leave off ansible_ssh_pass='password' if you are using SSH keys. Then you can create a playbook with the commands and call it as shown below. The first line of the playbook needs to have the hosts declaration.
---
- hosts: test-vms
  tasks:
    - name: "This is a test task"
      command: /bin/hostname
Finally, you call the playbook like this
ansible-playbook -i <hosts-file> <playbook.yaml>
Ansible simply uses SSH, so you can either copy the public key as you describe or use password authentication with the --user and --ask-pass flags.
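For example, to push your key once and then run the playbook with key-based authentication (IP and username taken from the sample inventory above):
ssh-copy-id username@10.0.0.100
ansible-playbook -i hosts playbook.yaml
Or, sticking with password authentication:
ansible-playbook -i hosts playbook.yaml --user username --ask-pass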
Yes. As far as connections to hosts go, Ansible sets up an SSH connection between the master machine and the host machines. You have to accept the SSH host key fingerprints of the end machines, but you can always skip the "Are you sure you want to continue connecting (yes/no/[fingerprint])" step, i.e., adding the fingerprints to ~/.ssh/known_hosts, by setting host_key_checking = False.
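That setting goes in the [defaults] section of ansible.cfg:
[defaults]
host_key_checking = False
Keep in mind that disabling host key checking trades away protection against impersonated hosts, so it is best reserved for lab setups like this one.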
I found this great video on initial Ansible setup - https://youtu.be/-Q4T9wLsvOQ - maybe it can help!
I have an ansible playbook and I run it:
sudo ansible-playbook -i hosts startelk.yml -vvv
Every time after I change the hosts file, running the same playbook results in "Failed to connect to the host via ssh". If I run
ansible all -m ping
first and then the playbook command, the playbook gets successfully started.
Does anyone know why I have to run ping each time after changing the hosts (or some other) file before the SSH connection for the playbook works? I don't want to have to run ping every time I change something in Ansible.
Thanks!
It's not a good idea to run "sudo ansible-playbook ...". That way the controller connects to the hosts as root, and best practice is not to allow root SSH connections.
Best practice is to:
- run ansible-playbook as a normal user
- configure remote_user
- escalate the privilege with become and become_user
Read more at Understanding Privilege Escalation.
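A minimal play following that pattern might look like this (the user name ansible is illustrative):
---
- hosts: all
  remote_user: ansible   # ordinary, non-root SSH user
  become: true           # escalate privileges for the tasks
  become_user: root
  tasks:
    - name: Run a command as root
      command: /bin/hostname
You would then call it without sudo: ansible-playbook -i hosts startelk.yml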
I would like to run an Ansible playbook on a target host, connecting through multiple intermediate hosts. The scenario looks similar to the one depicted in the picture.
I partially solved the issue by creating an ssh_config file in the Ansible project directory:
Host IP_HOST_N
  HostName IP_HOST_N
  ProxyJump Username1@IP_HOST_1:22,Username2@IP_HOST_2:22
  User UsernameN
and defining this in the ansible.cfg in the Ansible project directory:
[ssh_connection]
ssh_args = -F ssh_config
The problem is that I need to automatically supply the SSH username and password for each transient host and for the target host, and I don't know how to automate that. Moreover, Python may not be installed on every transient node.
I found a reasonably good workaround. Following the scenario described in the question, we create an SSH tunnel up to the transient host that can directly reach the target host, and we bind a local port with the -L flag:
ssh -J user_1@transient_host1:port_1 -p port_2 user_2@transient_host2 -L LOCAL_PORT:TARGET_HOST_IP:TARGET_HOST_PORT
Then we can enter the target host directly through the local binding:
ssh user_target_host@localhost -p LOCAL_PORT
In this way, we can run Ansible playbooks against the local binding by configuring the Ansible connection variables accordingly:
ansible_host: localhost
ansible_user: user_target_host
ansible_port: LOCAL_PORT
ansible_password: password_target_host
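In inventory form, those variables could look like this (all values illustrative):
[target]
target_host ansible_host=localhost ansible_port=LOCAL_PORT ansible_user=user_target_host ansible_password=password_target_host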
I have set up an Ansible environment with a control machine (CentOS) and 3 other remote hosts (CentOS). Everything is fine with regards to the actual functioning, but I want it to work a little more seamlessly, I guess.
I set up SSH authentication by running ssh-keygen on my master server (with a passphrase) and then used ssh-copy-id to copy the key to all 3 hosts, and it works.
Now each time I run an Ansible command against these servers it asks me for the passphrase, and only then does the command complete. I don't want that to happen. I tried defining the key in my hosts file as you see below, but that hasn't worked. I even tried with vars and it doesn't work that way either. When I run ansible servers -m ping it asks me for the SSH passphrase and then it runs...
[servers]
10.0.0.1
ansible_ssh_user=root ansible_ssh_private_key_file=/home/ansible/.ssh/id_rsa
Thanks
A
Now each time I run my ansible command to these servers it asks me for passphrase and only then the command completes. I dont want that to happen.
Generate your ssh key without passphrase.
or
Set up an ssh key agent.
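A minimal agent setup, assuming the key is the default ~/.ssh/id_rsa:
eval "$(ssh-agent)"     # start the agent and export its environment variables
ssh-add ~/.ssh/id_rsa   # asks for the passphrase once and caches the key
ansible servers -m ping # no more passphrase prompts in this shell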
This is a bit off-topic for SO
I'm sorry to have to ask this question, but I feel like I've tried every answer so far on SO with no luck.
I have my local machine and my remote server. Jenkins is up and running on my server.
If I open up a terminal and do something like scp /path/to/file user@server:/path/to/wherever then my SSH works fine without requiring a password.
If I run this command inside of my Jenkins job I get 'Host Key Verification Failed'
So I know my SSH is working correctly the way I want, but why can't I get Jenkins to use this SSH key?
The interesting thing is, it did work fine when I first set up Jenkins and the key; then I think I restarted my local machine, or restarted Jenkins, and it stopped working. It's hard to say exactly what caused it.
I've also tried several options regarding ssh-agent and ssh-add but those don't seem to work.
I verified the local machine's .pub key is on the server in the /user/.ssh folder and is also in the authorized_keys file. The folder is owned by user.
Any thoughts would be much appreciated and I can provide more info about my problem. Thanks!
Update:
Per Kenster's suggestion I did su - jenkins, then ssh server, and it asked me to add the host to known hosts. So I thought this was a step in the right direction, but the same problem persisted afterward.
Something I did not notice before: I can ssh server without a password when using my myUsername account, but if I switch to the jenkins user, it asks me for a password when I do ssh server.
I also tried ssh-keygen -R server as suggested to no avail.
Try
su jenkins
ssh-keyscan -t rsa YOUR-HOSTNAME >> ~/.ssh/known_hosts
The SSH Slaves Plugin doesn't support ECDSA, so the command above adds the host's RSA key for the ssh-slave.
Host Key Verification Failed
ssh is complaining about the remote host key, not the local key that you're trying to use for authentication.
Every SSH server has a host key which is used to identify the server to the client. This helps prevent clients from connecting to servers which are impersonating the intended server. The first time you use ssh to connect to a particular host, ssh will normally prompt you to accept the remote host's host key, then store the key locally so that ssh will recognize the key in the future. The widely used OpenSSH ssh program stores known host keys in a file .ssh/known_hosts within each user's home directory.
In this case, one of two things is happening:
The user ID that Jenkins is using to run these jobs has never connected to this particular remote host before, and doesn't have the remote host's host key in its known_hosts file.
The remote host key has changed for some reason, and it no longer matches the key which is stored in the Jenkins user's known_hosts file.
You need to update the known_hosts file for the user which jenkins is using to run these ssh operations. You need to remove any old host key for this host from the file, then add the host's new host key to the file. The simplest way is to use su or sudo to become the Jenkins user, then run ssh interactively to connect to the remote server:
$ ssh server
If ssh prompts you to accept a host key, say yes, and you're done. You don't even have to finish logging in. If it prints a big scary warning that the host key has changed, run this to remove the existing host from known_hosts:
$ ssh-keygen -R server
Then rerun the ssh command.
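If you'd rather script it than log in interactively, the same two steps can be done non-interactively (hostname illustrative):
ssh-keygen -R server                      # remove any stale key for the host
ssh-keyscan server >> ~/.ssh/known_hosts  # append the host's current keys
Keep in mind that ssh-keyscan blindly trusts whatever key the host presents, so only use it on a network you trust.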
One thing to be aware of: you can't use a passphrase when you generate a key that you're going to use with Jenkins, because it gives you no opportunity to enter such a thing (seeing as it runs automated jobs with no human intervention).
I manually deploy websites through SSH; I manage source code on GitHub/Bitbucket. For every new site I'm currently generating a new keypair on the server and adding it to GitHub/Bitbucket, so that I can pull changes from the server.
I came across a feature in Capistrano that uses the local machine's key pair for pulling updates to the server: ssh_options[:forward_agent] = true
How can I do something like this and forward my local machine's keypair to the server I'm SSH-ing into, so that I can avoid adding keys to GitHub/Bitbucket for every new site?
This turned out to be very simple; the complete guide is here: Using SSH Forwarding
In essence, you need to create a ~/.ssh/config file, if it doesn't exist.
Then, add the hosts (either domain name or IP address) in the file and set ForwardAgent yes.
Sample Code:
Host example.com
  ForwardAgent yes
Makes SSH life a lot easier.
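To check that forwarding is actually active, list the agent's keys from the remote side (example.com taken from the sample config above):
ssh example.com ssh-add -l   # should print your local key(s) if forwarding works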
1. Create ~/.ssh/config.
2. Fill it with (host address is the address of the host you want to allow creds to be forwarded to):
Host [host address]
  ForwardAgent yes
3. If you haven't already run ssh-agent, run it:
ssh-agent
4. Take the output from that command and paste it into the terminal. This will set the environment variables that need to be set for agent forwarding to work. Optionally, you can replace steps 3 and 4 with:
eval "$(ssh-agent)"
5. Add the key you want forwarded to the ssh agent:
ssh-add [path to key if there is one]/[key_name].pem
6. Log into the remote host:
ssh -A [user]@[hostname]
From here, if you log into another host that accepts that key, it will just work:
ssh [user]@[hostname]
To use it simply with the default identity (id_rsa) you can use the following couple of commands:
ssh-add
ssh -A [username]@[server-address]
The configuration file is very helpful, but the real trick for agent forwarding is the ssh-add command. It seems this has to be run before any remote connection is made, and again after a restart of the computer. To add the key permanently, try the following solution from the user daminetreg:
Add private key permanently with ssh-add on Ubuntu
It is very useful:
ssh -i [private-key] -A [user]@[host]
You can put the command in .bash_aliases or another shell startup routine.
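For instance, a hypothetical alias in ~/.bash_aliases (key path, user, and host are placeholders):
alias jumpbox='ssh -i ~/.ssh/id_rsa -A user@host'   # always log in with agent forwarding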