My hosts:
➜ ansible cat hosts
[Production]
60.205.94.138
My ansible.cfg:
➜ ansible cat ansible.cfg
[Production]
60.205.94.138 ansible_ssh_private_key_file=/Users/yuanyuan/.ssh/yyb
My command, and its results:
The ssh command:
ssh-copy-id -i ~/.ssh/yyb.pub root@60.205.94.138
What is the problem?
You are using ansible.cfg incorrectly; the content you have in there belongs in your hosts file.
Try hosts:
[Production]
60.205.94.138 ansible_ssh_private_key_file=/Users/yuanyuan/.ssh/yyb
and:
$ ansible all -i hosts -u root -m ping
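For reference, ansible.cfg is for configuration defaults, not inventory; a minimal sketch of what could go there instead (the paths are assumptions based on your setup):
[defaults]
inventory = ./hosts
private_key_file = /Users/yuanyuan/.ssh/yyb
remote_user = root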
Using the latest Ansible 2.9.12 on Ubuntu 18.04.
I have followed the steps to set up passwordless ssh, and I have the same user on both the server and the node machine.
a@A:~> ssh-keygen -t rsa
a@A:~> ssh b@B mkdir -p .ssh
b@B's password:
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
b@B's password:
a@A:~> ssh b@B <- this works
$ ansible -m ping all <- this works
Trying to execute a basic ansible command:
$ ansible -b all -m apt -a "name=apache2 state=latest"
192.168.37.129 | FAILED! => {
    "msg": "Missing sudo password"
}
$ ansible -b all -m apt -a "name=apache2 state=latest" -kk
SSH password:
192.168.37.129 | FAILED! => {
    "msg": "Missing sudo password"
}
Is there anything I am missing here? Should I create a new/different user on the node machine?
I implemented these steps on both the server and the node machine:
$ sudo visudo #added these 2 lines
root ALL=(ALL) ALL
<user> ALL=(ALL) NOPASSWD:ALL
$ sudo nano /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes
$ sudo service sshd restart
That error went away
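As an alternative to the NOPASSWD sudoers entry, you can prompt for the become password at run time instead; a minimal sketch, assuming the remote user has sudo rights:
$ ansible -b -K all -m apt -a "name=apache2 state=latest"
BECOME password: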
This did work previously!
My deployment step in my pipeline SSHs onto a DO box & pulls the code from a docker registry. As mentioned, this worked previously; this was my deploy step in my .gitlab-ci.yml back then, which worked fine (inspiration from here, under Using SSH):
deploy:
stage: deploy
image: docker:stable-dind
only:
- master
services:
# Specifying the DinD version here as the latest DinD version introduced a timeout bug
# Highlighted here: https://forum.gitlab.com/t/gitlab-com-ci-stuck-on-docker-build/34401/2
- docker:19.03.5-dind
variables:
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: ""
environment:
name: production
when: manual
before_script:
- mkdir -p ~/.ssh
- echo "$DEPLOYMENT_SERVER_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
- chmod 600 ~/.ssh/id_rsa
- eval "$(ssh-agent -s)"
- ssh-add ~/.ssh/id_rsa
- ssh-keyscan -H $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
script:
- ssh -vvv gitlab@${DEPLOYMENT_SERVER_IP}
"docker stop ${CI_PROJECT_NAME};
docker rm ${CI_PROJECT_NAME};
docker container prune -f;
docker rmi ${CI_REGISTRY}/${CI_PROJECT_PATH};
docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};
docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest;
docker run -d -p ${HTTP_PORT}:${HTTP_PORT} --restart=always -m 800m --init --name ${CI_PROJECT_NAME} --net ${NETWORK_NAME} --ip ${NETWORK_IP} ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest;"
Just now I attempted to run the deploy step & it failed, coming back with this error:
...
$ mkdir -p ~/.ssh
$ echo "${DEPLOYMENT_SERVER_PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 22
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
$ ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
# xxx.xxx.xxx.xxx:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# xxx.xxx.xxx.xxx:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# xxx.xxx.xxx.xxx:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# xxx.xxx.xxx.xxx:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# xxx.xxx.xxx.xxx:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
$ ssh gitlab@${DEPLOYMENT_SERVER_IP} "docker stop ${CI_PROJECT_NAME}; docker rm ${CI_PROJECT_NAME}; docker container prune -f; docker rmi ${CI_REGISTRY}/${CI_PROJECT_PATH}; docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}; docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest; docker run -d -p ${PORT}:${PORT} --restart always -m 2g --init --name ${CI_PROJECT_NAME} --net ${NETWORK_NAME} --ip ${NETWORK_IP} ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest;"
ssh: connect to host xxx.xxx.xxx.xxx port 22: Connection refused
Running after_script
00:02
Uploading artifacts for failed job
00:01
ERROR: Job failed: exit code 255
Steps I took to set this up originally (roughly sketched below):
Run ssh-keygen -t rsa -b 2048 on DO box (with no password)
Added public key into authorized_keys on the DO box
Copy the private key into a CI variable DEPLOYMENT_SERVER_PRIVATE_KEY
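Roughly, those steps on the DO box looked like this (a sketch; the key path and user are assumptions):
# as the gitlab user on the DO box
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -N ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# then paste the contents of the private key ~/.ssh/id_rsa into the CI variable DEPLOYMENT_SERVER_PRIVATE_KEY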
I know the port is open for SSH as I am able to SSH from my local machine in as the gitlab user. I have now changed my deployment step (based on comments from here, this article, & this one) to:
deploy:
stage: deploy
image: docker:stable-dind
only:
- master
services:
# Specifying the DinD version here as the latest DinD version introduced a timeout bug
# Highlighted here: https://forum.gitlab.com/t/gitlab-com-ci-stuck-on-docker-build/34401/2
- docker:19.03.5-dind
variables:
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: ""
environment:
name: production
when: manual
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- echo "$DEPLOYMENT_SERVER_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- cat ~/.ssh/config
- echo ${CI_REGISTRY_USER}
- ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
script:
- ssh -vvv gitlab@${DEPLOYMENT_SERVER_IP}
"docker stop ${CI_PROJECT_NAME};
docker rm ${CI_PROJECT_NAME};
docker container prune -f;
docker rmi ${CI_REGISTRY}/${CI_PROJECT_PATH};
docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};
docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest;
docker run -d -p ${HTTP_PORT}:${HTTP_PORT} --restart=always -m 800m --init --name ${CI_PROJECT_NAME} --net ${NETWORK_NAME} --ip ${NETWORK_IP} ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest;"
Still to no avail! The verbose logging of ssh spat out:
...
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
/usr/bin/ssh-agent
$ eval $(ssh-agent -s)
Agent pid 18
$ echo "$DEPLOYMENT_SERVER_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
Identity added: (stdin) ((stdin))
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ cat ~/.ssh/config
Host *
StrictHostKeyChecking no
$ echo ${CI_REGISTRY_USER}
gitlab-ci-token
$ ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
# xxx.209.184.138:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# xxx.209.184.138:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# xxx.209.184.138:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# xxx.209.184.138:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
# xxx.xxx.xxx.xxx:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
$ ssh -vvv gitlab@${DEPLOYMENT_SERVER_IP}
OpenSSH_8.3p1, OpenSSL 1.1.1g 21 Apr 2020
debug1: Reading configuration data /root/.ssh/config
debug1: /root/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: resolve_canonicalize: hostname 134.xxx.xxx.xxx is address
Pseudo-terminal will not be allocated because stdin is not a terminal.
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug2: ssh_connect_direct
debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
debug1: connect to address xxx.xxx.xxx.xxx port 22: Connection refused
ssh: connect to host xxx.xxx.xxx.xxx port 22: Connection refused
ERROR: Job failed: exit code 255
I also added the -T option suggested here to disable pseudo-terminal allocation, but all that did was remove the pseudo-terminal line from the logs.
EDIT
Looking at the logs on the DO box (/var/log/auth.log), I've got the error:
Jun 22 15:53:37 exchange-apis sshd[16159]: Connection closed by 35.190.162.232 port 49750 [preauth]
Jun 22 15:53:38 exchange-apis sshd[16160]: Connection closed by 35.190.162.232 port 49754 [preauth]
Jun 22 15:53:38 exchange-apis sshd[16162]: Connection closed by 35.190.162.232 port 49752 [preauth]
Jun 22 15:53:38 exchange-apis sshd[16163]: Unable to negotiate with 35.190.162.232 port 49756: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jun 22 15:53:38 exchange-apis sshd[16161]: Unable to negotiate with 35.190.162.232 port 49758: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Googling this error, the common cause seems to be OpenSSH dropping support for DSA keys. However, I'm not sure why this would affect me, as I generated an RSA key pair. Anyway, running dpkg --list | grep openssh spits out:
ii openssh-client 1:7.6p1-4ubuntu0.3 amd64 secure shell (SSH) client, for secure access to remote machines
ii openssh-server 1:7.6p1-4ubuntu0.3 amd64 secure shell (SSH) server, for secure access from remote machines
ii openssh-sftp-server 1:7.6p1-4ubuntu0.3 amd64 secure shell (SSH) sftp server module, for SFTP access from remote machines
& sshd -v spits out:
OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
Nevertheless, I worked off the answers here & here, so my deploy stage is now:
deploy:
stage: deploy
image: docker:stable-dind
only:
- master
services:
# Specifying the DinD version here as the latest DinD version introduced a timeout bug
# Highlighted here: https://forum.gitlab.com/t/gitlab-com-ci-stuck-on-docker-build/34401/2
- docker:19.03.5-dind
variables:
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: ""
environment:
name: production
when: manual
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- mkdir -p ~/.ssh
- echo "$DEPLOYMENT_SERVER_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
- chmod 600 ~/.ssh/id_rsa
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\tHostkeyAlgorithms +ssh-dss\n\tPubkeyAcceptedKeyTypes +ssh-dss\n\n" > ~/.ssh/config'
- cat ~/.ssh/config
- ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
script:
- ssh -oHostKeyAlgorithms=+ssh-dss gitlab@${DEPLOYMENT_SERVER_IP} ls
Still no luck with that, & I get the same error in the output of the runner & in the log on the DO box. Any ideas?
Ideally, if you can log on to the DO box, you would stop the ssh service and launch /usr/bin/sshd -de, in order to establish a debug session on the SSH daemon side, with logs written to stderr (instead of system messages).
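For example, something like this on the DO box (a sketch; the sshd path and service name may differ on your distro):
sudo systemctl stop ssh    # stop the regular daemon
sudo /usr/sbin/sshd -d -e  # run a one-off sshd in debug mode, logging to stderr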
But if you cannot, at least try and generate an rsa key without passphrase, for testing. That means you don't need the ssh-agent.
And try an ssh -Tv gitlab@${DEPLOYMENT_SERVER_IP} ls to see what log is produced there.
Try with a classic PEM format
ssh-keygen -t rsa -P "" -m PEM
After editing the pipeline a bit more, I've noticed that it is actually this line that is causing the issue: ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
It can be the case if it leads to a badly formatted ~/.ssh/known_hosts, especially if the ${DEPLOYMENT_SERVER_IP} is not correctly set.
Try adding an echo "DEPLOYMENT_SERVER_IP='${DEPLOYMENT_SERVER_IP}'" and a cat ~/.ssh/known_hosts command to the before_script section to learn more, as sketched below.
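For instance, something like this in the before_script (a sketch of just the debug lines):
before_script:
  # ... existing key setup ...
  - echo "DEPLOYMENT_SERVER_IP='${DEPLOYMENT_SERVER_IP}'"
  - ssh-keyscan -H ${DEPLOYMENT_SERVER_IP} >> ~/.ssh/known_hosts
  - cat ~/.ssh/known_hosts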
I'm trying to run an Ansible role on multiple servers, but I get an error:
fatal: [192.168.0.10]: UNREACHABLE! => {"changed": false, "msg":
"Failed to connect to the host via ssh.", "unreachable": true}
My /etc/ansible/hosts file looks like this:
192.168.0.10 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.11 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.12 ansible_sudo_pass='passphrase' ansible_ssh_user=user
I have no idea what's going on - everything looks fine - I can login via SSH, but ansible ping returns the same error.
The log from verbose execution:
<192.168.0.10> ESTABLISH SSH CONNECTION FOR USER: user
<192.168.0.10> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.10 '/bin/sh -c '"'"'( umask 22 && mkdir -p "`echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829`" && echo "`echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829`" )'"'"''
Can you help me somehow? If I have to use ansible in local mode (-c local), then it's useless.
I've tried deleting ansible_sudo_pass and ansible_ssh_user, but it didn't help.
You need to set ansible_ssh_pass as well (or an ssh key); for example, I am using this in my inventory file:
192.168.33.100 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant
After that I can connect to the remote host:
ansible all -i tests -m ping
With the following result:
192.168.33.100 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Hope that helps you.
EDIT: ansible_ssh_pass & ansible_ssh_user don't work in recent versions of Ansible. They have changed to ansible_user & ansible_password.
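On newer Ansible the same inventory line would therefore look something like this (a sketch with the renamed variables):
192.168.33.100 ansible_password=vagrant ansible_user=vagrant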
mkdir /etc/ansible
cat > hosts
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
Go to your playbook directory and run ansible all -m ping or ansible "server-group-name" -m ping
I had this issue, but it was for a different reason than was documented in other answers. My host that I was trying to deploy to was only available by going through a jump box. Originally, I thought that it was because Ansible wasn't recognizing my SSH config file, but it was. The solution for me was to make sure that the user that was present in the SSH config file matched the user in the Ansible playbook. That resolved the issue for me.
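In other words, something like this in ~/.ssh/config, where the User lines have to match the ansible_user used by the playbook (hostnames here are hypothetical):
Host jumpbox
    HostName jump.example.com
    User deploy
Host target
    HostName target.internal.example.com
    User deploy
    ProxyJump jumpbox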
Try modifying your hosts file to:
192.168.0.10
192.168.0.11
192.168.0.12
$ ansible -m ping all -vvv
After installing Ansible on Ubuntu or CentOS, you may see the message below: "Authentication or permission failure".
Do not panic; you just need write access to the user's temporary directory [/home/user_name/.ansible/tmp/]. Fixing that permission solves the problem.
[Your_server ~]$ ansible -m ping all
rusub-bm-gt | SUCCESS => {
"changed": false,
"ping": "pong"
}
Your_server | SUCCESS => {
"changed": false,
"ping": "pong"
}
Best practice for me: I'm using SSH keys to access the server hosts.
1. Create a hosts file in the inventories folder
[example]
example1.com
example2.com
example3.com
2. Create an ansible-playbook file, playbook.yml
---
- hosts: all
  roles:
    - example
3. Let's try to deploy the playbook against multiple server hosts
ansible-playbook playbook.yml -i inventories/hosts example --user vagrant
The ansible_ssh_port changed while reloading the vm.
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
So I had to update the inventory/hosts file as follows:
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='centos' ansible_ssh_private_key_file=<key path>
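To catch this after a reload, you can re-read the current forwarded port (and key path) straight from Vagrant before fixing the inventory; a quick sketch:
vagrant ssh-config | grep -E 'HostName|Port|IdentityFile'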
I need to create a tar and ship it to my local folder.
If I can create the tar file, I can easily get it to the local folder using scp.
The problem is with the first step: creating the tar on the remote server. The server is accessible only through another remote server (a bastion server).
Here is the command I'm using currently:
timestamp="20160226-085856"
ssh bastion_server -t ssh remote_server "sudo su -c \"cp -r /etc/nginx /home/ubuntu/backup/nginx_26Feb && cd /home/ubuntu/backup && tar -C /home/ubuntu/backup -cf backup_nginx-$timestamp.tar ./nginx_26Feb\" "
Here is the error I am getting:
su: invalid option -- 'r'
Usage: su [options] [LOGIN]
Any help here would be great.
Give it a try without the fancy sudo su -c. Using sudo -s should be enough:
ssh bastion_server -t ssh remote_server "sudo -s cp -r /etc/nginx \
/home/ubuntu/backup/nginx_26Feb && cd /home/ubuntu/backup && \
tar -C /home/ubuntu/backup -cf backup_nginx-$timestamp.tar ./nginx_26Feb"
Or rather set up proper two-hops ~/.ssh/config:
Host bastion
Hostname bastion_server
Host remote
Hostname remote_server
ProxyCommand ssh -W %h:%p bastion
and then just run
ssh remote sudo su -c "cp -r /etc/nginx /home/ubuntu/backup/nginx_26Feb \
&& cd /home/ubuntu/backup && tar -C /home/ubuntu/backup -cf \
backup_nginx-$timestamp.tar ./nginx_26Feb"
Without the fancy escaping and stuff.
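With that two-hop config in place, scp also goes through the bastion transparently, so fetching the archive afterwards is just (a sketch reusing the same backup path, assuming the remote user can read the tar):
scp remote:/home/ubuntu/backup/backup_nginx-$timestamp.tar .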
I have just discovered Ansible and it is great! I have written some cool playbooks to manage zero-downtime docker deployments on my servers, but I waste quite a bit of time waiting for things to happen because I sometimes have to work with a poor internet connection. So I thought I might be able to run Ansible against boot2docker, but had no success, and after a bit of research I realized it would be too hacky and would never behave like my actual Ubuntu server. So here I am, trying to make it work with Vagrant.
I want to achieve something like Laptop > Ansible > Vagrant Box; I don't want to run the playbooks from the Vagrant Box!
VagrantFile
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.ssh.forward_agent = true
end
vagrant ssh-config
Host default
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile "/Users/cesco/Code/vagrant/.vagrant/machines/default/virtualbox/private_key"
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
Thanks to some SO question I was able to do this:
$ vagrant ssh-config > vagrant-ssh
$ ssh -F vagrant-ssh default
$ vagrant@vagrant-ubuntu-trusty-64:~$
But I keep getting localhost | FAILED => SSH Error: Permission denied (publickey,password). every time I try to run the Ansible ping on the vagrant box.
Ansible inventory
[staging]
vagrant@localhost
Ansible config
[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no \
-o PasswordAuthentication=no \
-o IdentityFile=/Users/cesco/.vagrant.d/insecure_private_key \
-o IdentitiesOnly=yes \
-o LogLevel=FATAL \
-p 2222
How do I translate the ssh config file to Ansible configuration?
It does not work on the command line either!
ssh -vvv vagrant@localhost -p 2222 -i /Users/cesco/.vagrant.d/insecure_private_key -o IdentitiesOnly=yes -o LogLevel=FATAL -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
To use Vagrant with a classic ssh connection, first add another private IP to your Vagrantfile.
config.vm.network "private_network", ip: "192.168.1.2"
Reload your instance
vagrant reload
Then you can connect by ssh using the private key.
ssh -vvv vagrant@192.168.1.2 -p 2222 -i /Users/cesco/.vagrant.d/insecure_private_key
That is the best way.
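Alternatively, you can translate the vagrant ssh-config output into inventory variables instead of ssh_args; a sketch based on the values shown above:
[staging]
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/Users/cesco/Code/vagrant/.vagrant/machines/default/virtualbox/private_key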
You misunderstand. The vagrant ansible plugin does not run ansible from the vagrant, but instead SSHs into the vagrant from your local box. That's the way to go since it means with a few small changes you can target a remote host instead.
To get it working you need to add something like this to your Vagrantfile:
config.vm.provision "ansible" do |ansible|
ansible.playbook = "ansible/vagrant.yml"
ansible.sudo = true
ansible.ask_vault_pass = true # comment out if you don't need
ansible.verbose = 'vv' # comment out if you don't want
ansible.groups = {
"tag_Role_myrole" => ["myrole"]
}
ansible.extra_vars = {
role: "myrole"
}
end
# Set the name of the VM.
config.vm.define "myrole" do |myrole|
myrole.vm.hostname = "myrole"
end
Create/update your ansible.cfg file with:
[defaults]
hostfile = ../.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
Create a hosts inventory file containing:
localhost ansible_host=127.0.0.1 ansible_connection=local
Now vagrant up will bring up and provision the instance, or run vagrant provision to (re)provision a running vagrant.
To run a playbook directly against your vagrant use:
ansible-playbook -u vagrant --private-key=~/.vagrant.d/insecure_private_key yourplaybook.yml
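If you'd rather not rely on ansible.cfg, the same run with the inventory passed explicitly would look roughly like:
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -u vagrant --private-key=~/.vagrant.d/insecure_private_key yourplaybook.yml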