Ansible: Bad configuration option: pubkeyacceptedalgorithms - ssh

I'm running ansible-core 2.12.5 via Docker with Python 3.10.4.
On my Ubuntu 22.04 host, ~/.ssh/config has these entries:
Host git-codecommit.*.amazonaws.com
  User AP-----------------
  IdentityFile ~/.ssh/id_rsa
  IdentitiesOnly yes
  PubkeyAcceptedAlgorithms +ssh-rsa
  HostkeyAlgorithms +ssh-rsa

Host 172.x.y.z
  IdentitiesOnly yes
  PubkeyAcceptedAlgorithms +ssh-rsa
  HostkeyAlgorithms +ssh-rsa
Those two hosts need ssh-rsa.
Now I'm trying to run Ansible from my Ubuntu 22.04 machine to perform some tasks on another Ubuntu 22.04 machine.
I get this error in console:
TASK [Gathering Facts] ******************************************************************************************************************************************************************************
fatal: [192.168.1.42]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: /root/.ssh/config: line 5: Bad configuration option: pubkeyacceptedalgorithms\r\n/root/.ssh/config: line 10: Bad configuration option: pubkeyacceptedalgorithms\r\n/root/.ssh/config: terminating, 2 bad configuration options", "unreachable": true}
I cannot understand why Ansible reads those options at all, when the host it is setting up is a completely different one.
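(A note of my own, not part of the original question.) PubkeyAcceptedAlgorithms only exists in OpenSSH 8.5 and later, and ssh rejects unknown keywords anywhere in the config file, even inside Host blocks that don't match the current connection. Assuming the ssh client inside the Ansible Docker container is older than 8.5 and is reading the mounted /root/.ssh/config, one hedged workaround is to use the older spelling PubkeyAcceptedKeyTypes, which OpenSSH 7.0+ and current releases both accept:
Host git-codecommit.*.amazonaws.com
  User AP-----------------
  IdentityFile ~/.ssh/id_rsa
  IdentitiesOnly yes
  PubkeyAcceptedKeyTypes +ssh-rsa
  HostkeyAlgorithms +ssh-rsa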

Related

Set ForwardX11 with ansible ssh config

While connecting to a managed host (a NetApp device) using the command module, I get the error below.
TASK [Gathering Facts] *********************************************************
fatal: [10.20.30.40]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: X11 forwarding request failed", "unreachable": true}
How can I set the SSH option "ForwardX11 no" through the Ansible configuration or an ansible-playbook command-line option?
I don't want to change the SSH settings in the user's home directory.
Try passing SSH arguments on the command line:
ansible-playbook --ssh-common-args='-o ForwardX11=no' <rest_of_the_commands>
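If you would rather not pass the flag on every run, a possible alternative (my addition, not part of the original answer; the group name netapp is only an example) is to set the same option as an inventory variable:
[netapp]
10.20.30.40 ansible_ssh_common_args='-o ForwardX11=no'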

Ansible Permission denied (public key) but ssh using same key works

I'm running this Ansible ad-hoc command on Ubuntu 16.x (ansible ver. 2.2.1.0 and 2.2.2.0)
ansible host_alias -a "df -h" -u USER
where host_alias is defined in the Ansible hosts file (it defines an EC2 instance and its .pem file).
The hosts file looks like this:
[host_alias]
my_host.compute.amazonaws.com
private_key_file=/path/to/key/my_key.pem
I get this error:
private_key_file=/path/to/key/my_key.pem | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname private_key_file=/path/to/key/my_key.pem: Name or service not known\r\n",
"unreachable": true
}
my_host.compute.amazonaws.com | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n",
"unreachable": true
The same host and key work fine when I SSH directly (as configured in ~/.ssh/config).
I have made triple sure the key is there and has read permissions. I also tried setting the ansible_user in the Ansible hosts file.
Any ideas?
Please check the format of the Ansible inventory file in the documentation.
You have defined two hosts in a host group named host_alias:
the first host is: my_host.compute.amazonaws.com,
the second host is: private_key_file=/path/to/key/my_key.pem.
Ansible complains it cannot connect to the second host:
Could not resolve hostname private_key_file=/path/to/key/my_key.pem
It also cannot connect to the first host, because the SSH key is not defined:
Failed to connect to the host via ssh: Permission denied (publickey).
On top of the mistake of splitting the hostname and the parameter into separate lines, you also got the name of the parameter wrong -- it should be ansible_ssh_private_key_file.
The parameters are listed in a later section of the same document.
Your inventory file should look like this:
[host_group_name]
my_host.compute.amazonaws.com ansible_ssh_private_key_file=/path/to/key/my_key.pem
and your command:
ansible host_group_name -a "df -h" -u USER
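Equivalently (a sketch of mine, not from the original answer), the key path can live in a group-level vars section instead of on the host line:
[host_group_name]
my_host.compute.amazonaws.com

[host_group_name:vars]
ansible_ssh_private_key_file=/path/to/key/my_key.pem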
The second line needs to be dropped from the
[host_alias] section; that section is meant for hosts only.
Once you do that, try
ansible all -m ping
to check if you can ping the host.

Ansible: Cannot login to local Vagrant server

I have two Vagrant instances running with different IPs:
192.168.33.17 [Ansible installed here]
192.168.33.19 [Another server where I am trying to connect]
My Ansible hosts file is in /etc/ansible/hosts and it looks like:
[example]
192.168.33.19:2222
I can easily connect to the second server via SSH without a password:
ssh vagrant@192.168.33.19
But running the Ansible command yields an error:
[root@centos72x64 vagrant]# ansible example -m ping -u vagrant
192.168.33.19 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
How can I solve this error?
My Ansible hosts file is in /etc/ansible/hosts and it looks like
[example]
192.168.33.19:2222
You don't specify the port number in the Ansible inventory file this way; see the docs to learn how to do it properly.
But you also mentioned:
I can easily connect via SSH to the second server with the command
ssh vagrant@192.168.33.19
So you don't use port 2222 at all.
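For completeness (my addition, not part of the original answer): if a non-default SSH port really were needed, the documented way is a host variable, for example:
[example]
192.168.33.19 ansible_port=2222
(On very old Ansible versions the variable is ansible_ssh_port.)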

Vagrant VM : Bad configuration option: IdentitiesOnly

I installed the Vagrant VM, but when I run:
vagrant ssh
it displays a configuration error:
command-line: line 0: Bad configuration option: IdentitiesOnly
I checked:
vagrant ssh-config
and it displays:
C:\Vagrant\Ubuntu1>vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Vagrant/Ubuntu1/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
Can you tell me why? This is the first time I'm using Vagrant.
The IdentitiesOnly option has been in OpenSSH since 2004 (OpenSSH 3.9). If you are using an older version, you should certainly update.
The other possibility is to remove the colliding option, since it is not crucial to the functionality.
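As a concrete sketch of that second suggestion (my addition, reusing the values from the vagrant ssh-config output above, so treat the path as an example): connect manually with plain ssh, which sidesteps the IdentitiesOnly option that vagrant ssh appears to pass on the command line (that is where the "command-line: line 0" error points):
ssh -p 2222 -i C:/Vagrant/Ubuntu1/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1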

Ansible - establishing initial SSH connection

I am trying to copy an SSH public key to a newly created VM:
- hosts: vm1
  remote_user: root
  tasks:
    - name: deploy ssh key to account
      authorized_key: user='root' key="{{ lookup('file','/root/.ssh/id_rsa.pub') }}"
But I am getting this error:
fatal: [jenkins]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable": true}
So to establish SSH, I first need to establish SSH?
How can I establish SSH to a newly created KVM guest automatically, without copying the key manually?
(host_key_checking = False is set in ansible.cfg.)
Assuming the target machine allows root login with a password (from the error message it seems it does), you must provide the credentials to your playbook:
ansible-playbook playbook.yml --extra-vars "ansible_ssh_user=root ansible_ssh_pass=password"
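One caveat worth adding (my note, not part of the original answer): connecting with a password from Ansible requires the sshpass program on the control node, so on a Debian/Ubuntu controller you may first need:
sudo apt-get install sshpass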
Something I tried (and it worked) when I had this same issue:
ansible target-server-name -m command -a "whatever command" -k
The -k option prompts you for the SSH password of the target server.
Add the following to the /etc/ansible/hosts file:
[target-server-name]
target_server_ip
Example:
ansible target-server-name -m ping -k