I'm new to this fantastic automation engine, and I have a small issue with the vars file:
For the moment, I must connect via SSH without key pairs, using a specific user and password.
hosts file
[all:vars]
connection_mode1=ssh
ssh_user1=user1
ssh_pass1=pass1
[serverstest]
host1 ansible_connection=connection_mode1 ansible_ssh_user=ssh_user1 ansible_ssh_pass=ssh_pass1
I have also tried wrapping the values with "" and {}, but it doesn't work.
How can I use variables in these parameters?
ansible_ssh_user has been deprecated since version 2.0; it has been replaced by ansible_user. See the Ansible documentation.
Never store the ansible_ssh_pass variable in plain text; always use a vault. See Variables and Vaults.
Anyway, having a mytest.inventory file as follows
[all:vars]
ssh_user1=user1
[serverstest]
host1 ansible_user="{{ ssh_user1 }}"
it works, e.g.
ansible -i mytest.inventory serverstest -m ping -k
Option -k asks for the password.
If you still want to write the password in the inventory, you can keep the password variable definition and add ansible_ssh_pass="{{ ssh_pass1 }}":
[serverstest]
192.168.15.201 ansible_user="{{ ssh_user1 }}" ansible_ssh_pass="{{ ssh_pass1 }}"
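If you want to keep the real password out of plain-text files entirely, here is a minimal sketch of the vault approach mentioned above (the path group_vars/serverstest/vault.yml and the values are illustrative, not from the original post):
# group_vars/serverstest/vault.yml -- to be encrypted with ansible-vault
ansible_user: user1
ansible_ssh_pass: pass1
Encrypt the file and run the ad-hoc command with the vault password:
ansible-vault encrypt group_vars/serverstest/vault.yml
ansible -i mytest.inventory serverstest -m ping --ask-vault-pass
Since group_vars/serverstest applies to the [serverstest] group, the connection variables no longer need to appear in the inventory file at all.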
Related
What I am trying to accomplish overall is to SSH into systems which are untouched by Ansible and have Ansible set them up, including its account and SSH keys, adding them to the dynamic inventory, and so on. In this case, it's via a proxy jump. Unfortunately this means having to SSH into them using the ssh command and the shell module, as well as storing a password. Keep in mind I am on Ansible 2.9, and this is a build environment, so passwords can be copied to files during the build for use and then deleted at the end of the run; this isn't a problem. If this succeeds, we can set up accounts and SSH keys, then delete the build files, and everyone is happy.
I don't need all that much I hope, I would just like to get one sticky piece of that working better. That part is the ssh options that are needed for a proxyjump connection. ansible-controller doesn't have direct access to host p0, but the ecc67 host does. I have it working in the shell command no problem, but for whatever reason, I can't shift it up to the ansible_ssh_common_args variable where it belongs.
Here is the working example of the task as it functions now:
- name: sshpass attempt with the raw module for testing.
  shell: sshpass -p "{{ access_var.ansible_ssh_pass_ssn }}" ssh -o 'ProxyCommand=ssh -W %h:%p bob@ecc67 nc %h %p' bob@p0 "w; exit"
  register: output_1
The above works just fine and leaves ansible_ssh_common_args undefined. nc is the netcat binary; it is simply being passed options through the proxy command. Below is the playbook in which I tried to complete my stated mission; however, it is not functional and fails at the sshpass task:
- name: Play that is testing for a successful proxyjump connection to p0 through ecc67.
  hosts: ansible-controller
  remote_user: bob
  become: no
  become_method: sudo
  gather_facts: no
  vars:
    ansible_connection: ssh
    ansible_ssh_common_args: '-o "ProxyCommand=ssh -W %h:%p bob@ecc67 nc %h %p"'
  tasks:
    - name: Import the password file so that we have the bob account's password.
      include_vars:
        file: ~/project/copyable-files/dynamic-files/build/active-vars-repository/access.yml
        name: access_var
    - name: Set password for the bob account from the file value using previous operator input.
      set_fact:
        ansible_ssh_pass: "{{ access_var.ansible_ssh_pass_b }}"
        ansible_become_password: "{{ access_var.ansible_ssh_pass_b }}"
        cacheable: yes
    - name: sshpass attempt with the raw module for testing.
      shell: sshpass -p "{{ ansible_ssh_pass_b }}" ssh "{{ ansible_ssh_common_args }}" bob@p0 "hostname; exit"
      register: output_1
    - debug:
        var: output_1
The error I get when I attempt to use the above playbook with the reworked task and variables is as follows:
TASK [sshpass attempt with the raw module for testing.] ***********************************************
fatal: [ansible-controller]: UNREACHABLE! => {"changed": false, "msg": "Invalid/incorrect password: Killed by signal 1.", "unreachable": true}
The password is not the issue despite what the error says, though it's possible it's accessing something I don't expect. Is there any way to do what I want; heck, is there even just a better way to go about it that I didn't think of? Any suggestions would be helpful, thanks!
From your description I understand that there is an issue with special characters in variables, quoting, templating and debugging. Therefore I am explicitly not addressing the question "Is there ... a better way to go?".
To address the different topics, I've created the following minimal example playbook:
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    ansible_ssh_pass: !unsafe "P4$$w0rd!_%&"
    ansible_ssh_common_args: !unsafe '-o "ProxyCommand=ssh -W %h:%p user@jump.example.com nc %h %p"'
  tasks:
    - name: Debug task to show command content
      lineinfile:
        path: ssh.file
        create: true
        line: 'sshpass -p {{ ansible_ssh_pass | quote }} ssh {{ ansible_ssh_common_args }} user@test.example.com "hostname; exit"'
resulting in an output of
sshpass -p 'P4$$w0rd!_%&' ssh -o "ProxyCommand=ssh -W %h:%p user@jump.example.com nc %h %p" user@test.example.com "hostname; exit"
... the content of ssh.file and what the shell would "see"
Further Documentation
Advanced playbook syntax - Unsafe or raw strings for usage of !unsafe
The most common use cases include passwords that allow special characters
Using filters to manipulate data
You can use YAML single quote escaping ... Escaping single quotes within single quotes in YAML is done by doubling the single quote.
Using filters to manipulate data - Manipulating strings for usage of quote
To add quotes for shell usage ... | quote
Templating (Jinja2)
Ansible uses Jinja2 templating to enable dynamic expressions and access to variables and facts.
I find it easiest to describe what I want to do by showing how I tried to implement it, as below.
In playbook1.yml, I have:
- name: set facts with ssh connection details
  set_fact:
    ssh_user: "user01"
    ssh_pass: "passw0rd"
In my playbook2.yml, I want to do something similar to this:
- import_playbook: playbook1.yml
  vars:
    ansible_user: "{{ ssh_user }}"
    ansible_ssh_pass: "{{ ssh_pass }}"
After this, the tasks in playbook2.yml should use sshpass when connecting to my remote hosts, taking the user and password from the "ansible_*" vars above.
Can this be done? Clearly it is not working, and I am unable to find a solution for this.
Setting Ansible facts for the SSH vars does work - it clearly needs the vars in order to trigger sshpass and use a password for remote access.
I know ssh keys is the way to go, and that is being covered, however I also need a solution for this specific use case.
Thanks in advance for any help.
The only solution I found so far was:
In playbook1:
Set the facts and save them to files.
In playbook2:
vars:
  ansible_user: "{{ lookup('file', 'userfactsfile') }}"
  ansible_ssh_pass: "{{ lookup('file', 'passfactsfile') }}"
tasks:
  - import_playbook: playbook1.yml
  ...
  my actions here
  ...
  delete userfactsfile
  delete passfactsfile
There is still a problem: the password is saved in clear text for the duration of the playbook run. If there is an unexpected interruption in the process, the password file might be left stored on the server (which is the primary concern being addressed in this very same work).
An acceptable solution would be:
Encrypt the password before saving it to the file in playbook1. I faced some technical challenges here (I am pretty new to Ansible), but it is still a viable solution if I could achieve it.
The encryption in playbook1 would use the actual password as the passphrase. The passphrase is persisted across both playbooks in a fact.
In playbook2, this password would be used to decrypt the password file in the vars lookup.
Is there a way to ignore the SSH authenticity checking made by Ansible? For example when I've just setup a new server I have to answer yes to this question:
GATHERING FACTS ***************************************************************
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
RSA key fingerprint is xx:yy:zz:....
Are you sure you want to continue connecting (yes/no)?
I know that this is generally a bad idea but I'm incorporating this in a script that first creates a new virtual server at my cloud provider and then automatically calls my ansible playbook to configure it. I want to avoid any human intervention in the middle of the script execution.
Two options - the first, as you said in your own answer, is setting the environment variable ANSIBLE_HOST_KEY_CHECKING to False.
The second way to set it is to put it in an ansible.cfg file, and that's a really useful option because you can either set it globally (at system or user level, in /etc/ansible/ansible.cfg or ~/.ansible.cfg), or in a config file in the same directory as the playbook you are running.
To do that, make an ansible.cfg file in one of those locations, and include this:
[defaults]
host_key_checking = False
You can also set a lot of other handy defaults there, like whether or not to gather facts at the start of a play, whether to merge hashes declared in multiple places or replace one with another, and so on. There's a whole big list of options here in the Ansible docs.
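For illustration, a combined ansible.cfg might look roughly like this (the extra keys are just examples of such defaults, not required for host key checking; check the docs for your Ansible version before relying on them):
[defaults]
host_key_checking = False
# only gather facts when a play explicitly asks for them
gathering = explicit
# merge hashes defined in multiple places instead of replacing them
hash_behaviour = merge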
Edit: a note on security.
SSH host key validation is a meaningful security layer for persistent hosts - if you are connecting to the same machine many times, it's valuable to accept the host key locally.
For longer-lived EC2 instances, it would make sense to accept the host key with a task run only once on initial creation of the instance:
- name: Write the new ec2 instance host key to known hosts
  connection: local
  shell: "ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts"
There's no security value for checking host keys on instances that you stand up dynamically and remove right after playbook execution, but there is security value in checking host keys for persistent machines. So you should manage host key checking differently per logical environment.
Leave checking enabled by default (in ~/.ansible.cfg)
Disable host key checking in the working directory for playbooks you run against ephemeral instances (./ansible.cfg alongside the playbook for unit tests against vagrant VMs, automation for short-lived ec2 instances)
I found the answer, you need to set the environment variable ANSIBLE_HOST_KEY_CHECKING to False. For example:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ...
Changing host_key_checking to false for all hosts is a very bad idea.
The only time you want to ignore it, is on "first contact", which this playbook will accomplish:
---
- name: Bootstrap playbook
  # Don't gather facts automatically because that will trigger
  # a connection, which needs to check the remote host key
  gather_facts: false
  tasks:
    - name: Check known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -F {{ inventory_hostname }}
      register: has_entry_in_known_hosts_file
      changed_when: false
      ignore_errors: true
    - name: Ignore host key for {{ inventory_hostname }} on first run
      when: has_entry_in_known_hosts_file.rc == 1
      set_fact:
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    # Now that we have resolved the issue with the host key
    # we can "gather facts" without issue
    - name: Delayed gathering of facts
      setup:
So we only turn off host key checking if we don't have the host key in our known_hosts file.
You can pass it as command line argument while running the playbook:
ansible-playbook play.yml --ssh-common-args='-o StrictHostKeyChecking=no'
Following up on nikobelia's answer: for those using Jenkins to run the playbook, I just added the environment variable ANSIBLE_HOST_KEY_CHECKING = False to my Jenkins job before running ansible-playbook.
For instance this:
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook 'playbook.yml' \
--extra-vars="some vars..." \
--tags="tags_name..." -vv
If you don't want to modify ansible.cfg or the playbook.yml then you can just set an environment variable:
export ANSIBLE_HOST_KEY_CHECKING=False
Ignoring checking is a bad idea as it makes you susceptible to Man-in-the-middle attacks.
I took the liberty of improving nikobelia's answer by only adding each machine's key once and actually setting ok/changed status in Ansible:
- name: Accept EC2 SSH host keys
  connection: local
  become: false
  shell: |
    ssh-keygen -F {{ inventory_hostname }} ||
    ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
  register: known_hosts_script
  changed_when: "'found' not in known_hosts_script.stdout"
However, Ansible starts gathering facts before the script runs, which requires an SSH connection, so we have to either disable this task or manually move it to later:
- name: Example play
  hosts: all
  gather_facts: no  # gather facts AFTER the host key has been accepted instead
  tasks:
    # https://stackoverflow.com/questions/32297456/
    - name: Accept EC2 SSH host keys
      connection: local
      become: false
      shell: |
        ssh-keygen -F {{ inventory_hostname }} ||
        ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
      register: known_hosts_script
      changed_when: "'found' not in known_hosts_script.stdout"
    - name: Gathering Facts
      setup:
One kink I haven't been able to work out is that it marks all as changed even if it only adds a single key. If anyone could contribute a fix that would be great!
You can simply tell SSH to automatically accept fingerprints for new hosts. Just add
StrictHostKeyChecking=accept-new
to your ~/.ssh/config. It does not disable host-key checking entirely, it merely disables this annoying question whether you want to add a new fingerprint to your list of known hosts. In case the fingerprint for a known machine changes, you will still get the error.
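A minimal sketch of such an entry (the host pattern is only an example; accept-new requires a reasonably recent OpenSSH, 7.6 or later as far as I know):
Host *.example.com
    StrictHostKeyChecking accept-new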
This policy also works with ANSIBLE_HOST_KEY_CHECKING and other ways of passing this param to SSH.
I know the question has been answered, and correctly so, but I just wanted to link the Ansible doc where it's clearly explained when and why the respective check should be added: host-key-checking
Most problems appear when you want to add a new host to a dynamic inventory (via the add_host module) in a playbook. I don't want to disable fingerprint host checking permanently, so solutions like disabling it in a global config file are not OK for me. Exporting a variable like ANSIBLE_HOST_KEY_CHECKING before running the playbook is yet another thing that has to be remembered before each run.
It's better to add a local config file in the same directory as the playbook. Create a file named ansible.cfg and paste the following text:
[defaults]
host_key_checking = False
There is no need to remember to set environment variables or add ansible-playbook options. It's easy to put this file into the Ansible git repo.
This is the working version I used in my environment. I took the idea from this ticket: https://github.com/mitogen-hq/mitogen/issues/753
- name: Example play
  gather_facts: no
  hosts: all
  tasks:
    - name: Check SSH known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -l -F {{ inventory_hostname }}
      register: checkForKnownHostsEntry
      failed_when: false
      changed_when: false
      ignore_errors: yes
    - name: Add {{ inventory_hostname }} to SSH known hosts automatically
      when: checkForKnownHostsEntry.rc == 1
      changed_when: checkForKnownHostsEntry.rc == 1
      local_action:
        module: shell
        args: ssh-keyscan -H "{{ inventory_hostname }}" >> $HOME/.ssh/known_hosts
Host key checking is an important security measure, so I would not just skip it everywhere. Yes, it can be annoying if you keep reinstalling the same testing host (without backing up its SSH host keys), or if you have stable hosts but run your playbook from Jenkins without a simple option to accept the host key when connecting to the host for the first time. So:
This is what we are using for stable hosts (when running the playbook from Jenkins and you simply want to accept the host key when connecting to the host for the first time) in the inventory file:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=accept-new'
And this is what we have for temporary hosts (in the end this will ignore their host key entirely):
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
There is also the environment variable, or you can add it to a group/host variables file (see the sketch after this answer). There is no need to have it in the inventory - it was just convenient in our case.
I used some of the other responses here and a co-worker's solution, thank you!
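As a sketch of that group-variables alternative (the path is illustrative), the same setting can live in a group_vars file instead of the inventory:
# group_vars/all.yml
ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'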
Use the parameter named validate_certs to ignore the SSH validation:
- ec2_ami:
    instance_id: i-0661fa8b45a7531a7
    wait: yes
    name: ansible
    validate_certs: false
    tags:
      Name: ansible
      Service: TestService
Doing this ignores the SSH validation process.
I work with several different servers, and it would be useful to be able to set some environment variables such that they are active on all of them when I SSH in. The problem is, the contents of some of the variables contain sensitive information (hashed passwords), and so I don't want to leave it lying around in a .bashrc file -- I'd like to keep it only in memory.
I know that you can use SSH to forward the DISPLAY variable (via ForwardX11) or an SSH Agent process (via ForwardAgent), so I'm wondering if there's a way to automatically forward the contents of arbitrary environment variables across SSH connections. Ideally, something I could set in a .ssh/config file so that it would run automatically when I need it to. Any ideas?
You can, but it requires changing the server configuration.
Read the entries for AcceptEnv in sshd_config(5) and SendEnv in ssh_config(5).
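As a rough sketch (the variable name FOO is only an example): allow the variable on the server, reload sshd, and tell the client to send it:
# server side, /etc/ssh/sshd_config
AcceptEnv FOO
# client side, ~/.ssh/config
Host myserver
    SendEnv FOO
After that, running FOO=secret ssh myserver makes $FOO available in the remote session.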
Update:
You can also pass them on the command line:
ssh foo@host "FOO=foo BAR=bar doz"
Regarding security, note that anybody with access to the remote machine will be able to see the environment variables passed to any running process.
If you want to keep that information secret it is better to pass it through stdin:
cat secret_info | ssh foo@host remote_program
You can't do it automatically (except for $DISPLAY which you can forward with -X along with your Xauth info so remote programs can actually connect to your display) but you can use a script with a "here document":
ssh ... <<EOF
export FOO="$FOO" BAR="$BAR" PATH="\$HOME/bin:\$PATH"
runRemoteCommand
EOF
The unescaped variables will be expanded locally and the result transmitted to the remote side. So the PATH will be set with the remote value of $HOME.
THIS IS A SECURITY RISK: don't transmit sensitive information like passwords this way, because anyone can see the environment variables of every process on the same computer.
Something like:
ssh user@host bash -c "set -e; $(env); . thescript.sh"
...might work (untested)
Bit of a hack but if you cannot change the server config for some reason it might work.
How can you make SSH read the password from stdin, which it doesn't do by default?
Based on this post, you can do the following:
Create a command which opens an ssh session using SSH_ASKPASS (see SSH_ASKPASS in man ssh):
$ cat > ssh_session <<EOF
export SSH_ASKPASS="/path/to/script_returning_pass"
setsid ssh "your_user"@"your_host"
EOF
NOTE: We use setsid to prevent ssh from asking for the password on the tty.
Create a script which returns your password (note the nested echo "echo ..."):
$ echo "echo your_ssh_password" > /path/to/script_returning_pass
Make them executable
$ chmod +x ssh_session
$ chmod +x /path/to/script_returning_pass
try it
$ ./ssh_session
Keep in mind that ssh stands for secure shell; if you store your user, host and password in plain-text files, you are misusing the tool and creating a possible security gap.
You can use sshpass, which is available, for example, in the official Debian repositories. Example:
$ apt-get install sshpass
$ sshpass -p 'password' ssh username@server
You can't with most SSH clients. You can work around it by using SSH APIs, like Paramiko for Python. Be careful not to override all security policies.
Distilling this answer leaves a simple and generic script:
#!/bin/bash
[[ $1 =~ password: ]] && cat || SSH_ASKPASS="$0" DISPLAY=nothing:0 exec setsid "$@"
Save it as pass, do a chmod +x pass and then use it like this:
$ echo mypass | pass ssh user@host ...
If its first argument contains password: then it passes its input to its output (cat); otherwise it launches whatever was presented, after setting itself as the SSH_ASKPASS program.
When ssh encounters both SSH_ASKPASS and DISPLAY set, it will launch the program referred to by SSH_ASKPASS, passing it the prompt user@host's password:
Reviving an old post...
I found this one while looking for a solution to the exact same problem. I found something that works, and I hope someone will one day find it useful:
Install the ssh-askpass program (apt-get, yum, ...)
Set the SSH_ASKPASS variable (export SSH_ASKPASS=/usr/bin/ssh-askpass)
From a terminal, open a new ssh connection detached from the terminal (setsid ssh user@host)
This looks simple enough to be secure, but I have not checked it yet (I am just using it in a local, secure context).
Here we are.
The FreeBSD mailing list recommends the expect library.
However, if you need a programmatic ssh login, you really ought to be using public-key logins; there are obviously far fewer security holes this way compared to using an external library to pass a password through stdin.
A better alternative to sshpass is:
https://github.com/clarkwang/passh
I had problems with sshpass: if the ssh server is not in my known_hosts, sshpass will not show me any message; passh does not have this problem.
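For reference, the invocation is roughly like this (from memory, so check the project's README for the exact options):
passh -p 'password' ssh username@server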
I'm not sure why you need this functionality, but it seems you can get the behavior you want with ssh-keygen.
It allows you to log in to a server without using a password, by having a private RSA key on your computer and the corresponding public RSA key on the server.
http://www.linuxproblem.org/art_9.html
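The usual sequence is roughly the following (assuming OpenSSH on both machines):
ssh-keygen -t rsa          # generate the key pair, accept the defaults
ssh-copy-id user@server    # install the public key on the server
ssh user@server            # subsequent logins no longer prompt for a password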