I have a weird problem when running Ansible.
I'd like to print, to a log or anywhere else, the extra-vars I'm executing the playbook with.
I am running the following command:
ansible-playbook /foo/main.yml --extra-vars "main_playbook=app_install start_path=work" --extra-vars '{"db_config":{"db_multi_config":["value1","value2"]}}'
with the following main.yml playbook:
- name: start playbook
  import_playbook: "{{ playbook_dir }}/{{ main_playbook }}/{{ start_path }}.yml"
I'd like to print the values of all extra-vars passed on the command line before it gets to main.yml. Is there any way to do this?
You can get this information in the output by running ansible with maximum verbosity:
Given the following test.yml playbook:
- hosts: localhost
  gather_facts: false
and running it with the command:
ansible-playbook test.yml -vvvv -e toto=bla -e '{"test1":2}'
I get the following result (see the debug lines printed for the playbook before the play starts):
ansible-playbook [core 2.11.3]
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0]
jinja version = 2.11.3
libyaml = True
No config file found; using defaults
The vault password file /home/user/bin/vault-keyring-client is a client script.
Executing vault password client script: /home/user/bin/vault-keyring-client --vault-id avaultid1
The vault password file /home/user/bin/vault-keyring-client is a client script.
Executing vault password client script: /home/user/bin/vault-keyring-client --vault-id avaultid2
The vault password file /home/user/bin/vault-keyring-client is a client script.
Executing vault password client script: /home/user/bin/vault-keyring-client --vault-id avaultid3
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.8/dist-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml **********************************************************************************************************************
Positional arguments: test.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
extra_vars: ('toto=bla', '{"test1":2}')
forks: 5
1 plays in test.yml
PLAY [all] ******************************************************************************************************************************
skipping: no hosts matched
PLAY RECAP ******************************************************************************************************************************
You can't separate extra vars from all other vars, because they are merged. But you can see all variables for a host (except setup facts, which you can browse separately with -m setup).
The trick is to print all variables from hostvars for the host itself:
ansible -i inventory -e extra_vars_here -m debug -a 'msg={{hostvars[inventory_hostname]}}' all
(replace all with a specific hostname if you wish).
If you are overwhelmed, you can filter it with .keys(): hostvars[inventory_hostname].keys().
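For example (a variant of the ad-hoc command above, assuming the same inventory and extra vars), to print only the variable names:
ansible -i inventory -e extra_vars_here -m debug -a 'msg={{ hostvars[inventory_hostname].keys() | list }}' all
Piping through | list just makes the rendered output easier to read.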
My corporate firewall policy allows only 20 ssh connections per 60 seconds between the same source and destination.
Owing to this, the Ansible play hangs after a while.
I would like multiple tasks to use the same ssh session rather than creating new sessions. For this purpose I set pipelining = True both in the local ansible.cfg (below) and on the command line.
cat /opt/automation/startservices/ansible.cfg
[defaults]
host_key_checking = False
gathering = smart
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s
control_path = %(directory)s/%%h-%%r
pipelining = True
ANSIBLE_SSH_PIPELINING=0 ansible-playbook -i /opt/automation/startservices/finalallmw.hosts /opt/automation/startservices/va_action.yml -e '{ dest_host: myremotehost7 }' -e dest_user=oracle
The playbook is too big to be shared here but it is this task which loops and this is where it hangs due to more than 20 ssh connections in 60 seconds.
- name: Copying from "{{ inventory_hostname }}" to this ansible server.
  synchronize:
    src: "{{ item.path }}"
    dest: "{{ playbook_dir }}/homedirbackup/{{ inventory_hostname }}/{{ dtime }}/"
    mode: pull
    copy_links: yes
  with_items:
    - "{{ to_copy.files }}"
With the pipelining settings set, my play still hangs after 20 connections.
Below are the playbook settings:
hosts: "{{ groups['dest_nodes'] | default(groups['all']) }}"
user: "{{ USER | default(dest_user) }}"
any_errors_fatal: True
gather_facts: false
tags: always
vars:
ansible_host_key_checking: false
ansible_ssh_extra_args: -o StrictHostKeyChecking=no -o ConnectionAttempts=5
After applying the suggestions so far in this thread, the issue persists. Below is my local directory ansible.cfg:
$ cat /opt/automation/startservices/ansible.cfg
# config file for ansible -- http://ansible.com/
# ==============================================
# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ANSIBLE_CONFIG,
# ansible.cfg in the current working directory, .ansible.cfg in
# the home directory or /etc/ansible/ansible.cfg, whichever it
# finds first
[defaults]
host_key_checking = False
roles_path = roles/
gathering = smart
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=1200s -o ControlPath=~/.ansible/cp/%r#%h:%p
[persistent_connection]
control_path_dir = ~/.ansible/cp
$
Can you please suggest a solution on the Ansible side so that all tasks use the same ssh session? Is pipelining not working here?
First: pipelining = True does not do what you are looking for. It reduces the number of network operations, but not the number of ssh connections. Check the docs for more information.
Imho, it is still a good thing to use, as it will speed up your playbooks.
What you want to use is OpenSSH's persistent control (ControlMaster/ControlPersist) mode, which keeps a connection open.
You could for example do this in your ansible.cfg:
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=1200
This will keep the connection open for 1200 seconds.
The problem is not with the Ansible controller running the module(s) (i.e. copying the necessary temporary AnsiballZ files to your target and executing them; you already have the correct options in ansible.cfg to use master sessions for that), but with the synchronize module itself, which needs to spawn its own ssh connections to transfer files between the relevant servers while it is running on the target.
The latest synchronize module version is now part of the ansible.posix collection and has recently gained two options that will help you work around your problem by applying the use of master sessions to the module itself while it runs rsync:
ssh_multiplexing: yes
use_ssh_args: yes
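As a sketch only (not part of the original answer; the multiplexing option appears in the collection docs under the full name ssh_connection_multiplexing, so verify the exact option names against your installed ansible.posix version), the looping task from the question could become:
- name: Copying from "{{ inventory_hostname }}" to this ansible server.
  ansible.posix.synchronize:
    src: "{{ item.path }}"
    dest: "{{ playbook_dir }}/homedirbackup/{{ inventory_hostname }}/{{ dtime }}/"
    mode: pull
    copy_links: yes
    # reuse the controller's ControlMaster sockets instead of opening new ssh sessions
    ssh_connection_multiplexing: yes
    # pass the ssh_args from ansible.cfg (ControlMaster/ControlPersist) to rsync's ssh command
    use_ssh_args: yes
  with_items:
    - "{{ to_copy.files }}"
With multiplexing in place, the repeated rsync runs share the persistent ssh connection instead of each counting against the firewall's connection-rate limit.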
Although it is possible to install this collection in ansible 2.9 to override the older stock module version (which does not have those options), I strongly suggest you use ansible version 2.10 or 2.11. My personal preferred installation method for ansible is through pip, as it will let you install any ansible version on any OS, for any user, in any number of (virtual) environments.
Regarding pip, the versioning has changed (and is quite a mess IMO...)
ansible is now a meta package with its own independent versioning.
the usual ansible binaries (ansible, ansible-playbook, ...) are packaged in ansible-core, whose version corresponds to what you get when running ansible --version
the meta ansible package installs a set of collections by default (including the ansible.posix one if I'm not wrong)
=> To get ansible --version => 2.10 you want to install the ansible pip package 3.x
=> To get ansible --version => 2.11 you want to install the ansible pip package 4.x
You will have to remove any previous version installed via pip in your current environment before proceeding.
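As a rough sketch, the corresponding pip commands could look like this (version ranges as described above; adapt to your environment):
# remove any previously installed ansible packages from the current (virtual) environment
pip uninstall ansible ansible-base ansible-core
# the ansible 4.x meta package pulls in ansible-core 2.11 (use 'ansible>=3,<4' for ansible-base 2.10)
pip install 'ansible>=4,<5'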
Ansible 2.7.9 is not using host_vars
I've set up a very simple environment with 3 hosts, mainly for testing purposes. I have hosts:
- ansible1 (this is where I store the code)
- ansible2
- ansible3
My inventory:
[ansible@ansible1 ~]$ cat /etc/ansible/hosts
[common]
ansible1
ansible2
ansible3
My cfg looks like this:
[ansible@ansible1 ~]$ cat /etc/ansible/ansible.cfg
[defaults]
roles_path = /etc/ansible/roles
inventory = /etc/ansible/hosts
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
pipelining = True
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
pipelining = False
[accelerate]
[selinux]
[colors]
I have defined a master playbook common.yml which calls the common role:
[ansible@ansible1 ~]$ ls /etc/ansible/roles/
common common.retry common.yml
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common.yml
--- # Playbook for webservers
- hosts: common
  roles:
    - common
[ansible@ansible1 ~]$
The tasks/main.yml:
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/tasks/main.yml
- name: test ansible1
  lineinfile:
    dest: /tmp/ansible.txt
    create: yes
    line: "{{ myvar }}"
- name: set ansible2
  lineinfile:
    dest: /tmp/ansible2.txt
    create: yes
    line: "hi"
[ansible@ansible1 ~]$
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/vars/main.yml
copyright_msg: "Copyrighta 2019"
myvar: "value of myvar from common/vars"
Then I placed some info at /etc/ansible/host_vars:
[ansible@ansible1 ~]$ ls /etc/ansible/hosts_vars/
ansible2.yml
[ansible@ansible1 ~]$ cat /etc/ansible/hosts_vars/ansible2.yml
myvar: "myvar from host_vars"
[ansible@ansible1 ~]$
This works great with the playbook:
[ansible@ansible1 ~]$ ansible-playbook /etc/ansible/roles/common.yml --limit ansible2
PLAY [common] ******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [ansible2]
TASK [common : test ansible1] **************************************************
changed: [ansible2]
TASK [common : set ansible2] ***************************************************
changed: [ansible2]
PLAY RECAP *********************************************************************
ansible2 : ok=3 changed=2 unreachable=0 failed=0
I see the file with the content of myvar:
[root@ansible2 ~]# cat /tmp/ansible.txt
value of myvar from common/vars
[root@ansible2 ~]#
But then I don't understand why it is not taking the value from /etc/ansible/hosts_vars/ansible2.yml; in fact, if I comment out the line in /etc/ansible/roles/common/vars/main.yml it says the variable is undefined:
[ansible@ansible1 ansible]$ cat /etc/ansible/roles/common/vars/main.yml
copyright_msg: "Copyrighta 2019"
myvar: "value of myvar from common/vars"
This is expected: the role's vars/main.yml gets sourced automatically while executing the playbook. Consider this file as global variables for the role.
The reason ansible2.yml is not getting sourced is that ansible expects you to source it explicitly while executing.
You can use the code below for that (generic example):
---
- name: play
  hosts: "{{ hosts }}"
  tasks:
    - include_vars: "{{ hosts }}.yml"
Run it like this (passing your playbook file as usual):
ansible-playbook -i inventory --extra-vars "hosts=ansible2"
Ansible uses the following precedence for variable values:
From least to most important
role defaults
inventory file or script group vars
inventory group_vars/all
playbook group_vars/all
inventory group_vars/*
playbook group_vars/*
inventory file or script host vars
inventory host_vars/*
playbook host_vars/*
host facts
play vars
play vars_prompt
play vars_files
role vars (defined in role/vars/main.yml)
block vars (only for tasks in block)
task vars (only for the task)
role (and include_role) params
include params
include_vars
set_facts / registered vars
extra vars (always win precedence)
So it is better to forget about using roles/vars here, because it takes precedence over host_vars; I should instead use roles/defaults, which has a lower priority.
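For instance, a minimal sketch based on the files above: move the variable from the role's vars to its defaults so that host_vars wins:
[ansible@ansible1 ~]$ cat /etc/ansible/roles/common/defaults/main.yml
myvar: "value of myvar from common/defaults"
With the definition removed from roles/common/vars/main.yml, the value from the host_vars file for ansible2 takes precedence.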
I am trying to write a YAML pipeline script to deploy files that have been altered in my Bitbucket repository to my remote server using ssh keys. The document I have in place at the moment was copied from Bitbucket itself and has errors:
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
          - variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH
I am getting the following error
Configuration error
There is an error in your bitbucket-pipelines.yml at [pipelines > default > 0 > step > script > 1]. To be precise: Missing or empty command string. Each item in this list should either be a single command string or a map defining a pipe invocation.
My ssh public and private keys are setup in bitbucket along with the fingerprint and host. The variables have also been setup.
How do I go about setting up my YAML deploy script to connect to my remote server via ssh and transfer the files?
Try updating the variables section to become:
- variables:
  - USER: $USER
  - SERVER: $SERVER
  - REMOTE_PATH: $REMOTE_PATH
  - LOCAL_PATH: $LOCAL_PATH
Here is an example of how to set variables: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_variablesvariables
Your - step directive has to be indented.
I have a bitbucket-pipelines.yml like this (using rsync-deploy instead of sftp-deploy):
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:7.2.1-fpm

pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install zip -y
          - apt-get install unzip -y
          - apt-get install libgmp3-dev -y
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - cp .env.example .env
          #- vendor/bin/phpunit
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'
I suggest using their online editor in the repository for editing bitbucket-pipelines.yml; it checks the formal yml structure and you can't commit an invalid file.
Even if you check the file in some other yaml editor, it may look fine but not necessarily conform to the Bitbucket specification. Their online editor does a fine job.
Also, I suggest visiting the Atlassian community as it's very active; sometimes their staff members provide answers.
However, I struggle with the many dependencies needed to run tests properly (the actual bitbucket-pipelines.yml is getting bigger and bigger).
Maybe there is some nicely prepared Docker image for this job.
I am creating a restricted user without a shell, for port forwarding only, and I need to execute a script on login via pubkey, even if the user connects with ssh -N user@host, which doesn't ask the SSH server for a shell.
The script should warn the admin about connections authenticated with a pubkey, so the connecting user shouldn't be able to skip the execution of the script (e.g., by connecting with ssh -N).
I have tried to no avail:
Setting the command at /etc/ssh/sshrc.
Using command="COMMAND" in .ssh/authorized_keys (man authorized_keys)
Setting up a script with the command as user's shell. (chsh -s /sbin/myscript.sh USERNAME)
Matching user in /etc/ssh/sshd_config like:
Match User MYUSERNAME
ForceCommand "/sbin/myscript.sh"
All of these work when the user asks for a shell, but when logging in only for port forwarding and no shell (ssh -N), it doesn't work.
The ForceCommand option runs without a PTY unless the client requests one. As a result, you don't actually have a shell to execute scripts the way you might expect. In addition, the OpenSSH SSHD_CONFIG(5) man page clearly says:
The command is invoked by using the user's login shell with the -c option.
That means that if you've disabled the user's login shell, or set it to something like /bin/false, then ForceCommand can't work. Assuming that:
the user has a sensible shell defined,
that your target script is executable, and
that your script has an appropriate shebang line
then the following should work in your global sshd_config file once properly modified with the proper username and fully-qualified pathname to your custom script:
Match User foo
ForceCommand /path/to/script.sh
If you only need to run a script you can rely on pam_exec.
Basically you reference the script you need to run in the /etc/pam.d/sshd configuration:
session optional pam_exec.so seteuid /path/to/script.sh
After some testing you may want to change optional to required.
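As an illustration only (hypothetical script path, and assuming a mail command is available; adapt the alerting mechanism to your setup), /path/to/script.sh could be as simple as:
#!/bin/bash
# Minimal pam_exec hook: warn the admin about incoming ssh sessions.
# The PAM_* variables are provided by pam_exec (see the list of available variables further below).
if [ "${PAM_TYPE}" = "open_session" ] && [ "${PAM_SERVICE}" = "sshd" ]; then
    echo "ssh login: user=${PAM_USER} from=${PAM_RHOST} at $(date)" \
        | mail -s "SSH login on $(hostname)" admin@example.com
fi
exit 0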
Please refer to this answer "bash - How do I set up an email alert when a ssh login is successful? - Ask Ubuntu" for a similar request.
Indeed, in the script only a limited subset of the environment variables is available:
LANGUAGE=en_US.UTF-8
PAM_USER=bitnami
PAM_RHOST=192.168.1.17
PAM_TYPE=open_session
PAM_SERVICE=sshd
PAM_TTY=ssh
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
PWD=/
If you want to get the user info from authorized_keys this script could be helpful:
#!/bin/bash
# Get user from authorized_keys
# pam_exec_login.sh
# * [ssh - What is the SHA256 that comes on the sshd entry in auth.log? - Server Fault](https://serverfault.com/questions/888281/what-is-the-sha256-that-comes-on-the-sshd-entry-in-auth-log)
# * [bash - How to get all fingerprints for .ssh/authorized_keys(2) file - Server Fault](https://serverfault.com/questions/413231/how-to-get-all-fingerprints-for-ssh-authorized-keys2-file)
# Setup log
b=$(basename $0| cut -d. -f1)
log="/tmp/${b}.log"
function timeStamp () {
echo "$(date '+%b %d %H:%M:%S') ${HOSTNAME} $b[$$]:"
}
# Check if opening a remote session with sshd
if [ "${PAM_TYPE}" != "open_session" ] || [ $PAM_SERVICE != "sshd" ] || [ $PAM_RHOST == "::1" ]; then
exit $PAM_SUCCESS
fi
# Get info from auth.log
authLogLine=$(journalctl -u ssh.service |tail -100 |grep "sshd\[${PPID}\]" |grep "${PAM_RHOST}")
echo ${authLogLine} >> ${log}
PAM_USER_PORT=$(echo ${authLogLine}| sed -r 's/.*port (.*) ssh2.*/\1/')
PAM_USER_SHA256=$(echo ${authLogLine}| sed -r 's/.*SHA256:(.*)/\1/')
# Get details from .ssh/authorized_keys
authFile="/home/${PAM_USER}/.ssh/authorized_keys"
PAM_USER_authorized_keys=""
while read l; do
if [[ -n "$l" && "${l###}" = "$l" ]]; then
authFileSHA256=$(ssh-keygen -l -f <(echo "$l"))
if [[ "${authFileSHA256}" == *"${PAM_USER_SHA256}"* ]]; then
PAM_USER_authorized_keys=$(echo ${authFileSHA256}| cut -d" " -f3)
break
fi
fi
done < ${authFile}
if [[ -n ${PAM_USER_authorized_keys} ]]
then
echo "$(timeStamp) Local user: ${PAM_USER}, authorized_keys user: ${PAM_USER_authorized_keys}" >> ${log}
else
echo "$(timeStamp) WARNING: no matching user in authorized_keys" >> ${log}
fi
I am the author of the OP; I came to the conclusion that what I need to achieve is not possible using SSH alone as of today (OpenSSH_6.9p1 Ubuntu-2, OpenSSL 1.0.2d 9 Jul 2015). But I found a great piece of software that uses encrypted Single Packet Authorization (SPA) to open the SSH port, and its new version (as of this post, the GitHub master branch) has a feature to execute a command whenever a user authorizes successfully.
FWKNOP - Encrypted Single Packet Authorization
fwknop sets iptables rules that allow access to given ports upon receipt of a single encrypted packet sent via UDP. After authorization it allows access for the authorized user for a given time, for example 30 seconds, closing the port afterwards while leaving established connections open.
1. To install on an Ubuntu linux:
The current version (2.6.0-2.1build1) in the Ubuntu repositories as of today still doesn't allow command execution on successful SPA (please use 2.6.8 from GitHub instead).
On client machine:
sudo apt-get install fwknop-client
On server side:
sudo apt-get install fwknop-server
Here is a tutorial on how to set up the client and server machines:
https://help.ubuntu.com/community/SinglePacketAuthorization
Then, after it is set up, on server side:
Edit /etc/default/fwknop-server
Change the line START_DAEMON="no" to START_DAEMON="yes"
Then run:
sudo service fwknop-server stop
sudo service fwknop-server start
2. Warning the admin on successful SPA (email, pushover script, etc.)
So, as stated above, the current version in the Ubuntu repositories (2.6.0-2.1build1) cannot execute a command on successful SPA. If you need this feature, as in the OP, it will be released in fwknop version 2.6.8, as stated here:
https://github.com/mrash/fwknop/issues/172
So if you need to use it right now, you can build from the GitHub master branch, which has the CMD_CYCLE_OPEN option.
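A rough sketch of building from source (assuming the usual autotools workflow for the fwknop repository; prerequisite package names may differ on your system):
sudo apt-get install build-essential autoconf automake libtool libpcap-dev
git clone https://github.com/mrash/fwknop.git
cd fwknop
./autogen.sh && ./configure && make
sudo make install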
3. More resources on fwknop
https://help.ubuntu.com/community/SinglePacketAuthorization
https://github.com/mrash/fwknop/ (project on GitHub)
http://www.cipherdyne.org/fwknop/ (project site)
https://www.digitalocean.com/community/tutorials/how-to-use-fwknop-to-enable-single-packet-authentication-on-ubuntu-12-04 (tutorial on DO's community)
I am the author of the OP. Alternatively, you can implement a simple log watcher like the following, written in Python 3, which keeps reading a file and executes a command when a line contains a given pattern.
logwatcher.python3
#!/usr/bin/env python3
# follow.py
#
# Follow a file like tail -f.
import sys
import os
import time
def follow(thefile):
thefile.seek(0,2)
while True:
line = thefile.readline()
if not line:
time.sleep(0.5)
continue
yield line
if __name__ == '__main__':
logfilename = sys.argv[1]
pattern_string = sys.argv[2]
command_to_execute = sys.argv[3]
print("Log filename is: {}".format(logfilename))
logfile = open(logfilename, "r")
loglines = follow(logfile)
for line in loglines:
if pattern_string in line:
os.system(command_to_execute)
Usage
Make the above script executable:
chmod +x logwatcher.python3
Add a cronjob to start it after reboot
crontab -e
Then write this line there and save:
@reboot /home/YOURUSERNAME/logwatcher.python3 "/var/log/auth.log" "session opened for user" "/sbin/myscript.sh"
The first argument of this script is the log file to watch, and the second argument is the string for which to look in it. The third argument is the script to execute when the line is found in file.
It is best if you use something more reliable to start/restart the script in case it crashes.
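For example, a minimal systemd unit could replace the cron entry and restart the watcher automatically if it crashes (a sketch; unit name and paths are illustrative):
# /etc/systemd/system/logwatcher.service
[Unit]
Description=Watch auth.log and run a command on matching lines
After=network.target

[Service]
ExecStart=/home/YOURUSERNAME/logwatcher.python3 /var/log/auth.log "session opened for user" /sbin/myscript.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Enable it with: systemctl enable --now logwatcher.service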
I'm trying to run my first playbook to install Java on four servers and subsequently define a JAVA_HOME environment variable.
ansible-playbook site.yml --check
PLAY [crave_servers] **********************************************************
GATHERING FACTS ***************************************************************
ok: [54.174.151.196]
ok: [54.174.197.35]
ok: [54.174.207.83]
ok: [54.174.208.240]
TASK: [java | install Java JDK] ***********************************************
changed: [54.174.197.35]
changed: [54.174.151.196]
changed: [54.174.208.240]
changed: [54.174.207.83]
ERROR: change handler (setvars) is not defined
I've placed my site.yml under /etc/ansible
---
- hosts: crave_servers
  remote_user: ubuntu
  sudo: yes
  roles:
    - java
I've placed main.yml under /etc/ansible/java/tasks
---
- name: install Java JDK
  apt: name=default-jdk state=present
  notify:
    - setvars
I've placed main.yml under /etc/ansible/handlers
---
- name: setvars
  shell: echo "JAVA_HOME=\"/usr/lib/jvm/java-7-openjdk-amd64\"" >> /etc/environment
Now I'm not sure if the syntax or structure of my handlers is correct. But it's obvious from the output that Ansible is able to find the correct role and execute the correct task; the task just can't find the handler.
Nobody else seems to have the same problem. And I don't really know how to debug it because my ansible version seems to be missing the config file.
You should put your handler in /etc/ansible/java/handlers/main.yml, as handlers are part of a role.
Remarks:
You should not use your handler as written, as it would append the line to /etc/environment each time you run this playbook. I would recommend the lineinfile module instead (see the sketch after these remarks).
You should reconsider your decision to put ansible playbooks into /etc
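A hedged sketch of what an idempotent setvars handler could look like with lineinfile (same handler path as suggested above; the regexp makes the line get replaced instead of appended on every run):
# /etc/ansible/java/handlers/main.yml
---
- name: setvars
  lineinfile:
    dest: /etc/environment
    regexp: '^JAVA_HOME='
    line: 'JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64"'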