Where can I put common variables outside a playbook in Ansible?

I am using playbooks to run my modules. I would like to move my common variables outside the playbook, for two reasons:
Security: credentials such as the username and password should not live in the playbook.
Reuse: putting global variables in one common place reduces repetition and the lines of code in the playbook.
Right now my playbook looks something like this:
- hosts: localhost
  tasks:
  - name: Get all Storage Service Levels
    StorageServiceLevelModule: host=<ip> port=<port> user=admin password=<password>
      action=get name='my_ssl'
    register: jsonResultforSSLs
  - name: print the SSL key
    debug: msg="{{ jsonResultforSSLs.meta.result.records[0].key }}"
  - name: Get all Storage VMs
    StorageVMModule: host=<ip> port=<port> user=admin password=<password>
      action=get name=my_svm
    register: jsonResultforSVMs
I want to put
host=<ip> port=<port> user=admin password=<password>
outside the playbook and reuse it in all tasks of my playbooks. How can I do this?
Please let me know if any clarification is required.

You can specify your own variables for all or certain hosts in the inventory file or in the sub-directories related to it (like ./group_vars). Go to this webpage; there you can see an example of a file in that directory, which must have the name of a group and be written in YAML. The ./group_vars directory must be in the same directory as your hosts file. For example, if your hosts file is ./inventory/hosts, then the files with variables should be ./inventory/group_vars/<group_name>. Keep in mind that the variables defined in those files will only apply to the members of the group. Example of the content of a file in that directory:
---
ip: 1.1.1.1
port: 420
password: 'password1' # should be encrypted with Ansible Vault
...
And then you would use them as usual:
- name: Get all Storage VMs
  StorageVMModule: host='{{ ip }}' port='{{ port }}' user=admin action=get name=my_svm
  register: jsonResultforSVMs

Variables can be loaded in different ways. You can define a file named all inside a vars/ directory, and those variables are available throughout the whole playbook.
You can also define them in a file and provide it when executing the playbook with -e @filename. I find this the most convenient way.
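For instance, an external vars file passed with -e might look like this (the file name and values here are only illustrative):

```yaml
---
# common_vars.yml: shared connection settings used by several playbooks
storage_host: 10.22.33.44
storage_port: 8080
storage_user: admin
```

It would then be supplied at run time with ansible-playbook site.yml -e @common_vars.yml (note the @ prefix, which tells Ansible to read the variables from a file rather than treating the argument as a literal key=value string).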
Check this link from the docs; I think you might find it very useful.
I strongly suggest you use roles. Each role has a vars folder where you can put the variables relevant to that role. You can then provide their values using OS environment variables.
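As a sketch of that pattern (the role name and variable names are hypothetical), a role's vars file can pull values from the environment with the env lookup:

```yaml
# roles/storage/vars/main.yml
# Read credentials from the environment; fall back to 'admin' when unset.
storage_user: "{{ lookup('env', 'STORAGE_USER') | default('admin', true) }}"
storage_password: "{{ lookup('env', 'STORAGE_PASSWORD') }}"
```

The second argument to default() makes an empty environment variable fall back as well, not just an undefined one.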

To handle secrets as you describe, you'd ideally use a secret store such as HashiCorp Vault, but Ansible also has its own way of encrypting secret information, called Ansible Vault, which operates at the file level.
What you should never do is put secrets in plain-text files and commit them to a source control system. Ansible Vault encrypts them to get around this.
Ansible Vault isn't complicated and has very good documentation here.
You can create a new encrypted file like this:
ansible-vault create filename.yml
You can edit the file with this:
ansible-vault edit filename.yml
You can encrypt an unencrypted file like this:
ansible-vault encrypt filename.yml
You can decrypt with:
ansible-vault decrypt filename.yml
You can then use these files in playbooks and commit them to source control with their contents protected.
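For example, a vault-encrypted file loads like any other vars file (file and variable names here are illustrative):

```yaml
# site.yml
- hosts: all
  vars_files:
    - secrets.yml        # created with: ansible-vault create secrets.yml
  tasks:
    - name: Use a vaulted credential
      debug:
        msg: "Connecting as {{ vault_user }}"
```

Run it with ansible-playbook site.yml --ask-vault-pass so Ansible can decrypt the file at run time.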
Another approach is to store the secrets in an external secret store (such as Vault) and export them to environment variables, then read the environment variables and assign them to Ansible variables. This way nothing sensitive ever goes into source control at all; this is my preferred approach.
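A sketch of that flow, assuming the HashiCorp Vault CLI is available (the secret path and variable names are illustrative):

```yaml
# In the shell, before running Ansible:
#   export DB_PASSWORD="$(vault kv get -field=password secret/myapp/db)"
# Then map the environment variable to an Ansible variable:
vars:
  db_password: "{{ lookup('env', 'DB_PASSWORD') }}"
```

The playbook itself never contains the secret, only the lookup, so it can be committed safely.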
That's secrets taken care of.
For common structures you can use group_vars and set different values for different groups; this is explained here.

To second Vinny -
Roles. Roles, roles, roles.
The default structure for roles includes a defaults directory in which you can define default values in defaults/main.yml. This is about the lowest priority setting you can use, so I like it better than vars/main.yml for setting reasonable values that can be easily overridden at runtime, but as long as you pick a consistent structure you're good.
I don't personally like the idea of a "common" role just for variables everything uses, but if your design works well with that, be sure to prefix all the variable names with "virtual namespace" strings. For example, don't call it repo; call it common_git_repo or common_artifactory, or something more specific if you can.
Once you include that role in a playbook, make certain the defaults file is statically loaded before the values are used; if it is, you don't have to worry much about it. Just use {{ common_git_repo }} where you need it. It will be there, which is why you want virtual namespacing: to avoid collisions between effectively global names.
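A defaults file following that naming convention might look like this (the values are placeholders):

```yaml
# roles/common/defaults/main.yml
# Namespaced with the common_ prefix to avoid clashing with other roles' variables.
common_git_repo: https://example.com/org/project.git
common_artifactory: https://artifactory.example.com/repo
```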
When you need to override values, you can stage them accordingly. We write playbook-specific overrides of role defaults in the vars: section of a playbook, and then dynamically write last-minute overrides into a Custom.yml file that gets loaded in the vars_files: section. Watch your security, but it's very flexible.
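The layering described above could be sketched like this (file names follow the answer; the values are illustrative):

```yaml
# site.yml
- hosts: all
  roles:
    - common               # supplies the defaults/main.yml values
  vars:
    common_git_repo: https://example.com/team/fork.git   # playbook-specific override
  vars_files:
    - Custom.yml           # last-minute overrides written just before the run
```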
We also write variables right into the inventory. If you use a dynamic inventory you can embed host- and/or group-specific variables there. This works very well. For the record, you can use YAML output instead of JSON. Here's a simplistic template - we sometimes use the shell script that runs Ansible as the inventory:
case $# in
  0)  # execute the playbook
      ansible-playbook -i Jenkins.sh -vv site.yml
      ;;
  *)  case $1 in
        # ansible-playbook will call the script with --list for hosts
        --list)
          printf "%s\n\n" ---
          for group in someGroup otherGroup admin managed each all
          do  printf "\n$group:\n  hosts:\n"
              for s in $Servers
              do  printf "  - $s\n"
              done
              printf "  vars:\n"
              printf "    ansible_ssh_user: \"$USER\"\n"
              printf "    ansible_ssh_pass: \"$PSWD\"\n\n"
          done
          ;;
      esac
      ;;
esac
You can also use --extra-vars as a last-minute, highest-priority override.

Related

Ansible: how to make Paramiko use ~/.ssh/config?

Ideally, of course, I'd like Ansible to completely take care of this.
If this is not possible (why?!), then, at least, I want to be able to extract the contents of ~/.ssh/config into some other format and then make Ansible feed it to Paramiko. I am surely not the first one faced with this task, so what's the accepted way of doing it?
I need this in order to use authorized_keys module to turn on passwordless authentication.
Btw, I wish Ansible emitted a warning when falling back to a non-default backend (like Paramiko). I lost a couple of hours yesterday and actually had to read the Ansible sources to figure out why a perfectly running Ansible command suddenly stopped working when I added the -k / --ask-pass option (yes, I am completely new to Ansible).
You can define this configuration in the Ansible configuration ini file or via environment variables -- specifically ANSIBLE_SSH_ARGS.
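For example, when using the default OpenSSH-based connection (rather than Paramiko), ansible.cfg can point ssh at a specific config file (the path here is illustrative):

```ini
# ansible.cfg
[ssh_connection]
ssh_args = -F /home/me/.ssh/config
```

The same value can also be supplied through the ANSIBLE_SSH_ARGS environment variable instead of the ini file.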

Ansible, how to copy host key from control machine to targets

How do I copy the Ansible control machine's host key to the target servers' known_hosts? The problem is that Ansible expects this setup to have already been made, in order to connect to those target servers without a prompt.
Should I use tasks with password authentication and secret variables to setup the keys, or configure host keys manually before provisioning?
You can either set the ansible_ssh_user and ansible_ssh_pass variables, or pass them from the command line when you run the playbook:
--user and --ask-pass.
You can put the variables into a vars file encrypted with Ansible Vault to keep them secret (but don't forget to include that file for your target hosts).
Please check this answer for more details: https://serverfault.com/questions/628989/how-set-to-ansible-a-default-user-pass-pair-to-ssh-connection
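As a sketch, the connection variables could live in a vaulted group_vars file (the names and values are illustrative):

```yaml
# group_vars/all.yml -- protect it with: ansible-vault encrypt group_vars/all.yml
ansible_ssh_user: deploy
ansible_ssh_pass: s3cret
```

Or, equivalently, pass them at run time: ansible-playbook site.yml --user deploy --ask-pass.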
You can specify the variable for the SSH key, ansible_ssh_private_key_file, later in the task (I suppose you should use set_fact). I'm not completely sure, but if you play around it might work.
P.S. On the other hand, if you already specify the username and password, there is no difference which one you use.

Use another command instead of ssh for Ansible

I have an Ansible configuration which I know works on my local machines. However, I'm now trying to set it up on my company's machines, which use a wrapper command similar to ssh (let's call it 'myssh')
for example, to access these machines, instead of writing
ssh myuser@123.123.123.123
you write
myssh myuser@123.123.123.123
which ends up calling ssh, among other things.
My question is, is there a way to swap which command ansible uses for accessing machines?
You can create a Connection Type Plugin to achieve this. Looking at the ssh plugin, it appears it might be as easy as replacing the ssh_cmd in line 333. Also specify myssh in line 69.
See here for where to place the modified file. In addition, you can specify a custom location and let Ansible know about it via the connection_plugins setting in ansible.cfg.
Finally again in your ansible.cfg set the transport setting to your new plugin:
transport = myssh
PS: I have never done anything like that before. This is only info from the docs.
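Putting those settings together, the relevant ansible.cfg entries might look like this (assuming the modified plugin is saved as myssh.py in a local connection_plugins directory):

```ini
# ansible.cfg
[defaults]
connection_plugins = ./connection_plugins
transport = myssh
```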

Why does a file uploaded to the server become read-only?

After uploading an .xls file to the server it becomes read-only. How do I make it read/write again?
You can use PHP's chmod() function.
Simply write something like this:
chmod("/dir/file.xls", 0777);
more about it in documentation:
http://php.net/manual/en/function.chmod.php
You have to check what read-only really means in this case: is it a lack of write permission (then use chmod to set one after all), or is the file owned by someone else (which often happens on incorrectly configured hostings, where uploads are handled by httpd, not by the user owning the file hierarchy)? If you have shell access, run this:
$ ls -l
to list the files and see who owns them, then check your own user id:
$ id
If these do not match, you may need to reconfigure your server.

Is it possible to restrict an ssh key to specific directories

I have an account on a server that I need to give sftp access to another person. This person however only needs access to a small subset of directories. Is it possible, without creating another user account, to restrict an ssh key to that subset of directories?
Basically the website on which these directories are located lives within the home directory of a specific user account. I would prefer not to have to create a separate user account just to lock the use down to those directories. If it is possible to lock down the access to specific directories using an ssh key that would be ideal.
It's possible, but it's sort of a hack. The much preferred, simpler way is just to only grant that user permissions to certain files and directories.
This is an answer on how to accomplish your goal using ssh rather than sftp. This has some chance of being acceptable to the OP because it still uses the ssh tool chain.
This technique uses a feature of ssh that allows it to run a command based on the private key presented to the host machine. When the host sees that key, it runs the associated command. For the command we will use cat, which will dump the file.
Add a line that looks like this to ~mr_user/.ssh/authorized_keys2:
command="/usr/bin/cat ~/sshxfer/myfile.tar.gz.uu",no-port-forwarding ssh-dss xxxPUBLIC_KEYxxx mr_user@tgtmach
populate the file like this:
uuencode -m myfile.tar.gz /dev/stdout >~mr_user/sshxfer/myfile.tar.gz.uu
Transfer the file by being on the target machine and running this:
ssh -i ~/keys/privatekey.dsa mr_user@srcmach |sed -e's/
//g' |uudecode >myfile.tar.gz
The tricky part to that command is there is a newline in the sed command to remove the newlines from the .uu file.
I did not find a way to pass in the name of a file to transfer, so I had to make a key for each file I wanted to transfer. This was okay for my use case because I only had two files to transfer.