Ansible aws_s3 module get file location - amazon-s3

I have a playbook to download a file from an S3 bucket to a target host, using the aws_s3 module in Ansible. The block looks something like this:
- name: Get backup file from s3
  aws_s3:
    bucket: "{{ bucket_name }}"
    object: "{{ object_name }}"
    dest: /usr/local/
    mode: get
My question is whether this will download the file to the Ansible control node or to the target host. Is there any other option I should specify to control where the file lands?

Unless you delegate this action to another host, it will download the object to the managed nodes (aka. target hosts).
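If you want the file on the Ansible control node instead, delegate the task to localhost. A minimal sketch, assuming the control node has the AWS credentials available (the destination filename here is illustrative; for mode: get, dest is a file path, not a directory):
- name: Get backup file from s3 onto the control node
  aws_s3:
    bucket: "{{ bucket_name }}"
    object: "{{ object_name }}"
    dest: /usr/local/backup.dump
    mode: get
  delegate_to: localhost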

Can Ansible match hosts passed as a parameter without using the add_host module

Is it possible to pass an IP address to an Ansible playbook as the parameter 'Source_IP' and use it as hosts?
Below is my playbook ipinhost.yml:
---
- name: Play 2- Configure Source nodes
  hosts: "{{ Source_IP }}"
  serial: 1
  tasks:
    - name: Copying from "{{ inventory_hostname }}" to this ansible server.
      debug:
        msg: "MY IP IS: {{ Source_IP }}"
The playbook fails to run with the message "Could not match supplied host pattern." Output below:
ansible-playbook ipinhost.yml -e Source_IP='10.8.8.11'
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: 10.8.8.11
PLAY [Play 2- Configure Source nodes] ***********************************************************************************************************************
skipping: no hosts matched
PLAY RECAP **************************************************************************************************************************************************
I do not wish to use Ansible's add_host, i.e., I do not wish to build a dynamic host list, as Source_IP will always be a single server.
Please let me know if this is possible, and how my playbook can be tweaked to make it run with hosts matching '10.8.8.11'.
If it is always a single host, a possible solution is to pass a static inline inventory to ansible-playbook:
Target your play at the 'all' group, i.e. hosts: all.
Call your playbook with an inline inventory containing that one host, as shown below. Watch out: the trailing comma after the IP is important:
ansible-playbook -i 10.8.8.11, ipinhost.yml
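Putting it together, a minimal sketch of the adjusted playbook; only the hosts line changes. With the inline inventory above, inventory_hostname resolves to 10.8.8.11, so the Source_IP extra-var is no longer needed:
---
- name: Play 2- Configure Source nodes
  hosts: all
  serial: 1
  tasks:
    - name: Show the IP of the current host
      debug:
        msg: "MY IP IS: {{ inventory_hostname }}"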

Get S3 object size with Ansible

There's a backup script that dumps some databases and uploads the backups to S3.
I'm writing an Ansible playbook to check the S3 backup sizes independently, from some other host. It would alert me if size is less than X GiB as that would indicate a failed backup. Nothing unknown so far, but...
I don't seem to be able to get the size of the requested object from the S3 bucket with the aws_s3 module. Any ideas?
I don't know of an S3 module that allows running ls-style commands over S3 buckets. What you could do is run an aws s3api command using the command module.
---
- name: Get S3 object size
  hosts: all
  connection: local
  gather_facts: no
  vars_files:
    - ./secret.yml
  tasks:
    - name: Get the `list-objects` result for the `object`
      command: >
        aws s3api list-objects
        --bucket {{ bucket }}
        --prefix {{ object }}
      register: output
    - name: Parse the `list-objects` output
      set_fact:
        object_size: '{{ output.stdout | from_json | json_query("Contents[0].Size") }}'
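To turn this into the alert the question describes, a follow-up task can compare the parsed size against a threshold. A minimal sketch; min_size_gib is a hypothetical variable you would define yourself:
- name: Fail if the backup is smaller than the expected minimum
  assert:
    that:
      - (object_size | int) >= (min_size_gib | int) * 1024 * 1024 * 1024
    msg: "Backup object is only {{ object_size }} bytes"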
I hope it helps

Is Ansible Tower compatible with the aws_s3 module?

I have been trying to automate a backup of some server files from a target machine to our S3 bucket, but when I run the playbook from Ansible Tower the S3 module doesn't seem to be able to see any files on the target machine.
AWS authentication is set up with IAM and working properly (authentication check succeeds) and I've confirmed that the Ansible session is successfully signing in to the ubuntu EC2 instance from the log files.
The S3 copy step looks like this:
- name: Push wp conf to archive
  s3:
    bucket: '{{master_config.aws.s3.bucket_name}}'
    object: wp_config.php
    src: '{{master_config.server.wp_config}}'
    mode: put
  become: yes
  become_user: root
This works fine.
But when I tried using the aws_s3 module with the 'remote_src' flag set, like so:
- name: Push wp conf to archive
  aws_s3:
    bucket: '{{master_config.aws.s3.bucket_name}}'
    object: wp_config.php
    src: '{{master_config.server.wp_config}}'
    mode: put
    remote_src: yes
  become: yes
  become_user: root
it produces an error:
fatal: [server_address]: FAILED! => {"changed": false, "msg": "Could not find or access '/var/www/html/wp-config.php'"}
I came across this discussion in the GitHub repo for the project, which seems to confirm my suspicions: https://github.com/ansible/ansible/pull/40192
If anyone's managed to get this working I'd really appreciate any tips. I've double- and triple-checked everything else that could be causing an issue, but it seems that the s3 / aws_s3 modules just behave differently on Tower.
I'm running the AMI provided on the tower website at https://www.ansible.com/products/tower/trial
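In the meantime, a workaround I'm considering is to fetch the file from the target to the control node first and push it to S3 from there. A sketch only; the /tmp staging path is arbitrary, and it assumes the control node holds the S3 credentials:
- name: Fetch wp-config.php from the target to the control node
  fetch:
    src: '{{master_config.server.wp_config}}'
    dest: /tmp/wp_config.php
    flat: yes
- name: Upload the fetched copy to S3 from the control node
  aws_s3:
    bucket: '{{master_config.aws.s3.bucket_name}}'
    object: wp_config.php
    src: /tmp/wp_config.php
    mode: put
  delegate_to: localhost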

Ansible Return Value - Need IP Address

We are currently using Ansible in conjunction with OpenStack. I've written a playbook (to deploy a new server via OpenStack) using the os_server module with auto_ip: yes, so the new server gets an IP address assigned by OpenStack.
If I run the playbook with -vvvv, I get a long output with an IP address listed somewhere in the middle.
So, because I am a lazy guy, I want to put just this IP address into a variable and have it shown in an extra field.
It should look like this:
"........output stuf.....
................................
.............................
..............................
..............................."
"The IP Adress of the New server is ....."
Do you know of any way to put this IP address field into a variable, or to filter the output down to just the IP address?
If you need a screenshot to see what I mean, no problem, just say so and I'll provide it!
The Ansible OpenStack module uses the shade Python package to create a server.
According to the shade source code, create_server method returns a dict representing the created server.
Try to register the result of os_server and debug it. The IP Address should be there.
Example:
- name: launch a compute instance
  hosts: localhost
  tasks:
    - name: launch an instance
      os_server:
        state: present
        ...
        auto_ip: yes
      register: result
    - debug: var=result
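From there, printing just the address means referencing the right key in the registered result. A minimal sketch, assuming a floating IP was assigned (on fixed-IP-only setups, private_v4 would be the field to read instead):
- name: Show only the assigned IP
  debug:
    msg: "The IP address of the new server is {{ result.openstack.public_v4 }}"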
Also, you can have a look at this sample playbook, which does exactly this.
Here's an excerpt:
- name: create cluster notebook VM
  register: notebook_vm
  os_server:
    name: "{{ cluster_name }}-notebook"
    flavor: "{{ notebook_flavor }}"
    image: "CentOS-7.0"
    key_name: "{{ ssh_key }}"
    network: "{{ network_name }}"
    security_groups:
      - "{{ cluster_name }}-notebook"
    auto_ip: yes
    boot_from_volume: "{{ notebook_boot_from_volume }}"
    terminate_volume: yes
    volume_size: 25

- name: add notebook to inventory
  add_host:
    name: "{{ cluster_name }}-notebook"
    groups: notebooks
    ansible_ssh_host: "{{ notebook_vm.openstack.private_v4 }}"
    ansible_ssh_user: cloud-user
    public_ip: "{{ notebook_vm.openstack.public_v4 }}"
    public_name: "{{ lookup('dig', notebook_vm.openstack.public_v4 + '/PTR', wantlist=True)[0] }}"
  tags: ['vm_creation']

amazon ec2 - AWS EC2 instance create fails for user via Ansible

I'm trying to create an ec2 instance and running into the following problem:
msg: Instance creation failed => UnauthorizedOperation:
You are not authorized to perform this operation.
Encoded authorization failure message: ....very long encoded message.
Update: this only happens when using the secret and access keys for a specific user on my account. If I use the access keys for root, it works. But that's not what I want to do. I guess I'm missing something about how user authorization works with EC2.
My Ansible YAML uses the AWS access and secret keys, in that order.
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    - test_vars.yml
  tasks:
    - name: Spin up Ubuntu Server 14.04 LTS (PV) instance
      local_action:
        module: ec2
        region: 'us-west-1'
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        instance_type: 't1.micro'
        image: ami-f1fdfeb4
        wait: yes
        count: 1
      register: ec2
You need to go into the AWS IAM console ( https://console.aws.amazon.com/iam ) and give that user (the one the access key in your script belongs to) a policy that grants permission to create EC2 instances.
It sounds like your 'root' AWS account already has those permissions, which helps for comparing the two users to figure out what policy you need to add. You could simply create an EC2 group with the right policy from the policy generator and add the user to that group.
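If you prefer to manage this from Ansible instead of the console, the iam_policy module can attach such a policy. A sketch only; the user name, policy name, and action list below are assumptions to adapt:
- name: Attach a minimal EC2-launch policy to the IAM user (names are hypothetical)
  iam_policy:
    iam_type: user
    iam_name: my-ansible-user
    policy_name: AllowRunInstances
    state: present
    policy_json: >
      {"Version": "2012-10-17",
       "Statement": [{"Effect": "Allow",
                      "Action": ["ec2:RunInstances", "ec2:DescribeInstances",
                                 "ec2:DescribeImages", "ec2:DescribeKeyPairs",
                                 "ec2:DescribeSecurityGroups", "ec2:DescribeSubnets"],
                      "Resource": "*"}]}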
It looks like a permission issue with AWS. The root user has full permissions, so it will definitely work with that. Check whether your specific AWS user has permission to launch an instance.