I have been trying to automate a backup of some server files from a target machine to our S3 bucket, but when I run the playbook from Ansible Tower the S3 module doesn't seem to be able to see any files on the target machine.
AWS authentication is set up with IAM and working properly (the authentication check succeeds), and I've confirmed from the log files that the Ansible session successfully signs in to the Ubuntu EC2 instance.
The S3 copy step looks like this:
- name: Push wp conf to archive
  s3:
    bucket: '{{master_config.aws.s3.bucket_name}}'
    object: wp_config.php
    src: '{{master_config.server.wp_config}}'
    mode: put
  become: yes
  become_user: root
This works fine.
But when I tried using the aws_s3 module with the 'remote_src' flag set, like so:
- name: Push wp conf to archive
  aws_s3:
    bucket: '{{master_config.aws.s3.bucket_name}}'
    object: wp_config.php
    src: '{{master_config.server.wp_config}}'
    mode: put
    remote_src: yes
  become: yes
  become_user: root
it produces an error:
fatal: [server_address]: FAILED! => {"changed": false, "msg": "Could not find or access '/var/www/html/wp-config.php'"}
I came across this discussion in the GitHub repo for the project, which seems to confirm my suspicions: https://github.com/ansible/ansible/pull/40192
If anyone's managed to get this working I'd really appreciate any tips. I've double- and triple-checked everything else that could be causing an issue, but it seems that the s3 / aws_s3 modules just behave differently on Tower.
I'm running the AMI provided on the tower website at https://www.ansible.com/products/tower/trial
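In the meantime, the workaround I'm considering is shelling out to the AWS CLI on the target host instead of using aws_s3. This is only a sketch and assumes the CLI is installed on the instance and that its IAM role (or configured credentials) allows s3:PutObject on the bucket:

- name: Push wp conf to archive via the AWS CLI (workaround sketch)
  # runs aws s3 cp on the target host, so the source path is resolved remotely
  command: >
    aws s3 cp {{ master_config.server.wp_config }}
    s3://{{ master_config.aws.s3.bucket_name }}/wp_config.php
  become: yes
  become_user: root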
I am trying to run an Ansible Tower Template with a task that requires root privileges on localhost. According to RedHat's knowledgebase:
It isn't possible to use Tower with local action to escalate to the root user. It will be necessary to alter your task to connect via SSH and then escalate to root using another user (not AWX).
RedHat solution
I have already tried changing the ansible_connection host variable under localhost to ssh, with no success. I also tried completely removing the variable, hoping that it would default to an SSH connection, which also did not work.
I realize that the variable value is probably not defined correctly, but I was unable to find the possible values listed in the documentation.
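For reference, this is roughly how I set the host variables in the inventory when attempting the SSH connection (the user name is just a placeholder for a non-AWX account that can sudo to root):

all:
  hosts:
    localhost:
      ansible_connection: ssh
      ansible_host: 127.0.0.1
      ansible_user: deploy   # placeholder: any user other than awx with sudo rights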
The task in question:
- name: "Ansible Create directory if not exists"
file:
path: /etc/custom-switch
state: directory
mode: 0755
group: awx
owner: awx
when: exec_dir.stat.exists == false
delegate_to: localhost
become: yes
Job fails with the following error:
`sudo: effective uid is not 0, is sudo installed setuid root`
I have created an EKS cluster and am able to run kubectl commands from my EC2 instance. I then downloaded the config file from the ~/.kube/config location to my local machine, but there I am not able to run kubectl commands and get an authentication error.
What is the right way to access an EKS cluster from a local machine?
Try looking into the users section in ~/.kube/config: check the user under the name of the cluster, and make sure your local machine has the same working profile as the EC2 instance.
...
command: aws
env:
  - name: AWS_PROFILE
    value: <make sure this entry is valid on your local machine>
If this doesn't work, can you briefly describe in your question how you configured the kubeconfig on the EC2 instance?
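For reference, a complete users entry as generated by the AWS CLI usually looks roughly like the sketch below; the cluster name, region, and profile are placeholders, and the exec apiVersion varies with the CLI version:

users:
- name: my-eks-cluster                                   # placeholder name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # older CLIs generate v1alpha1
      command: aws
      args:
        - --region
        - us-east-1                                      # placeholder region
        - eks
        - get-token
        - --cluster-name
        - my-eks-cluster                                 # placeholder cluster name
      env:
        - name: AWS_PROFILE
          value: default                                 # must exist in ~/.aws/ on the local machine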
I have a playbook to download a file from an S3 bucket to a target host. I am using the aws_s3 module in Ansible. The block looks something like this:
- name: Get backup file from s3
  aws_s3:
    bucket: "{{ bucket_name }}"
    object: "{{ object_name }}"
    dest: /usr/local/
    mode: get
My question is whether this will download the file to the Ansible control host or to the target host. Is there any other specification I should be giving to address this difference?
Unless you delegate this action to another host, it will download the object to the managed nodes (a.k.a. the target hosts).
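If the goal is to land the file on the control node instead, a minimal sketch (assuming the control node has AWS credentials available; the dest path is just a placeholder) would be to delegate the task:

- name: Get backup file from s3 onto the control node (sketch)
  aws_s3:
    bucket: "{{ bucket_name }}"
    object: "{{ object_name }}"
    dest: /usr/local/backup_file   # placeholder path on the control node
    mode: get
  delegate_to: localhost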
I'm trying to create an ec2 instance and running into the following problem:
msg: Instance creation failed => UnauthorizedOperation:
You are not authorized to perform this operation.
Encoded authorization failure message: ....very long encoded message.
Update: This only happens when using the secret and access key for a specific user on my account. If I use the access keys for root then it works. But that's not what I want to do. I guess I'm missing something about how users authorize with ec2.
My Ansible YAML uses the AWS access and secret key in that order.
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    - test_vars.yml
  tasks:
    - name: Spin up Ubuntu Server 14.04 LTS (PV) instance
      local_action:
        module: ec2
        region: 'us-west-1'
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        instance_type: 't1.micro'
        image: ami-f1fdfeb4
        wait: yes
        count: 1
      register: ec2
You need to go into the AWS IAM console (https://console.aws.amazon.com/iam) and give that user (the one related to the access key in your script) permissions (a policy) to create EC2 instances.
It sounds like your 'root' user account in AWS already has those permissions, which may help when comparing the two users to figure out what policy you need to add. You could just create an EC2 group with the right policy from the policy generator and add that user to the group.
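As a rough illustration, a policy that permits instance creation could look like the sketch below (shown in YAML for readability; the IAM console expects the equivalent JSON, and the actions and resources here are just a starting point to tighten):

Version: '2012-10-17'
Statement:
  - Effect: Allow
    Action:
      - ec2:RunInstances        # the API call behind instance creation
      - ec2:DescribeImages
      - ec2:DescribeInstances
      - ec2:CreateTags
    Resource: '*'               # narrow this to specific ARNs where possible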
It looks like a permission issue with AWS. The root user has full permissions, so it will definitely work with that. Check whether your AWS-specific user has permission to launch an instance.
I was testing a Jenkins build job in which I was using Ansible to scp a tarball to a number of servers. Below is the Ansible YAML file:
- hosts: websocket_host
  user: root
  vars:
    tarball: /data/websocket/jenkins/deployment/websocket_host/websocket.tgz
    deploydir: /root
  tasks:
    - name: copy build to websocket server
      action: copy src=$tarball dest=$deploydir/websocket.tgz
    - name: untar build on websocket server
      action: command tar xvfz $deploydir/websocket.tgz -C $deploydir
    - name: restart websocket server
      action: command /root/websocket/bin/websocket restart
The first two tasks worked successfully, with the command /root/websocket/bin/websocket restart failing. I have since been unable to log in (without a password) to any of the servers defined in my Ansible host file for websocket_host. I have verified that all my permissions settings are correct on both the host and client machines. I have tested this from several client machines, and they all now require me to enter a password to ssh; yesterday I was able to ssh (via my public key) with no problem. I am using the root user on the host machines and wonder if copying files to the /root directory caused this issue, as it was the last command I was able to run successfully via a passwordless ssh session.
Turns out the Jenkins job changed the ownership and group of my /root directory. Running chown root.root /root fixes everything.
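For anyone hitting the same thing, a minimal sketch of guarding against this in the playbook itself (assuming root's home is /root and sshd's StrictModes check is what rejects the misowned directory) could be an explicit file task:

- name: ensure /root ownership is not clobbered by the deploy (sketch)
  file:
    path: /root
    state: directory
    owner: root
    group: root
    mode: '0700'   # sshd refuses pubkey auth if the home dir is misowned or group/world writable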