AWS EC2 instance creation fails for a specific user via Ansible - authentication

I'm trying to create an ec2 instance and running into the following problem:
msg: Instance creation failed => UnauthorizedOperation:
You are not authorized to perform this operation.
Encoded authorization failure message: ....very long encoded message.
Update: This only happens when using the secret and access key for a specific user on my account. If I use the access keys for root then it works. But that's not what I want to do. I guess I'm missing something about how users authorize with ec2.
My Ansible YAML uses the AWS access key and secret key, in that order.
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    - test_vars.yml
  tasks:
    - name: Spin up Ubuntu Server 14.04 LTS (PV) instance
      local_action:
        module: ec2
        region: 'us-west-1'
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        instance_type: 't1.micro'
        image: ami-f1fdfeb4
        wait: yes
        count: 1
      register: ec2

You need to go into the AWS IAM console ( https://console.aws.amazon.com/iam ) and give the user associated with the access key in your script permissions (a policy) to create EC2 instances.
It sounds like your 'root' AWS account already has those permissions, which may help when comparing the two users to figure out what policy you need to add. You could simply create an EC2 group with the right policy from the policy generator and add that user to the group.
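For example, a minimal sketch of attaching such a policy from Ansible itself with the iam_policy module; the IAM user name, policy name, and action list below are placeholders, so adjust them to whatever the policy generator gives you:

- name: attach a minimal EC2 launch policy to the IAM user used by the playbook (sketch)
  iam_policy:
    iam_type: user
    iam_name: ansible-deployer        # placeholder: the user whose access key the playbook uses
    policy_name: allow-ec2-launch
    state: present
    policy_json: "{{ ec2_launch_policy | to_json }}"
  vars:
    ec2_launch_policy:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - ec2:RunInstances
            - ec2:Describe*
            - ec2:CreateTags
          Resource: '*'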

It looks like a permissions issue with AWS. The root user has full permissions, so it will definitely work with that. Check whether your specific AWS user has permission to launch an instance.

Related

Ansible Inventory Specifying the Same Host with Different Users and Keys for Initial SSH User Setup and Disabling Root Access

I am attempting to have playbooks that run once to set up a new user and disable root ssh access.
For now, I am doing that by declaring all of my inventory twice. Each host needs an entry that connects as the root user, which is used to create a new user, set up SSH settings, and then disable root access.
Then each host needs another entry with the new user that gets created.
My current inventory looks like this. It's only one host for now, but with a larger inventory, the repetition would just take up a ton of unnecessary space:
---
# ./hosts.yaml
all:
  children:
    master_roots:
      hosts:
        demo_master_root:
          ansible_host: a.b.c.d # same ip as below
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
Is there a cleaner way to do this?
Is this an anti-pattern in any way? It is not idempotent. It would be nice to have this run in a way that running the same playbook twice always has the same output - either "success", or "no change".
I am using DigitalOcean and they have a functionality to have this done via a bash script before the VM comes up for the first time, but I would prefer a platform-independent solution.
Here is the playbook for setting up the users & ssh settings and disabling root access
---
# ./initial-host-setup.yaml
#
# References
# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
# - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
- hosts: master_roots
  become: 'yes'
  tasks:
    - name: create the 'infraops' user
      user:
        state: present
        name: infraops
        password_lock: 'yes'
        groups: sudo
        append: 'yes'
        createhome: 'yes'
        shell: /bin/bash
    - name: add authorized keys for the infraops user
      authorized_key:
        user: infraops
        key: "{{ item }}"
      with_file:
        - "{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub"
    - name: allow infraops user to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'infraops ALL=(ALL) NOPASSWD: ALL'
        validate: visudo -cf %s
    - name: disable empty password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitEmptyPasswords'
        line: PermitEmptyPasswords no
      notify: restart sshd
    - name: disable password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^(#\s*)?PasswordAuthentication '
        line: PasswordAuthentication no
      notify: restart sshd
    - name: Disable remote root user login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
Everything after this would use the masters inventory.
EDIT
After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses, didn't research the others), which is cross-provider enough for me to just stick with a bash init script solution.
I would still be interested & curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.
Regardless of any Ansible limitations, it seems that without a cloud-init script you can't have this: either the server starts with a root (or similar) user that can perform these actions, or it starts without any user with those powers, in which case the actions can't be performed at all.
Further, I have seen Ansible playbooks and bash scripts that try to achieve the desired "idempotence" (complete with no errors even if root is already disabled) by testing root SSH access and then falling back to another user. But "I can't ssh with root" is a poor test for "is the root user disabled", because there are plenty of ways your SSH access could fail even though the server is still configured to allow root to ssh.
EDIT 2: placing this here, since I can't use newlines in my response to a comment:
β.εηοιτ.βε responded to my assertion:
"but 'I can't ssh with root' is a poor test for 'is the root user disabled' because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh"
with:
"then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"
It sounds like the suggestion is:
- attempt ssh with root
- if success, we know user/ssh setup tasks have not completed, so run those tasks
- if failure, attempt ssh with infraops
- if success, go ahead and run everything except the user creation again to ensure ssh config is as desired
- if failure... ? something else is probably wrong, since I can't ssh with either user
I am not sure what this sort of if-then failure recovery actually looks like in an Ansible script.
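The closest I can picture is something like the sketch below, where a local ssh probe drives a conditional; the timeout is arbitrary, the debug tasks stand in for the real setup work, and it still relies on the weak "can I ssh as root" test described above:

- hosts: masters
  gather_facts: no
  tasks:
    - name: probe whether root ssh still works (the weak test discussed above)
      local_action: command ssh -o BatchMode=yes -o ConnectTimeout=5 root@{{ ansible_host }} exit
      register: root_probe
      failed_when: false
      changed_when: false
    - name: branch taken when root is still reachable
      debug:
        msg: "root login still works - run the initial setup tasks against this host"
      when: root_probe.rc == 0
    - name: branch taken when root is locked out
      debug:
        msg: "root login refused - assume setup already ran and only re-apply the ssh config tasks"
      when: root_probe.rc != 0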
You can override host variables for a given play by using vars:
- hosts: masters
  become: 'yes'
  vars:
    ansible_ssh_user: "root"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
  tasks:
You could define only the demo_master host and alter ansible_user and ansible_ssh_private_key_file at run time, using the command flags --user and --private-key.
So with a hosts.yaml containing
all:
  children:
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
And running the play on - hosts: masters, the first run would, for example, be:
ansible-playbook initial-host-setup.yaml \
--user root \
--private-key ~/.ssh/id_rsa_root
The subsequent runs would then simply be:
ansible-playbook subsequent-host-setup.yaml
Since all the required values are in the inventory already.

How to access an EKS cluster from a local machine

I have created an EKS cluster and am able to run kubectl commands from my EC2 instance. I then copied the config file from ~/.kube/config to my local machine, but I am not able to run kubectl commands there and get an authentication error.
What is the right way to access an EKS cluster from a local machine?
Try looking into the users section in ~/.kube/config: check the user under the name of the cluster and make sure your local machine has the same working AWS profile as the EC2 instance.
...
command: aws
env:
  - name: AWS_PROFILE
    value: <make sure this entry is valid on your local machine>
If this doesn't work, could you briefly describe in your question how you configured kubeconfig on the EC2 instance?
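For comparison, a working users entry for EKS usually looks roughly like the following; the cluster ARN, cluster name, region and profile are placeholders, and older setups may use aws-iam-authenticator instead of aws eks get-token:

users:
- name: arn:aws:eks:us-west-1:111111111111:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster
        - --region
        - us-west-1
      env:
        - name: AWS_PROFILE
          value: default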

Is Ansible Tower compatible with the aws_s3 module?

I have been trying to automate a backup of some server files from a target machine to our S3 bucket, but when I run the playbook from Ansible Tower the S3 module doesn't seem to be able to see any files on the target machine.
AWS authentication is set up with IAM and working properly (the authentication check succeeds), and I've confirmed from the log files that the Ansible session is successfully signing in to the Ubuntu EC2 instance.
The S3 copy step looks like
- name: Push wp conf to archive
  s3:
    bucket: '{{master_config.aws.s3.bucket_name}}'
    object: wp_config.php
    src: '{{master_config.server.wp_config}}'
    mode: put
  become: yes
  become_user: root
which works fine.
But when I tried using the aws_s3 module with the 'remote_src' flag set, like so:
- name: Push wp conf to archive
  aws_s3:
    bucket: '{{master_config.aws.s3.bucket_name}}'
    object: wp_config.php
    src: '{{master_config.server.wp_config}}'
    mode: put
    remote_src: yes
  become: yes
  become_user: root
it produces an error:
fatal: [server_address]: FAILED! => {"changed": false, "msg": "Could not find or access '/var/www/html/wp-config.php'"}
I came across this discussion in the Github repo for the project which seems to confirm my suspicions: https://github.com/ansible/ansible/pull/40192
If anyone's managed to get this working I'd really appreciate any tips. I've double and triple checked everything else that could be causing an issue, but it seems to be that the s3 / aws_s3 modules are just behaving differently on tower.
I'm running the AMI provided on the tower website at https://www.ansible.com/products/tower/trial

Ansible `authorized_key` copies the key to the remote user, but ssh login still fails

I have the following task in my ansible playbook that adds my ssh public key for a remote user pranjal that was already created by a previous task.
- authorized_key:
    user: pranjal
    key: "{{ lookup('file', 'pranjal.pub') }}"
When I run the Ansible playbook, it runs successfully. However, when I try logging in to the server using: ssh pranjal@<server_ip>
I get a Permission denied (publickey) error.
To be sure I logged into server from another user and double checked that key listed in /home/pranjal/.ssh/authorized_keys matches with my local public key that I am using to login.
The issue here, I am guessing, could be a permissions issue, and I understood the solution from a related question.
But how do we change permissions of authorized_key from within the Ansible task itself? (So that I don't have to separately log into the instance to modify permissions of .ssh/authorized_keys)
- file: path=/home/pranjal/.ssh state=directory owner=pranjal mode=0700
- file: path=/home/pranjal/.ssh/authorized_keys state=file owner=pranjal mode=0600
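Alternatively, the authorized_key module can create ~/.ssh with the correct ownership and permissions itself via its manage_dir option (it defaults to yes; shown explicitly here):

- authorized_key:
    user: pranjal
    key: "{{ lookup('file', 'pranjal.pub') }}"
    manage_dir: yes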
You may also want to check/verify /etc/ssh/sshd_config has the following:
PubkeyAuthentication yes
You can debug further with ssh -vvv pranjal@<server_ip>

How do I get a variable with the name of the user running ansible?

I'm scripting a deployment process that takes the name of the user running the ansible script (e.g. tlau) and creates a deployment directory on the remote system based on that username and the current date/time (e.g. tlau-deploy-2014-10-15-16:52).
You would think this is available in ansible facts (e.g. LOGNAME or SUDO_USER), but those are all set to either "root" or the deployment id being used to ssh into the remote system. None of those contain the local user, the one who is currently running the ansible process.
How can I script getting the name of the user running the ansible process and use it in my playbook?
If you gather_facts, which is enabled by default for playbooks, there is a built-in variable that is set called ansible_user_id that provides the user name that the tasks are being run as. You can then use this variable in other tasks or templates with {{ ansible_user_id }}. This would save you the step of running a task to register that variable.
See: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts
If you mean the username on the host system, there are two options:
You can run a local action (which runs on the host machine rather than the target machine):
- name: get the username running the deploy
  become: false
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host
In this example, the output of the whoami command is registered in a variable called "username_on_the_host", and the username will be contained in username_on_the_host.stdout.
(the debug task is not required here, it just demonstrates the content of the variable)
The second option is to use a "lookup plugin":
{{ lookup('env', 'USER') }}
Read about lookup plugins here: docs.ansible.com/ansible/playbooks_lookups.html
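Putting that together with the original use case, a sketch of the deploy-directory task could look like this; the base path is a placeholder, and ansible_date_time requires fact gathering:

- name: create a per-user, per-run deployment directory
  file:
    path: "/opt/deploys/{{ lookup('env', 'USER') }}-deploy-{{ ansible_date_time.date }}-{{ ansible_date_time.hour }}:{{ ansible_date_time.minute }}"
    state: directory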
I put something like the following in all templates:
# Placed here by {{ lookup('env','USER') }} using Ansible, {{ ansible_date_time.date }}.
When templated over it shows up as:
# Placed here by staylorx using Ansible, 2017-01-11.
If I use {{ ansible_user_id }} and I've become root then that variable indicates "root", not what I want most of the time.
This seems to work for me (ansible 2.9.12):
- name: get the non root remote user
  set_fact:
    remote_regular_user: "{{ ansible_env.SUDO_USER or ansible_user_id }}"
You can also simply set this as a variable - e.g. in your group_vars/all.yml:
remote_regular_user: "{{ ansible_env.SUDO_USER or ansible_user_id }}"
This reads the user name from the remote system, because it is not guaranteed, that the user names on the local and the remote system are the same. It is possible to change the name in the SSH configuration.
- name: Run whoami without become.
  command: whoami
  changed_when: false
  become: false
  register: whoami

- name: Set a fact with the user name.
  set_fact:
    login_user: "{{ whoami.stdout }}"
When you use the "become" option to launch Ansible or run a task, the logged-in user changes to the become user (typically root). To get the name of the original user used to log in to the remote host (i.e. before escalation), you can use the ansible_user special variable. In addition, if you want to gather facts about a specific user other than the one currently running a task, you can use the built-in user module by doing something like this:
- user:
    name: "username"
  register: user_data
Now the user_data variable contains useful information about that user, including their uid, gid, home folder, and more. See the return values for this module in the docs for details. Using this technique, you can get details about the original user Ansible was launched with by doing something like this:
- user:
    name: "{{ ansible_user }}"
  register: user_data
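For example, a couple of the returned fields can then be used like this (field names as documented in the user module's return values):

- debug:
    msg: "{{ ansible_user }} has uid {{ user_data.uid }} and home directory {{ user_data.home }}"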
Conversely, if all you want is the name of the active user that is running a specific task (ie: which accounts for any user-switches that occur with the "become" operation) you can use the ansible_user_id fact instead.
If you want to get the user who ran the template in Ansible Tower, you can use the {{ tower_user_name }} variable in your playbook, but it is only defined for manual executions.
tower_user_name: The user name of the Tower user that started this job. This is not available for callback or scheduled jobs.
Check the docs: https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html
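A minimal usage sketch, with a fallback for the scheduled/callback case where the variable is undefined:

- debug:
    msg: "This job was started by {{ tower_user_name | default('a scheduled or callback job') }}"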