Ansible hangs while applying firewall rules - iptables

I've tried a number of ways to do this. I have a few firewall rules that are a little more complex than the ufw module supports, though I could probably use that module if I really had to.
So far I have tried the following:
1. Putting my rules in a shell script and executing it asynchronously. This seems to work some of the time: the rules get applied, but Ansible hangs.
- name: Apply firewall rules
  shell: iptables.sh
  async: 45
  poll: 5
2. Putting my rules in a rules file and then using iptables-restore < rules to apply them. The rules get applied, but Ansible hangs. (This is the method I am currently trying.)
- name: Set up the v4 firewall rules
  template:
    src=templates/firewallv4_template.j2
    dest=/tmp/rules.v4
    owner=vagrant group=vagrant mode=0644

- name: Apply firewall rules
  shell: iptables-restore < /tmp/rules.v4
  async: 45
  poll: 5
3. Used iptables-persistent, but I kept getting IPv6 rule failures and got tired of that.
Has anyone had any success with this?

Use a rules file, and then use iptables-restore like this:
- name: Do not wait for a response
  shell: >
    iptables-restore /tmp/rules.v4
  become: true
  async: 10
  poll: 0
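If you also want to confirm later that the restore actually ran, one option (a hedged sketch; the task names and retry counts here are assumptions, and it only helps if the new rules still allow your SSH connection back in) is to register the fire-and-forget job and poll it with async_status:
- name: Apply firewall rules without waiting
  shell: iptables-restore < /tmp/rules.v4
  become: true
  async: 45
  poll: 0
  register: restore_job

- name: Check that the restore finished
  # assumes the new rules still allow the Ansible SSH connection
  async_status:
    jid: "{{ restore_job.ansible_job_id }}"
  become: true
  register: restore_result
  until: restore_result.finished
  retries: 10
  delay: 3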


auditctl -l not showing all rules

I am using Florian Roth's auditd rules, which I add using auditctl -R /etc/audit/rules.d/audit.rules. There are no errors when I load the rules. I restart the service and use auditctl -l to list them. They consistently stop after this rule:
-a always,exclude -F msgtype=CRYPTO_KEY_USER
It shows all rules up to and including the line above, even if I comment that line out. Why is it doing that? Can auditd only display a certain number of rules? (That seems unlikely.)
Is there something I am doing wrong?
This happens on CentOS 7, Debian 10, and Debian 11 hosts.
Edit: when I manually try to add the rule above, or any of the rules after it, it says the rules already exist.

Ansible Inventory Specifying the Same Host with Different Users and Keys for Initial SSH User Setup and Disabling Root Access

I am attempting to have playbooks that run once to set up a new user and disable root ssh access.
For now, I am doing that by declaring all of my inventory twice. Each host needs an entry that connects as the root user, which is used to create a new user, set up SSH settings, and then disable root access.
Then each host needs another entry with the new user that gets created.
My current inventory looks like this. It's only one host for now, but with a larger inventory, the repetition would just take up a ton of unnecessary space:
---
# ./hosts.yaml
---
all:
  children:
    master_roots:
      hosts:
        demo_master_root:
          ansible_host: a.b.c.d # same ip as below
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
Is there a cleaner way to do this?
Is this an anti-pattern in any way? It is not idempotent. It would be nice if running the same playbook twice always produced the same result: either "success" or "no change".
I am using DigitalOcean and they have a functionality to have this done via a bash script before the VM comes up for the first time, but I would prefer a platform-independent solution.
Here is the playbook for setting up the users and SSH settings and disabling root access:
---
# ./initial-host-setup.yaml
---
# References
# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
# - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
---
- hosts: master_roots
  become: 'yes'
  tasks:
    - name: create the 'infraops' user
      user:
        state: present
        name: infraops
        password_lock: 'yes'
        groups: sudo
        append: 'yes'
        createhome: 'yes'
        shell: /bin/bash

    - name: add authorized keys for the infraops user
      authorized_key: 'user=infraops key="{{ item }}"'
      with_file:
        - '{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub'

    - name: allow infraops user to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'infraops ALL=(ALL) NOPASSWD: ALL'
        validate: visudo -cf %s

    - name: disable empty password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitEmptyPasswords'
        line: PermitEmptyPasswords no
      notify: restart sshd

    - name: disable password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^(#\s*)?PasswordAuthentication '
        line: PasswordAuthentication no
      notify: restart sshd

    - name: Disable remote root user login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
Everything after this would use the masters inventory.
EDIT
After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses; I didn't research the others), which is cross-provider enough for me to just stick with a bash init script solution.
I would still be interested & curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.
Regardless of any Ansible limitations, it seems that without using a cloud-init script you can't have this. Either the server starts with root or a similar user to perform these actions, or the server starts without a user with those powers, in which case you can't perform these actions at all.
Further, I have seen Ansible playbooks and bash scripts that try to achieve the desired "idempotence" (complete with no errors even if root is already disabled) by testing root SSH access and then falling back to another user. But "I can't ssh with root" is a poor test for "is the root user disabled", because there are plenty of ways your SSH access could fail even though the server is still configured to allow root to ssh.
EDIT 2 (placing this here, since I can't use newlines in my response to a comment):
β.εηοιτ.βε responded to my assertion:
"but 'I can't ssh with root' is a poor test for 'is the root user disabled' because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh"
with:
"then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"
It sounds like the suggestion is:
- attempt ssh with root
- if success, we know the user/ssh setup tasks have not completed, so run those tasks
- if failure, attempt ssh with infraops
  - if success, go ahead and run everything except the user creation again to ensure the ssh config is as desired
  - if failure... something else is probably wrong, since I can't ssh with either user
I am not sure what this sort of if-then failure recovery actually looks like in an Ansible playbook.
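For what it's worth, here is a rough, hedged sketch of that probe-and-fall-back idea in plain Ansible. The group name, user, and key path are carried over from the inventory above as assumptions, and the ssh options are just one way to keep the probe from hanging on a prompt:
- hosts: masters
  gather_facts: false
  tasks:
    - name: probe whether root can still ssh in (runs from the control node)
      command: >
        ssh -o BatchMode=yes -o ConnectTimeout=5
        -i ~/.ssh/id_rsa_infra_ops root@{{ ansible_host }} true
      delegate_to: localhost
      register: root_probe
      failed_when: false
      changed_when: false

    - name: connect as root for the rest of the play if the probe succeeded
      set_fact:
        ansible_user: root
      when: root_probe.rc == 0
If the probe fails, the play simply keeps connecting as infraops from the inventory; if neither user can connect, the probe output at least tells you which step broke.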
You can override host variables for a given play by using vars:
- hosts: masters
  become: 'yes'
  vars:
    ansible_ssh_user: "root"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
  tasks:
You could define only the demo_master entry and override ansible_user and ansible_ssh_private_key_file at run time, using the command-line flags --user and --private-key.
So with a hosts.yaml containing
all:
  children:
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
and with the play run against - hosts: masters, the first run would, for example, be:
ansible-playbook initial-host-setup.yaml \
  --user root \
  --private-key ~/.ssh/id_rsa_root
while the subsequent runs would simply be:
ansible-playbook subsequent-host-setup.yaml
since all the required values are already in the inventory.

Is there an ansible conditional or fact to check if the host can successfully login to another server with SSH?

I need to check if the host can successfully login to another server with SSH.
Is there a way to do this?
There's no such thing. If you think about it, it would be impossible to do that with built-in facts or other mechanisms: how would Ansible know about the other host(s), which username, authentication method, etc. to use? It would also be wasteful if Ansible tried to make each host connect to every other host it knows about.
You will need to write a task and save the output for further use. But that should actually be quite easy.
- shell: ssh other.host echo awesome that works
  failed_when: false
  register: ssh_test
Then you can use that output in any other task as a condition:
- foo: bar
  when: "'awesome that works' in ssh_test.stdout_lines"
There is no fact for this, but you can use the ping module. http://docs.ansible.com/ansible/ping_module.html
To expand on the answer provided by @smiller171, here is a play that will test SSH (or other inventory-configured communication) connectivity from target1 -> target2. The status variable will contain information you can use to determine whether connectivity failed:
- hosts: target2
  tasks:
    - ping:
      delegate_to: target1
      register: status
      ignore_errors: yes

    - debug: msg="target1 failed to communicate with target2"
      when: status is failed

How to have an idempotent Ansible playbook if we change the SSH port?

My playbook needs to change the SSH port and update the firewall rules. (Unfortunately, I cannot "get" a new server directly with the desired custom port.)
Managing the change during the execution is easy.
However, I do not know how to make the playbook idempotent.
The first run must be initiated on the default port (22).
The next runs must be initiated on the custom port.
A workaround is possible, but it comes with performance issues.
Is there any other possibility with Ansible 2.0+?
You could approach this a couple of ways really.
The simplest way might be to separate the SSH port configuration into its own playbook/role that specifies the SSH port as 22, while your inventory normally defines the SSH port as your custom one.
ssh_port.yml
- hosts: all
  vars:
    ansible_ssh_port: 22
  tasks:
    - name: change the default ssh port
      lineinfile ...
      notify: restart ssh
  handlers:
    - name: restart ssh
      service:
        name: sshd
        state: restarted
You would then only run this playbook on the creation of the machine and then only re-run your main playbook again and again, sidestepping the idempotency of this step.
Alternatively, as Mikko Ohtamaa pointed out in the comments, you could have Ansible modify your inventory file when you change the port. This means you can run the whole thing end to end idempotently, as the next run through will connect on the non-default SSH port and then simply (pointlessly, obviously) check that the SSH port is still set to the desired one. You can get at the inventory file by using the "magic variable" inventory_file. A rough example might look like this:
- name: change the default ssh port
  lineinfile ...
  notify: restart ssh

- name: change ssh port used by ansible
  set_fact:
    ansible_ssh_port: "{{ custom_ssh_port }}"

- name: change ssh port in inventory
  lineinfile:
    dest: "{{ inventory_file }}"
    insertafter: '^\[all:vars\]'
    line: 'ansible_ssh_port="{{ custom_ssh_port }}"'
Just make sure you have an inline group variables block for all in the inventory file. This means all future runs of any playbook against this inventory will connect to all of the hosts it contains on your custom SSH port.
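For reference, a minimal ini-style inventory with such a block might look like this (the host name and address are placeholders); the lineinfile task above then inserts the custom port line right after the [all:vars] header:
myserver ansible_host=a.b.c.d

[all:vars]
# the task above inserts: ansible_ssh_port="2222"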
If you use source control then you will also need a local_action task to push the change back to your remote.
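Another way to keep a single playbook idempotent, without rewriting the inventory, is to probe which port sshd currently answers on before connecting. This is only a hedged sketch (the custom port value and task names are assumptions), and its cost is one wait_for timeout per host on the very first run, which is the kind of performance hit the question alludes to:
- hosts: all
  gather_facts: false
  vars:
    custom_ssh_port: 2222
  pre_tasks:
    - name: check whether the custom ssh port is already open
      wait_for:
        host: "{{ ansible_host | default(inventory_hostname) }}"
        port: "{{ custom_ssh_port }}"
        timeout: 5
      delegate_to: localhost
      register: custom_port_check
      ignore_errors: yes

    - name: connect on the custom port when it is already in use
      set_fact:
        ansible_ssh_port: "{{ custom_ssh_port }}"
      when: custom_port_check is succeeded

    - name: gather facts once the right port is known
      setup: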

Start Phoenix app with cowboy server on different port

Is it possible to locally start a few Phoenix apps on different ports from the console, using a command like mix phoenix.server --port=4001? This one does not work, of course, but maybe there is a similar way.
Yep! Make sure you set the mix config to reference the env port, i.e.
config :my_app, MyApp.Endpoint,
  http: [port: {:system, "PORT"}],
Then from the terminal:
$ PORT=4001 mix phoenix.server
$ PORT=4002 mix phoenix.server
$ PORT=4003 mix phoenix.server
Edit your config/dev.exs and change the Endpoint http port like the following:
config :my_app, MyApp.Endpoint,
  http: [port: System.get_env("PORT") || 4000],
This allows the port to be set, or left as the default 4000:
PORT=4002 mix phoenix.server # to run on port 4002
mix phoenix.server # to run on port 4000
This answer was described by @chris-mccord on GitHub.
I needed this solution because I had to let C9.io dictate the port. Adding this code to the dev.exs file solved the problem:
config :my_app, MyApp.Endpoint,
  http: [port: {:system, "PORT"}],
and then in the Terminal, I just needed to run the server as normal:
mix phoenix.server