Migrating from iptables to firewalld settings with Ansible on CentOS 7 - Redis

I'm setting up a new Redis cluster on my webservers. Until now I was adding the chain rules by hand with iptables, but I'm switching to deploying them automatically through Ansible.
My iptables looks like this:
1 iptables -N REDIS
2 iptables -A REDIS -s 10.0.1.11 -j ACCEPT ## Master server
3 iptables -A REDIS -s 10.0.1.10 -j ACCEPT ## Slave 01/03
4 iptables -A REDIS -j LOG --log-prefix "unauth-redis-access"
5 iptables -A REDIS -j REJECT --reject-with icmp-port-unreachable
6 iptables -I INPUT -p tcp --dport 6379 -j REDIS
This way, I have to manually add rule #3 for each slave server (currently there are only 3 slave servers, but there will be many more at some point, hence the plan to deploy this automatically through Ansible).
And the Ansible config that I've set up looks like this:
- name: Redis service
  tags: ['redis']
  firewalld:
    service: redis
    zone: internal
    state: enabled
    permanent: yes

- name: Redis connections
  tags: ['redis']
  firewalld:
    source: 10.0.1.0/24
    port: 6379/tcp
    zone: internal
    state: enabled
    permanent: yes
  notify: restart redis
I'm using my webservers' subnet as the source; or should I list each webserver's IP as the source instead?
However, when I deploy the Ansible configuration, it doesn't work at all. Using iptables works just fine, but I have to switch to firewalld due to the dev-env issues mentioned above.
Any ideas?

Try adding immediate=yes, or add a handler to reload firewalld.
Using the firewalld module with permanent=yes only changes the configuration files; it doesn't install the runtime iptables rules.
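For example, the first task with the change applied to the running firewall as well (a minimal sketch; only the immediate: yes and notify lines are new compared to the question):

- name: Redis service
  tags: ['redis']
  firewalld:
    service: redis
    zone: internal
    state: enabled
    permanent: yes
    immediate: yes        # also apply the rule to the runtime configuration
  notify: reload firewalld

and a matching handler (the handler name here is just an assumption):

handlers:
  - name: reload firewalld
    service:
      name: firewalld
      state: reloaded

The same immediate/notify treatment applies to the source rule in the second task.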

Related

Can't access RabbitMQ web management interface from external IPs

After a fresh install of RabbitMQ server on CentOS 7.7, I can reach port :15672 from localhost:
curl -i http://localhost:15672
HTTP/1.1 200 OK
But I can't reach the web interface from external IPs:
curl -i http://serverRemoteIp:15672
curl: (7) Failed connect to serverRemoteIp:15672; Connection timed out
The server is remote, so I need access from remote IPs.
Any idea?
First, yesterday I executed this on my server:
sudo iptables -A INPUT -p tcp -m tcp --dport 15672 -j ACCEPT
and the problem continued. Today I ran:
iptables -I INPUT 1 -p tcp --dport 15672 -j ACCEPT
service iptables save
service iptables restart
and it works! (Most likely because -A appended the rule after an existing REJECT rule in the INPUT chain, so it never matched, while -I inserted the new rule at the top.)
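If you want to verify that rule order was the culprit, you can list the INPUT chain with rule numbers (just a diagnostic check, not part of the original fix):
iptables -L INPUT -n --line-numbers
# the ACCEPT rule for 15672 must appear before any REJECT/DROP rule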

Boot from NFS server with UBoot

I have a problem with an NFS server. I basically have to boot an embedded processor from NFS.
On an Ubuntu machine I simply put the filesystem in /tftpboot,
added in /etc/exports this line:
/tftpboot *(rw,no_root_squash,no_all_squash,sync)
then I executed the commands:
sudo /usr/sbin/exportfs -av
sudo /etc/init.d/nfs-server restart
but on the embedded processor I get this error:
Looking up port of RPC 100003/2 on 192.168.2.11
Looking up port of RPC 100005/1 on 192.168.2.11
VFS: Unable to mount root fs via NFS, trying floppy.
VFS: Cannot open root device "nfs" or unknown-block(2,0)
Please append a correct "root=" boot option
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)
in particular the lines
Looking up port of RPC 100003/2 on 192.168.2.11
Looking up port of RPC 100005/1 on 192.168.2.11
make me think that the problem is in the NFS server configuration. Can anybody help me?
Today I had exactly the same problem with an old embedded device and an NFS server installed on SUSE Leap.
I sniffed the communication with Wireshark and it gave me an idea of what went wrong.
In my case the problem had to do with the iptables filter and the NFS server version:
1. iptables had to be configured to open the NFS-related ports on the NFS server side.
2. My device only supported version 2 of NFS, and the SUSE NFS server was configured by default to support v3 and v4.
To solve 1, you can check the post "Iptables Rules for NFS Server and NFS Client":
sudo iptables -A INPUT -s 172.17.200.26/16 -d 172.17.200.26/16 -p udp -m multiport --dports 10053,111,2049,32769,875,892,20048,950 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -s 172.17.200.26/16 -d 172.17.200.26/16 -p tcp -m multiport --dports 10053,111,2049,32803,875,892,20048,950 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -s 172.17.200.26/16 -d 172.17.200.26/16 -p udp -m multiport --sports 10053,111,2049,32769,875,892,20048,950 -m state --state ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -s 172.17.200.26/16 -d 172.17.200.26/16 -p tcp -m multiport --sports 10053,111,2049,32803,875,892,20048,950 -m state --state ESTABLISHED -j ACCEPT
To solve 2, you can check:
https://documentation.suse.com/sles/15-SP1/html/SLES-all/cha-nfs.html#sec-nfs-configuring-nfs-server
Enable NFS version 2 on the server by setting the following in /etc/sysconfig/nfs:
NFSD_OPTIONS="-V2"
MOUNTD_OPTIONS="-V2"
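To confirm what the server actually advertises (the boot log is looking up RPC program 100003, which is nfs, and 100005, which is mountd), you can query the portmapper; this is only a diagnostic suggestion on top of the answer above:
rpcinfo -p 192.168.2.11
# expect lines such as:
#   100003  2   udp   2049  nfs      <- version 2 must be listed
#   100005  1   udp  20048  mountd   (the mountd port can vary)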
I hope this helps someone; I lost some hours with this issue.
I'm adding a screenshot of problem 2, which was found thanks to the Wireshark capture.

Iptables on CentOS 7 rejects SSH and WHM connections

I installed CentOS 7 and cPanel; disabled/masked firewalld and installed and enabled iptables. As soon as I enabled iptables, I got disconnected from WHM and SSH. When I disable iptables in rescue mode, I can connect to the server via SSH and WHM.
I checked the rules in /etc/sysconfig/iptables, but there is no rule that rejects access to the SSH or WHM ports.
My next step was to install CSF.
Any idea how to fix it?
The quick solution to get rid of the issue is to flush all the iptables rules with the command
iptables -F
However, since you want to keep iptables running, you will have to configure it to open the required ports, for example:
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
Port 22 is for SSH; in the same way you will have to open the other ports.
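A minimal set of rules for SSH plus the standard WHM ports could look like the sketch below (2086/2087 are the cPanel/WHM defaults; adjust them if your installation listens elsewhere):
iptables -I INPUT -p tcp --dport 22 -j ACCEPT    # SSH
iptables -I INPUT -p tcp --dport 2086 -j ACCEPT  # WHM over HTTP
iptables -I INPUT -p tcp --dport 2087 -j ACCEPT  # WHM over HTTPS
service iptables save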

Reproduce RabbitMQ network partition scenario

I would like to reproduce the network partition scenario with all three modes: ignore, autoheal and pause_minority.
How can I achieve this? I tried stopping (/sbin/service reboot) one of the nodes of the cluster, but this didn't cause any network partition. I also tried deleting the Mnesia data on one node to create an inconsistent Mnesia database across the cluster, but that didn't help either.
In order to simulate a network partition, you can block the outgoing connections using iptables.
Suppose you have 3 nodes:
node1 - ip : 10.10.0.1
node2 - ip : 10.10.0.2
node3 - ip : 10.10.0.3
After creating the cluster, go to node2 for example and run:
iptables -A OUTPUT -d 10.10.0.1 -j DROP
This blocks the connections and the node will end up in a network partition.
Then run
iptables -F
to restore the network.
If you are using Docker, disconnecting the container from its network will trigger the partition.
docker network disconnect network_name rabbitmq_container_name
Adding more details to the above answer:
Execute the command below on either node2 or node3 to block connections from the other node(s):
sudo iptables -A INPUT -s 10.10.0.1 -j DROP
To allow connections from the other node(s) again, delete the firewall rule created earlier:
sudo iptables -D INPUT -s 10.10.0.1 -j DROP
To view the existing firewall rules:
iptables --list
Note: in some cluster setups, the network partition is detected only once the nodes whose connections were blocked earlier (via the iptables commands) are able to communicate with each other again. So block the connections, wait at least 60 seconds (the default net_ticktime value), and then unblock them.
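Putting the note and the commands together, a minimal block/wait/unblock sequence run on node2 could look like this (a sketch; the 70-second sleep just has to exceed the default net_ticktime of 60 seconds):
sudo iptables -A INPUT -s 10.10.0.1 -j DROP    # block traffic coming from node1
sleep 70                                        # wait longer than net_ticktime
sudo iptables -D INPUT -s 10.10.0.1 -j DROP    # restore connectivity so the partition is detected
sudo rabbitmqctl cluster_status                 # the partitions entry should now list the other node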
I managed to simulate/reproduce a network partition for RabbitMQ by blocking port 25672.
25672: used for inter-node and CLI tools communication
I had two RabbitMQ nodes in different AWS instances.
To simulate a network partition I dropped TCP packets on that port, waited 60 seconds (this may differ according to the net_ticktime parameter) and then removed the port blocking (removing the block is required for the partition to be detected).
Adding the rules for port blocking (with the highest priority):
sudo iptables -I INPUT 1 -p tcp --dport 25672 -j DROP
sudo iptables -I OUTPUT 1 -p tcp --dport 25672 -j DROP
Removing the rules (after 60+ seconds):
sudo iptables -D INPUT -p tcp --dport 25672 -j DROP
sudo iptables -D OUTPUT -p tcp --dport 25672 -j DROP
To check that a network partition has occurred:
sudo rabbitmqctl cluster_status
The partitions property will list nodes in its array, like this:
{partitions,[{'rabbit@ip-163-10-1-10',['rabbit@ip-163-10-0-15']}]}
RabbitMQ network partition docs

How to configure JMeter for SSH tunneling over a different host

I have trouble setting up a JMeter client to connect to a remote JMeter server over an intermediate jumphost.
In particular: which ports need to be open and forwarded to which host, and how do I configure JMeter for that? There are some blog posts about similar setups, but none of them describes the ports in detail or connects over an external host (they all use localhost?).
The setup is:
JMeter GUI (client) <-> Jumphost <-> JMeter Server
I need to set up one or more SSH tunnels on the jumphost and tell the client and the server to connect to this host.
Help will be much appreciated!
http://rolfje.wordpress.com/2012/02/16/distributed-jmeter-through-vpn-and-ssl/
Here are the ports I see in the article:
-A RH-Firewall-1-INPUT -p udp -m udp --dport 1099 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 1099 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 50000 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 50000 -j ACCEPT
Tried with Java 8
1. Client - modify jmeter.properties file adding:
remote_hosts=127.0.0.1:55511
client.rmi.localport=55512
2. Server - modify jmeter.properties file adding:
server_port=55511
server.rmi.localhostname=127.0.0.1
server.rmi.localport=55511
3. Connect to the server using:
Linux and Mac users
ssh solr@server -L 55511:127.0.0.1:55511 -R 55512:127.0.0.1:55512
Windows users
putty.exe -ssh user@server -L 55511:127.0.0.1:55511 -R 55512:127.0.0.1:55512
4. Server - start jmeter
cd apache-jmeter-2.13/bin/
./jmeter-server -Djava.rmi.server.hostname=127.0.0.1
5. Client - start jmeter
cd apache-jmeter-2.13/bin/
./jmeter.sh -Djava.rmi.server.hostname=127.0.0.1 -t test.jmx