I would like to reproduce the network partition scenario with all three modes - ignore, autoheal and pause_minority.
How can I achieve this? I tried rebooting one of the nodes of the cluster (/sbin/service reboot), but this didn't cause any network partitioning. I also tried deleting the Mnesia directory on one node to create an inconsistent Mnesia state across the cluster, but that didn't help either.
In order to simulate a network partition, you can block the outgoing connections using iptables.
Suppose you have 3 nodes:
node1 - ip : 10.10.0.1
node2 - ip : 10.10.0.2
node3 - ip : 10.10.0.3
After creating the cluster, go to node2, for example, and run:
iptables -A OUTPUT -d 10.10.0.1 -j DROP
This blocks the connection to node1, and node2 will go into a network partition.
Then
iptables -F
to restore the network.
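To exercise each of the three modes from the question, set cluster_partition_handling before inducing the partition, then repeat the steps above once per mode. A minimal sketch in the newer rabbitmq.conf format (the classic Erlang-term config uses {cluster_partition_handling, autoheal} instead); the node needs a restart after changing it:
# choose one of: ignore, autoheal, pause_minority
cluster_partition_handling = autoheal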
If you are using Docker, disconnecting the container from its network will trigger the partitioning.
docker network disconnect network_name rabbitmq_container_name
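To heal the partition afterwards, reattach the container:
docker network connect network_name rabbitmq_container_name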
Adding more details to the above answer:
Execute the command below on node2 or node3 to block connections from the other node(s):
sudo iptables -A INPUT -s 10.10.0.1 -j DROP
To allow connections from the other node(s) again, delete the firewall rule we created earlier:
sudo iptables -D INPUT -s 10.10.0.1 -j DROP
To view the existing firewall rules:
iptables --list
Note: in some cluster setups, the network partition is only detected once the nodes whose connections were blocked (via the iptables commands) can communicate with each other again. So block the connections, wait at least 60 seconds (the default net_ticktime value), and then unblock them.
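Putting that together, a sketch of the whole cycle on node2 (75 is just an arbitrary wait comfortably above the 60-second default):
sudo iptables -A INPUT -s 10.10.0.1 -j DROP   # block traffic from node1
sleep 75                                      # wait longer than net_ticktime
sudo iptables -D INPUT -s 10.10.0.1 -j DROP   # unblock so the partition gets detected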
I managed to simulate/reproduce a RabbitMQ network partition by blocking port 25672.
25672: used for inter-node and CLI tools communication
I had two RabbitMQ nodes in different AWS instances.
To simulate a network partition, I configured the firewall to drop TCP packets on that port, waited 60 seconds (this may differ depending on the net_ticktime parameter), and then removed the port blocking (restoring connectivity is required for the partition to be detected).
Adding the rules for port blocking (with the highest priority):
sudo iptables -I INPUT 1 -p tcp --dport 25672 -j DROP
sudo iptables -I OUTPUT 1 -p tcp --dport 25672 -j DROP
Removing the rules (after 60+ seconds):
sudo iptables -D INPUT -p tcp --dport 25672 -j DROP
sudo iptables -D OUTPUT -p tcp --dport 25672 -j DROP
To check that a network partition has occurred:
sudo rabbitmqctl cluster_status
The partitions property will list the partitioned nodes in its array, like this:
{partitions,[{'rabbit@ip-163-10-1-10',['rabbit@ip-163-10-0-15']}]}
RabbitMQ network partition docs
I have a VM running OpenVPN with client-to-client disabled and I need some specific forwarding rules. IP forwarding on the VM is turned on.
The OpenVPN base network is 172.30.0.0/16 and that is further subdivided into /24 subnets with their own rules.
172.30.0.0/24 should have access to all the clients. The rest should not. I have two subnets defined at the moment: 172.30.0.0/24 and 172.30.10.0/24.
Following the suggestion at the bottom of https://openvpn.net/community-resources/configuring-client-specific-rules-and-access-policies/, I set up my rules as follows:
iptables -A FORWARD -i tun1 -s 172.30.0.0/24 -j ACCEPT
iptables -P FORWARD DROP
This does not work. If I add a -j LOG rule at the top, I can see that traffic from my client at 172.30.0.1 reaches the client at 172.30.10.3 just fine, but all the traffic it sends back is blocked.
If I set the policy to ACCEPT everything works and I can connect to the client just fine, so this is not a routing problem.
How can I set this up? And why doesn't the suggestion in that OpenVPN guide work?
I solved this by adding
iptables -A FORWARD -i tun1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
which allows established connections to return. Everything works as desired now.
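For reference, the full FORWARD setup from the question plus this rule would look something like the sketch below, with the conntrack rule first so replies match before the per-subnet rule:
iptables -A FORWARD -i tun1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i tun1 -s 172.30.0.0/24 -j ACCEPT
iptables -P FORWARD DROP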
I have a VM in VirtualBox with Debian 10, and I'm trying to NAT-masquerade its outbound interface (enp0s8) so that its clients (the VMs connected to it) can access the Internet.
All interfaces in the system have an IP. I've already enabled forwarding with:
echo 1 > /proc/sys/net/ipv4/ip_forward
sysctl -w net.ipv4.ip_forward=1
And then I executed:
iptables -t nat -A POSTROUTING -o enp0s8 -j MASQUERADE
However, whenever I execute the above and then list the rules, the output doesn't show the rule the way I expect.
And no matter how many times I run iptables --flush -t nat and repeat the process, the result is always the same. The rule I want to apply never seems to be saved properly, and the clients' IPs are never masked.
What is the issue here? Almost all tutorials say this is the correct way for masquerading.
I've also tried using nftables, without success.
It is already showing the right output. To show the rules with the interface details, you need to use:
iptables -t nat -L -n -v
And by the way, if you have set up NAT networking in VirtualBox, connecting to the outside is already taken care of.
Also, have you set the default gateway of your clients to this box?
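As a sketch of the whole setup, assuming the gateway's internal interface holds 192.168.56.1 (a hypothetical address, adjust to your lab):
# on the gateway VM
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o enp0s8 -j MASQUERADE
# on each client VM, point the default route at the gateway
ip route add default via 192.168.56.1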
I'm setting up a new Redis cluster on my webservers. I was adding the chain rules by hand with iptables, but now I'm switching to automated deployment through Ansible.
My iptables looks like this:
1 iptables -N REDIS
2 iptables -A REDIS -s 10.0.1.11 -j ACCEPT ## Master server
3 iptables -A REDIS -s 10.0.1.10 -j ACCEPT ## Slave 01/03
4 iptables -A REDIS -j LOG --log-prefix "unauth-redis-access"
5 iptables -A REDIS -j REJECT --reject-with icmp-port-unreachable
6 iptables -I INPUT -p tcp --dport 6379 -j REDIS
This way I have to manually add a rule like #3 for each slave server (currently there are only 3 slave servers, but there will be many more at some point, hence the plan to deploy automatically through Ansible).
And the ansible config that I've set looks like this:
- name: Redis service
tags: ['redis']
firewalld:
service=redis
zone=internal
state=enabled
permanent=yes
- name: Redis connections
tags: ['redis']
firewalld:
source=10.0.1.0/24
port=6379/tcp
zone=internal
state=enabled
permanent=yes
notify: restart redis
Should I use my webservers' subnet as the source, or should I list each webserver's IP individually?
However, when I deploy the Ansible configuration, it doesn't work at all. Using iptables works just fine, but I have to switch to firewalld due to the dev-env issues mentioned above.
Any ideas?
Try adding immediate=yes, or add a handler to reload firewalld.
Using firewalld with permanent=yes only changes the configuration files; it doesn't install the live iptables rules.
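For instance, a sketch of the port task with the immediate flag added (immediate is a real parameter of the firewalld module; as far as I remember the module wants one item, e.g. port or source, per task):
- name: Redis connections
  tags: ['redis']
  firewalld:
    port: 6379/tcp
    zone: internal
    state: enabled
    permanent: yes
    immediate: yes
  notify: restart redis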
I have 2 kinds of proxies in my local machine : stunnel and TOR-VPN.
stunnel is listening on port 6666
TOR-VPN is listening on port 9040
I want web traffic to go to stunnel first and the output traffic of stunnel to go to TOR-VPN. This needs double redirecting. Is it possible to do this with iptables, I mean by using "table nat chain OUTPUT"?
Because, as far as I know, "table nat chain OUTPUT" can't be called twice.
web traffic = the browser pointed at the proxy on 127.0.0.1:6666
These are my rules:
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-ports 6666
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner bob -m tcp -j REDIRECT --to-ports 9040
iptables -t nat -A OUTPUT -p udp -m owner --uid-owner bob -m udp --dport 53 -j REDIRECT --to-ports 53
iptables -t filter -A OUTPUT -p tcp --dport 6666 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp -m owner --uid-owner bob -m tcp --dport 9040 -j ACCEPT
iptables -t filter -A OUTPUT -p udp -m owner --uid-owner bob -m udp --dport 53 -j ACCEPT
iptables -t filter -A OUTPUT -m owner --uid-owner bob -j DROP
The above rules make stunnel work independently from TOR/VPN.
I mean, when the browser is set to use the proxy, no traffic goes through TOR/VPN, but if I turn off the proxy in the browser, all traffic goes through TOR/VPN.
Now I want to keep the proxy on in the browser and have all web traffic go to stunnel first, but have the outgoing stunnel traffic (outgoing loopback traffic) redirected to TOR/VPN (127.0.0.1:9040).
Is this possible? How can I do that? Double redirecting inside the system, so to speak.
The policy of all tables is ACCEPT.
Checking that this is what you mean:
You have stunnel bound to port 6666 (localhost:6666) and tor bound to 9040 (localhost:9040). You want your web traffic to go THROUGH stunnel (so the destination is localhost:6666), but the OUTBOUND traffic FROM stunnel (carrying the client traffic that was redirected to stunnel) should be DESTINED to tor (localhost:9040)? Is this correct?
If so, this is indeed possible (the reverse is, too). You need to rewrite the destination address (and indeed the port) based on the source address and/or port (you don't have to specify both). Something like this:
iptables -t nat -I PREROUTING -p tcp --sport 6666 -j DNAT --to-destination 127.0.0.1:9040
If this is not what you mean (or I made a typo, or am simply not being clear-headed), then please respond. I'll see about enabling email notification so that I see the response; if I don't, I apologise in advance.
As an aside: unless you have a final catch-all rule in each CHAIN, you shouldn't have -P ACCEPT ('that which is not explicitly permitted is forbidden', i.e. use DROP). Just as an fyi: a table is filter, nat (which I specify above, and it is necessary), etc., while a CHAIN is INPUT, OUTPUT, FORWARD or one created with the -N option; the policy belongs to the chain IN that table. The exception is perhaps OUTPUT (it depends on what you need, in the end). However, for the 'lo' interface you'll want to ACCEPT all traffic in any case (i.e. specify -i lo or -o lo, depending on the chain, and jump to ACCEPT). Maybe you're behind another device, but it is still best practice not to accept anything and everything.
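For example, the usual loopback exception looks like this:
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT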
Edit: something else: no, you don't have to deal with SNAT when you want DNAT, and the reverse is also true. Anything to the contrary is a misunderstanding. The reason is that you're translating the CONNECTION. As the man page shows:
It specifies that the destination address of the packet should be modified (and all future packets in this connection will also be mangled), and rules should cease being examined.
Edit:
If I understand you (now), you actually have two interfaces involved. Or, more specifically, you need the following:
You have a service you want encrypted; this is tor. You're using stunnel to do this, and to that end you want stunnel to forward traffic to tor. Is this right? If yes, then know that stunnel has the following directives (I actually use something similar for another purpose). Here's a mock setup of a service:
[tor]
accept = 6666
connect = 9040
In addition, just as a note: connect can also be a remote address (an external IP with a port), or a specific interface on the system (by IP, also with a port). Likewise, accept can specify an address with the same ip:port syntax (except that it is obviously on the local machine, so no external IP). You could explain it as: stunnel binds where the service normally would, except that the service it is encrypting lives elsewhere (shortly: the bind(2) call allows a specific IP or all IPs on the system, and you're basically configuring stunnel to do the same).
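For instance, the same service entry with explicit addresses might read (loopback addresses assumed here):
[tor]
accept = 127.0.0.1:6666
connect = 127.0.0.1:9040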
(And yes, you're right: the sport should have been dport.)
If this is not what you need, then I don't understand all the variables. In that case, please elaborate on which interfaces are involved (including ports and which service runs on each), as well as which clients are involved and where they send traffic. It is much easier to solve a problem when you know exactly what the problem is, rather than having to infer parts of it.
I found the answer by myself. In my first post, I said something that was completely wrong, and because of that I could not do the double redirecting.
I said:
Because as far as I know "table nat chain OUTPUT" can't be called twice
That is wrong: "table nat chain OUTPUT" can be traversed twice. I don't know what exactly I did 2 months ago that made me think it couldn't.
This is the order of tables and chains, with and without services on the loopback interface:
Without any services on loopback:
Generated packets on local machine -> nat(OUTPUT) -> filter(OUTPUT) -> wlan(ethernet) interface
With services on loopback:
Generated packets on local machine -> nat(OUTPUT) -> filter(OUTPUT) -> loopback interface -> nat(OUTPUT) -> filter(OUTPUT) -> wlan(ethernet) interface
These are my rules to solve the problem:
iptables -t nat -A OUTPUT -p tcp -m tcp --dport 6666 -j REDIRECT --to-ports 6666
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner bob -m tcp -j REDIRECT --to-ports 9040
iptables -t nat -A OUTPUT -p udp -m owner --uid-owner bob -m udp --dport 53 -j REDIRECT --to-ports 53
iptables -t nat -A OUTPUT -d "StunnelServerIp" -o wlan0 -p tcp -j REDIRECT --to-ports 9040
iptables -t filter -A OUTPUT -p tcp -m owner --uid-owner bob -m tcp --dport 9040 -j ACCEPT
iptables -t filter -A OUTPUT -p udp -m owner --uid-owner bob -m udp --dport 53 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp -m tcp --dport 6666 -j ACCEPT
iptables -t filter -A OUTPUT -m owner --uid-owner bob -j DROP
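To verify that the rules landed in the intended order, you can list both OUTPUT chains with rule numbers:
sudo iptables -t nat -L OUTPUT -n -v --line-numbers
sudo iptables -t filter -L OUTPUT -n -v --line-numbers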
It seems I don't understand IPTABLES logic.
I reinstalled Ubuntu Server 11.10 on my server and turned on forwarding (net.ipv4.ip_forward=1 in /etc/sysctl.conf). The server has two network interfaces: eth0 (IP 192.168.1.1) faces the local network and eth1 (IP 213.164.156.130) faces the Internet.
There's also another computer in local network with ip 192.168.1.2.
Then I added two simple rules to the nat table (shown in iptables-save format):
-A PREROUTING -i eth1 -j DNAT --to-destination 192.168.1.2
-A POSTROUTING -o eth1 -j SNAT --to-source 213.164.156.130
I thought the first rule meant forwarding every incoming packet to 192.168.1.2.
But if I run "ping google.com" or "wget google.com" from the server, they work: the server receives the packets itself and doesn't forward them, and I'm really stuck on this.
If I run these commands from 192.168.1.2, they also work, which means forwarding does work in that direction.
These are NAT rules.
In your first rule, address translation occurs before the packet is routed: you're changing the destination address to 192.168.1.2. In the second rule, you're changing the source address to 213.164.156.130 after the routing decision, just before the packet leaves.
I'm guessing you can ping & wget because your INPUT and OUTPUT chains have a default ACCEPT policy.
TBH, I'm confused about what you actually want to do, but if you want to forward packets, you need to modify the FORWARD chain. Here's a link with detailed and helpful information on iptables so you can understand the logic better: http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:Ch14:_Linux_Firewalls_Using_iptables
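As a sketch, if the filter FORWARD policy were restrictive, the two directions for the DNATed host from the question would need something like:
iptables -A FORWARD -i eth1 -o eth0 -d 192.168.1.2 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -s 192.168.1.2 -j ACCEPT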