Iptables Rules for NFS Server and NFS Client - iptables

Without iptables rules I am able to mount NFSSERVER:/PATH, but with the firewall (iptables) enabled I am not able to mount.
[e.g., after iptables --flush / firewalld stop; mount NFSSERVER:/Path works]
I am not supposed to disable or clear the firewall/iptables, but I am allowed to open a port. What rules do I need to add to open up the port and mount?
The current default policy is DROP for all INPUT/OUTPUT/FORWARD traffic, and there are a couple of rules to allow things like wget to external port 80.
Adding the NFS server port didn't help:
iptables -A OUTPUT -p tcp --dport 2049 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --sport 2049 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p udp --dport 2049 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p udp --sport 2049 -m state --state ESTABLISHED -j ACCEPT
Thanks.
PS: This is for the NFS client, not the NFS server machine.

If all you need is NFS version 4 (which is already over 10 years old), you don't need to go to all of the effort described in Sathish's answer. Just make sure TCP port 2049 is open in the server's firewall, and that the client's firewall allows outbound traffic to port 2049 on the server.
The CentOS 5 documentation (also old) has a nice explanation of why NFSv4 is more firewall-friendly than v3 and v2.
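A minimal sketch of what that comes down to on the server side (assuming a stateful filter; the client-side rules from the question above already cover the client):
# On the NFS server: accept incoming NFSv4 over TCP 2049
iptables -A INPUT -p tcp --dport 2049 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 2049 -m state --state ESTABLISHED -j ACCEPT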

NFS SERVER:
Configure ports for rquotad (875/udp; 875/tcp), lockd (32803/tcp; 32769/udp), mountd (892/udp; 892/tcp), statd (10053/udp; 10053/tcp) and statd_outgoing (10054/udp; 10054/tcp); a sketch of these settings follows this step.
vim /etc/sysconfig/nfs
If desired, disable NFS v3 and NFS v2 support by editing lines 5 & 6 of /etc/sysconfig/nfs
MOUNTD_NFS_V2="no"
MOUNTD_NFS_V3="no"
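A sketch of the corresponding port settings, assuming the variable names used by the RHEL/CentOS version of /etc/sysconfig/nfs (check the comments in your own copy of the file, as the names can vary between releases):
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=10053
STATD_OUTGOING_PORT=10054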
Save the current iptables rules for later use (if iptables-save is absent in your distribution, you may try iptables -S > filename).
iptables-save > pre-nfs-firewall-rules-server
Flush and check Iptables rules
iptables -F
iptables -L
Stop and Start NFS and related Services in the following sequence
service rpcbind stop
service nfslock stop
service nfs stop
service rpcbind start
service nfslock start
service nfs start
Make sure NFS and its associated services show the ports configured earlier, and note down the port numbers and the OSI layer 4 protocols. The standard port numbers for rpcbind (or portmapper) are 111/udp and 111/tcp, and for nfs 2049/udp and 2049/tcp.
rpcinfo -p | sort -k 3
Restore the pre-nfs-firewall-rules now
iptables-restore < pre-nfs-firewall-rules-server
Write iptables rules for the NFS server. (Note: the loopback adapter has to be allowed, otherwise you will see packets dropped, and when you restart the nfs service it will print an error for the rquotad daemon: {Starting NFS quotas: Cannot register service: RPC: Timed out rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp). [FAILED]}. You can check this by adding a rule with a LOG jump target at the bottom of the INPUT or OUTPUT chains of the filter table.)
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -A INPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p udp -m multiport --dports 10053,111,2049,32769,875,892 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p tcp -m multiport --dports 10053,111,2049,32803,875,892 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p udp -m multiport --sports 10053,111,2049,32769,875,892 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p tcp -m multiport --sports 10053,111,2049,32803,875,892 -m state --state ESTABLISHED -j ACCEPT
iptables -I INPUT -i lo -d 127.0.0.1 -j ACCEPT
iptables -I OUTPUT -o lo -s 127.0.0.1 -j ACCEPT
iptables -L -n --line-numbers
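If these rules should survive a reboot (persistence is not covered by the steps above), save them with your distribution's usual mechanism, e.g. on RHEL/CentOS:
service iptables save
# or, equivalently:
iptables-save > /etc/sysconfig/iptables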
Configure NFS exports directory
vim /etc/exports
exportfs -av
showmount -e
rpcinfo -p
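A hypothetical /etc/exports entry for the 192.168.1.0/24 network used in the rules above (the directory name and options are only an example):
/srv/nfs/share 192.168.1.0/24(rw,sync)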
Stop and Start NFS and related Services in the following sequence
service rpcbind stop
service nfslock stop
service nfs stop
service rpcbind start
service nfslock start
service nfs start
NFS CLIENT:
Save the current iptables rules for later use (if iptables-save is absent in your distribution, you may try iptables -S > filename).
iptables-save > pre-nfs-firewall-rules-client
Flush and check Iptables rules
iptables -F
iptables -L
Obtain the firewalled NFS server ports from the client machine and note down the port numbers and the OSI layer 4 protocols.
rpcinfo -p 'ip-addr-nfs-server' | sort -k 3
Restore the pre-nfs-firewall-rules now
iptables-restore < pre-nfs-firewall-rules-client
Write iptables rules for the NFS client. (Note: the loopback adapter has to be allowed, otherwise you will see packets dropped, and when you restart the nfs service it will print an error for the rquotad daemon: {Starting NFS quotas: Cannot register service: RPC: Timed out rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp). [FAILED]}. You can check this by adding a rule with a LOG jump target at the bottom of the INPUT or OUTPUT chains of the filter table.)
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -A INPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p udp -m multiport --sports 10053,111,2049,32769,875,892 -m state --state ESTABLISHED -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p tcp -m multiport --sports 10053,111,2049,32803,875,892 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p udp -m multiport --dports 10053,111,2049,32769,875,892 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p tcp -m multiport --dports 10053,111,2049,32803,875,892 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -I INPUT -i lo -d 127.0.0.1 -j ACCEPT
iptables -I OUTPUT -o lo -s 127.0.0.1 -j ACCEPT
iptables -L -n --line-numbers
Stop and Start NFS and related Services in the following sequence
service rpcbind stop
service nfslock stop
service nfs stop
service rpcbind start
service nfslock start
service nfs start
List NFS Server exports
showmount -e 'ip-addr-nfs-server'
Mount the NFS exports manually (persistent mounts can be configured using /etc/fstab; an example fstab line follows these commands)
mount -t nfs ip-addr-nfs-server:/exported-directory /mount-point -o rw,nfsvers=3
mount -t nfs ip-addr-nfs-server:/exported-directory /mount-point -o rw --> For NFS4 version
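A hypothetical /etc/fstab line for the same persistent NFSv3 mount (server address, export path and mount point are placeholders, as above):
ip-addr-nfs-server:/exported-directory /mount-point nfs rw,nfsvers=3 0 0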
Configure autofs if automounting is preferred for NFS exports or for LDAP user home directories (direct and indirect maps can be set); a sketch of the map files follows these steps.
vim /etc/auto.master -> specify the mount point and map name (e.g. auto.nfs)
vim /etc/map-name
service autofs stop
service autofs start
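A sketch of an indirect autofs map, assuming the map name auto.nfs and the mount point /mnt/nfs (both are only examples):
# /etc/auto.master
/mnt/nfs /etc/auto.nfs --timeout=60
# /etc/auto.nfs: key, mount options, location
share -rw,soft ip-addr-nfs-server:/exported-directory
A request for /mnt/nfs/share then triggers the NFS mount on demand.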
Check mounted NFS Exports
df -h -t nfs
mount | grep nfs
List all pseudo root NFS-V4 export directories (NFS Lazy mount)
ls /net/ip-addr-nfs-server
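These /net lazy mounts rely on autofs's built-in -hosts map; a quick check (assuming the default autofs layout) is that /etc/auto.master contains the following line and the autofs service is running:
/net -hosts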

What's the right way to allow systemd-timesyncd through iptables firewall?

First, I set up my firewall like this to allow everything:
sudo iptables -P INPUT ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables --flush
Then, I check if NTP is working:
sudo systemctl daemon-reload
sudo systemctl restart systemd-timesyncd
timedatectl
and I can see that it says System clock synchronized: yes.
But then if I reboot and set up my firewall like this (reject everything except for NTP):
sudo iptables -P INPUT REJECT
sudo iptables -P OUTPUT REJECT
sudo iptables -P FORWARD REJECT
sudo iptables -A INPUT -p udp --dport 123 -j ACCEPT
sudo iptables -A OUTPUT -p udp --sport 123 -j ACCEPT
then I get System clock synchronized: no and the clock won't sync.
Based on the above steps, I'm convinced it's the firewall that's blocking timesyncd. I have read (for example, here) that perhaps it has to do with extra ports being opened by the service or the fact that it uses SNTP instead of NTP. I have tried different combinations of rules, but with no success yet, as I am not an expert with iptables.
But there must be a way to set it up such that it works without altogether disabling the firewall.
Summary
--dport and --sport are switched.
Explanation
For the other services that I am allowing through the firewall, my machine is the server. For NTP, my machine is the client. Because the rest of my original configuration actually looked more like this:
...
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 5353 -j ACCEPT
...
sudo iptables -A OUTPUT -p tcp --sport 443 -j ACCEPT
sudo iptables -A OUTPUT -p tcp --sport 80 -j ACCEPT
sudo iptables -A OUTPUT -p udp --sport 5353 -j ACCEPT
...
I assumed that --dport was meant to be used with INPUT and --sport was used with OUTPUT. However, you have to think about what it means. To use NTP as a client, I need to allow INPUT packets that are coming from a source port of 123, not input packets that are coming to a destination port of 123. Likewise, I need to allow OUTPUT packets with destination port 123, not output with source 123.
So the answer to my question is to use this:
sudo iptables -P INPUT REJECT
sudo iptables -P OUTPUT REJECT
sudo iptables -P FORWARD REJECT
sudo iptables -A INPUT -p udp --sport 123 -j ACCEPT
sudo iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
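A stricter variant of the same idea (a sketch relying on the conntrack module) lets back in only the UDP replies to requests this machine actually sent:
sudo iptables -A OUTPUT -p udp --dport 123 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p udp --sport 123 -m conntrack --ctstate ESTABLISHED -j ACCEPT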

Proxmox-VE 6 / PFSense, Problems with the iptables

I have been trying for some time to configure my Proxmox host with a PFSense VM filtering internet traffic to my other VMs.
So far I have managed to install PFSense and configure the Proxmox interfaces, and I can reach the PFSense web interface. However, my VMs do not always have access to the internet, so I am trying to modify my iptables rules so that all the traffic is redirected through PFSense.
Here are my interfaces:
[screenshot: network interfaces]
On the shell I ran this:
cat > /root/pfsense-route.sh << EOF
#!/bin/sh
## Enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
## Route packets destined for the LAN via the PFSense WAN interface
ip route change 192.168.9.0/24 via 10.0.0.2 dev vmbr1
EOF
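For the post-up hook below to actually run the script, it also has to be executable (an assumption; this step is not shown in the original post):
chmod +x /root/pfsense-route.sh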
And I modified the file /etc/network/interfaces:
[...]
auto vmbr2
iface vmbr2 inet static
address 192.168.9.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up /root/pfsense-route.sh
#LAN
And now the heart of the problem: the iptables. Here is my current file:
#!/bin/sh
# ---------
# VARIABLES
# ---------
## Proxmox bridge holding Public IP
PrxPubVBR="vmbr0"
## Proxmox bridge on VmWanNET (PFSense WAN side)
PrxVmWanVBR="vmbr1"
## Proxmox bridge on PrivNET (PFSense LAN side)
PrxVmPrivVBR="vmbr2"
## Network/Mask of VmWanNET
VmWanNET="10.0.0.0/30"
## Network/Mask of PrivNET
PrivNET="192.168.9.0/24"
## Network/Mask of VpnNET
VpnNET="10.2.2.0/24"
## Public IP => Your own public IP address
PublicIP="91.121.134.145"
## Proxmox IP on the same network than PFSense WAN (VmWanNET)
ProxVmWanIP="10.0.0.1"
## Proxmox IP on the same network than VMs
ProxVmPrivIP="192.168.9.1"
## PFSense IP used by the firewall (inside VM)
PfsVmWanIP="10.0.0.2"
# ---------------------
# CLEAN ALL & DROP IPV6
# ---------------------
### Delete all existing rules.
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X
### This policy does not handle IPv6 traffic except to drop it.
ip6tables -P INPUT DROP
ip6tables -P OUTPUT DROP
ip6tables -P FORWARD DROP
# --------------
# DEFAULT POLICY
# --------------
### Block ALL !
iptables -P OUTPUT DROP
iptables -P INPUT DROP
iptables -P FORWARD DROP
# ------
# CHAINS
# ------
### Creating chains
iptables -N TCP
iptables -N UDP
# UDP = ACCEPT / SEND TO THIS CHAIN
iptables -A INPUT -p udp -m conntrack --ctstate NEW -j UDP
# TCP = ACCEPT / SEND TO THIS CHAIN
iptables -A INPUT -p tcp --syn -m conntrack --ctstate NEW -j TCP
# ------------
# GLOBAL RULES
# ------------
# Allow localhost
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Don't break the current/active connections
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Allow Ping - Comment this to return timeout to ping request
iptables -A INPUT -p icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT
# --------------------
# RULES FOR PrxPubVBR
# --------------------
### INPUT RULES
# ---------------
# Allow SSH server
iptables -A TCP -i $PrxPubVBR -d $PublicIP -p tcp --dport 22 -j ACCEPT
# Allow Proxmox WebUI
iptables -A TCP -i $PrxPubVBR -d $PublicIP -p tcp --dport 8006 -j ACCEPT
### OUTPUT RULES
# ---------------
# Allow ping out
iptables -A OUTPUT -p icmp -j ACCEPT
### Proxmox Host as CLIENT
# Allow HTTP/HTTPS
iptables -A OUTPUT -o $PrxPubVBR -s $PublicIP -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -o $PrxPubVBR -s $PublicIP -p tcp --dport 443 -j ACCEPT
# Allow DNS
iptables -A OUTPUT -o $PrxPubVBR -s $PublicIP -p udp --dport 53 -j ACCEPT
### Proxmox Host as SERVER
# Allow SSH
iptables -A OUTPUT -o $PrxPubVBR -s $PublicIP -p tcp --sport 22 -j ACCEPT
# Allow PROXMOX WebUI
iptables -A OUTPUT -o $PrxPubVBR -s $PublicIP -p tcp --sport 8006 -j ACCEPT
### FORWARD RULES
# ----------------
### Redirect (NAT) traffic from internet
# All tcp to PFSense WAN except 22, 8006
iptables -A PREROUTING -t nat -i $PrxPubVBR -p tcp --match multiport ! --dports 22,8006 -j DNAT --to $PfsVmWanIP
# All udp to PFSense WAN
iptables -A PREROUTING -t nat -i $PrxPubVBR -p udp -j DNAT --to $PfsVmWanIP
# Allow request forwarding to PFSense WAN interface
iptables -A FORWARD -i $PrxPubVBR -d $PfsVmWanIP -o $PrxVmWanVBR -p tcp -j ACCEPT
iptables -A FORWARD -i $PrxPubVBR -d $PfsVmWanIP -o $PrxVmWanVBR -p udp -j ACCEPT
# Allow request forwarding from LAN
iptables -A FORWARD -i $PrxVmWanVBR -s $VmWanNET -j ACCEPT
### MASQUERADE MANDATORY
# Allow WAN network (PFSense) to use vmbr0 public address to go out
iptables -t nat -A POSTROUTING -s $VmWanNET -o $PrxPubVBR -j MASQUERADE
# --------------------
# RULES FOR PrxVmWanVBR
# --------------------
### Allow being a client for the VMs
iptables -A OUTPUT -o $PrxVmWanVBR -s $ProxVmWanIP -p tcp -j ACCEPT
For now, with this I can still reach my VMs in Proxmox, but there is no internet access on them. Moreover, the shell of my server is no longer accessible from the Proxmox web UI, and SSH connections no longer work.
Some details:
I use port 22 as the SSH port
My server IP is 91.121.134.145
My Linux version is Debian 10 (Buster)
Honestly, I don't know where the problem comes from; I'm a beginner and I found most of this configuration on the internet. If you see what is wrong, I would be very happy to have the answer! In the meantime, thank you in advance for reading and for your answers!
Edit:
I tried to switch iptables to legacy mode using these commands:
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy
Only this command refused to work:
update-alternatives --set arptables /usr/sbin/arptables-legacy
Moreover, I don't know why, but my VMs now have good access to the internet; the problem is therefore centered on the SSH port, which no longer works (I can also no longer open the shell from Proxmox).
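To see which chain is dropping the SSH packets, one generic debugging sketch (not part of the original post) is to log whatever falls through to the DROP policies:
iptables -A INPUT -j LOG --log-prefix "IPT INPUT drop: "
iptables -A OUTPUT -j LOG --log-prefix "IPT OUTPUT drop: "
# dropped packets then show up in the kernel log (dmesg or journalctl -k)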

Boot from NFS server with UBoot

I have a problem with an NFS server. I basically have to boot an embedded processor from NFS.
On an Ubuntu machine I simply put the filesystem in /tftpboot,
added in /etc/exports this line:
/tftpboot *(rw,no_root_squash,no_all_squash,sync)
then I executed the commands:
sudo /usr/sbin/exportfs -av
sudo /etc/init.d/nfs-server restart
but on the embedded processor I get this error:
Looking up port of RPC 100003/2 on 192.168.2.11
Looking up port of RPC 100005/1 on 192.168.2.11
VFS: Unable to mount root fs via NFS, trying floppy.
VFS: Cannot open root device "nfs" or unknown-block(2,0)
Please append a correct "root=" boot option
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)
in particular the lines
Looking up port of RPC 100003/2 on 192.168.2.11
Looking up port of RPC 100005/1 on 192.168.2.11
make me think that the problem is in the configuration of the NFS server. Can anybody help me?
I had exactly the same problem today with an old embedded device and an NFS server installed on SUSE Leap.
I sniffed the communication with Wireshark and it gave me an idea of what went wrong.
In my case the problem had to do with the iptables filter and the NFS server version:
1. iptables had to be configured to open the NFS-related ports on the NFS server side.
2. My device only supported version 2 of NFS, and the SUSE NFS server was configured by default to support v3 and v4.
To solve 1:
You can check the post Iptables Rules for NFS Server and NFS Client:
sudo iptables -A INPUT -s 172.17.200.26/16 -d 172.17.200.26/16 -p udp -m multiport --dports 10053,111,2049,32769,875,892,20048,950 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -s 172.17.200.26/16 -d 172.17.200.26/16 -p tcp -m multiport --dports 10053,111,2049,32803,875,892,20048,950 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -s 172.17.200.26/16 -d 172.17.200.26/16 -p udp -m multiport --sports 10053,111,2049,32769,875,892,20048,950 -m state --state ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -s 172.17.200.26/16 -d 172.17.200.26/16 -p tcp -m multiport --sports 10053,111,2049,32803,875,892,20048,950 -m state --state ESTABLISHED -j ACCEPT
To solve 2, you can check:
https://documentation.suse.com/sles/15-SP1/html/SLES-all/cha-nfs.html#sec-nfs-configuring-nfs-server
Enable NFS version 2 on the server by modifying /etc/sysconfig/nfs and setting:
NFSD_OPTIONS="-V2"
MOUNTD_OPTIONS="-V2"
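To verify that version 2 is actually offered after restarting the NFS services (a quick check, assuming rpcbind is running):
rpcinfo -p localhost | grep nfs
# there should now be entries with 2 in the version column, e.g. "100003 2 udp 2049 nfs"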
I hope it helps someone; I lost some hours with this issue.
I am adding a screenshot of problem 2, which was found thanks to the Wireshark capture.

Allow Redis connections from only localhost?

I'm running Redis on my webserver (Debian/Nginx/Gunicorn) for session storage and have reason to believe my Redis server is being hacked. It's definitely possible, because if I run the command "redis-cli -h (HOST IP)" on a different machine against the web server, I can get into the console and run commands. I have two questions. First, if I add a new section to my iptables file as shown below, will I be correctly blocking access to my Redis server from all machines except the webserver itself? Redis is running on the default port 6379.
*filter
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -s 127.0.0.0/8 -j REJECT
# Allow pings, SSH, and web access
-A INPUT -p icmp -m state --state NEW --icmp-type 8 -j ACCEPT
-A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
-A INPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
-A INPUT -p tcp --dport 443 -m state --state NEW -j ACCEPT
# NEW SECTION...
# IS THIS CORRECT?
-A INPUT -p tcp --dport 6379 -j DROP
-A INPUT -p tcp -s 127.0.0.1 --dport 6379 -m state --state NEW -j ACCEPT
# END NEW SECTION
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -j REJECT
-A FORWARD -j REJECT
COMMIT
Second, if the above is correct, can I still use 127.0.0.1 in the IPv6 version of my iptables or do I need to use "::1"?
Thanks.
You should be able to do this through the Redis configuration file:
# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1
Modify the redis.conf file:
bind 127.0.0.1 ==>
the Redis instance will accept connections only from localhost
bind 127.0.0.1 xxx.xx.xx.xxx ==>
if you want to accept connections from outside the server, add the server's IP as well
#bind 127.0.0.1 ==>
commenting this line out will make Redis listen on every network interface
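A minimal sketch of the result, assuming the Debian package layout (configuration at /etc/redis/redis.conf, service named redis-server):
# /etc/redis/redis.conf
bind 127.0.0.1
Then restart Redis so the new bind address takes effect:
sudo service redis-server restart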

How to access Seafile server in a virtual machine through IPtables?

I have installed Seafile-server 3.0.4 64-bit on an Ubuntu Server 14.04 with default port settings (i.e. 8000, 8082, 10001, 12001), but I fail to access the instance with the client.
Infrastructure
The Ubuntu-server is running as a KVM machine on a Gentoo host.
Iptables rules
After some time I added the following iptables rules to the host machine (Gentoo), which seem to match Seafile's requirements:
#Iptables-Rules for Seafile
iptables -A INPUT -p tcp -m multiport --dports 8000,8082,10001,12001 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport --sports 8000,8082,10001,12001 -m state --state ESTABLISHED -j ACCEPT
However, I'm still unable to connect to the Seafile server, even with telnet, either from the internet or from the host machine.
Update: the issue might be related to fail2ban.
As I'm using NAT to link my virtual machine to my host, I had to edit the rules as follows to get it to work:
#Iptables-Rules for Seafile
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 10001 -j DNAT --to 192.168.8.8:10001
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 12001 -j DNAT --to 192.168.8.8:12001
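If the host's FORWARD chain has a restrictive policy, the DNATed traffic also has to be allowed through it; a sketch reusing the interface and guest address from the rules above:
iptables -A FORWARD -i eth0 -p tcp -d 192.168.8.8 -m multiport --dports 10001,12001 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A FORWARD -o eth0 -s 192.168.8.8 -p tcp -m state --state ESTABLISHED -j ACCEPT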
References
Linux Firewall Tutorial: IPTables Tables, Chains, Rules Fundamentals