Wrapping one's head around port forwarding with iptables

Honestly, I get part of what's going on: I need to enter rules that forward traffic matching specific filters. But do I need one rule, two, three? Why do some people only use FORWARD while others also use OUTPUT, and some even INPUT? Do I need separate rules for SYN, ESTABLISHED, RELATED? Is conntrack a separate package? Why does one guide use -t nat when all the others don't?
It's really painful: everybody delivers almost copy-pasteable guides, but with too little explanation of what they are actually providing as a solution, or of how to get help if the reader's setup (oh surprise) isn't 100% the same.
What I basically want to achieve is:
accept connections from everybody on *:443
send all the requests to 1.2.3.4:443 (nobody but me can reach 1.2.3.4)
enable requesters to receive the response from 1.2.3.4 as well
see in dmesg whether or not stuff works, but not more if not necessary
Please explain why you are doing or not doing something. I really want to grasp this stuff. Thanks!

The best explanation I found is on the Arch Wiki, which even has further references to more in-depth descriptions and diagrams. One really in-depth guide I found through the Arch Wiki is this iptables tutorial.
For instance, here (Simple stateful firewall) is a detailed example with explanation of all the decisions.
Because I'm a visual learner, I also found this YouTube video very helpful; it shows and explains a running example with two VMs that pretty much anybody can reproduce at home.
Now I feel I'm at a level where I mostly just need to reference the following diagram, which shows how a packet walks through the tables and chains:
Further reading:
discussion whether one should drop or reject
the internet protocols RFC
What's going on with NEW and SYN packets?
Why do some tutorials use conntrack and others use state?
Side notes:
I always got confused about why some guides contain ESTABLISHED,RELATED rules and others don't. Whether or not these rules are there decides whether already existing network traffic gets cut off. For instance, if you are using an SSH session to connect to the machine, it would be nice if your session weren't killed by adding iptables rules, so having a rule that allows your ESTABLISHED connection is nice. RELATED packets are, for instance, responses to ping or network information packets (ICMP).
The Simple stateful firewall example also explains the differences between different nmap tests.
Also a good overview
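To make the pieces concrete, here is a minimal, untested sketch of the scenario from the question (forward everyone's *:443 to 1.2.3.4:443 and let the replies come back). It assumes the packets for 1.2.3.4 are routed through this box and that the default FORWARD policy is DROP; interface names are deliberately left out.

```shell
# 1) Allow the kernel to forward packets at all (off by default):
sysctl -w net.ipv4.ip_forward=1

# 2) Rewrite the destination of incoming *:443 traffic. This belongs in the
#    nat table's PREROUTING chain because DNAT must happen before routing:
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 1.2.3.4:443

# 3) Forwarded packets traverse the filter table's FORWARD chain (not INPUT
#    or OUTPUT -- those are for traffic to/from this host itself). Allow new
#    connections toward the target, and replies/related traffic back:
iptables -A FORWARD -p tcp -d 1.2.3.4 --dport 443 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# 4) Only needed if 1.2.3.4 would NOT route its replies back through this box:
#    masquerading makes replies return here so they can be de-NATed:
iptables -t nat -A POSTROUTING -p tcp -d 1.2.3.4 --dport 443 -j MASQUERADE

# 5) Log whatever FORWARD is about to drop (visible in dmesg), rate-limited
#    so the kernel log is not flooded:
iptables -A FORWARD -m limit --limit 5/min -j LOG --log-prefix "fwd drop: "
```

On the conntrack-vs-state question: `-m state --state` is the older alias for `-m conntrack --ctstate`; both rely on the same connection-tracking machinery, which ships with netfilter rather than as a separate package.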

Related

What is the simplest way to emulate a bidirectional UDP connection between two ports on localhost?

I'm adapting code that used a direct connection between udp://localhost:9080 and udp://localhost:5554 to insert ports 19080 and 15554. On one side, 9080 now talks and listens to 19080 instead of directly to 5554. Similarly, 5554 now talks and listens to 15554. What's missing is a bidirectional connection between 19080 and 15554. All the socat examples I've seen seem to ignore this simplest of cases in favor of specialized ones of limited usefulness.
I previously seemed to have success with:
sudo socat UDP4:localhost:19080 UDP4:localhost:15554 &
but I found that it may have been due to a program bug that bypassed the connection. It no longer works.
I've also been given tentative suggestions to use a pair of more cryptic commands that likewise don't work:
sudo socat UDP4-RECVFROM:19080,fork UDP4-SENDTO:localhost:15554 &
sudo socat UDP4-RECVFROM:15554,fork UDP4-SENDTO:localhost:19080 &
and additionally seem to overcomplicate the manpage statement that "Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them."
I can see from Wireshark that both sides are correctly using their respective sides of the connection to send UDP packets, but neither side is receiving what the other side has sent, due to the opacity of socat used in either of these ways.
Has anyone implemented this simplest of cases simply, reproducibly, and unambiguously? It was suggested to me as a way around writing my own emulator to pass packets back and forth between the ports, but the time spent getting socat to cooperate could likewise be put to better use.
You use fixed ports, and you do not specify whether one direction initiates the transfers.
Therefore the datagram address type is preferable. Something like the following command should do the trick:
socat \
UDP-DATAGRAM:localhost:9080,bind=localhost:19080,sourceport=9080 \
UDP-DATAGRAM:localhost:5554,bind=localhost:15554,sourceport=5554
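If you want to check the relay without the real applications, you can stand in for the two endpoints with netcat (an assumption: your `nc` supports fixing the source port with `-p` in UDP mode, as the traditional and OpenBSD variants do). The `-p` flag matters because the `sourceport=` options above make socat accept only datagrams coming from those ports.

```shell
# Terminal 1: stand in for the program that owns port 9080.
# Sends from source port 9080 to the relay's 19080 side:
nc -u -p 9080 localhost 19080

# Terminal 2: stand in for the program that owns port 5554.
# Sends from source port 5554 to the relay's 15554 side:
nc -u -p 5554 localhost 15554
```

With the socat command running, lines typed in one terminal should appear in the other, in both directions.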
Only the 5-digit port numbers belong in the socat commands. The connections from or to 9988, 9080, and 5554 are direct existing connections. I only need socat for the emulated connections that would exist if an actual appliance existed.
I haven't tested this, but it appears possible that the two 'more cryptic' commands might cause an undesirable loop. Perhaps the destination ports could be modified as shown below, and perhaps that may help achieve your objective. This may not be viable for your application, as you may need to adjust your receive sockets accordingly.
sudo socat UDP4-RECVFROM:19080,fork UDP4-SENDTO:localhost:5554 &
sudo socat UDP4-RECVFROM:15554,fork UDP4-SENDTO:localhost:9080 &

Logstash: Filter out heterogeneous logs on a single UDP input

I am taking over an infrastructure where ELK (ElasticSearch/Logstash/Kibana) has been designed as a PoC then turned into a production service.
There is currently a single UDP input, on which multiple remote hosts (mainly firewalls from various vendors) are sending their logs.
As there is no consistency on log format, I wonder what is the best practice (I know both solutions are possible) regarding this issue:
Create as many inputs in Logstash as I have firewall device types, and ask my network administrator to kindly change the port the logs are forwarded to (e.g. port 10001 for Juniper, port 10002 for Cisco, ...).
Use many patterns in filter to identify which device type is talking to Logstash, then apply a type tag for the transformation and output.
PS: I know that a UDP listener is not the best solution for keeping all the logs, but I have to make do with it for now.
Thanks a lot
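For reference, the second option could look roughly like the sketch below. The match strings and field names here are illustrative guesses (`RT_FLOW` is a marker seen in Juniper flow logs, `%ASA-` prefixes Cisco ASA syslog messages); they would need to be adapted to the exact formats your devices emit.

```conf
filter {
  # Tag each event with a device type based on vendor-specific markers,
  # so later filter/output stages can branch on [device_type].
  if [message] =~ /RT_FLOW/ {
    mutate { add_field => { "device_type" => "juniper" } }
  } else if [message] =~ /%ASA-/ {
    mutate { add_field => { "device_type" => "cisco-asa" } }
  } else {
    mutate { add_field => { "device_type" => "unknown" } }
  }
}
```

Subsequent grok parsing and outputs can then be wrapped in `if [device_type] == "juniper" { ... }` style conditionals, keeping the single UDP input.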

Do all bindings require setting URLACL?

We set up a Windows app that talks to a Windows service for certain operations. It's set up and working using wsHttp, but we need to add the URL to the URLACL list for the service to run. Is this going to be an issue with other bindings as well, or are we basically just using the wrong one at this point?
In the future the service might be moved from the end user's local machine to a server on their network, so maybe we should use some other binding?
Seems like the "next" best option is netTcpBinding, but to use that you need to start the TCP Port Sharing service: http://msdn.microsoft.com/en-us/library/ms731810.aspx
Since the whole point was to simplify installation on client machines and reduce the number of changes to their configuration I'm not sure that gives any real advantage. I could convert to netTcpBinding for speed since it is the fastest protocol... but in this case it's fast enough to not justify more dev time.
Hate answering my own question, but hopefully this helps someone else down the line!
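For anyone landing here, the URLACL registration itself is a one-time `netsh` command run from an elevated prompt. The URL and account below are placeholders; the URL must match the base address in your service configuration.

```shell
netsh http add urlacl url=http://+:8731/MyService/ user=MYDOMAIN\serviceaccount
netsh http show urlacl
netsh http delete urlacl url=http://+:8731/MyService/
```

The first command grants the given account the right to listen on that URL prefix, the second lists current reservations, and the third removes one. This only applies to HTTP-based bindings that go through http.sys; netTcpBinding does not need a URLACL, which is the trade-off discussed above.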

forcing glassfish to use UDP (config or application code)

Hey guys, I am not very familiar with GlassFish, but I'm attempting to resolve a small issue we are having. It seems that when our application sends packets over a certain size, GlassFish does so using TCP. However, we want it to always use UDP.
I am having trouble finding any information regarding if there is some configuration parameter we can change so that it will not do this. Also, I am not sure at what point glassfish begins using TCP. If I was able to figure that out, I bet I could somehow alter the application so that the packets would not reach that size.
Any information would be greatly appreciated.
EDIT: I am using sailfin and SIP over UDP.
I ended up finding this information: http://java.net/jira/browse/SAILFIN-2088
It seems that it switches to TCP at 1250 bytes. This value is not currently configurable.

Can the SVN and HTTP protocols be used safely on the same repository simultaneously?

We would like to evaluate whether the SVN protocol works better for our team than HTTP, but we don't want to commit to a full switch just yet.
Right now we have an Apache server serving up our main repository. Can we safely use svnserve.exe with the same repository so that a few of our developers can test it? My initial guess is that we can, but we don't want to risk corrupting our repository.
Yes, it's possible. The official SVN book has a chapter devoted to this situation:
http://svnbook.red-bean.com/en/1.5/svn.serverconfig.multimethod.html . There are some pitfalls but they have more to do with permission settings.
Exactly. Subversion is designed to support concurrent access via multiple protocols, something which causes major problems with CVS. Not only can you use http:// and svn://, but also file:// (if you happen to be working locally on the machine, for example with a continuous integration tool or other post-commit hook), https://, svn+ssh://, etc.
In my experience, one method hasn't proven to be objectively "better" than the other, but there are certain benefits to each. For example, Apache is extremely adept at handling lots of accesses at once. On the other hand, if you're not already using Apache, or don't want to make it handle SVN traffic, the svnserve daemon is lightweight and quite performant. On my Macs, I set up svnserve using launchd to start up only when a request comes in, so it doesn't use any resources when there is no repository activity. What works best will largely be a factor of the access patterns you see in practice.
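Running the second access method alongside Apache is a single command; the repository path below is a placeholder for your actual repository parent directory.

```shell
# Start svnserve as a daemon, rooting svn:// URLs at the given directory:
svnserve -d -r /var/svn/repos

# Clients can then check out the same repository over the svn protocol:
svn checkout svn://server.example.com/myrepo
```

The permission pitfall mentioned above is that both the Apache worker account and the account running svnserve need read/write access to the repository's db/ files; giving both processes a common group with group-write on the repository is the usual fix.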