How to filter packets seen on unnumbered eth then dump raw filtered stream out another eth without using iptables - packet-capture

I can capture packets fine using tcpdump, as the source eth1 port is connected to a Cisco switch SPAN port, and I can filter using tcpdump options (at this stage I'm only interested in DNS packets to and from a particular IP). Rather than writing to a file, I want to simply dump the filtered raw (DNS) packets out onto eth2 (which could be unnumbered or numbered). The reason for this is that a third party needs access to the raw data, but I need to filter out non-DNS traffic (otherwise I'd just let them connect to the switch SPAN port).
Preferably I also want to run the process continuously. Is there an easy way to direct the tcpdump output to an unnumbered eth interface, or is there a better way of achieving this?
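One possible way to do this is a small userspace forwarder: capture with the same BPF filter you would give tcpdump and re-inject each frame on the other interface. A minimal sketch, assuming scapy is installed, that eth1/eth2 are the capture and hand-off interfaces from the question, that 192.0.2.10 stands in for the IP of interest, and that the DNS-only packet rate is low enough for a Python loop; tools from the tcpreplay suite (e.g. tcpbridge) can do the same job if a userspace loop turns out to be too slow.

#!/usr/bin/env python3
# Sketch: sniff filtered packets on eth1 and re-emit them unchanged on eth2.
# Needs root (or CAP_NET_RAW/CAP_NET_ADMIN) to sniff and inject.
from scapy.all import sniff, sendp

CAPTURE_IF = "eth1"                          # connected to the switch SPAN port
OUTPUT_IF = "eth2"                           # unnumbered hand-off interface
BPF = "port 53 and host 192.0.2.10"          # same filter syntax as tcpdump

def forward(pkt):
    # Re-inject the captured frame as-is at layer 2 on the output interface.
    sendp(pkt, iface=OUTPUT_IF, verbose=False)

# store=0 keeps memory flat so the process can run continuously.
sniff(iface=CAPTURE_IF, filter=BPF, prn=forward, store=0)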

Related

Can IPv6 multicasting work when one or more receivers are unable to bind to the program's well-known port?

Consider a simple IPv6 multicast application:
A "talker" program periodically sends out IPv6 UDP packets to a well-known multicast-group, sending them to a well-known port.
Zero or more "listener" programs bind themselves to that well-known port and join the well-known multicast group, and they all receive the UDP packets.
That all works pretty well, except in the case where one or more of the listener programs is unable to bind to the well-known UDP port because a socket in some other (unrelated) program has already bound to that port (without setting the SO_REUSEADDR and/or SO_REUSEPORT options that would allow it to be shared). AFAICT, in that case the listener program is simply out of luck; there is nothing it can do to receive the multicast data, short of asking the user to terminate the interfering program in order to free up the port.
Or is there? For example, is there some technique or approach that would allow a multicast listener to receive all the incoming multicast packets for a given multicast-group, regardless of which UDP port they are being sent to?
If you want to receive all multicast traffic regardless of port, you'd need to use raw sockets to get the complete IP datagram. You could then directly inspect the IP header, check if it's using UDP, then check the UDP header before reading the application layer data. Note that methods of doing this are OS specific and typically require administrative privileges.
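As a rough Linux-only illustration of that raw-socket approach (AF_PACKET is Linux-specific, needs root, and the interface name and group address below are just placeholders), the sketch grabs every IPv6 frame on an interface and picks out UDP datagrams addressed to the multicast group regardless of destination port. Note the NIC may drop frames for multicast groups nobody has joined, so you may need promiscuous mode or a helper socket that joins the group.

import socket
import struct

ETH_P_IPV6 = 0x86DD   # EtherType for IPv6

# Linux-specific: AF_PACKET delivers whole layer-2 frames; requires root.
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_IPV6))
sock.bind(("eth0", 0))                                      # placeholder interface

GROUP = socket.inet_pton(socket.AF_INET6, "ff15::abcd")     # placeholder group

while True:
    frame, _ = sock.recvfrom(65535)
    ip6 = frame[14:]              # strip the 14-byte Ethernet header
    if len(ip6) < 48:
        continue
    next_header = ip6[6]
    dst_addr = ip6[24:40]
    # 17 = UDP; for simplicity this ignores IPv6 extension headers.
    if next_header == 17 and dst_addr == GROUP:
        src_port, dst_port, udp_len, _ = struct.unpack("!HHHH", ip6[40:48])
        payload = ip6[48:40 + udp_len]
        print("UDP datagram to port", dst_port, "-", len(payload), "payload bytes")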
Regarding SO_REUSEADDR and SO_REUSEPORT: apps that set these options allow multiple programs to receive multicast packets sent to a given port. However, if you also need to receive unicast packets, this method has issues. Incoming unicast packets may be delivered to both sockets, may always go to one specific socket, or may be delivered to each in an alternating fashion. This also differs based on the OS.
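For completeness, the cooperative case looks roughly like the sketch below (group, port and interface index are placeholders); it only helps when every program that binds the port also sets the reuse options.

import socket
import struct

GROUP = "ff15::abcd"   # placeholder well-known multicast group
PORT = 4567            # placeholder well-known UDP port

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
# Allow several listeners on this host to share the same UDP port.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if hasattr(socket, "SO_REUSEPORT"):
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
sock.bind(("", PORT))

# Join the multicast group on the default interface (index 0).
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", 0)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print("received", len(data), "bytes from", addr)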

Logstash: Filter out heterogeneous logs on a single UDP input

I am taking over an infrastructure where ELK (ElasticSearch/Logstash/Kibana) has been designed as a PoC then turned into a production service.
There is currently a single UDP input, on which multiple remote hosts (mainly firewalls from various vendors) are sending their logs.
As there is no consistency in log format, I wonder what the best practice is (I know both solutions are possible) regarding this issue:
Create as many inputs in Logstash as I have firewall devices, and ask my network administrator to kindly change the ports that logs are forwarded to (e.g. port 10001 for Juniper, port 10002 for Cisco, ...).
Use multiple patterns in the filter stage to identify which device type is talking to Logstash, then apply a type tag to drive the transformation and output.
PS: I know that a UDP listener is not the best solution for keeping all the logs, but I have to make do with it for now.
Thanks a lot
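To make the first option concrete, a sketch of what that pipeline configuration could look like (the ports and vendor names are just the examples from the question; the per-vendor parsing is left as a placeholder):

input {
  udp { port => 10001 type => "juniper" }
  udp { port => 10002 type => "cisco" }
}

filter {
  if [type] == "cisco" {
    # vendor-specific grok/kv parsing for this device family goes here
  }
}

output {
  elasticsearch { }
}

The second option keeps the single UDP input and instead uses grok matches on the message content to set the type; which one is preferable mostly comes down to whether changing forwarding ports on the network side is easier than maintaining reliable per-vendor match patterns.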

Inaccurate packet counter in OpenvSwitch

I attempted to send a file from host A to B and measure the packet loss using Open vSwitch. I connected hosts A and B to separate Open vSwitch VMs and connected the two Open vSwitch VMs to each other. The topology looks like this:
A -- OVS_A -- OVS_B -- B
On each OpenvSwitch VM, I added two very simple flows using the commands below:
ovs-ofctl add-flow br0 in_port=1,actions=output:2
ovs-ofctl add-flow br0 in_port=2,actions=output:1
Then I sent a 10GB file between A and B and compared the packet counts of the egress flow on the sending switch and the ingress flow on the receiving switch. I found that the packet count on the receiving switch is much larger than the count on the sending switch, indicating that more packets were received than sent!
I tried matching more specific flows, e.g. a TCP flow from IP A.A.A.A to B.B.B.B on port C, and got the same result. Is there anything wrong with my settings? Or is this a known bug in Open vSwitch? Any ideas?
BTW, is there any other way to passively measure the packet loss rate? Meaning, measuring the loss rate without introducing any intrusive test flows, simply using the statistics available on the sending/receiving ends or switches.
Thanks in advance!
I just realized that it was not Open vSwitch's fault. I tested with a UDP stream and packet count was correct. I also used tcpdump to capture inbound TCP packets on the switches and the switch at the receiving end had more packets than that at the sending end. The result is consistent with that captured with Open vSwitch's flow counters. I guess I must have missed something important about TCP.
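One common cause of exactly this pattern (UDP counts match, TCP counts don't) is segmentation offload: with TSO/GSO the sending side hands large multi-segment "super-packets" to the first hop, and they are only split into MTU-sized segments further along the path, so the downstream switch counts more (smaller) packets for the same data. If that is what is happening here, temporarily disabling the offloads on the end hosts should make the flow counters comparable, e.g.:

# On hosts A and B (interface name is a placeholder); GRO on the receiving
# side can skew counts the same way, so switch it off too while testing.
ethtool -K eth0 tso off gso off gro off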

UDP Health Check

So we have an application that makes UDP calls and sends packets. However, since no responses are given for UDP calls, how can we ensure that the service is up, the port is open, and that things are getting stored?
The only thought we have right now is to send in test packets and ensure they are getting saved out to the DB.
So my overall question is: is there a better, easier way to ensure that our UDP calls are succeeding?
On the listening host, you can validate that the port is open with netstat. For example, if your application uses UDP port 68, you could run:
# Grep for :<port> from netstat output.
$ netstat -lnu | grep :68
udp 0 0 0.0.0.0:68 0.0.0.0:*
You could also send some test data to your application, and then check your database to verify that the fixture data made it in. That doesn't guarantee it always will, just that the path was working at the time of the test.
Ultimately, the problem is that UDP packets are best-effort, and not guaranteed. So unless you can configure your logging platform to send some sort of acknowledgment after data is received and/or written, then you can't guarantee anything. The very nature of UDP is that it leaves acknowledgments (if any) to the application layer.
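A minimal version of that test-packet probe, assuming a Python environment and placeholder host/port values; the unique marker is what you then look for in the database:

import socket
import uuid

HOST, PORT = "127.0.0.1", 68        # placeholders: point these at the real listener
marker = ("healthcheck-" + uuid.uuid4().hex).encode()

# Fire a single test datagram; UDP gives no acknowledgment, so "sent" only
# means it left this host, not that it was received or stored.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(marker, (HOST, PORT))
sock.close()

# The second half of the check lives outside this script: query the database
# for the marker value and alert if it has not appeared within some timeout.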
We took a different approach and are checking to make sure the calls made it to the DB. It's easy enough to query a table and ensure records are in there; if there are no recent ones, we know something is wrong. CodeGnome had a good idea, just not the route we went. Thanks!

Receive udp broadcast packets ios

I'm almost completely done with an iOS client for my REST service. The only thing I'm missing is the ability for the client to listen on the network for a UDP broadcast that carries the host display name and base URL for uploads. There could be multiple servers on the network broadcasting and waiting for uploads.
Asynchronous is preferred. The servers will be displayed to the user as the device discovers them and I want the user to be able to select a server at any point in time.
The broadcaster is sending to 255.255.255.255 and does not expect any data back.
I am a beginner in Objective-C, so something simple and easy to use is best.
I recommend looking at CocoaAsyncSocket. It can handle UDP sockets well. I haven't tried listening to a broadcast with it, but it's probably your best bet.
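A rough Objective-C sketch of what a GCDAsyncUdpSocket-based listener could look like (class name, port number and announcement format are placeholders, and this assumes CocoaAsyncSocket has been added to the project; it shows the shape of the receive path, not a complete client):

#import <Foundation/Foundation.h>
#import "GCDAsyncUdpSocket.h"

@interface ServerDiscovery : NSObject <GCDAsyncUdpSocketDelegate>
@property (strong, nonatomic) GCDAsyncUdpSocket *udpSocket;
- (BOOL)startListeningOnPort:(uint16_t)port;
@end

@implementation ServerDiscovery

- (BOOL)startListeningOnPort:(uint16_t)port
{
    self.udpSocket = [[GCDAsyncUdpSocket alloc] initWithDelegate:self
                                                   delegateQueue:dispatch_get_main_queue()];
    NSError *error = nil;
    // Binding to the broadcast port is enough to receive datagrams sent to 255.255.255.255.
    if (![self.udpSocket bindToPort:port error:&error]) return NO;
    if (![self.udpSocket beginReceiving:&error]) return NO;
    return YES;
}

// Called asynchronously for every datagram received on the bound port.
- (void)udpSocket:(GCDAsyncUdpSocket *)sock
   didReceiveData:(NSData *)data
      fromAddress:(NSData *)address
withFilterContext:(id)filterContext
{
    NSString *announcement = [[NSString alloc] initWithData:data
                                                   encoding:NSUTF8StringEncoding];
    // Parse the display name / base URL out of the announcement and update the server list here.
    NSLog(@"Discovered server: %@", announcement);
}

@end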