tcpreplay traffic not being seen on localhost with netcat - UDP

I have a pcap file that I modified with tcprewrite to set source and destination IP = 127.0.0.1, while the port numbers are different. I also set both MAC addresses to 00:00:00:00:00:00, as I understand that comms over localhost ignore MAC. I made sure the checksums were fixed.
When I run tcpreplay -i lo test-lo.pcap in one shell, and tcpdump -i lo -p udp port 50001 in another, I see the traffic. Yet, when I try to view the traffic with netcat -l -u 50001, it sees nothing. Wireshark is capturing the traffic correctly.
Side note: I'm seeing the following warning when running tcpreplay on localhost:
Warning: Unsupported physical layer type 0x0304 on lo. Maybe it works, maybe it won't. See tickets #123/318. That seems worrisome.
I'm asking because my own UDP listener code is also having the same problem as netcat and thought that maybe I'm missing something. Why would traffic be seen by tcpdump and wireshark, and not by netcat?

Look at this image of the kernel packet flow from Wikipedia:
As you can see, there are different places along the path where packets can be accessed. Wireshark uses libpcap, which uses an AF_PACKET socket to see packets. Your UDP listener, like netcat, uses regular user-space sockets. Let's highlight both on this image. Wireshark obtains packets via the red path, netcat via the purple one:
As you can see, there is a whole sequence of steps packets have to go through in the kernel to get to a local process socket. These steps include bridging, routing, filtering etc. Which step drops your packets? I don't know. You can try tweaking the packets and maybe you'll get lucky.
If you want a more systematic approach, use a tool like dropwatch. It hooks into the kernel and shows you counters of where the kernel drops packets.
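As a rough sketch of that approach (the exact output format and symbol names will depend on your dropwatch version and kernel):
# shell 1: ask dropwatch to resolve drop locations to kernel symbols
sudo dropwatch -l kas
# at the dropwatch> prompt, type: start
# shell 2: replay the capture while dropwatch is counting
sudo tcpreplay -i lo test-lo.pcap
# dropwatch then prints lines like "N drops at <symbol>", telling you
# which step in the diagram above discarded the packets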

Related

Redirecting mirrored traffic from OpenBSD

I have a software router running OpenBSD and a DPI service on a VM on a separate physical machine.
I want all the passing traffic to be mirrored from the OpenBSD machine to the DPI machine online (not bulk-wise but with reasonably minimized lag). I know of a method to tunnel tcpdump output via SSH to the listening host and tcpreplaying it there, but it feels really hacky.
What would a proper method be for my setup? Something tunnel-based, perhaps, like GRE mirroring?
If I understand correctly, this could be done with a bridge.
Let's say you have incoming traffic on if1, outgoing traffic on if2, you have an if3 back to back with your DPI machine, and IP forwarding is enabled.
Then you could:
ifconfig bridge0 create
ifconfig bridge0 add if1 add if2 up
ifconfig bridge0 addspan if3
Then, according to man 4 bridge:
SPAN PORTS
The bridge can have interfaces added to it as span ports. Span ports
transmit a copy of every frame received by the bridge. This is most
useful for snooping a bridged network passively on another host connected
to one of the span ports of the bridge. Span ports cannot be bridge
members; instead, the addspan and delspan commands are used to add and
delete span ports to and from a bridge.
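On the DPI machine itself nothing special is needed: the mirrored frames simply show up on whatever interface is cabled back to if3, so you can verify the span with something like this (the interface name em0 is only a placeholder for whatever your DPI box actually uses):
tcpdump -n -e -i em0
If you want the setup to survive a reboot, the same ifconfig arguments can go into /etc/hostname.bridge0 in hostname.if(5) style, one per line:
add if1
add if2
addspan if3
up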

coturn: Need help configuring my server correctly

I am trying to set up a STUN/TURN server on my local computer for a WebRTC application of mine. I decided to use coturn. Note that my server is running behind a NAT.
So I fired up my Ubuntu VM and installed it. After reading through the wiki I got it working, at least on my local network. For testing purposes, I use this site. When I try it there with 192.168.178.25:3478, it works. When I try it with "public-ip":3478, it doesn't.
This told me it is working locally and it should be a port/NAT issue. What I did:
1) I set the VM to bridged networking.
2) I opened port 3478 on my router. To test that this is really working, I used telnet on a remote machine and it worked. Another test was that I set up a quick Apache server on my local machine on port 3478 and it could be accessed from the outside. This told me that there is, or should be, no port/NAT issue and my TURN server should be working.
Any ideas?
I am running my server with the following command:
"sudo turnserver -X "public-ip" -listening-port=3478 -v
The turnserver.conf looks something like this:
fingerprint
realm="myRealm"
lt-cred-mech
user=test:test
As telnet and the Apache server are both working, I am pretty sure I have a configuration issue. I basically spent the weekend trying, and I'm really lost on what could be wrong.
Thanks for any help!
From the documentation of turnserver
-X, --external-ip <public-ip>[/private-ip] TURN Server public/private address mapping, if the server is behind NAT. In that situation, if a -X is used in the form "-X <ip>", then that IP will be reported as the relay IP address of all allocations. This scenario works only in a simple case when one single relay address is to be used, and no CHANGE_REQUEST STUN functionality is required. That single relay address must be mapped by NAT to the 'external' IP. The "external-ip" value, if not empty, is returned in the XOR-RELAYED-ADDRESS field. For that 'external' IP, NAT must forward ports directly (relayed port 12345 must always be mapped to the same 'external' port 12345). In the more complex case when more than one IP address is involved, that option must be used several times, each entry in the form "-X <public-ip/private-ip>", to map all involved addresses. CHANGE_REQUEST NAT discovery STUN functionality will work correctly if the addresses are mapped properly, even when the TURN server itself is behind a NAT. By default, this value is empty, and no address mapping is used.
So, it is not enough to expose only the listening port from the inside LAN to the public network; you must also expose all the ports that you are going to use to relay. Please note what is said in the same documentation:
--min-port <port> Lower bound of the UDP port range for relay endpoints allocation. Default value is 49152, according to RFC 5766.
--max-port <port> Upper bound of the UDP port range for relay endpoints allocation. Default value is 65535, according to RFC 5766.
You should choose a range of ports on the server, configure the options --min-port and --max-port with them, and create a NAT rule to expose those ports to the public side of the router without translation.
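As a rough sketch, assuming you pick 49152-49500 as your relay range (the range itself is your choice), the relevant turnserver.conf lines would look something like this:
listening-port=3478
min-port=49152
max-port=49500
external-ip=<public-ip>/192.168.178.25
Then, on the router, forward UDP 3478 plus UDP 49152-49500 to 192.168.178.25 without changing the port numbers.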

tunneling using SSH

I'm tunneling all of my internet traffic through a remote computer running Debian, using sshd. But my internet connection becomes very slow (around 5 to 10 kbps!). Could anything in the default configuration be causing this problem?
Thanks in advance,
Tunneling TCP within another TCP stream can sometimes work -- but when things go wrong, they go wrong very quickly.
Consider what happens when the "real world" loses one of your TCP packets: after a certain amount of not getting an ACK packet back in response to new data packets, the sending side realizes a packet has gone missing and re-sends the data.
If that packet happens to be a TCP packet whose payload is another TCP packet, then you have two TCP stacks that are upset about their missing packet. The tunneled TCP layer will re-send packets and the outer TCP layer will also resend packets. This causes a giant pileup of duplicate packets that will eventually be delivered and must be dropped on the floor -- because the outer TCP reliably delivered the packet, eventually.
I believe you would be much better served by a more dedicated tunneling method such as GRE tunnels or IPSec.
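As a rough illustration of the first option, a plain GRE tunnel between two Linux boxes only takes a few commands on each side; the addresses below are placeholders for your real endpoints, and keep in mind that GRE by itself does not encrypt anything (that is what IPsec would add):
ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.2 ttl 255
ip addr add 10.0.0.1/30 dev gre1
ip link set gre1 up
ip route add 192.0.2.0/24 dev gre1   # route only the traffic you want tunneled
The other end mirrors this with local/remote swapped and 10.0.0.2/30 on the tunnel interface.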
Yes, tunnelling traffic over a TCP connection is not a good idea. See http://sites.inka.de/bigred/devel/tcp-tcp.html

UDP port 0.0.0.0

I have a system that is running on Windows.
In that system I have a process that waits for a UDP message from another process on the same machine. The message itself is not important (garbage); the important thing is that I get the event of the message itself.
The problem is that I seem to get a UDP message from another local program and I don't know from where. I added information about the sender to the received UDP messages. I see that I get messages from a valid local port, but also from the address 0.0.0.0.
I can't understand the 0.0.0.0. Does anyone have an idea?
A computer without an assigned IP address can send such a packet, even across the network; see, for example, a similar mechanism in DHCP, where the DHCP discovery packet is sent with a source address of 0.0.0.0.
On a local computer, could it be that the packet is sent (and received) on an interface that is up but has no IP address?
Also, this can mean "broadcast": if this article on e2 is correct, it is a deprecated method of sending a broadcast packet, but apparently it was never removed.
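If you want to see where these packets come from, you could capture only traffic with that source address; with windump (or tcpdump) on the machine in question, that would be something like the following, where the interface number is just whatever windump -D lists for your adapter:
windump -n -i 1 src host 0.0.0.0
A DHCP discover, for instance, shows up there as UDP from 0.0.0.0.68 to 255.255.255.255.67.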
Because it is a UDP message and I'm using async reads, when reading the messages that arrive from the other program I can't know when to stop reading; when I read a message and get 0.0.0.0, it means I have read everything from the OS's UDP buffer.

Missing UDP fragments when monitoring traffic with tcpdump

I'm on a local LAN with only 8 connected computers using a Netgear 24-port gigabit switch. Network load is really low, and send/receive buffers on all involved nodes (running Slackware 11) have been set to 16 MB. I'm also running tcpdump on each node to monitor the traffic.
A sending node sends a 10044-byte UDP datagram which, more often than not (3 out of 4 times), does not end up in the receiving-side application. In these cases I notice (using tcpdump) that the first x fragments are missing and only the last 3 (all with offsets > 0 and in order) are caught by tcpdump. The fragmented UDP datagram can therefore not be reassembled and is most likely thrown away.
I find the missing fragments strange, since I have also tried a simple load test bursting out 10000 UDP messages of the same size; the receiving application sends a response, and all tests so far give 100% of the responses back.
Any clues or hints?
Update!
After resuming the testing of the above-mentioned software, I found a repeatable way of recreating the error.
Using windump on the sending Windows machine and tcpdump on the receiving machine, after having left the application idle for some time (~5 minutes), I tried sending the UDP message but end up with only a single fragment caught by windump and tcpdump; the 3 remaining fragments are lost. Sending the same message one more time works fine: both windump and tcpdump catch all 4 fragments, and the application on the receiving side gets the message. The pattern is repeatable.
I started searching and found the following information, but to me it's still not a clear answer.
http://www.eggheadcafe.com/software/aspnet/32856705/first-udp-message-to-a-sp.aspx
Re-examining the logs, I now notice the ARP request/reply being sent, which matches one of the ideas given in the link above.
NOTE! I filter windump on the sending side using: "dst host receivernode"
Capture from windump: first failed udp message, should be 4 fragments long
14:52:45.342266 arp who-has receivernode tell sendernode
14:52:45.342599 IP sendernode > receivernode: udp
Capture from windump: second udp message, exactly the same contents, all 4 fragments caught
14:52:54.132383 IP sendernode.10104 > receivernode.10113: UDP, length 6019
14:52:54.132397 IP sendernode > receivernode: udp
14:52:54.132406 IP sendernode > receivernode: udp
14:52:54.132414 IP sendernode > receivernode: udp
14:52:54.132422 IP sendernode > receivernode: udp
14:52:56.142421 arp reply sendernode is-at 00:11:11:XX:XX:fd (oui unknown)
Does anyone have a good idea about what's happening? Please elaborate!
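In the meantime, one workaround I'm considering (purely a guess on my part) is pinning a static ARP entry for the receiver on the sending Windows machine, so the first datagram after an idle period never has to wait for ARP resolution; with placeholder values, that would be:
arp -s 192.168.0.20 00-11-11-xx-xx-fd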