UDP multicast client does not see UDP multicast traffic generated by tcpreplay

I have two programs:
server ... generates UDP traffic on a chosen multicast group
listener ... prints UDP traffic on a chosen multicast group
(it subscribes to the multicast group and prints
whatever it receives).
When I run the server on one machine and listeners on some (other) machine(s), the listeners see the UDP traffic and print it correctly. So these programs should be in good shape.
However, when I try to capture the traffic, on whatever machine, with tcpdump:
sudo tcpdump -i eth0 'dst 233.65.120.153' -w 0.pcap
and when I later try to replay it, on whatever machine, with tcpreplay:
sudo tcpreplay -i eth0 0.pcap
none of the listeners sees those captured packets:
09:38:40.975604 IP (tos 0x0, ttl 1, id 0, offset 0, flags [DF], proto UDP (17), length 32)
172.27.6.176.53507 > 233.65.120.153.64968: [udp sum ok] UDP, length 4
0x0000: 4500 0020 0000 4000 0111 6527 ac1b 06b0 E.....#...e'....
0x0010: e941 7899 d103 fdc8 000c 579c 6162 6364 .Ax.......W.abcd
0x0020: 0000 0000 0000 0000 0000 0000 0000 ..............
09:38:41.975709 IP (tos 0x0, ttl 1, id 0, offset 0, flags [DF], proto UDP (17), length 32)
172.27.6.176.53507 > 233.65.120.153.64968: [udp sum ok] UDP, length 4
0x0000: 4500 0020 0000 4000 0111 6527 ac1b 06b0 E.....#...e'....
0x0010: e941 7899 d103 fdc8 000c 579c 6162 6364 .Ax.......W.abcd
0x0020: 0000 0000 0000 0000 0000 0000 0000 ..............
09:38:42.975810 IP (tos 0x0, ttl 1, id 0, offset 0, flags [DF], proto UDP (17), length 32)
172.27.6.176.53507 > 233.65.120.153.64968: [udp sum ok] UDP, length 4
0x0000: 4500 0020 0000 4000 0111 6527 ac1b 06b0 E.....#...e'....
0x0010: e941 7899 d103 fdc8 000c 579c 6162 6364 .Ax.......W.abcd
0x0020: 0000 0000 0000 0000 0000 0000 0000 ..............
Note that even though none of the listeners sees UDP multicast traffic, I am still able to see it, on whatever machine, with tcpdump:
sudo tcpdump -i eth0 'dst 233.65.120.153' -X
My question: what should I do (differently) if I want to tcpreplay the UDP multicast traffic I am creating, so that I can see it at the application level (e.g. in my listener program), not only with tcpdump?
$ cat sender.c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <time.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define PORT 64968
#define GROUP "233.65.120.153"
int main(int argc, char *argv[])
{
    struct sockaddr_in addr;
    int fd;
    char *message = "abcd";

    /* Create what looks like an ordinary UDP socket:
       AF_INET    ... IPv4
       SOCK_DGRAM ... UDP
       0          ... default protocol
    */
    if ((fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }

    /* Set up destination address:
       AF_INET ... IPv4
       GROUP   ... the IP address of the multicast group
                   to which we want to multicast
       PORT    ... the UDP port on which we want to multicast
    */
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = inet_addr(GROUP);
    addr.sin_port = htons(PORT);

    /* now just sendto() our destination! */
    while (1) {
        if (sendto(fd, message, strlen(message), 0, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
            perror("sendto");
            exit(1);
        }
        sleep(1);
    }
}
$ cat listener.c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <time.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#define PORT 64968
#define GROUP "233.65.120.153"
#define MSGBUFSIZE 1000000
char msgbuf[MSGBUFSIZE];
int main(int argc, char *argv[])
{
    struct sockaddr_in addr;
    int fd, nbytes;
    socklen_t addrlen;
    struct ip_mreq mreq;
    u_int yes = 1;

    /* Create what looks like an ordinary UDP socket:
       AF_INET    ... IPv4
       SOCK_DGRAM ... UDP
       0          ... default protocol
    */
    if ((fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }

    /* Allow multiple sockets to use the same PORT number:
       SOL_SOCKET   ... manipulate properties of the socket API itself
       SO_REUSEADDR ... allow reuse of local addresses for bind
    */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0) {
        perror("Reusing ADDR failed");
        exit(1);
    }

    /* set up receive address */
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY); /* N.B.: differs from sender */
    addr.sin_port = htons(PORT);

    /* bind to receive address */
    if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }

    /* use setsockopt() to request that the kernel join a multicast group */
    mreq.imr_multiaddr.s_addr = inet_addr(GROUP);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("setsockopt");
        exit(1);
    }

    /* now just enter a read-print loop */
    while (1) {
        addrlen = sizeof(addr);
        memset(msgbuf, 0, MSGBUFSIZE);
        if ((nbytes = recvfrom(fd, msgbuf, MSGBUFSIZE, 0,
                               (struct sockaddr *) &addr, &addrlen)) < 0) {
            perror("recvfrom");
            exit(1);
        }
        printf("Incoming message size = %d\n", nbytes);
        int i;
        for (i = 0; i < nbytes; i++)
            printf("%02x ", (unsigned char) msgbuf[i]);
        printf("\n");
    }
}
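For reference, a minimal way to build and run the two programs above (assuming gcc on Linux):
gcc -Wall -o sender sender.c
gcc -Wall -o listener listener.c
./listener    # on the receiving machine(s)
./sender      # on the sending machine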

We had the same problem. With tcpdump we saw the data; however, the multicast client/listener was not picking up the data. Then we realized that the Reverse Path Filter (rp_filter) was rejecting the packets.
After disabling rp_filter, the client/listener application started picking up the packets. Use the command below to disable rp_filter:
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
In the above, replace 'eth0' with the interface receiving the multicast, if it is not eth0.
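Equivalently, it can be set with sysctl (a sketch assuming the interface is eth0; the kernel applies the stricter of the per-interface and the 'all' setting, so both may need to be relaxed):
sudo sysctl -w net.ipv4.conf.eth0.rp_filter=0
sudo sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl net.ipv4.conf.eth0.rp_filter   # verify the current value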

To my knowledge, you can't do this on the same box: tcpreplay bypasses the host's
routing table and sends traffic straight out of the interface.
You have to start your listener on a different box, and make sure multicast is enabled, because by default the switch discards multicast traffic.

In my case I needed to adjust the pcap file by setting the correct destination MAC address, and the checksums had to be recalculated. And yes, two hosts are required for "tcpreplay". Without these fixes I fought with it for a long time, but only "tcpdump" showed the replayed stream, not my multicast listening app :(
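A sketch of such a rewrite with tcprewrite (shipped with tcpreplay), assuming the capture file 0.pcap from the question and a hypothetical output name 0-fixed.pcap; 01:00:5e:41:78:99 is the Ethernet multicast MAC that 233.65.120.153 maps to (the low 23 bits of the group address under the 01:00:5e prefix):
tcprewrite --infile=0.pcap --outfile=0-fixed.pcap --enet-dmac=01:00:5e:41:78:99 --fixcsum
sudo tcpreplay -i eth0 0-fixed.pcap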

This is just a theory, but it might be that the packets are discarded by the receiving side due to their checksums being wrong.
That could happen if the machine where you run tcpdump has IP or UDP checksum offloading enabled. In that case the packets you capture locally don't have their checksums filled in yet; the hardware computes them just before the frames leave the NIC. When you later tcpreplay those packets, the checksums are never calculated, because tcpreplay works at a lower level than the socket API you used to generate the packets.
To verify the correctness of the checksums (both those in the dump file and those of the packets emitted by the subsequent tcpreplay), tcpdump -v ... will warn you about wrong checksums. Wireshark also colors wrongly checksummed frames differently (unless that is turned off in the Wireshark settings).
Did you try to tcpdump the packets only on the sending host, or also on the receiving host? Capturing on the receiving host would avoid the checksum problem, if that is indeed what is going on.
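Two commands that may help here (assuming Linux and the eth0 interface name from the question), one to inspect the checksums in the capture and one to disable transmit checksum offload before capturing, so the recorded checksums are already filled in:
tcpdump -vv -r 0.pcap        # reports per-packet checksum status, e.g. "bad udp cksum" / "udp sum ok"
sudo ethtool -K eth0 tx off  # turn off TX checksum offload on the capturing host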

On Windows (I mention this because the topic title does not say it is not Windows) there is a similar problem with various programs. However, Colasoft Packet Player works fine; the first time, you should start it with administrative privileges.
Or (for all systems) you can try checking this list.

Can I join the party?
Now, it is mentioned clearly on the FAQ page.
https://tcpreplay.appneta.com/wiki/faq.html#can-i-send-packets-on-the-same-computer-running-tcpreplay
Q: Can I send packets on the same computer running tcpreplay?
Generally speaking no. When tcpreplay sends packets, it injects them
between the TCP/IP stack of the system and the device driver of the
network card. The result is that the TCP/IP stack of the system running tcpreplay
never sees the packets.
One suggestion that has been made is using something like VMWare,
Parallels or Xen. Running tcpreplay in the virtual machine (guest)
would allow packets to be seen by the host operating system.

Related

Switching USB DWC3 controller from host to device mode

I need to use an embedded Linux platform as a USB device in order to stream audio and video from a smartphone. The platform has a USB A receptacle and doesn't support OTG (USB_ID pin is not connected on the host controller).
Now I am trying to switch from host to device mode using the DWC3 controller and the debugfs interface. Therefore, I activated the DWC3 controller in the kernel configuration and set it to "Dual Role Mode". After mounting the file system I checked the current mode in /sys/kernel/debug/xxxxxxxx.usb3/mode with cat mode and got host as expected. But unfortunately I can't write device to the mode file. After entering the command echo device > mode it remains host and does not change. Does anyone know what could be causing this?
I know that this used to work out of the box, but with the kernel version I'm using in an embedded platform (5.4.0) it doesn't. In order to make it work I had to patch around a bug in the kernel:
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
index bf1a7a9da..a78990664 100644
--- a/drivers/usb/dwc3/core.c
+++ b/drivers/usb/dwc3/core.c
@@ -110,9 +110,6 @@ static void __dwc3_set_mode(struct work_struct *work)
unsigned long flags;
int ret;
- if (dwc->dr_mode != USB_DR_MODE_OTG)
- return;
-
if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_OTG)
dwc3_otg_update(dwc, 0);
@@ -192,6 +189,12 @@ void dwc3_set_mode(struct dwc3 *dwc, u32 mode)
dwc->desired_dr_role = mode;
spin_unlock_irqrestore(&dwc->lock, flags);
+ if (dwc->dr_mode != USB_DR_MODE_OTG)
+ {
+ __dwc3_set_mode(&dwc->drd_work);
+ return;
+ }
+
queue_work(system_freezable_wq, &dwc->drd_work);
}
After applying the above patch, rebooting to the new kernel, and running:
echo "host" | sudo tee /sys/kernel/debug/*.usb3/mode
I was able to switch modes as I expected (note: I had *.dwc3 instead of *.usb3 in the above command).
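To confirm the switch took effect, the mode can be read back through the same debugfs path (assuming the *.usb3 node from above):
echo device | sudo tee /sys/kernel/debug/*.usb3/mode
cat /sys/kernel/debug/*.usb3/mode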

How to send one line per packet using the GStreamer command line

I am trying to stream raw video over Ethernet via RTP (RFC 4175), using GStreamer 1.0 on Windows.
I don't want my data to be compressed, so I use the rtpvrawpay element.
I have the following GStreamer pipeline:
gst-launch-1.0 -v filesrc location=%FILENAME% ! videoparse width=%WIDTH% height=%HEIGHT% framerate=50/1 format=GST_VIDEO_FORMAT_GRAY16_BE ! videoconvert ! video/x-raw,media=(string)video,encoding-name=(string)RAW,sampling=(string)YCbCr-4:2:2,width=640,height=512 ! rtpvrawpay pt=96 ! udpsink async=true host=%HOST% port=%PORT%
I have another system decoding this RTP video. However, that system is restricted to processing one line of video per UDP packet. Moreover, it drops any packet whose length differs from 1342 bytes.
(1 line: 640 (width) x 2 bytes + 20 bytes of RTP header + 42 bytes of Ethernet/IP/UDP headers)
So I have to tell the GStreamer pipeline to send one line per packet. My first attempt was to set the "mtu" property of the rtpvrawpay element. When I set mtu to 1300, my UDP packets are 1400 bytes long (?)
Then I set it to 1302, and the UDP packets are 1403 bytes. There has to be a way to tell GStreamer never to use a packet as a continuation packet in RTP.
Some things to do: first, upload the video to an FTP server. Then, in JavaScript/HTML:
<embed src="myftpsite/mycoolvideo.mp4"></embed>
Make sure it is in a format the HTML can comprehend.

HyperV Gen2 VM not booting over PXE

I have two VMs in HyperV, both on the same virtual switch (internal) and on the same subnet. I am trying to set up one as a DHCP and TFTP server for PXE boot. With a Gen1 machine it works fine with pxelinux; Gen2 with UEFI unfortunately does not work.
DHCP & TFTP Server
IP 192.168.1.2
VLAN identification is disabled
DHCP - ISC DHCP Server running in a docker container with "host" network type with the following configuration:
set vendorclass = option vendor-class-identifier;
option pxe-system-type code 93 = unsigned integer 16;
set pxetype = option pxe-system-type;
authoritative;
default-lease-time 7200;
max-lease-time 7200;
option tftp-server-name "192.168.1.2";
option bootfile-name "efi/core.efi";
subnet 192.168.1.0 netmask 255.255.255.0 {
interface "eth0:0";
option routers 192.168.1.1;
option subnet-mask 255.255.255.0;
range 192.168.1.100 192.168.1.150;
option broadcast-address 192.168.1.255;
option domain-name-servers 8.8.8.8, 8.8.4.4;
option domain-name "ad.lholota.net";
option domain-search "ad.lholota.net";
if substring(vendorclass, 0, 9)="PXEClient" {
if pxetype=00:06 or pxetype=00:07 {
filename "efi/core.efi";
} else {
filename "pxelinux/pxelinux.0";
}
}
next-server 192.168.1.2;
}
TFTP - tftp-hpa running in a docker container on a "host" type network. I can download the efi files manually through a standard tftp client.
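For instance, a manual download that works here might look like this (assuming the tftp-hpa command-line client):
tftp 192.168.1.2 -c get efi/core.efi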
Booting machine
HyperV Gen2
No virtual HDD or DVD
Firmware tab has only one item in the boot sequence - network
Secure boot is disabled
VLAN identification is disabled
Network adapter pointing into the same internal switch as the first VM
Enable virtual machine queue - checked
Enable IPsec task offloading - checked, maximum number: 512
MAC Address dynamic
Enable DHCP guard - NOT checked
Enable router advertisement guard - NOT checked
Protected network - NOT checked
Mirroring mode - None
Enable device naming - NOT checked
The trouble is that the machine doesn't even get to the TFTP server because it doesn't finish the DHCP Discover-Offer-Request-Ack flow. It gets stuck on the offer, as shown in the dhcpdump below; the booting machine never sends the request message. Funnily enough, the BIOS-based Gen1 HyperV machine boots without any issue, so the DHCP flow works there.
Can you please give me a hint of what might be wrong?
TIME: 2018-07-11 19:49:37.641
IP: 0.0.0.0 (0:15:5d:0:50:d0) > 255.255.255.255 (ff:ff:ff:ff:ff:ff)
OP: 1 (BOOTPREQUEST)
HTYPE: 1 (Ethernet)
HLEN: 6
HOPS: 0
XID: 8bf1c250
SECS: 0
FLAGS: 7f80
CIADDR: 0.0.0.0
YIADDR: 0.0.0.0
SIADDR: 0.0.0.0
GIADDR: 0.0.0.0
CHADDR: 00:15:5d:00:50:d0:00:00:00:00:00:00:00:00:00:00
SNAME: .
FNAME: .
OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER)
OPTION: 57 ( 2) Maximum DHCP message size 1472
OPTION: 55 ( 35) Parameter Request List 1 (Subnet mask)
2 (Time offset)
3 (Routers)
4 (Time server)
5 (Name server)
6 (DNS server)
12 (Host name)
13 (Boot file size)
15 (Domainname)
17 (Root path)
18 (Extensions path)
22 (Maximum datagram reassembly size)
23 (Default IP TTL)
28 (Broadcast address)
40 (NIS domain)
41 (NIS servers)
42 (NTP servers)
43 (Vendor specific info)
50 (Request IP address)
51 (IP address leasetime)
54 (Server identifier)
58 (T1)
59 (T2)
60 (Vendor class identifier)
66 (TFTP server name)
67 (Bootfile name)
97 (UUID/GUID)
128 (???)
129 (???)
130 (???)
131 (???)
132 (???)
133 (???)
134 (???)
135 (???)
OPTION: 97 ( 17) UUID/GUID 008c0c7ab81331a0 ...z..1.
4297445b2e41610e B.D[.Aa.
a8 .
OPTION: 94 ( 3) Client NDI 010300 ...
OPTION: 93 ( 2) Client System 0007 ..
OPTION: 60 ( 32) Vendor class identifier PXEClient:Arch:00007:UNDI:003000
---------------------------------------------------------------------------
TIME: 2018-07-11 19:49:37.641
IP: 0.0.0.0 (0:15:5d:0:50:12) > 255.255.255.255 (ff:ff:ff:ff:ff:ff)
OP: 2 (BOOTPREPLY)
HTYPE: 1 (Ethernet)
HLEN: 6
HOPS: 0
XID: 8bf1c250
SECS: 0
FLAGS: 7f80
CIADDR: 0.0.0.0
YIADDR: 192.168.1.105
SIADDR: 192.168.1.2
GIADDR: 0.0.0.0
CHADDR: 00:15:5d:00:50:d0:00:00:00:00:00:00:00:00:00:00
SNAME: .
FNAME: efi/core.efi.
OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER)
OPTION: 51 ( 4) IP address leasetime 7200 (2h)
OPTION: 1 ( 4) Subnet mask 255.255.255.0
OPTION: 3 ( 4) Routers 192.168.1.1
OPTION: 6 ( 8) DNS server 8.8.8.8,8.8.4.4
OPTION: 15 ( 14) Domainname ad.lholota.net
OPTION: 28 ( 4) Broadcast address 192.168.1.255
I have had what I believe is the same issue when booting HyperV virtual machines on Win10 2004 (19041.685): Gen 1 works, Gen 2 times out without ever asking for the boot file.
I strongly suspect this is an issue with the Gen2 UEFI PXE implementation, because as soon as I have at least two entries to choose from in the PXE boot menu, it requests files and downloads as expected.
I run dnsmasq for TFTP and DHCP, and my config file below works if and only if at least one of the last two rows is uncommented. (pxe-service=x86-64_EFI and pxe-service=7 are equivalent.)
config context: https://linuxconfig.org/how-to-configure-a-raspberry-pi-as-a-pxe-boot-server
# /etc/dnsmasq.d/03-tftpboot.conf
enable-tftp
tftp-lowercase
tftp-root=/mnt/data/netboot
pxe-prompt="Choose:"
pxe-service=x86PC,"PXELINUX (BIOS)",bios/pxelinux.0
pxe-service=x86PC,"WinPE (BIOS)",boot/pxeboot.n12
pxe-service=x86-64_EFI,"PXELINUX (EFI)",efi64/syslinux.efi
pxe-service=x86-64_EFI,"winpe (EFI)",boot/wdsmgfw.efi
#pxe-service=7,"PXELINUX (EFI-7)",efi64/syslinux.efi
I think I am experiencing the same problem when using the Digital Rebar provisioner. It works great on Gen 1 but not on Gen 2, and I have followed the same configuration as well.
Looking at the digital rebar code it seems like it should work but does not: https://github.com/digitalrebar/provision/blob/8269e1c7ff12a82854c19eccd114d064e2278211/midlayer/pxe.go#L252
I think this could be related:
https://wiki.fogproject.org/wiki/index.php/BIOS_and_UEFI_Co-Existence
https://serverfault.com/questions/739138/hyper-v-2016-gen2-vm-pxe-dhcp-timeout-wireshark-dhcp-discover-offer

What type of asio resolver object should I use?

I am a little confused about which type of resolver I should use for a side project I am working on. I am not finding the answer in the asio documentation.
I know that DNS can work over both UDP and TCP and that larger responses are generally sent over TCP.
asio offers both ip::tcp::resolver and ip::udp::resolver.
Can I use them interchangeably?
After I have resolved the name to an endpoint, I plan to connect with
a TCP socket. Does that mean I have to use an ip::tcp::resolver?
If they are in fact interchangeable:
Is there a performance benefit to using the UDP resolver?
Is there a some other benefit to using the TCP resolver?
If I use UDP resolver, do I need to deal with the response being too large for the UDP lookup and retry with TCP? (I expect to connect to a CDN that will resolve to multiple IP addresses per host)
Use the resolver that has the same protocol as the socket. For example, tcp::socket::connect() expects a tcp::endpoint, and the endpoint type provided via udp::resolver::iterator is udp::endpoint. Attempting to directly use the result of the query from a different protocol will result in a compilation error:
boost::asio::io_service io_service;
boost::asio::ip::tcp::socket socket(io_service);
boost::asio::ip::udp::resolver::iterator iterator = ...;
socket.connect(iterator->endpoint());
// ~~~^~~~~~~ no matching function call to `tcp::socket::connect(udp::endpoint)`
// no known conversion from `udp::endpoint` to `tcp::endpoint`
Neither tcp::resolver nor udp::resolver dictates the transport-layer protocol the name resolution will use. The DNS client will use TCP either when it becomes necessary or when it has been explicitly configured to use TCP.
On systems where service name resolution is supported, when performing service-name resolution with a descriptive service name, the type of resolver can affect the results. For example, in the IANA Service Name and Transport Protocol Port Number Registry:
the daytime service uses port 13 on UDP and TCP
the shell service uses port 514 only on TCP
the syslog service uses port 514 only on UDP
Hence, one can use tcp::resolver to resolve the daytime and shell services, but not syslog. On the other hand, udp::resolver can resolve daytime and syslog, but not shell. The following example demonstrates this distinction:
#include <boost/asio.hpp>
int main()
{
boost::asio::io_service io_service;
using tcp = boost::asio::ip::tcp;
using udp = boost::asio::ip::udp;
boost::system::error_code error;
tcp::resolver tcp_resolver(io_service);
udp::resolver udp_resolver(io_service);
// daytime is 13/tcp and 13/udp
tcp_resolver.resolve(tcp::resolver::query("daytime"), error);
assert(!error);
udp_resolver.resolve(udp::resolver::query("daytime"), error);
assert(!error);
// shell is 514/tcp
tcp_resolver.resolve(tcp::resolver::query("shell"), error);
assert(!error);
udp_resolver.resolve(udp::resolver::query("shell"), error);
assert(error);
// syslog is 514/udp
tcp_resolver.resolve(tcp::resolver::query("syslog"), error);
assert(error);
udp_resolver.resolve(udp::resolver::query("syslog"), error);
assert(!error);
tcp_resolver.resolve(tcp::resolver::query("514"), error);
assert(!error);
udp_resolver.resolve(udp::resolver::query("514"), error);
assert(!error);
}

Force 401 response with no https

I set up mod_ossl directives in a virtual host (*:4444) to enable SSL on that particular port.
This works wonders when using https://example.com:4444; however, when using http://example.com:4444 I get the ASCII
NAK ETX SOH NUL STX STX
or
0x15 0x03 0x01 0x00 0x02 0x02
Would it be possible to force a 401 or something similar instead?