We are developing a project that sends UDP packets, and we can send them successfully. However, we also want to fragment a packet if it exceeds a certain limit. The listener we have to communicate with expects packets of 1024 bytes, and depending on the content a packet may get bigger than expected; when that happens it should be fragmented, show up in Wireshark as two fragments, and be reassembled at the receiving end. I am developing in VB.NET.
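For what it is worth: if you simply hand the socket a datagram larger than the path MTU, the IP layer fragments it for you, and Wireshark will show the fragments and reassemble them; no extra code is needed for that. If instead the listener needs every datagram to be at most 1024 bytes, the split has to happen at the application layer. A minimal sketch of that splitting loop in C-style socket code follows (the socket and destination address are assumed to be set up already; MAX_CHUNK and send_in_chunks are names invented here, and the same loop ports directly to VB.NET's UdpClient):

/* Hedged sketch: split one application message into sendto() calls of at
   most 1024 bytes each. */
#include <stddef.h>
#include <sys/socket.h>

#define MAX_CHUNK 1024   /* size the listener expects */

int send_in_chunks(int sock, const struct sockaddr *dest, socklen_t destlen,
                   const unsigned char *buf, size_t len)
{
    size_t offset = 0;
    while (offset < len) {
        size_t chunk = len - offset;
        if (chunk > MAX_CHUNK)
            chunk = MAX_CHUNK;                 /* never exceed 1024 bytes */
        if (sendto(sock, buf + offset, chunk, 0, dest, destlen) < 0)
            return -1;                         /* caller inspects errno */
        offset += chunk;
    }
    return 0;
}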
Related
I want to simulate a simple client/server application streaming two or three 4K 25 fps videos over a 5G network using the OMNeT++ stack. After capturing each incoming video flow with OpenCV and encoding it with an H.264 codec, I have roughly 20 kilobytes per frame, encoded as bytes (uint8_t). Of course, capturing for each flow happens in a separate thread.
Now I want to send it to some clients over 5G using the UDP protocol. If you look at almost every open-source implementation of video streaming, the transmission step is presented very simply:
uint8_t* buffer; // encoded frame
int size; // buffer size
send(clientSocket, buffer, size, 0); // send to client
and on the client side a loop with matching recv calls pulls the data in. The same basically happens in every OMNeT++ simulation.
Of course, a UDP packet with a payload of around 20 KB will be fragmented on its way through the good old 1500-byte IPv4 MTU at the backhaul part of a standard 5G architecture. So here comes my question: would I gain anything by reducing my maximum UDP payload to, let's say, 1280 bytes, to avoid IP fragmentation and to fit the IPv6 minimum reassembly buffer size?
I'm afraid that if I just cluelessly send the encoded frame as in the code above, some fragments may be lost and my decoder (H.264 as well) on the client side may fail to decode the frame. However, the same could happen if I send my own 1280-byte packets... so the question is fairly general, even though it happens in an OMNeT++ simulation with real video files: is there any advantage to controlling the packet size before sending, or can you just cluelessly send any UDP datagram under 64 KB and chill?
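One way to at least detect fragment loss, sketched below in C, is to chunk each encoded frame into datagrams of at most 1280 bytes yourself and prepend a tiny header (frame id, chunk index, chunk count); the receiver can then discard a frame with a missing chunk instead of handing a broken frame to the decoder. The header layout and names below are invented purely for illustration, not taken from any standard:

#include <arpa/inet.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define MAX_PAYLOAD 1280                 /* stay under the usual MTUs */
#define HDR_LEN     8                    /* frame_id(4) + index(2) + count(2) */
#define CHUNK_DATA  (MAX_PAYLOAD - HDR_LEN)

int send_frame(int sock, const struct sockaddr *dst, socklen_t dlen,
               uint32_t frame_id, const uint8_t *frame, size_t size)
{
    uint16_t count = (uint16_t)((size + CHUNK_DATA - 1) / CHUNK_DATA);

    for (uint16_t i = 0; i < count; i++) {
        size_t off  = (size_t)i * CHUNK_DATA;
        size_t data = size - off;
        if (data > CHUNK_DATA)
            data = CHUNK_DATA;

        uint8_t  pkt[MAX_PAYLOAD];
        uint32_t fid = htonl(frame_id);
        uint16_t idx = htons(i), cnt = htons(count);
        memcpy(pkt,     &fid, 4);                  /* which frame */
        memcpy(pkt + 4, &idx, 2);                  /* which chunk */
        memcpy(pkt + 6, &cnt, 2);                  /* how many chunks in total */
        memcpy(pkt + HDR_LEN, frame + off, data);  /* the payload itself */

        if (sendto(sock, pkt, HDR_LEN + data, 0, dst, dlen) < 0)
            return -1;
    }
    return 0;
}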
I have a simple application where I send Opus packets from one client to another, say from A to B.
A reads one packet from an Opus file and sends it to B.
After 20 ms or 30 ms it reads one more packet and sends it to B, and so on.
Until now I was using RTP over UDP, so on the receiving side at B, when I receive a packet, I receive the complete packet. After receiving the complete packet I write it to a new file.
This works fine.
Now I am planning to support RTP over TCP.
A will read a complete packet from the Opus file and send it to B.
When the packet is received at B, it may arrive as a single chunk or as multiple chunks (TCP behaviour). My requirement is that I buffer the data until I have received the complete packet. Once I have the complete packet, I will write it to a file.
Now my question is: how do I determine the length of an Opus packet at B while receiving, so that I can buffer it?
I do not want to use libopus etc. if I can somehow avoid it. Can I determine the length of a packet from the received data alone?
TCP is a stream protocol. You have two primary choices: add a length word (16 bits is enough) before each Opus packet (read the length, then read the packet, dealing with buffering, i.e. waiting until enough bytes have arrived for each read), or pad every Opus packet to a fixed size. Opus doesn't use fixed-size packets; their size depends on the content and on the bitrate and quality settings.
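A minimal sketch of the length-prefix variant in C, assuming blocking BSD sockets; read_exact and write_exact are helpers written here only for illustration:

#include <arpa/inet.h>
#include <stdint.h>
#include <unistd.h>

/* TCP gives no message boundaries, so read exactly n bytes, looping as needed. */
static int read_exact(int fd, uint8_t *buf, size_t n)
{
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, buf + got, n - got);
        if (r <= 0)
            return -1;                          /* error or peer closed */
        got += (size_t)r;
    }
    return 0;
}

static int write_exact(int fd, const uint8_t *buf, size_t n)
{
    size_t put = 0;
    while (put < n) {
        ssize_t w = write(fd, buf + put, n - put);
        if (w <= 0)
            return -1;
        put += (size_t)w;
    }
    return 0;
}

/* Sender: 16-bit big-endian length, then the Opus packet itself. */
int send_opus_packet(int fd, const uint8_t *pkt, uint16_t len)
{
    uint16_t be_len = htons(len);
    if (write_exact(fd, (const uint8_t *)&be_len, 2) < 0)
        return -1;
    return write_exact(fd, pkt, len);
}

/* Receiver: read the length, then buffer until the whole packet has arrived. */
int recv_opus_packet(int fd, uint8_t *pkt, size_t max)
{
    uint16_t be_len;
    if (read_exact(fd, (uint8_t *)&be_len, 2) < 0)
        return -1;
    uint16_t len = ntohs(be_len);
    if (len > max || read_exact(fd, pkt, len) < 0)
        return -1;
    return (int)len;                            /* Opus bytes received */
}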
I have one server and several (maybe up to 20) clients. All clients send UDP datagrams at random times. Each datagram is quite short (about 10 bytes), but I must make sure all the data from each client is received correctly.
If I let all clients send datagrams to the same port, and client B sends its datagram at the exact moment the server is receiving data from client A, it seems the server will miss the data from client A.
So what's the correct method to do this job? Do I need to create a listener for each of the 20 clients?
When you bind a UDP socket to a port, the networking stack allocates a buffer for a finite number of incoming UDP packets for you, so that (assuming you call recv() in a relatively timely manner) no incoming packets should get lost.
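For example (a minimal sketch, assuming IPv4 and an arbitrary port 5000): one socket bound to one port serves all clients, and recvfrom() tells you which client each 10-byte datagram came from, so there is no need for 20 separate listeners.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in local = { 0 };
    local.sin_family      = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port        = htons(5000);            /* example port */
    if (bind(sock, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind");
        return 1;
    }

    for (;;) {
        unsigned char buf[64];                      /* plenty for 10-byte datagrams */
        struct sockaddr_in peer;
        socklen_t plen = sizeof(peer);
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &plen);
        if (n < 0)
            continue;                               /* handle errors as needed */

        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
        printf("%zd bytes from %s:%u\n", n, ip, ntohs(peer.sin_port));
    }
}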
If you want to see your buffer sizes in a terminal, you can take a look at:
/proc/sys/net/core/rmem_default for recv
and
/proc/sys/net/core/wmem_default for send
I think the default buffer size on Linux is 131071 bytes.
On Linux, you can change the UDP buffer size (e.g. to 26214400) by running (as root):
sysctl -w net.core.rmem_max=26214400
You can also make it permanent by adding this line to /etc/sysctl.conf:
net.core.rmem_max=26214400
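A per-socket alternative (or complement) is SO_RCVBUF; a small sketch is below. Note that Linux caps the effective value at net.core.rmem_max, so the sysctl above still matters, and getsockopt() reports the doubled value the kernel actually books.

#include <stdio.h>
#include <sys/socket.h>

void grow_recv_buffer(int sock)
{
    int size = 26214400;                             /* same example value */
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    int actual = 0;
    socklen_t len = sizeof(actual);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &actual, &len);
    printf("receive buffer is now %d bytes\n", actual);
}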
Since each packet is only 10 bytes, this shouldn't be a problem.
If you are still worried about packet loss, you could implement a protocol where your client waits for an ACK from the server and resends if none arrives. Many protocols use such a feature, but it is only possible if timing allows it; for streaming data, for example, it is not useful because there is no time to resend.
Or consider using TCP (if it is an option).
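A rough sketch of the ACK-and-resend idea (stop-and-wait): the 200 ms timeout, the 5 retries and the one-byte 'A' acknowledgement below are arbitrary choices for illustration, and the socket is assumed to talk to a single server.

#include <stddef.h>
#include <sys/socket.h>
#include <sys/time.h>

int send_reliable(int sock, const struct sockaddr *dst, socklen_t dlen,
                  const void *data, size_t len)
{
    struct timeval tv = { 0, 200000 };               /* wait 200 ms for the ACK */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    for (int attempt = 0; attempt < 5; attempt++) {
        if (sendto(sock, data, len, 0, dst, dlen) < 0)
            return -1;

        char ack;
        if (recv(sock, &ack, 1, 0) == 1 && ack == 'A')
            return 0;                                /* server confirmed receipt */
        /* timed out or unexpected byte: fall through and resend */
    }
    return -1;                                       /* gave up after 5 attempts */
}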
I have an embedded device (source) which sends out a stream of (audio) data in chunks of 20 ms (about 330 bytes each) by means of UDP packets. The network volume is thus fairly low, about 16 kB/s (in practice somewhat more due to UDP/IP overhead). The device runs the lwIP stack (v1.3.2) and connects to a WiFi network using a WiFi solution from H&D Wireless (HDG104, WiFi G-mode). The destination (sink) is a Windows Vista PC which is also connected to the WiFi network using a USB WiFi dongle (WiFi G-mode). A program running on the PC lets me monitor the number of dropped packets. I am also running Wireshark to analyze the network traffic directly. No other clients are actively sending data over the network at this point.
When I send the data using broadcast or multicast, many packets are dropped, sometimes up to 15%. However, when I switch to UDP unicast, the number of dropped packets is negligible (< 2%).
Using UDP I expect packets to be dropped (which is OK in my audio application), but why do I see such a big difference in performance between broadcast/multicast and unicast?
My router is a WRT54GS (FW v7.50.2) and the PC (sink) uses a TRENDnet TEW-648UB network adapter, running in WiFi G-mode.
This looks like a well-known WiFi issue:
Quoted from http://www.wi-fiplanet.com/tutorials/article.php/3433451
The 802.11 (Wi-Fi) standards specify support for multicasting as part of asynchronous services. An 802.11 client station, such as a wireless laptop or PDA (not an access point), begins a multicast delivery by sending multicast packets in 802.11 unicast data frames directed to only the access point. The access point responds with an 802.11 acknowledgement frame sent to the source station if no errors are found in the data frame.
If the client sending the frame doesn't receive an acknowledgement, then the client will retransmit the frame. With multicasting, the leg of the data path from the wireless client to the access point includes transmission error recovery. The 802.11 protocols ensure reliability between stations in both infrastructure and ad hoc configurations when using unicast data frame transmissions.
After receiving the unicast data frame from the client, the access point transmits the data (that the originating client wants to multicast) as a multicast frame, which contains a group address as the destination for the intended recipients. Each of the destination stations can receive the frame; however, they do not respond with acknowledgements. As a result, multicasting doesn't ensure a complete, reliable flow of data.
The lack of acknowledgments with multicasting means that some of the data your application is sending may not make it to all of the destinations, and there's no indication of a successful reception. This may be okay, though, for some applications, especially ones where it's okay to have gaps in data. For instance, the continual streaming of telemetry from a control valve monitor can likely miss status updates from time to time.
This article has more information:
http://hal.archives-ouvertes.fr/docs/00/08/44/57/PDF/RR-5947.pdf
One very interesting side-effect of the multicast implementation (at the WiFi MAC layer) is that as long as your receivers are wired, you will not experience any issues (due to the acknowledgement on the receiver side, which is really a unicast connection). However, with WiFi receivers (as in my case), packet loss is enormous and completely unacceptable for audio.
Multicast does not have ACK packets, and so there is no retransmission of lost packets. This makes perfect sense: there are many receivers, and it's not as if they can all reply at the same time (the air is shared, like coax Ethernet). If they were all to send ACKs in sequence using some backoff scheme, it would eat all your bandwidth.
UDP streaming with packet loss is a well-known challenge and is usually solved using some type of forward error correction. Recently a class of codes known as fountain codes, such as RaptorQ, has shown promise for the packet-loss problem, in particular when there are several unreliable sources for the data at the same time (for example, multiple WiFi access points covering an area).
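To illustrate the forward-error-correction idea in its simplest possible form (this is plain XOR parity, not RaptorQ; it can repair at most one lost packet per group of equal-sized packets):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Build the parity packet for k data packets of pkt_len bytes each. */
void build_parity(const uint8_t *packets[], int k, size_t pkt_len,
                  uint8_t *parity)
{
    memset(parity, 0, pkt_len);
    for (int i = 0; i < k; i++)
        for (size_t b = 0; b < pkt_len; b++)
            parity[b] ^= packets[i][b];
}

/* Recover the one missing packet (index 'lost') from the k-1 survivors
   plus the parity packet; with more than one loss this cannot work. */
void recover_lost(const uint8_t *packets[], int k, size_t pkt_len,
                  const uint8_t *parity, int lost, uint8_t *out)
{
    memcpy(out, parity, pkt_len);
    for (int i = 0; i < k; i++)
        if (i != lost)
            for (size_t b = 0; b < pkt_len; b++)
                out[b] ^= packets[i][b];
}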
We use an embedded device to send packets from a serial port over a serial-to-Ethernet converter to a server. One manufacturer we use, Moxa, will always send the packets in the same manner in which they are constructed: if we construct a packet of size 255, it will always be delivered as a single 255-byte payload. With the other manufacturer, Tibbo, if we send a 255-byte packet, it will break the packet up whenever it is greater than 128 bytes. This is the answer I received from the Tibbo engineers at the time:
"From what I understand and what the
engineers said, even if the other
devices provide you with the right
packet size now does not guarantee
that when implemented in other
networks the same will happen. This
is the reason why we feel that packet
size based data transfer through TCP
is not reliable as it was not the way
TCP was designed to be used."
I understand that this may not be how TCP was designed to be used, but if I create a packet of 255 bytes and TCP allows it, then how is this outside of how TCP works? I understand that at some point the packet may get broken up, but the server is expecting a certain packet size, and Moxa's offering does not have the problem that the Tibbo device does.
So, is it possible to guarantee a reasonable TCP packet size?
No. TCP is not a packet protocol, it is a stream protocol. It guarantees that the bytes you send will all arrive, and in the right order, but nothing else. In particular, TCP does not give you any kind of message or packet boundaries. If you want such things, they need to be implemented at a higher level by your protocol.
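As a sketch of what that higher level can look like when the application record really is a fixed 255 bytes: keep calling recv() until a full record has accumulated, no matter how the converter or TCP segments the bytes. RECORD_SIZE and handle_record() below are placeholders for your own values and logic.

#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

#define RECORD_SIZE 255

void handle_record(const unsigned char *record);     /* your processing logic */

int read_records(int sock)
{
    unsigned char buf[RECORD_SIZE];
    size_t filled = 0;

    for (;;) {
        ssize_t n = recv(sock, buf + filled, RECORD_SIZE - filled, 0);
        if (n <= 0)
            return -1;                               /* error or connection closed */
        filled += (size_t)n;

        if (filled == RECORD_SIZE) {                 /* one complete application packet */
            handle_record(buf);
            filled = 0;
        }
    }
}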