UDP Client and Server Buffer Agreement - udp

Hi, I am writing a program that will send a file from client to server over a UDP socket, using different packet sizes, for example 512 B, 1 KB and 2 KB, and I don't want to use a fixed buffer size in the receiver (server). I need some Java code that will allow both server and client to agree upon a packet size before the transfer starts. Many thanks.

Don't forget that UDP packets may be fragmented, duplicated and lost. There is a whole bunch of things to take care of, starting with retransmission of lost packets.
I hate to give a "don't do this" kind of answer, but for this one, just use TCP. And if you want some user-level "packets", you can have them with TCP as well (prefix each one with its length, that's enough).
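If you do go the TCP route, a minimal Java sketch of that length-prefix framing might look like the following. This is only an illustration of the idea; the class name is made up and error handling is omitted.

    import java.io.*;

    public class Framing {
        // Sender side: prefix each application-level "packet" with its length.
        public static void sendFramed(OutputStream rawOut, byte[] payload) throws IOException {
            DataOutputStream out = new DataOutputStream(rawOut);
            out.writeInt(payload.length);   // 4-byte length prefix
            out.write(payload);             // then the payload itself
            out.flush();
        }

        // Receiver side: read the length first, then exactly that many bytes.
        public static byte[] readFramed(InputStream rawIn) throws IOException {
            DataInputStream in = new DataInputStream(rawIn);
            int len = in.readInt();         // blocks until the 4-byte prefix arrives
            byte[] payload = new byte[len];
            in.readFully(payload);          // blocks until the whole "packet" is in
            return payload;
        }
    }

Wrap the streams of a connected Socket with these two helpers and you get discrete, ordered, reliable "packets" without having to reinvent any of UDP's missing guarantees.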

Related

How do I detect the ideal UDP payload size?

I heard a UDP payload of 508 bytes will be safe from fragmentation. I heard the real MTU is 1500, but people should use a payload of 1400 because headers will eat the rest of the bytes. I heard many packets will be fragmented anyway, so using around 64K is fine. But I want to forget about all of these and programmatically detect what gets me good latency and throughput from my local machine to my server.
I was thinking about implementing something like the sliding window that TCP has. I'll send a few UDP packets, then more and more, until packets are lost. I'm not exactly sure how to tell whether a packet was delayed vs. lost, and I'm not sure how to scale back down without going too far. Is there an algorithm typically used for this? If I know the average hop count between my machine and the server, or the average ping, is there a way to estimate the maximum delay time of a packet?
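One crude way to start on this, just a sketch and not a tuned algorithm, is to probe a few payload sizes against an echo endpoint you control and keep the largest size that still gets a timely reply. The host, port and 500 ms timeout below are placeholders, and a single probe per size will confuse one lost packet with a hard limit; a real implementation would retry each size several times and measure round-trip times rather than a simple pass/fail.

    import java.net.*;

    public class PayloadProbe {
        public static void main(String[] args) throws Exception {
            InetAddress server = InetAddress.getByName("example.com"); // placeholder echo host
            int port = 9999;                                           // placeholder echo port
            int best = 0;
            try (DatagramSocket sock = new DatagramSocket()) {
                sock.setSoTimeout(500); // treat anything slower than 500 ms as "lost" for probing purposes
                for (int size : new int[]{508, 1400, 4096, 8192, 16384, 32768, 65000}) {
                    byte[] data = new byte[size];
                    sock.send(new DatagramPacket(data, data.length, server, port));
                    try {
                        DatagramPacket reply = new DatagramPacket(new byte[size], size);
                        sock.receive(reply);   // echo came back within the timeout
                        best = size;
                    } catch (SocketTimeoutException e) {
                        break;                 // no reply: stop probing larger sizes
                    }
                }
            }
            System.out.println("Largest payload that got an echo: " + best + " bytes");
        }
    }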

What is the correct method to receive UDP data from several clients synchronously?

I have 1 server and several (maybe up to 20) clients. All clients send UDP datagrams at random times. Each datagram is quite short (about 10 bytes), but I must make sure all the data from each client is received correctly.
If I let all clients send datagrams to the same port, and client B sends its datagram at the exact time when the server is receiving data from client A, it seems the server will miss the data from client A.
So what's the correct method to do this job? Do I need to create a listener for each of the 20 clients?
When you bind a UDP socket to a port, the networking stack will allocate a buffer for a finite number of incoming UDP packets for you, so that (assuming you call recv() in a relatively timely manner) no incoming packets should get lost.
If you want to see your buffer sizes in the terminal, you can take a look at:
/proc/sys/net/core/rmem_default for receive
and
/proc/sys/net/core/wmem_default for send
I think the default buffer size on Linux is 131071 bytes.
On Linux, you can change the UDP buffer size (e.g. to 26214400) by (as root):
sysctl -w net.core.rmem_max=26214400
You can also make it permanent by adding this line to /etc/sysctl.conf:
net.core.rmem_max=26214400
Since each packet is only 10 bytes, this shouldn't be a problem.
If you are still worried about packet loss, you could implement a protocol where your client waits for an ACK from the server or else resends. Many protocols use such a feature, but this is only possible if timing allows it. For example, in streaming data it is not useful because there is no time to resend.
Or consider using TCP (if it is an option).
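A minimal stop-and-wait sketch of the ACK-and-resend idea mentioned above, in Java. The 200 ms timeout and 3-attempt limit are arbitrary choices for illustration, and sequence numbers (needed to detect duplicates when an ACK itself is lost) are omitted for brevity.

    import java.net.*;

    public class AckSender {
        // Send one datagram and wait for an ACK; resend a few times before giving up.
        static boolean sendWithAck(DatagramSocket sock, byte[] data,
                                   InetAddress server, int port) throws Exception {
            sock.setSoTimeout(200);                  // how long to wait for the ACK
            DatagramPacket out = new DatagramPacket(data, data.length, server, port);
            byte[] ackBuf = new byte[16];
            for (int attempt = 0; attempt < 3; attempt++) {
                sock.send(out);
                try {
                    sock.receive(new DatagramPacket(ackBuf, ackBuf.length));
                    return true;                     // got a reply; treat it as the ACK
                } catch (SocketTimeoutException e) {
                    // no ACK in time: fall through and resend
                }
            }
            return false;                            // gave up after 3 attempts
        }
    }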

File Transfer Using Pjsip

I want to develop a program in C using pjsip for peer-to-peer file transfer. As pjsip uses ICE, and ICE uses UDP, do I need to handle packet delivery assurance myself?
And since I would be sending the file by breaking it into several parts and then reassembling all the parts at the receiver's end, do I have to maintain the sequence of the packets, or can I assume that packets are delivered in the correct sequence?
With UDP you can neither assume that packets are delivered in order, nor that they are delivered exactly once, nor that they are delivered at all! So you need to come up with a protocol that does a lot of things which TCP would normally take care of. It has to reassemble the original data stream and handle all the things I listed above.
Additionally, with UDP you can cause congestion. TCP avoids that with its congestion-avoidance algorithms; with UDP you can easily send packets too fast, causing them to be dropped at an overloaded router.
All of these are non-trivial problems to solve, so I suggest you read up on the topic. I'd start with a good book about TCP.
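To give an idea of what "maintaining the sequence yourself" involves, here is a rough sketch of sequence-numbered chunks with reassembly on the receiving side. It is in Java for brevity even though the asker works in C; the structure carries over. It does not handle retransmission or missing chunks, which a real protocol would also need, and the class name is made up.

    import java.nio.ByteBuffer;
    import java.util.TreeMap;

    public class ChunkReassembler {
        private final TreeMap<Integer, byte[]> chunks = new TreeMap<>();
        private final int expectedChunks;

        public ChunkReassembler(int expectedChunks) {
            this.expectedChunks = expectedChunks;
        }

        // Sender side: prepend a 4-byte sequence number to each chunk.
        public static byte[] frame(int seq, byte[] chunk) {
            return ByteBuffer.allocate(4 + chunk.length).putInt(seq).put(chunk).array();
        }

        // Receiver side: store each chunk by its sequence number (duplicates simply overwrite).
        public void accept(byte[] datagram) {
            ByteBuffer buf = ByteBuffer.wrap(datagram);
            int seq = buf.getInt();
            byte[] chunk = new byte[buf.remaining()];
            buf.get(chunk);
            chunks.put(seq, chunk);
        }

        public boolean isComplete() {
            return chunks.size() == expectedChunks;
        }

        // Concatenate chunks in sequence order once everything has arrived.
        public byte[] reassemble() {
            int total = chunks.values().stream().mapToInt(c -> c.length).sum();
            ByteBuffer out = ByteBuffer.allocate(total);
            chunks.values().forEach(out::put);   // TreeMap iterates in ascending key order
            return out.array();
        }
    }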

Having difficulty sending small lwip packets immediately using the lwip API

I am creating a server on a ST Cortex M3 device. I am using the lwip API and FreeRTOS. All is working, but the response time is way off. I am currently using lwip 1.3.2 and FreeRTOS 7.3.
A single client connects to the server and must have some time-critical data sent frequently. These packets are on the order of 6 or so bytes. Other times, I am sending upwards of 20K.
The problem I am having is that these smaller packets seem to take forever to be sent. I assume this is because lwip is waiting for more data to be enqueued to make transmissions more efficient. I cannot wait around for 2 or 3 seconds for the data to be sent; the client is expecting the data nominally within a few microseconds or milliseconds.
I have tried using lwip_send and lwip_write. (I understand that one is the same as the other with a flag passed at the end. Just had to try...) I have tried setting TCP_NODELAY on the socket to no avail. I tried to set SO_SNDLOWAT to '1', but this always returned -1, so I do not think it is supported.
I do not want to redo all of my code using TCP RAW. Is there a way to invoke the tcp_output() function outside of TCP RAW mode? Is there any way to speed things up or is this just how slow lwip TCP with small packets is?
Any and all suggestions are welcome. Thanks.
--EDIT--
I would also like to add that once I am ready to transmit, I make sure that my TX task in FreeRTOS is at the highest priority. There are no other tasks running up to the point at which I call lwip_send/write.
I'm fairly experienced with bare-metal lwIP on Xilinx, and lwIP does not wait to send things out. It will pump packets out as fast as your interrupts are acknowledged, based on the Ethernet hardware. I've been using UDP only. What comes to mind, though, is that your problem might be on the receive end: if you are doing TCP, maybe those small packets are going out late because you are having receive issues.
What you need to do is find the lowest-level point in the code at which an Ethernet frame is transmitted and put a general-purpose output toggle on that. Then also put a general-purpose output toggle where an Ethernet packet is received. Look at the signals on a scope. If that confirms your hypothesis, then move the output toggles around to narrow down the issue. Wash, rinse and repeat until you are down to where the issue is. It's crude and time-consuming, but oftentimes this brute-force approach solves many "impossible" embedded software problems, through pure determination. Good luck!

UDP Packet size and fragments

Let's say I am trying to send data using a UDP socket. If the data is big, then I think the data is going to be divided into several packets and sent to the destination.
At the destination, if there is more than one incoming packet, how do I combine those separate packets back into the original data? Do I need a data structure that saves all the incoming UDP packets per sender? Thanks in advance.
If you are simply sending the data in one datagram, using a single send() call, then the fragmentation and reassembly will be done for you by the network (IP) layer. All you need to do is supply a large enough buffer to recv(), and if all the fragments have arrived, they will be reassembled and presented to you as a single datagram (see the short sketch after this answer).
Basically, this is the service that UDP provides you (where a "datagram" is a single block of data sent by a single send() call):
The datagram may not arrive at all;
The datagram may arrive out-of-order with respect to other datagrams;
The datagram may arrive more than once;
If the datagram does arrive, it will be complete and correct.[1]
However, if you are performing the division of the data into several UDP datagrams yourself, at the application layer, then you will of course be responsible for reassembling it too.
[1] Correct with the probability implied by the UDP checksum, anyway.
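A small Java illustration of that first case: one send() call with a payload larger than a typical MTU, and one large receive buffer on the other side. The port number is a placeholder; the IP layer does any fragmentation and reassembly underneath.

    import java.net.*;

    public class BigDatagram {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket receiver = new DatagramSocket(5000);   // placeholder port
                 DatagramSocket sender = new DatagramSocket()) {

                // Sender: one send() call, even if the payload exceeds the link MTU.
                byte[] payload = new byte[20000];
                sender.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getByName("localhost"), 5000));

                // Receiver: supply a buffer big enough for the whole datagram;
                // fragmentation and reassembly happen below the socket API.
                byte[] buf = new byte[65507];                          // maximum UDP payload size
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                receiver.receive(packet);
                System.out.println("Received one datagram of " + packet.getLength() + " bytes");
            }
        }
    }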
You should use TCP for this. TCP is for structured data that needs to arrive in a certain order without being dropped.
On the other hand, UDP is used when the packet becomes irrelevant after ~500 ms. This is used in games, telephony, and so on.
If your problem requires UDP, then you need to handle any lost, duplicate, or out-of-order packets yourself, or at least write code that is resilient to that possibility.
http://en.wikipedia.org/wiki/User_Datagram_Protocol
If you can't afford lost packets, then TCP is probably a better option than UDP, since it provides that guarantee out of the box.