Having difficulty sending small lwip packets immediately using the lwip API

I am creating a server on a ST Cortex M3 device. I am using the lwip API and FreeRTOS. All is working, but the response time is way off. I am currently using lwip 1.3.2 and FreeRTOS 7.3.
A single client connects to the server and must have some time-critical data sent frequently. These packets are on the order of 6 or so bytes. Other times, I am sending upwards of 20K.
The problem I am having is that these smaller packets seem to take forever to be sent. I assume this is because lwip is waiting for more data to be enqueued to make transmissions more efficient (Nagle's algorithm). I cannot wait around for 2 or 3 seconds for the data to be sent; the client is expecting the data nominally within a few microseconds or milliseconds.
I have tried using lwip_send and lwip_write. (I understand that one is the same as the other with a flag passed at the end. Just had to try...) I have tried setting TCP_NODELAY on the socket to no avail. I tried to set SO_SNDLOWAT to '1', but this always returned -1, so I do not think it is supported.
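For reference, my TX path is essentially the following (a simplified sketch; the function name, socket variable and buffer are made up for illustration, and error handling is stripped):

    #include "lwip/sockets.h"

    /* Sketch of the TX path; 'client_sock' is the accepted connection,
       'tx_buf' is the ~6 byte time-critical message. */
    static void send_urgent(int client_sock, const void *tx_buf, int len)
    {
        int flag = 1;

        /* Disable Nagle so small segments are not held back waiting
           for more data or for the previous segment to be ACKed. */
        lwip_setsockopt(client_sock, IPPROTO_TCP, TCP_NODELAY,
                        &flag, sizeof(flag));

        lwip_send(client_sock, tx_buf, len, 0);
    }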
I do not want to redo all of my code using TCP RAW. Is there a way to invoke the tcp_output() function outside of TCP RAW mode? Is there any way to speed things up or is this just how slow lwip TCP with small packets is?
Any and all suggestions are welcome. Thanks.
--EDIT--
I would also like to add that once I am ready to transmit, I make sure that my TX task in FreeRTOS is at the highest priority. There are no other tasks running up to the point at which I call lwip_send/write.

I'm fairly experienced with bare metal lwIP on Xilinx, and lwIP does not wait to send things out. It will pump packets out as fast as your interrupts are acknowledged, based on the Ethernet hardware. I've been using UDP only. What comes to mind, though, is that your problem might be on the receive end. If you are doing TCP, maybe those small packets are coming out late because you are having receive issues. What you need to do is find the lowest-level point in the code at which the Ethernet transmit happens and put a general-purpose output toggle on that. Then also put a general-purpose output toggle on the point where an Ethernet packet is received. Look at the signals on a scope. If that confirms your hypothesis, then move the output toggles around to narrow down the issue. Wash, rinse and repeat until you are down to where the issue is. It's crude and time consuming, but oftentimes this brute-force approach solves many "impossible" embedded software problems, through pure determination. Good luck!
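To make that concrete, something along these lines is enough (only a sketch: in a typical lwIP port the driver entry points are low_level_output()/low_level_input() in ethernetif.c, and the header and GPIO register below are placeholders for whatever your ST part and board actually use):

    /* Debug pin toggles for scoping the TX/RX path. The header and GPIO
       register below are assumptions for an STM32 Cortex-M3 part; use
       whatever your ST library and a free pin on your board provide. */
    #include "stm32f10x.h"

    static inline void debug_tx_toggle(void) { GPIOB->ODR ^= (1u << 0); }
    static inline void debug_rx_toggle(void) { GPIOB->ODR ^= (1u << 1); }

    /* Call debug_tx_toggle() as the first statement of low_level_output()
       in your port's ethernetif.c, and debug_rx_toggle() where the RX
       interrupt (or low_level_input()) pulls a frame off the MAC. Then
       compare both traces on a scope against the moment you call
       lwip_send(). */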

Related

Losing data with UDP over WiFi when multicasting

I'm currently working on a network protocol which includes a client-to-client system with auto-discovery of clients on the current local network.
Right now, I'm periodically broadcasting over 255.255.255.255, and if a client doesn't emit anything for 30 seconds I consider it dead (then offline). The goal is to keep an up-to-date list of running clients. It's working well using UDP, but UDP does not ensure that the packets have been successfully delivered. So when it comes to the WiFi parts of the network, I sometimes have "false positives" of dead clients. Currently I've reduced the time between two broadcasts to work around the issue (still not working well), but I don't find this clean.
Is there anything I can do to keep a list of "online" clients without this risk of false positives?
To minimize false positives due to dropped packets, you should alter the logic of your heartbeat protocol a little.
Rather than relying on a single broadcast packet every N seconds, you can send a burst of 3 or more packets immediately one after the other every N seconds. This is the approach that the ping and traceroute tools follow. With this method you significantly decrease the probability of a lost announcement from a peer.
Furthermore, you can specify a certain number of lost announcements that your application can afford before declaring a peer dead. Also, in order to minimize the possibility of packet loss on the wireless network, keep the broadcast UDP packet as small as possible.
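A minimal sketch of the burst idea with plain BSD sockets (the port number, burst size and payload are placeholders):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    #define BURST_SIZE    3        /* packets per announcement burst */
    #define ANNOUNCE_PORT 5000     /* placeholder port */

    /* Broadcast one announcement burst; call this every N seconds.
       'sock' must already have SO_BROADCAST enabled via setsockopt(). */
    static void announce(int sock, const char *client_id)
    {
        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family      = AF_INET;
        dst.sin_port        = htons(ANNOUNCE_PORT);
        dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);   /* 255.255.255.255 */

        for (int i = 0; i < BURST_SIZE; i++) {
            /* keep the payload as small as possible: just the client id */
            sendto(sock, client_id, strlen(client_id), 0,
                   (struct sockaddr *)&dst, sizeof(dst));
        }
    }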
You can also turn this around, so that the server broadcasts a "ServerIsUp" message and every client then registers with the server. When a client is going offline it unregisters; otherwise you can consider it alive.

Using WinPCap for UDP receiving

I would like to use the WinPCap library for "reliable" UDP receiving in my C++ application. All examples that I found use this library for capturing and then processing. Is there any way (an example) of configuring PCap for streaming mode, receiving only UDP and only on a user-defined port, or another way to solve this? At the moment I have a reliable UDP server able to receive 0.5 Gb/s, but on a slower PC I have packet loss: I can see the packets in Ethereal but not in the application.
thanks
vsm
I assume that you have already tried all of the more standard methods of increasing the number of datagrams that you are able to process? Things like increasing the recv buffer size, speeding up the processing that you do per datagram and using IOCP to allow you to bring more threads to bear on the problem or using RIO if you can target Windows 8?
If so then using WinPCap might work but it sounds like a bit of an extreme solution.
What you need to do is create a filter so that you only capture the datagrams that you are interested in... The docs include examples which use filters.
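Something along these lines, using the standard pcap calls (only a sketch: the adapter name and port are placeholders, and you would normally enumerate adapters with pcap_findalldevs()):

    #include <pcap.h>
    #include <stdio.h>

    /* Only the datagrams matching the filter are delivered here. */
    static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                          const u_char *bytes)
    {
        /* 'bytes' holds Ethernet + IP + UDP headers followed by the payload */
        (void)user; (void)h; (void)bytes;
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];

        /* adapter name is a placeholder; enumerate with pcap_findalldevs() */
        pcap_t *p = pcap_open_live("\\Device\\NPF_{...}", 65536, 0, 1, errbuf);
        if (p == NULL) { fprintf(stderr, "%s\n", errbuf); return 1; }

        struct bpf_program fp;
        if (pcap_compile(p, &fp, "udp dst port 12345", 1, 0) == 0)
            pcap_setfilter(p, &fp);

        pcap_loop(p, -1, on_packet, NULL);   /* runs until pcap_breakloop() */
        pcap_close(p);
        return 0;
    }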
I have the server from here: http://www.gamedev.net/topic/533159-article-using-udp-with-iocp/. This code works with IOCP. It works fine on Windows XP; there is no problem receiving 0.5 Gb/s. But now on Win7 it is a little unreliable: sometimes there are packet ordering errors. (My device generates UDP packets and the payload contains a PacketNumber, a number increasing with each packet. When an error occurs I write all the packet numbers to a file, and I can see for example: 10, 11, 290, 13, 14...). Are there any known differences between WinXP and Win7 for IOCP and multithreading? Or do you know of any free UDP server with IOCP processing?
In the processing loop I am only adding packets to a buffer and checking their numbers.

multicast packet loss - running two instances of the same application

On Red Hat Linux, I have a multicast listener listening to a very busy multicast data source. It runs perfectly by itself, with no packet losses. However, once I start a second instance of the same application with exactly the same settings (same src/dst IP address, socket buffer size, user buffer size, etc.), I start to see very frequent packet losses from both instances, and they lose exactly the same packets. If I stop one of the instances, the remaining one returns to normal without any packet loss.
Initially, I thought it was a CPU/kernel load issue; maybe it could not get the packets out of the buffer quickly enough. So I did another test. I kept one instance of the application running, but then started a totally different multicast listener on the same computer, using a second NIC and listening to a different and even busier multicast source. Both applications ran fine without any packet loss.
So it looks like one NIC is not powerful enough to support two multicast applications, even though they listen to exactly the same thing. A possible cause of the packet loss might be that, in this scenario, the NIC driver needs to copy the incoming data to two socket buffers, and this extra copy is too much for it to handle, so it drops packets. Any deeper analysis of this issue, and any possible solutions?
Thanks
You are basically finding out that the kernel is inefficient at fan-out of multicast packets. In the worst-case scenario, for every incoming packet the code allocates two new buffers (the SKB object and the packet payload) and copies the NIC buffer twice.
Take the best-case scenario: for every incoming packet a new SKB is allocated, but the packet payload is shared between the two sockets with reference counting. Now imagine what happens when the two applications are each on their own core, reading from separate sockets. Every reference to the packet payload is going to cause the memory bus to stall whilst both cores' caches have to flush and reload, and on top of that each application has to context switch into and out of the kernel to pull the payload off its socket. The result is terrible performance.
You aren't the first to encounter such a problem and many vendors have created solutions to it. The basic design is to limit the incoming data to one thread on one core on one socket, then have that thread distribute the data to all other interested threads, preferably using user space code building upon shared memory and lockless data structures.
Examples are TIBCO's Rendezvous and 29 West's Ultra Messaging showing a 660ns IPC bus:
http://www.globenewswire.com/newsroom/news.html?d=194703
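A bare-bones sketch of that topology, using threads and a mutex-protected ring purely to keep it short (a real product would use shared memory and lock-free structures across processes, as the vendors above do; every name below is invented):

    #include <pthread.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    #define SLOTS   1024
    #define PKT_MAX 2048

    struct pkt { size_t len; unsigned char data[PKT_MAX]; };

    static struct pkt      ring[SLOTS];
    static unsigned        head;                 /* next slot the producer writes */
    static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    /* The only thread that touches the multicast socket. */
    static void *receiver_thread(void *arg)
    {
        int sock = *(int *)arg;                  /* already joined to the group */
        unsigned char buf[PKT_MAX];
        for (;;) {
            ssize_t n = recv(sock, buf, sizeof(buf), 0);
            if (n <= 0)
                continue;
            pthread_mutex_lock(&lock);
            ring[head % SLOTS].len = (size_t)n;  /* slow consumers miss old slots */
            memcpy(ring[head % SLOTS].data, buf, (size_t)n);
            head++;
            pthread_cond_broadcast(&nonempty);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    /* Each consumer keeps its own cursor, so every consumer sees every
       packet without opening a second socket on the same group. */
    static void *consumer_thread(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        unsigned cursor = head;
        pthread_mutex_unlock(&lock);
        for (;;) {
            struct pkt copy;
            pthread_mutex_lock(&lock);
            while (cursor == head)
                pthread_cond_wait(&nonempty, &lock);
            copy = ring[cursor % SLOTS];         /* copy out under the lock */
            cursor++;
            pthread_mutex_unlock(&lock);
            /* ... hand 'copy' to the application here ... */
        }
        return NULL;
    }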

MacOS: strange delay between UDP/TCP packets

I am developing an application that sends data over UDP, using the AsyncUDPSocket class, to another client on Mac and Windows. It is very important that packets arrive instantly.
The problem is that approximately every 1000 packets I get a delay of about 2 seconds when receiving packets. A delay of 100-200 ms would be OK, but 2 seconds produces a bad user experience.
I have the UDP communication in a separate thread, so it is little affected by user interaction with the UI and such. I have already tried sending packets faster, slower, and with different packet sizes: the delay stays there. I tried using TCP instead of UDP - same result :(
It does not seem to happen on Windows clients.
Maybe there is some system buffer in MacOS that needs to be flushed every time it has N packets or N bytes of data?
Has anyone an idea how can I prevent the delay from happening?
There are a lot of things that can slow down a network program temporarily, it's hard to know where to start. Have you tried this on multiple networks? Both wireless and ethernet networks? What kind of switch do you have? Does this happen on different OS X computers, or just on one? Can you reproduce the delay with a simpler command line program? Are you using garbage collection? (Grasping at straws here...)
Just out of curiosity, I tested the round-trip time on UDP echo packets sent from my Mac to another computer on the same LAN. Out of over 60,000 UDP packets with a 1,000-byte payload, none of them took longer than 32 ms, the mean round trip was 0.6 ms, and the sample deviation was 0.21 ms.
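For reference, that kind of measurement needs nothing more than a loop like the one below against a UDP echo server (a sketch rather than the exact program I used; the peer address is a placeholder):

    /* Quick round-trip probe: send a 1000 byte datagram to a UDP echo
       server and time the reply. The peer address is a placeholder. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(7);                         /* echo service */
        inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr); /* placeholder peer */
        connect(s, (struct sockaddr *)&dst, sizeof(dst));

        char payload[1000] = {0}, reply[1000];
        for (int i = 0; i < 60000; i++) {
            struct timeval t0, t1;
            gettimeofday(&t0, NULL);
            send(s, payload, sizeof(payload), 0);
            recv(s, reply, sizeof(reply), 0);              /* blocks for the echo */
            gettimeofday(&t1, NULL);
            double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                        (t1.tv_usec - t0.tv_usec) / 1000.0;
            printf("%d %.3f\n", i, ms);   /* post-process for mean/max/deviation */
        }
        close(s);
        return 0;
    }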
(I'm also curious what you need such low latency for.)

UDP using socket API

My server uses UDP. It sends 900 bytes every 1 ms to my program automatically after being acquired. I'm using the socket API on Windows (VB6). I ran a test and I know that my program's message processing time (about 0.3 ms) is shorter than the cycle time (1 ms), so the cause should be the socket's internal buffer. I tried calling the setsockopt function to set a bigger buffer:
setsockopt(SockNum, SOL_SOCKET, SO_RCVBUF, SockBuffer(1), 1048576)
but I still lost data. How can I fix my problem?
I'm using the recv function to receive data. Would recvfrom be better?
Furthermore, I need to make a FIFO buffer for UDP. How can I do so (i.e. algorithms or examples)?
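For what it's worth, what I picture is a fixed-size ring of datagram slots, roughly like the sketch below (written in C rather than VB6 just to show the shape, with made-up names), if that is the right direction:

    /* Fixed-size datagram FIFO: the receive path pushes, the processing
       loop pops. The same layout works in VB6 with an array of byte
       buffers plus two index variables. Sized here for ~4 s of headroom
       at one 900 byte datagram per ms. */
    #include <string.h>

    #define SLOTS    4096
    #define DGRAM_SZ 900

    static unsigned char     slots[SLOTS][DGRAM_SZ];
    static int               lens[SLOTS];
    static volatile unsigned head, tail;   /* head: producer, tail: consumer */

    /* Called from the receive path. Returns 0 if full (datagram dropped). */
    int fifo_push(const unsigned char *data, int len)
    {
        if (head - tail >= SLOTS)
            return 0;
        memcpy(slots[head % SLOTS], data, (size_t)len);
        lens[head % SLOTS] = len;
        head++;          /* publish last: safe for one producer, one consumer */
        return 1;
    }

    /* Called from the processing loop. Returns the length, or 0 if empty. */
    int fifo_pop(unsigned char *out)
    {
        if (tail == head)
            return 0;
        int len = lens[tail % SLOTS];
        memcpy(out, slots[tail % SLOTS], (size_t)len);
        tail++;
        return len;
    }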
In your question you seem to be complaining about using UDP and losing data.
If you are using UDP, you are going to lose data. The way that you avoid losing data is to use TCP, not UDP. If you try to take the User Datagram Protocol and add reliable delivery of data to it, you will end up with something that has all of the flow-control and data windowing of TCP... except it won't be implemented as well as you want.
Remember, "Those who do not understand TCP are doomed to reinvent it.... poorly"