LWIP PBUF, extra bytes when sending UDP?

I am using lwIP in an application that needs high data rates. I allocate 4 pbufs once and store their addresses; with some hardware magic I fill them one after another, tell the program that a buffer is ready, and the software sends it as a UDP packet. However, after some time, when I sniff the packets, I find about 60 extra bytes in my packet; they look like extra UDP headers, but inside the payload.
Any workarounds or suggestions?

On my project at work, we had pbuf corruption that was causing a similar kind of issue. We were using multiple MACs of different types from Xilinx, and there was unhappiness in the pbuf department.

What I would recommend is turning on full lwIP debugging for the IP layer and possibly the UDP layer. Then manually trim the prints down to something manageable that still reproduces the problem (lwIP has a minimum print level; you can use this to trim out warnings versus serious prints). In our case we would get UDP- or IP-layer checksum errors, and that was a sign of bad stuff. It is also helpful to test in only one direction at a time, to limit the possibilities for bad stuff in each direction. We used the iperf examples from Xilinx and expanded on them; these were helpful for troubleshooting the issue.

BTW, 4 pbufs is nothing. When I have looked at Ethernet traffic, there is a ton of stuff going on: overhead, etc. There are tons of potential problems, from too few ARP table entries on up. Four pbufs is ridiculously low; if you are that strapped for memory, I feel sorry for you trying to use lwIP. That just sounds like a nightmare.

Also, be careful: prints are typically blocking, so they will mess up performance. It's wise to replace the lwIP debug prints with a non-blocking routine that you know will not clog up your real-time performance.
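As a rough illustration, the switches described above live in lwipopts.h; a sketch follows, with the caveat that depending on your port, LWIP_PLATFORM_DIAG may belong in the arch header instead, and my_nonblocking_log is a hypothetical routine you would supply:

/* lwipopts.h (sketch): enable IP/UDP debug output and raise the
   minimum print level so only warnings and errors get through. */
#define LWIP_DEBUG
#define IP_DEBUG            LWIP_DBG_ON
#define UDP_DEBUG           LWIP_DBG_ON
#define LWIP_DBG_MIN_LEVEL  LWIP_DBG_LEVEL_WARNING

/* Route the prints through a non-blocking logger instead of a
   blocking printf (my_nonblocking_log is hypothetical). */
#define LWIP_PLATFORM_DIAG(x) do { my_nonblocking_log x; } while (0)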


Moving from Qt to Boost - Networking

I've been using Qt to build UDP and TCP servers/clients. Now in a recent project Qt is not allowed and I'm supposed to use Boost. Having gone through boost::asio, I feel confused, since there are so many features and the usage is very different from Qt. For UDP (multicast, broadcast, etc.) I'm more or less clear now. But for TCP I'm having problems understanding:
(I have to do everything async.)
TCP is stream-oriented, so there is no way to know the number of bytes that have arrived. So we use asio::ip::tcp::socket::async_read_some(). However, in Qt we have readAll() for a socket, which can be made to read all data on the readyRead() signal and return a QByteArray, whose size we needn't specify since it can grow. How do I achieve this in Boost? The readyRead() signal would map roughly to async_read_some(), but what about the rest (readAll() and the growable buffer/array)?
The disconnected() signal is emitted in Qt if the TCP connection disconnects (server/client off, LAN cable unplugged, etc.). What would be the equivalent check in Boost? Do I need a timer to periodically send and expect heartbeat messages myself?
Use some overload of the async_read free function with an appropriate completion condition.
As you can see there, this function accepts an asio::streambuf, which grows automatically.
Yes, the only portable and reliable way is to send some kind of heartbeat. I don't know how Qt does it, but try cutting the network cable and see whether you get any notification from Qt...
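A minimal sketch of the streambuf approach, assuming sock and buf outlive the chain of handlers; asio::async_read with the transfer_at_least(1) completion condition behaves much like readyRead() plus readAll(), since the streambuf keeps growing as data arrives:

#include <boost/asio.hpp>

namespace asio = boost::asio;
using asio::ip::tcp;

// Keep reading until the peer closes; buf accumulates all received data.
void start_read(tcp::socket& sock, asio::streambuf& buf)
{
    asio::async_read(sock, buf, asio::transfer_at_least(1),
        [&sock, &buf](const boost::system::error_code& ec, std::size_t)
        {
            if (ec == asio::error::eof) {
                // orderly close: the nearest analogue of disconnected()
            } else if (!ec) {
                // buf.data() holds everything received so far
                start_read(sock, buf);
            }
            // any other error also means the connection is gone
        });
}

Note that eof only covers an orderly close; a pulled cable produces no error until a write fails or your own heartbeat times out, which is the point made above.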

Reliable multicast communication in LAN

I have a network application working on Linux. What I want to do is make my application able to announce its presence on the LAN and then notify other applications about certain changes. Because I don't know how many instances of my application are already running on other hosts in the LAN, I cannot use SCTP; multicast communication is the only way (or maybe you know another solution?).
The structure I want to send to the multicast address has a fixed size (320 bytes) and contains binary data, which is in fact a structure of numbers and bit flags.
I'm wondering if there are any well-known programming techniques that can make UDP communication a bit more reliable. I have only figured out two things (see the sketch after this list):
I drop all packets received by recvmsg() that are smaller than 320 bytes.
I surround every packet with a well-known header and footer, and then check them each time I receive a new message; however, the packet can still be damaged somewhere in the middle, right?
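A minimal sketch of that idea, assuming both ends agree on byte order and a packed layout (the fields below happen to need no padding); the MSG_MAGIC value is made up for illustration. Since UDP preserves datagram boundaries, you can require an exact size, check a magic header, and add your own checksum over the payload to catch damage in the middle (UDP's own 16-bit checksum is weak and sometimes disabled; a real CRC-16 would be stronger than the additive sum shown here):

#include <stdint.h>
#include <stddef.h>

#define MSG_MAGIC   0xA55A1234u   /* arbitrary value, for illustration */
#define PAYLOAD_LEN 320

struct wire_msg {
    uint32_t magic;
    uint8_t  payload[PAYLOAD_LEN];
    uint32_t sum;                 /* checksum over payload */
};

static uint32_t sum32(const uint8_t *p, size_t n)
{
    uint32_t s = 0;
    while (n--) s += *p++;
    return s;
}

int msg_valid(const struct wire_msg *m, size_t n)
{
    if (n != sizeof *m)        return 0;  /* UDP delivers whole datagrams */
    if (m->magic != MSG_MAGIC) return 0;  /* cheap framing check          */
    return sum32(m->payload, PAYLOAD_LEN) == m->sum;
}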
Edit:
I found the PGM protocol, but the only Linux implementation is known to work on x86. That's only a partial solution for me, because I want to run my program on the ARM architecture too.
You can try porting PGM to ARM; there are not that many requirements in OpenPGM. It already runs fine on IBM s390 mainframes, for example.
Disclosure: I'm the author of OpenPGM \:D/

Simple robust error correction for transmission of ASCII over serial (RS485)

I have a very low-speed data connection over serial (RS485):
9600 baud
actual data transmission rate is about 25% of that
The serial line goes through an area of extremely high EMR; peak fluctuations can reach 3000 kV.
I am not yet in a position to force a change in the physical medium, but I could easily offer to put in a simple, robust forward error correction scheme. The scheme needs to be easy to implement on a PIC18-series micro.
Ideas?
This site claims to implement Reed-Solomon on the PIC18. I've never used it myself, but perhaps it could be a helpful reference?
Search for the checksum algorithms used in the MODBUS protocol: ASCII mode uses a simple LRC, while RTU mode uses a CRC-16.
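For reference, a sketch of that CRC-16 as used by MODBUS RTU (polynomial 0x8005, bit-reflected as 0xA001, initial value 0xFFFF); it needs no lookup table, so it fits comfortably on a PIC18:

#include <stdint.h>
#include <stddef.h>

uint16_t crc16_modbus(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;               /* MODBUS initial value */
    while (len--) {
        crc ^= *data++;
        for (uint8_t i = 0; i < 8; i++)  /* one bit at a time */
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : (crc >> 1);
    }
    return crc;                          /* RTU transmits the low byte first */
}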
I develop with PIC18 devices and currently use the MCC18 and PICC18 compilers. I noticed a few weeks ago that the peripheral headers for PICC18 incorrectly map the Busy2USART() macro to the TRMT bit instead of the TRMT2 bit. This caused me major headaches for a short time before I discovered the problem. For example, a simple transmission:
putc2USART(*p_value++);   /* send the first byte                           */
while (Busy2USART());     /* wait for the transmit shift register to empty */
putc2USART(*p_value);     /* send the next byte                            */
When the Busy2USART() macro was incorrectly mapped to the TRMT bit, I was never actually waiting for bytes to leave the shift register, because I was monitoring the wrong bit. Before I found the inaccurate header file, the only way I could successfully transmit a byte over 485 was to wait 1 ms between bytes. My baud rate was 91912, and the delays between bytes killed my throughput.
I also suggest implementing a means of collision detection, plus checksums. Checksums are cheap, even on a PIC18. If you are able to listen to your own transmissions, do so: it will make you aware of collisions that can result from duplicate addresses on the same loop or from incorrect timings.
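A sketch of the listen-to-yourself idea, using C18 peripheral-library calls (check your compiler's usart.h for the exact names); it assumes the receiver stays enabled while you transmit, so each byte you send echoes back off the 485 loop:

/* Send a buffer and verify each byte echoes back. */
void tx_with_readback(const unsigned char *p, unsigned char len)
{
    while (len--) {
        unsigned char sent = *p++;
        putc2USART(sent);
        while (Busy2USART());        /* wait for the shift register to empty */
        while (!DataRdy2USART());    /* our own byte should come right back  */
        if (getc2USART() != sent) {
            /* collision or line fault: abort, back off, retry */
            return;
        }
    }
}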

How to dynamically allocate a buffer for a receiving UDP socket (VB.Net)

A friend and I are working on a project where we're required to build a reliable UDP client/server using VB.Net. We have things working well, but one thing that still eludes us is how to dynamically allocate a (byte) buffer for the incoming data. Right now we have to hard-code a maximum value/MTU (or use a really large buffer and resize it once we've finished receiving). Does anyone know of a way to do this without needing to specify the receive buffer size?
Basically, before calling the receive function on the socket with a buffer of size x, we want to know x so we can allocate an appropriately sized buffer. Perhaps this is a problem in all socket programming that you just have to deal with?
This is one of the burdens you'll have to take on when you use UDP. You'll have to consider path MTU discovery. Then again, since you are building a reliable UDP, you should be able to auto-detect this and dynamically switch to a smaller packet size; that will solve PMTUD problems as well.
Hopefully this doesn't sound too much like "those who don't use TCP are doomed to reinvent it." Check out the RFCs linked from that article for ideas.

Protocols used to talk between an embedded CPU and a PC

I am building a small device with its own CPU (AVR Mega8) that is supposed to connect to a PC. Assuming that the physical connection and passing of bytes has been accomplished, what would be the best protocol to use on top of those bytes? The computer needs to be able to set certain voltages on the device, and read back certain other voltages.
At the moment, I am thinking of a completely host-driven synchronous protocol: the computer sends requests, the embedded CPU answers. Any other ideas?
Modbus might be what you are looking for. It was designed for exactly the type of problem you have. There is lots of code and tooling out there, and adherence to a standard could mean easy reuse later. It also supports human-readable ASCII, so it is still easy to understand and test.
See FreeModbus for Windows and embedded source.
There's a lot to be said for client-server architecture and synchronous protocols. Simplicity and robustness, to start. If speed isn't an issue, you might consider a compact, human-readable protocol to help with debugging. I'm thinking along the lines of modem AT commands: a "wakeup" sequence followed by a set/get command, followed by a terminator.
Host --> [V02?] // Request voltage #2
AVR --> [V02=2.34] // Reply with voltage #2
Host --> [V06=3.12] // Set voltage #6
AVR --> [V06=3.15] // Reply with voltage #6
Each side might time out if it doesn't see the closing bracket, and they'd re-synchronize on the next open bracket, which cannot appear within the message itself.
Depending on speed and reliability requirements, you might encode the commands into one or two bytes and add a checksum.
It's always a good idea to reply with the actual voltage, rather than simply echoing the command, as it saves a subsequent read operation.
It's also helpful to define error messages, in case you need to debug.
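A sketch of the receive side of the bracket framing above (the buffer size and names are arbitrary); resynchronizing on every '[' is what makes the scheme self-healing, and a timeout could additionally reset the state:

#define CMD_MAX 16

/* Feed one received byte at a time; returns 1 when a complete
   [..] command has been copied into cmd (NUL-terminated). */
int frame_feed(char c, char cmd[CMD_MAX])
{
    static int n = -1;                 /* -1: waiting for '[' */
    if (c == '[') { n = 0; return 0; } /* resync on every open bracket */
    if (n < 0)    return 0;            /* ignore noise between frames  */
    if (c == ']') { cmd[n] = '\0'; n = -1; return 1; }
    if (n < CMD_MAX - 1) cmd[n++] = c;
    else n = -1;                       /* overflow: drop the frame */
    return 0;
}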
My vote is for the human readable.
But if you go binary, try to put a header byte at the beginning to mark the beginning of a packet. I've always had bad luck with serial protocols getting out of sync. The header byte allows the embedded system to re-sync with the PC. Also, add a checksum at the end.
I've done stuff like this with a simple binary format:

#include <stdint.h>

#define NUM_CHANNELS 6           /* the channel count is fixed */

struct PacketHdr
{
    uint8_t syncByte1;
    uint8_t syncByte2;
    uint8_t packetType;          /* tells the receiver which struct follows */
    uint8_t bytesToFollow;       /* -or- totalPacketSize */
};

struct VoltageSet
{
    struct PacketHdr hdr;
    int16_t  channelId;
    int16_t  voltageLevel;
    uint16_t crc;
};

struct VoltageResponse
{
    struct PacketHdr hdr;
    int16_t  data[NUM_CHANNELS]; /* num channels is fixed */
    uint16_t crc;
};
The sync bytes are less critical in a synchronous protocol than in an asynchronous one, but they still help, especially when the embedded system is first powering up, and you don't know if the first byte it gets is the middle of a message or not.
The type should be an enum that tells how to interpret the packet. The size could be inferred from the type, but if you send it explicitly, the receiver can handle unknown types without choking. You can use 'total packet size' or 'bytes to follow'; the latter can make the receiver code a little cleaner.
The CRC at the end adds more assurance that you have valid data. Sometimes I've seen the CRC in the header, which makes declaring structures easier, but putting it at the end lets you avoid an extra pass over the data when sending the message.
The sender and receiver should both have timeouts that start after the first byte of a packet is received, in case a byte is dropped. The PC side also needs a timeout to handle the case where the embedded system is not connected and there is no response at all.
If you are sure that both platforms use IEEE-754 floats (PCs do) and have the same endianness, then you can use floats as the data type. Otherwise it's safer to use integers: either raw A/D counts or a preset scale (e.g., 1 bit = 0.001 V gives a +/-32.767 V range).
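For instance, with that 1 mV-per-count scale, the conversion helpers on the PC side might look like the sketch below (the wire then carries only the int16_t counts; function names are made up):

#include <stdint.h>

/* 1 bit = 0.001 V, so an int16_t spans -32.768 V .. +32.767 V. */
int16_t volts_to_counts(double v)
{
    return (int16_t)(v * 1000.0 + (v < 0 ? -0.5 : 0.5));  /* round to nearest */
}

double counts_to_volts(int16_t c)
{
    return c * 0.001;
}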
Adam Liss makes a lot of great points. Simplicity and robustness should be the focus. Human readable ASCII transfers help a LOT while debugging. Great suggestions.
They may be overkill for your needs, but HDLC and/or PPP add in the concept of a data link layer, and all the benefits (and costs) that come with a data link layer. Link management, framing, checksums, sequence numbers, re-transmissions, etc... all help ensure robust communications, but add complexity, processing and code size, and may not be necessary for your particular application.
A USB bus will answer all your requirements. It might be a very simple USB device with only a control pipe for sending requests to your device, or you can add an interrupt pipe that will allow you to notify the host about changes in your device.
There are a number of simple USB controllers that could be used, for example from Cypress or Microchip.
The protocol on top of the transport really comes down to your requirements. From your description it seems a simple synchronous protocol is definitely enough. What makes you wonder and look for an additional approach? Share your doubts and we will try to help :).
If I weren't expecting to need efficient binary transfers, I'd go for the terminal-style interface already suggested.
If I do want a binary packet format, I tend to use something loosely based on the PPP byte-async HDLC framing, which is extremely simple and easy to send and receive. Basically:
Packets start and end with 0x7e
You escape a char by prefixing it with 0x7d and toggling bit 5 (i.e. xor with 0x20)
So 0x7e becomes 0x7d 0x5e
and 0x7d becomes 0x7d 0x5d
Every time you see a 0x7e, then if you've got any data stored, you can process it.
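A sketch of both directions (buffer management left out for brevity; names are arbitrary):

#include <stdint.h>
#include <stddef.h>

#define FLAG 0x7E   /* frame delimiter */
#define ESC  0x7D   /* escape prefix   */

/* Encode: delimit with FLAG, escape FLAG/ESC by prefixing ESC and
   toggling bit 5. out needs room for 2*len + 2 bytes in the worst case. */
size_t hdlc_stuff(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t o = 0;
    out[o++] = FLAG;
    for (size_t i = 0; i < len; i++) {
        uint8_t b = in[i];
        if (b == FLAG || b == ESC) { out[o++] = ESC; b ^= 0x20; }
        out[o++] = b;
    }
    out[o++] = FLAG;
    return o;
}

/* Decode: feed one received byte at a time; returns the frame length
   when a FLAG closes a non-empty frame, 0 otherwise. */
size_t hdlc_feed(uint8_t b, uint8_t *frame, size_t max)
{
    static size_t n = 0;
    static int esc = 0;
    if (b == FLAG) { size_t done = n; n = 0; esc = 0; return done; }
    if (b == ESC)  { esc = 1; return 0; }
    if (esc) { b ^= 0x20; esc = 0; }
    if (n < max) frame[n++] = b;
    return 0;
}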
I usually do host-driven synchronous stuff unless I have a very good reason to do otherwise. It's a technique which extends from simple point-to-point RS232 to multidrop RS422/485 without hassle - often a bonus.
As you may have already determined from all the responses not directly pointing you to a protocol, a roll-your-own approach may be your best choice.
So, this got me thinking and well, here are a few of my thoughts --
Given that this chip has 6 ADC channels, most likely you are using RS-232 serial comms (a guess from your question), and of course code space is limited, so defining a simple command structure will help, as Adam points out. You may wish to keep input processing at the chip to a minimum, so binary sounds attractive, but the trade-off is in ease of development AND servicing (you may have to troubleshoot a dead input 6 months from now), and HyperTerminal is a powerful debug tool. So that got me thinking about how to implement a simple command structure with good reliability.
A few general considerations --
keep commands the same size - it makes decoding easier.
Framing the commands, plus an optional checksum, can easily be wrapped around your commands, as Adam points out. (With small commands, a simple XOR/ADD checksum is quick and painless; see the sketch after the free-run example below.)
I would recommend a start-up announcement to the host with the firmware version at reset - e.g., "HELLO; Firmware Version 1.00z" - which tells the host that the target just started and what's running.
If you are primarily monitoring, you may wish to consider a "free run" mode where the target simply cycles through the analog and digital readings. Of course, this doesn't have to be continuous; it can be spaced at 1, 5, or 10 seconds, or happen only on command. Your micro is always listening, so sending an updated value is an independent task.
Terminating each output line with a CR (or another character) makes synchronization at the host straightforward.
For example, your micro could simply output the strings:
V0=3.20
V1=3.21
V2= ...
D1=0
D2=1
D3=...
and then start over --
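A sketch of one such CR-terminated report line, with the XOR checksum mentioned above appended NMEA-style as two hex digits; values are in millivolts to keep floats off the micro, the names are made up, and printf is assumed to be wired to the UART:

#include <stdio.h>

/* Emit e.g. "V0=3.200*5A\r" for channel 0 at 3200 mV. */
void report_line(unsigned char ch, unsigned int millivolts)
{
    char line[16];
    int n = sprintf(line, "V%u=%u.%03u", (unsigned)ch,
                    millivolts / 1000u, millivolts % 1000u);
    unsigned char sum = 0;
    for (int i = 0; i < n; i++)   /* XOR checksum over the body */
        sum ^= (unsigned char)line[i];
    printf("%s*%02X\r", line, sum);
}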
Also, commands could be really simple:
? - Read all values. There aren't that many of them, so get them all.
X=12.34 - To set a value, the first byte is the port, then the voltage. I would recommend keeping the "=" and the "." as framing to ensure a valid packet if you forgo the checksum.
Another possibility: if your outputs are within a set range, you could prescale them. For example, if the output doesn't have to be exact, you could send something like
5=0
6=9
2=5
which would set port 5 off, port 6 to full on, and port 2 to half value. With this approach, ASCII and binary data are just about on the same footing in regard to computing/decoding resources at the micro. For more precision, make the output 2 bytes, e.g., 2=54. Or add a cross-reference table, and the values don't even have to be linear: the data byte becomes an index into a look-up table...
As I like to say: simple is usually better, unless it's not.
Hope this helps a bit.
Had another thought while re-reading: adding a "*" command could request the data wrapped in HTML tags, and now your host app could simply redirect the output from your micro to a browser and voila, browser-ready --
:)