In our application, a buffer is allocated to receive SNMP traps:
unsigned char buffer[65536 - 60 - 8];
These numbers seem to relate to the maximum IP packet length and the IP and UDP header sizes.
Can you please explain why we need a buffer of this size for an SNMP trap?
SNMP allows PDUs sized up to the MTU of the network. The buffer should be as big as the largest anticipated packet, so it should probably correspond to the MTU, if possible.
For example, Ethernet allows up to 1500 byte frame payloads.
Edit: Ok, here is the formal definition from RFC 3416:
The maximum size of an SNMP message is limited to the minimum of:
(1) the maximum message size which the destination SNMP entity can accept; and,
(2) the maximum message size which the source SNMP entity can generate.
I interpreted that to be related to the MTU of the network, but of course, if fragmented packets are reassembled properly, there is no problem in receiving even larger traps.
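For illustration, the arithmetic works out like this (a rough sketch, not from the original code; MAX_TRAP_PAYLOAD and trap_sock are made-up names):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>

/* IPv4's 16-bit Total Length field caps a datagram at 65535 bytes. The IP
 * header is 20..60 bytes and the UDP header is 8 bytes, so the UDP payload
 * can never exceed 65535 - 20 - 8 = 65507 bytes. A buffer of
 * 65536 - 60 - 8 = 65468 bytes assumes a maximal 60-byte IP header, which is
 * presumably where the numbers in the question come from. */
enum { MAX_TRAP_PAYLOAD = 65535 - 20 - 8 };

static void receive_one_trap(int trap_sock)  /* trap_sock: a bound UDP socket */
{
    unsigned char buffer[MAX_TRAP_PAYLOAD];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);

    ssize_t n = recvfrom(trap_sock, buffer, sizeof(buffer), 0,
                         (struct sockaddr *)&from, &fromlen);
    if (n > 0)
        printf("received %zd-byte SNMP trap\n", n);
}

A buffer that large guarantees that even a maximally sized, reassembled trap fits; a smaller buffer only risks truncating unusually large traps.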
Maybe, if you are asking "why is the number 65536 in my code", you should ask the person who wrote it?
Related
I've been experimenting with isochronous USB transfers using WinUsb, and it turns out that WinUsb always sends data as fast as possible:
WinUsb_WriteIsochPipe packetizes the transfer buffer so that in each interval, the host can send the maximum bytes allowed per interval.
However for the kernel drivers you can apparently send shorter packets:
The MaximumPacketSize value indicates the maximum permitted size of the isochronous packet. The client driver can set the size of each isochronous packet to any value less than the MaximumPacketSize value.
I wondered how USB audio handles this. As far as I can see in the spec, they just have two alternate settings for the interface - a zero-bandwidth one, and a non-zero-bandwidth one. There is a flag that says whether the endpoint requires full-size packets or not.
So my questions are:
a) What is the best way to handle sending less than the maximum bandwidth? Should I have a whole array of alternate settings with different max packet sizes?
b) Should I expect to be able to send shorter-than-maximum packets? If so why doesn't WinUsb allow this?
Maybe you have to call WinUsb_WriteIsochPipe once for each packet you want to send. Make sure to use asynchronous I/O so you can queue up dozens or hundreds of requests ahead of time.
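If that is the route you take, a rough sketch might look like the following (assuming the documented WinUsb isochronous API; the buffer size, packet size and pipe ID are placeholders and error handling is omitted, so treat this as an outline rather than working code):

#include <windows.h>
#include <winusb.h>
#include <string.h>

#define NUM_PACKETS  8
#define PACKET_BYTES 64   /* hypothetical short payload, below wMaxPacketSize */

/* Queue one overlapped isochronous write per packet so each interval carries
 * a short packet instead of a full-sized one. */
static void send_short_isoch_packets(WINUSB_INTERFACE_HANDLE hWinUsb,
                                     UCHAR pipeId, const UCHAR *payload)
{
    UCHAR buffer[NUM_PACKETS * PACKET_BYTES];
    memcpy(buffer, payload, sizeof(buffer));

    WINUSB_ISOCH_BUFFER_HANDLE isochBuf;
    WinUsb_RegisterIsochBuffer(hWinUsb, pipeId, buffer, sizeof(buffer), &isochBuf);

    ULONG frame;
    LARGE_INTEGER ts;
    WinUsb_GetCurrentFrameNumber(hWinUsb, &frame, &ts);
    frame += 16;  /* start a little in the future; tune for your controller */

    OVERLAPPED ov[NUM_PACKETS] = {0};
    for (int i = 0; i < NUM_PACKETS; i++) {
        ov[i].hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
        /* One call per packet: each transfer is only PACKET_BYTES long. */
        WinUsb_WriteIsochPipe(isochBuf, i * PACKET_BYTES, PACKET_BYTES,
                              &frame, &ov[i]);
    }

    for (int i = 0; i < NUM_PACKETS; i++) {
        DWORD transferred;
        WinUsb_GetOverlappedResult(hWinUsb, &ov[i], &transferred, TRUE);
        CloseHandle(ov[i].hEvent);
    }
    WinUsb_UnregisterIsochBuffer(isochBuf);
}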
Would sending lots of small packets by UDP take more resources (CPU, compression by zlib, etc.)? I read here that sending one big packet of ~65 kBytes by UDP would probably fail, so I thought that sending lots of smaller packets would succeed more often, but then comes the computational overhead of using more processing power (or at least that's what I'm assuming).

The question is basically this: what is the best scenario for sending the maximum number of packets successfully while keeping computation to a minimum? Is there a specific size that works most of the time?

I'm using Erlang for the server and Enet for the client (written in C++). I'm also using zlib compression, and I send the same packets to every client (broadcasting is the term, I guess).
The maximum size of UDP payload that, most of the time, will not cause IP fragmentation is:
MTU of the host handling the PDU (in most cases it will be 1500)
- size of the IP header (20 bytes)
- size of the UDP header (8 bytes)
1500 MTU - 20 IP header - 8 UDP header = 1472 bytes
@EJP talked about 534 bytes, but I would correct that to 508. This is the number of bytes that FOR SURE will not cause fragmentation, because the minimum MTU size that a host can set is 576 and the IP header can be up to 60 bytes (508 = 576 MTU - 60 IP - 8 UDP).
By the way, I'd try to go with 1472 bytes, because 1500 is a standard-enough value.
Use 1492 instead of 1500 for the calculation if you're passing through a PPPoE connection.
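To make the arithmetic concrete, splitting an (already compressed) buffer into datagrams no larger than 1472 bytes might look like this minimal sketch; sock, send_in_chunks and SAFE_UDP_PAYLOAD are illustrative names, not part of Enet's API:

#include <stddef.h>
#include <sys/socket.h>

#define SAFE_UDP_PAYLOAD 1472  /* 1500 MTU - 20 IP header - 8 UDP header */

/* Send 'len' bytes as a series of datagrams no larger than SAFE_UDP_PAYLOAD,
 * so none of them should be fragmented on a plain Ethernet path.
 * 'sock' is assumed to be a connected UDP socket. */
static int send_in_chunks(int sock, const unsigned char *data, size_t len)
{
    size_t off = 0;
    while (off < len) {
        size_t chunk = len - off;
        if (chunk > SAFE_UDP_PAYLOAD)
            chunk = SAFE_UDP_PAYLOAD;
        if (send(sock, data + off, chunk, 0) < 0)
            return -1;  /* caller decides how to handle the error */
        off += chunk;
    }
    return 0;
}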
Would sending lots of small packets by UDP take more resources?
Yes, it would, definitely! I just did an experiment with a streaming app. The app sends 2000 frames of data each second, precisely timed. The data payload for each frame is 24 bytes. I used UDP with sendto() to send this data to a listener app on another node.
What I found was interesting. This level of activity brought my sending CPU to its knees! I went from having about 64% free CPU time to having about 5%! That was disastrous for my application, so I had to fix it. I decided to experiment with variations.
First, I simply commented out the sendto() call, to see what the packet assembly overhead looked like. About a 1% hit on CPU time. Not bad. OK... must be the sendto() call!
Then, I did a quick fakeout test... I called the sendto() API only once in every 10 iterations, but I padded the data record to 10 times its previous length, to simulate the effect of assembling a collection of smaller records into a larger one, sent less often. The results were quite satisfactory: 7% CPU hit, as compared to 59% previously. It would seem that, at least on my *NIX-like system, the operation of sending a packet is costly just in the overhead of making the call.
Just in case anyone doubts whether the test was working properly, I verified all the results with Wireshark observation of the actual UDP transmissions to confirm all was working as it should.
Conclusion: it uses MUCH less CPU time to send larger packets less often than to send the same amount of data in the form of smaller packets sent more frequently. Admittedly, I do not know what happens if UDP starts fragmenting your overly-large UDP datagram... I mean, I don't know how much CPU overhead that adds. I will try to find out (I'd like to know myself) and update this answer.
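For what it's worth, the batching variant I describe above boils down to something like this (a sketch; RECORD_BYTES, BATCH_RECORDS and push_record are made-up names for illustration):

#include <string.h>
#include <sys/socket.h>

#define RECORD_BYTES   24   /* per-frame payload from the experiment above */
#define BATCH_RECORDS  10   /* send one datagram per 10 records */

struct batch {
    unsigned char buf[RECORD_BYTES * BATCH_RECORDS];
    size_t        used;
};

/* Accumulate records and call sendto() only when the batch is full, so the
 * per-call overhead is paid once per BATCH_RECORDS records instead of once
 * per record. 'dest'/'destlen' identify the listener. */
static void push_record(int sock, struct batch *b,
                        const unsigned char record[RECORD_BYTES],
                        const struct sockaddr *dest, socklen_t destlen)
{
    memcpy(b->buf + b->used, record, RECORD_BYTES);
    b->used += RECORD_BYTES;

    if (b->used == sizeof(b->buf)) {
        sendto(sock, b->buf, b->used, 0, dest, destlen);
        b->used = 0;
    }
}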
534 bytes. That is the largest size that is required to be transmitted without fragmentation. It can still be lost altogether, of course. The overheads due to retransmission of lost packets, and the network overheads themselves, are several orders of magnitude more significant than any CPU cost.
You're probably using the wrong protocol. UDP is almost always a poor choice for data you care about transmitting. You wind up layering sequencing, retry, and integrity logic atop it, and then you have TCP.
I am using GCDAsyncUdpSocket to send/receive data to a multicast group. In the GCDAsyncUdpSocket.m file, I found the settings below and changed the value to 32768, for example. But I still can't receive any packet larger than 9216 bytes.
max4ReceiveSize = 9216;
max6ReceiveSize = 9216;
Is there another setting?
Edit:
I discovered that the GCDAsyncUdpSocket class does provide a method to set this value, called setMaxReceiveIPv4BufferSize. I tried that, but it still only received around 9216 bytes.
It would help to know exactly which operating system you are on, as the settings vary. On OS X 10.6, look at:
# sysctl net.inet.udp.maxdgram
net.inet.udp.maxdgram: 9216
However, you must keep in mind that the maximum transmission unit (MTU) of any data path will be determined by the smallest value supported by any device in the path. In other words, if just one device or software rule refuses to handle datagrams larger than a particular size, then that will be the limit for that path. Thus there could be many settings on many devices which affect this. Also note that the MTU rules for IPv4 and IPv6 are radically different, and some routers have different rules for multicast versus unicast.
In general, it is not safe to assume that any IP datagram larger than a total of 576 bytes (including all protocol headers) will be allowed through, as 576 bytes is the maximum IP packet size which IPv4 guarantees will be supported. For IPv6, the guaranteed size is 1280 bytes. Most devices will support larger packets, but they are not required to.
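If it helps, the same limit can also be read programmatically on OS X with sysctlbyname (a small sketch; raising it would still require changing the sysctl as root):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

/* Query the kernel's maximum UDP datagram size on OS X; this is the same
 * value the shell command above reports (9216 by default). */
int main(void)
{
    int maxdgram = 0;
    size_t len = sizeof(maxdgram);

    if (sysctlbyname("net.inet.udp.maxdgram", &maxdgram, &len, NULL, 0) == 0)
        printf("net.inet.udp.maxdgram = %d\n", maxdgram);
    else
        perror("sysctlbyname");
    return 0;
}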
I am writing code for a USB device. Suppose the USB host starts a control read transfer to read some data from the device, and the amount of data requested (wLength in the Setup Packet) is a multiple of the Endpoint 0 max packet size. Then after the host has received all the data (in the form of several IN transactions with maximum-sized data packets), will it initiate another IN transaction to see if there is more data even though there can't be more?
Here's an example sequence of events that I am wondering about:
USB enumeration process: max packet size on endpoint 0 is reported to be 64.
SETUP-DATA-ACK transaction starts a control read transfer, wLength = 128.
IN-DATA-ACK transaction delivers first 64 bytes of data to host.
IN-DATA-ACK transaction delivers last 64 bytes of data to host.
IN-DATA-ACK with zero-length DATA packet? Does this transaction ever happen?
OUT-DATA-ACK transaction completes Status Phase of the transfer; transfer is over.
I tested this on my computer (Windows Vista, if it matters) and the answer was no: the host was smart enough to know that no more data can be received from the device, even though all the packets sent by the device were full (maximum size allowed on Endpoint 0). I'm wondering if there are any hosts that are not smart enough, and will try to perform another IN transaction and expect to receive a zero-length data packet.
I think I read the relevant parts of the USB 2.0 and USB 3.0 specifications from usb.org but I did not find this issue addressed. I would appreciate it if someone can point me to the right section in either of those documents.
I know that a zero-length packet can be necessary if the device chooses to send less data than the host requested in wLength.
I know that I could make my code flexible enough to handle either case, but I'm hoping I don't have to.
Thanks to anyone who can answer this question!
Read the USB specification carefully:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
So, in your case, when wLength == transfer size, the answer is NO, you don't need a ZLP.
In the case where wLength > transfer size and (transfer size % ep0 max packet size) == 0, the answer is YES, you need a ZLP.
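Expressed as code, those two rules amount to something like this (a sketch; the function and parameter names are mine, not from the specification):

#include <stdbool.h>

/* Decide whether a control IN data stage must be terminated with a
 * zero-length packet, per the rules quoted above.
 *   wLength       - byte count requested by the host in the Setup packet
 *   transfer_size - byte count the device actually has to send
 *   ep0_size      - wMaxPacketSize of endpoint 0
 */
static bool control_in_needs_zlp(unsigned wLength,
                                 unsigned transfer_size,
                                 unsigned ep0_size)
{
    if (transfer_size >= wLength)
        return false;                        /* host already knows we're done */
    return (transfer_size % ep0_size) == 0;  /* short transfer ending on a
                                                full packet: ZLP marks the end */
}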
In general, USB uses a less-than-max-length packet to demarcate the end of a transfer. So in the case of a transfer which is an integer multiple of the max packet length, a ZLP is used for demarcation.
You see this in bulk pipes a lot. For example, if you have a 4096-byte transfer, that will be broken down into an integer number of max-length packets plus one zero-length packet. If the SW driver has a big enough receive buffer set up, higher-level SW receives the entire transfer at once, when the ZLP occurs.
Control transfers are a special case because they have the wLength field, so ZLP isn't strictly necessary.
But I'd strongly suggest the SW be flexible enough to handle both, as you may see variations with different USB host silicon or low-level HCD drivers.
I would like to expand on MBR's answer. The USB 2.0 specification, in section 5.5.3, says:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
When a Data stage is complete, the Host Controller advances to the Status stage instead of continuing on with another data transaction. If the Host Controller does not advance to the Status stage when the Data stage is complete, the endpoint halts the pipe as was outlined in Section 5.3.2. If a larger-than-expected data payload is received from the endpoint, the IRP for the control transfer will be aborted/retired.
The sentence about the Host Controller not advancing to the Status stage seems to specifically say what the device should do: it should "halt" the pipe if the host tries to continue the data phase after it is done, and it is done once all the requested data has been transmitted (i.e. the number of bytes transferred is greater than or equal to wLength). I think halting refers to sending a STALL packet.
In other words, the device does not need a zero-length packet in this situation and in fact the USB specification says it should not provide one.
You don't have to. (*)
The whole point of wLength is to tell the host the maximum number of bytes it should attempt to read (but it might read less!).
(*) I have seen devices that crash when IN/OUT requests were made at the wrong time during control transfers (when debugging our host solution). So any host doing what you are worried about would have killed those devices and is hopefully not on the market.
I'm implementing USB on a PIC 18F2550 using a generic HID interface. I've set up the HID profile configuration to have a single 64-byte message for both inputs and outputs.
Now it's basically working. The device registers OK with Windows. I can find it in my program on the PC and can send and receive data to it. The problem is this, though - messages from the PC to the PIC are truncated to the size of the EP0 endpoint buffer.
Before I go debugging too much further I want to try to clarify my understanding of the USB protocols here and check I got it right.
Assume that the EP0 input buffer is 8 bytes. It's my understanding that the PC end will send a control packet which is 8 bytes, containing the length in bytes of the data to follow. Then it will send a sequence of 8-byte data packets, and the PIC end has to acknowledge each one.
It's my understanding that the PC end knows how big each packet may be by looking at the maximum packet size field in the device descriptor, and will divide up the message accordingly into multiple data packets.
Before I spend more hours looking at the code, can anyone confirm that this is basically correct? That if the EP0 buffer size is 8 bytes, then the PC should know this because of the configuration field I mentioned above and send multiple data packets?
If I make my receive buffer on the PIC 64 bytes then I get 64 bytes of the message which is sufficient for my needs, but I don't like not understanding why it doesn't work with small buffers, and one day I'll probably need them anyway.
Any advice or information would be welcome.
There is something called the Endpoint Descriptor, which, among other things, defines wMaxPacketSize - which is what the Host Controller Interface drivers use to subdivide a large USB transfer into smaller packets.
This is entirely different from the EP0 buffer size - which, however, is always required to be at least as large as wMaxPacketSize. My guess (try posting your usb_config.h and usb_descriptors.c, if you use the Microchip USB stack) is that you're trying to use an 8-byte EP0 with a 64-byte wMaxPacketSize, which is truncating the transfer.
Also, be aware that in USB 1.1 Low Speed, the wMaxPacketSize cannot exceed 8, and in USB 1.1 Full Speed it cannot exceed 64.
/* Endpoint Descriptor */
0x07,                    /*sizeof(USB_EP_DSC)*/
USB_DESCRIPTOR_ENDPOINT, //Descriptor type: Endpoint
HID_EP | _EP_IN,         //EndpointAddress (IN)
_INTERRUPT,              //Attributes
DESC_CONFIG_WORD(9),     //wMaxPacketSize
0x01,                    //Interval

/* Endpoint Descriptor */
0x07,                    /*sizeof(USB_EP_DSC)*/
USB_DESCRIPTOR_ENDPOINT, //Descriptor type: Endpoint
HID_EP | _EP_OUT,        //EndpointAddress (OUT)
_INTERRUPT,              //Attributes
DESC_CONFIG_WORD(9),     //wMaxPacketSize
0x01                     //Interval
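For comparison, if the intent really is full 64-byte interrupt reports, the same entries would normally declare a 64-byte wMaxPacketSize - the following is only a sketch in the same Microchip-stack style as above, not a confirmed fix for this particular descriptor set:

/* Endpoint Descriptor */
0x07,                    /*sizeof(USB_EP_DSC)*/
USB_DESCRIPTOR_ENDPOINT, //Descriptor type: Endpoint
HID_EP | _EP_IN,         //EndpointAddress (IN)
_INTERRUPT,              //Attributes
DESC_CONFIG_WORD(64),    //wMaxPacketSize: 64 bytes
0x01,                    //Interval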