Getting message length in UDP before recvfrom()

I am writing an application for Linux in C that will receive thousands of UDP messages of variable size. I need to get each message's size before reading it into a buffer with the recvfrom() syscall. I cannot simply allocate memory for the maximum possible message: since I use an MTU of 9,000 and receive thousands of messages, a lot of memory would be wasted.
From searching, I found that it is possible to get the size of a pending message with the SO_NREAD option of the getsockopt syscall, but that only works on BSD. I know the message is already sitting somewhere in the Linux kernel, because my sockets work in non-blocking mode and kernel events notify me when data is available, so the kernel must store the message length somewhere. How can I get it?
Thanks in advance

You probably want to use ioctl with FIONREAD. It's available on both Linux and BSD:
FIONREAD (int *): Get the number of bytes that are immediately available for reading.
On Linux, for UDP sockets this request (also known as SIOCINQ, see udp(7)) returns the size of the next pending datagram, or 0 when none is pending, which is exactly what you need here.
int bytes;
if (ioctl(s, FIONREAD, &bytes) != -1)
    printf("%d bytes available\n", bytes);

Related

Clear WebRTC Data Channel queue

I have been trying to use a WebRTC Data Channel for a game; however, I am unable to consistently send live player data without hitting the queue size limit (8KB) after 50-70 seconds of playing.
Since the data needs to be real-time, I have no use for data that arrives out of order. I have initialized the data channel with the following attributes:
negotiated: true,
id: id,
ordered: true,
maxRetransmits: 0,
maxPacketLifetime: 66
The MDN Docs said that the buffer cannot be altered in any way.
Is there any way I can consistently send data without exceeding the buffer space? I don't mind purging the buffer, as it only contains data that has clogged up over time.
NOTE: The data transmits fine until the buffer size exceeds the 8KB limit.
EDIT: I forgot to add that this issue only occurs when the two peers are on different networks. When both are within the same LAN, there is no buffering (higher bandwidth, I presume). I tried adding multiple data channels (8 in parallel), but this only increased the time before the failure occurred again; all 8 buffers filled up. I also tried creating a new channel each time the buffer was close to full and switching to it while closing the previous, full one, but I found out the hard way (by reading the note in the MDN docs) that the buffer space is not released immediately: the old channel keeps trying to transmit everything in its buffer, taking away precious bandwidth.
Thanks in advance.
The maxRetransmits value is ignored if the maxPacketLifetime value is set; thus, you've configured your channel to resend packets for up to 66 ms. For your application, it is probably better to use a purely unreliable channel by setting maxPacketLifetime to 0.
As Sean said, there is no way to flush the queue. What you can do is drop packets before sending them if the channel is congested:
// Drop this update instead of queueing it behind data that is already stale.
if (dc.bufferedAmount > 0)
    return;
dc.send(data);
Finally, you should realise that buffering may happen in the network as well as at the sender: any router can buffer packets when it is congested, and many routers have very large buffers (this is known as bufferbloat). The WebRTC stack should prevent you from buffering too much data in the network, but if WebRTC's behaviour is not aggressive enough for your needs, you will need to add explicit feedback from the receiver to the sender in order to avoid having too many packets in flight.
I don't believe you can flush the outbound buffer; you will probably need to watch bufferedAmount and adjust what you are sending if it grows.
Maybe handle the retransmissions yourself and discard old data if needed? WebRTC doesn't surface the SACKs from SCTP, so I think you will need to implement something yourself.
It's an interesting problem. I would love to hear the W3C WebRTC Working Group's take on it, and whether exposing more info would make things easier for you.

How to find how much RAM is used on my micro:bit using MakeCode?

I wrote code for my micro:bit on makecode.microbit.org. The idea is to use radio communication and send each micro:bit a message. So my first question was: how many messages can a single micro:bit receive?
The internet says this:
Messages received are read from a queue of configurable size (the larger the queue the more RAM is used). If the queue is full, new messages are ignored. Reading a message removes it from the queue.
So my question is: how can I find out how much RAM is used on my micro:bit using MakeCode?
Edit: https://forum.makecode.com/t/how-to-find-how-much-ram-memory-is-used-on-my-micro-bit-using-makecode/1303/2
Now, how do I find the length of the micro:bit's message queue?

Query on snmp trap size

In our application, a buffer is allocated to receive SNMP traps:
unsigned char buffer[65536 - 60 - 8];
These numbers seem to relate to the maximum IP packet length and to the IP and UDP header sizes.
Can you please explain why we need this buffer size for an SNMP trap?
SNMP allows PDUs up to the MTU of the network. The buffer should be as big as the largest anticipated packet, so it should probably correspond to the MTU, if possible.
For example, Ethernet allows frame payloads of up to 1500 bytes.
Edit: OK, here is the formal definition from RFC 3416:
The maximum size of an SNMP message is limited to the minimum of:
(1) the maximum message size which the destination SNMP entity can accept; and,
(2) the maximum message size which the source SNMP entity can generate.
I interpreted that to be related to the MTU of the network, but of course, if packets are reassembled properly after fragmentation, there is no problem in receiving even larger traps.
Maybe, if you are really asking "why is the number 65536 in my code?", you should ask the person who wrote it.
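For what it's worth, here is a minimal sketch of a trap listener built around that worst-case buffer; the standalone main() and the standard trap port 162 are illustrative assumptions (binding to 162 needs root):

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

/* 65536 minus up to 60 bytes of IPv4 header and 8 bytes of UDP header:
 * the largest UDP payload a single (possibly reassembled) datagram can
 * carry, so any trap that arrives in one datagram fits. */
#define MAX_UDP_PAYLOAD (65536 - 60 - 8)

int main(void)
{
    unsigned char buffer[MAX_UDP_PAYLOAD];
    struct sockaddr_in addr;
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    if (s == -1)
        return 1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(162);  /* standard SNMP trap port */

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) == -1)
        return 1;

    ssize_t n = recvfrom(s, buffer, sizeof(buffer), 0, NULL, NULL);
    if (n >= 0)
        printf("received a %zd-byte trap message\n", n);
    return 0;
}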

GCDAsyncUdpSocket dropping packets and creating lots of DISPATCH_WORKER_THREADs

I'm building a multicast client with GCDAsyncUdpSocket and I'm seeing a lot of packet loss.
I have monitored the server with Wireshark and captured the Wi-Fi packets in the air with AirCap, so I'm sure the packets are transmitted properly. I also looked at the debug traces from the GCDAsyncUdpSocket library: sometimes socket4FDBytesAvailable: is called with a large argument, like 4000, but when the library reads the socket it reads fewer bytes (maybe 500), and that's where the packets are lost. I increased the socket buffer, but that doesn't help.
Lastly, I noticed with Instruments' Time Profiler that, coincidence or not, each time I lose packets a new DISPATCH_WORKER_THREAD instance is created. Is this normal?
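For context, "increasing the socket buffer" at the BSD-sockets layer that GCDAsyncUdpSocket wraps means SO_RCVBUF; a minimal sketch, where s is assumed to be the underlying socket descriptor:

#include <sys/socket.h>

/* Ask the kernel for a larger receive queue, so bursts of multicast
 * datagrams are buffered instead of dropped. The kernel may clamp the
 * value (e.g. to kern.ipc.maxsockbuf on macOS/BSD). */
static int set_recv_buffer(int s, int bytes)
{
    return setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
}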

When do USB Hosts require a zero-length IN packet at the end of a Control Read Transfer?

I am writing code for a USB device. Suppose the USB host starts a control read transfer to read some data from the device, and the amount of data requested (wLength in the Setup Packet) is a multiple of the Endpoint 0 max packet size. Then after the host has received all the data (in the form of several IN transactions with maximum-sized data packets), will it initiate another IN transaction to see if there is more data even though there can't be more?
Here's an example sequence of events that I am wondering about:
USB enumeration process: max packet size on endpoint 0 is reported to be 64.
SETUP-DATA-ACK transaction starts a control read transfer, wLength = 128.
IN-DATA-ACK transaction delivers first 64 bytes of data to host.
IN-DATA-ACK transaction delivers last 64 bytes of data to host.
IN-DATA-ACK with zero-length DATA packet? Does this transaction ever happen?
OUT-DATA-ACK transaction completes Status Phase of the transfer; transfer is over.
I tested this on my computer (Windows Vista, if it matters) and the answer was no: the host was smart enough to know that no more data can be received from the device, even though all the packets sent by the device were full (maximum size allowed on Endpoint 0). I'm wondering if there are any hosts that are not smart enough, and will try to perform another IN transaction and expect to receive a zero-length data packet.
I think I read the relevant parts of the USB 2.0 and USB 3.0 specifications from usb.org but I did not find this issue addressed. I would appreciate it if someone can point me to the right section in either of those documents.
I know that a zero-length packet can be necessary if the device chooses to send less data than the host requested in wLength.
I know that I could make my code flexible enough to handle either case, but I'm hoping I don't have to.
Thanks to anyone who can answer this question!
Read the USB specification carefully:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
So in your case, when wLength == transfer size, the answer is NO: you don't need a ZLP.
When wLength > transfer size and (transfer size % ep0 max packet size) == 0, the answer is YES: you need a ZLP.
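That rule is compact enough to encode directly. A sketch of the device-side check, with hypothetical parameter names (xfer_len is the number of bytes the device will actually send, already clamped to wLength; ep0_size is bMaxPacketSize0):

#include <stdbool.h>
#include <stdint.h>

/* Returns true if the data stage of a control read must be terminated
 * with a zero-length packet. */
static bool needs_zlp(uint16_t wLength, uint16_t xfer_len, uint16_t ep0_size)
{
    if (xfer_len == wLength)
        return false;   /* exactly what was asked for: no ZLP needed */

    /* Short transfer whose last packet is full-size: only a ZLP can
     * mark the end of the data stage. */
    return (xfer_len % ep0_size) == 0;
}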
In general, USB uses a less-than-max-length packet to demarcate the end of a transfer. So in the case of a transfer that is an integer multiple of the max packet length, a ZLP is used for demarcation.
You see this a lot in bulk pipes. For example, a 4096-byte transfer will be broken down into an integer number of max-length packets plus one zero-length packet. If the software driver has a big enough receive buffer set up, the higher-level software receives the entire transfer at once, when the ZLP occurs.
Control transfers are a special case because they have the wLength field, so a ZLP isn't strictly necessary.
But I'd strongly suggest making your software flexible to both cases, as you may see variations with different USB host silicon or low-level HCD drivers.
I would like to expand on MBR's answer. The USB 2.0 specification, in section 5.5.3, says:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
When a Data stage is complete, the Host Controller advances to the Status stage instead of continuing on with another data transaction. If the Host Controller does not advance to the Status stage when the Data stage is complete, the endpoint halts the pipe as was outlined in Section 5.3.2. If a larger-than-expected data payload is received from the endpoint, the IRP for the control transfer will be aborted/retired.
Note in particular the sentence "If the Host Controller does not advance to the Status stage when the Data stage is complete, the endpoint halts the pipe": it seems to say specifically what the device should do. The device should halt the pipe if the host tries to continue the data phase after it is complete, and it is complete once all the requested data has been transmitted (i.e. the number of bytes transferred is greater than or equal to wLength). I believe halting here refers to responding with a STALL packet.
In other words, the device does not need a zero-length packet in this situation, and in fact the USB specification says it should not provide one.
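A sketch of that behaviour in device firmware, using hypothetical hardware hooks (ep0_send() and ep0_stall() stand in for whatever your USB peripheral library provides; the short-transfer ZLP case from the earlier answer is omitted for brevity):

#include <stdint.h>

/* Hypothetical device-stack hooks. */
extern void ep0_send(const uint8_t *data, uint16_t len);
extern void ep0_stall(void);

/* Handle an IN token on endpoint 0 during the data stage of a control
 * read where the device has exactly wLength bytes to send. *sent
 * tracks data-stage progress and starts at 0. */
static void ep0_on_in_token(const uint8_t *buf, uint16_t wLength,
                            uint16_t *sent, uint16_t ep0_size)
{
    if (*sent < wLength) {
        uint16_t chunk = wLength - *sent;
        if (chunk > ep0_size)
            chunk = ep0_size;
        ep0_send(buf + *sent, chunk);
        *sent += chunk;
    } else {
        /* All requested data transferred: per USB 2.0 section 5.5.3,
         * halt the pipe instead of sending a ZLP. */
        ep0_stall();
    }
}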
You don't have to. (*)
The whole point of wLength is to tell the host the maximum number of bytes it should attempt to read (but it might read less!).
(*) I have seen devices that crash when IN/OUT requests were made at an incorrect time during control transfers (when debugging our host solution). So any host doing what you are worried about would have killed those devices, and is hopefully not on the market.