I'm currently studying how USB works. I read that there are transactions, which are built from smaller pieces called packets. I read about all kinds of packets.
I can't understand one thing. As the book says, every transaction consists of 3 packets: token, data and handshake.
The way I understand my book is depicted in the schema below.
In my opinion:
I think the first transaction should contain only the IN token and the data packet, but no handshake packet (a handshake for what?).
I think the response should only contain an ACK handshake packet (indicating that the data was written properly to the device).
Please help me understand this properly.
Best regards,
Tom.
A transaction is a series of one or more packets.
A typical IN transaction with no data looks like this:
The host sends an IN token.
The device sends a NAK handshake packet, which means it doesn't have any data to send.
A typical IN transaction with data looks like this:
The host sends an IN token.
The device sends a DATA0 or DATA1 packet with data.
The host sends an ACK handshake.
A typical OUT transaction looks like this:
The host sends an OUT token.
The host sends a DATA0 or DATA1 packet with data.
The device sends a NAK or ACK handshake depending on whether it accepted the data.
Note that I am just talking about full-speed (12 Mbps) USB 2.0 devices, and things can get a bit more complicated for the higher-speed devices.
Note that any of these packets could be lost due to noise issues. The USB specification specifically accounts for this, ensuring that packet loss doesn't result in incorrect operation of the device or host.
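To make the IN transactions above concrete, here is a rough device-firmware sketch in C pseudocode (ep_has_data(), send_handshake(), send_data_packet(), next_data_toggle() and the PID constants are made up for illustration; they are not from any real USB stack):

    /* Called whenever the device receives an IN token on 'endpoint'. */
    void on_in_token(int endpoint)
    {
        if (!ep_has_data(endpoint)) {
            /* Nothing queued: reply with a NAK handshake; the transaction
               ends here and the host will try again later. */
            send_handshake(endpoint, PID_NAK);
            return;
        }

        /* Data is queued: send a DATA0/DATA1 packet (the toggle alternates
           per successful transaction)... */
        send_data_packet(endpoint, next_data_toggle(endpoint));

        /* ...then wait for the host's ACK handshake.  If no ACK arrives,
           the same data (same toggle) is retried on the next IN token. */
    }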
When transmitting a message, you have payload + control data.
While control data is there to help a receiver filter the right data, payload is just there as a state in time.
Given the countless amounts of data being broadcast from all kinds of devices, how does a receiver know at any given time that this data is meant for it and not for another device?
Example:
time1: control
time2: payload
or even
time1: control-part1
time2: control-part2
...
timen: payloadn
How does a receiver of control-part2 know about control-part1 and is able to assign payloadn to control-part1?
In addition to this, communication might be encrypted and even if not, it probably looks like this:
time1: control-part1
time1.5: payload x by some random other device
time2: control-part2
...
timen: payloadn
timen+1: control by some random other device
And if it's encrypted, how does the receiver know where the next bit of information is?
One application I think of is light modulation by distance-measurement devices. The device basically has to wait for a full sequence to come back, but there is a lot of light diffraction going on, interfering with the receptor. Why does it just work?
If the broadcast message is not encrypted, the receiver simply sees all the broadcast messages. In the case of ARP, when a device wants to know the MAC address of another device, it sends an ARP request (broadcast), and the target device responds because there is a field in the ARP packet that identifies it.
On the other hand, in the case of broadcast encryption there is a system to distribute keys, and then the receiver devices that have the key can decrypt the message and access the information (Broadcast Encryption).
So, when a broadcast is done and is not encrypted, the packet has some information to identify the target host; when the message is encrypted, keys are distributed first, and then only the devices that have the keys can see the decrypted information.
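As a rough illustration of the unencrypted case, here is a simplified sketch of what an ARP packet carries (a plain C struct with the standard Ethernet/IPv4 field layout; field names are chosen for readability). The target IP field is what each receiver compares against its own address to decide whether the broadcast concerns it:

    #include <stdint.h>

    /* Simplified ARP payload for Ethernet/IPv4 (illustrative layout). */
    struct arp_packet {
        uint16_t hardware_type;    /* 1 = Ethernet */
        uint16_t protocol_type;    /* 0x0800 = IPv4 */
        uint8_t  hw_addr_len;      /* 6 */
        uint8_t  proto_addr_len;   /* 4 */
        uint16_t operation;        /* 1 = request, 2 = reply */
        uint8_t  sender_mac[6];
        uint8_t  sender_ip[4];
        uint8_t  target_mac[6];    /* all zeros in a request */
        uint8_t  target_ip[4];     /* each receiver checks this against its own IP */
    };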
I hope this helps!
I have 1 server and several (maybe up to 20) clients. All clients send UDP datagrams at random times. Each datagram is quite short (about 10 B), but I must make sure all the data from each client is received correctly.
If I let all clients send datagrams to the same port, and client B sends its datagram at the exact moment the server is receiving data from client A, it seems the server will miss the data from client A.
So what's the correct method to do this job? Do I need to create a listener for each of the 20 clients?
When you bind a UDP socket to a port, the networking stack will allocate a buffer for a finite number of incoming UDP packets for you, so that, assuming you call recv() in a relatively timely manner, no incoming packets should get lost.
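A minimal sketch of that idea, assuming a plain BSD-sockets server on Linux (one socket bound to one port, one recvfrom() loop serving all clients; port 5000 is an arbitrary choice for the example):

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);          /* arbitrary port for the example */
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        for (;;) {
            char buf[64];                     /* each datagram is only ~10 bytes */
            struct sockaddr_in from;
            socklen_t fromlen = sizeof(from);
            ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&from, &fromlen);
            if (n < 0)
                continue;
            /* 'from' identifies which of the clients sent this datagram. */
            printf("%zd bytes from %s:%u\n", n,
                   inet_ntoa(from.sin_addr), ntohs(from.sin_port));
        }
    }

Datagrams that arrive while the loop is busy simply wait in the socket's receive buffer until the next recvfrom() call.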
If you want to see your buffer size in the terminal, you can take a look at:
/proc/sys/net/core/rmem_default for recv
and
/proc/sys/net/core/wmem_default for send
I think the default buffer size on Linux is 131071B.
On Linux, you can change the UDP buffer size (e.g. to 26214400) by (as root):
sysctl -w net.core.rmem_max=26214400
You can also make it permanent by adding this line to /etc/sysctl.conf:
net.core.rmem_max=26214400
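You can also request a larger buffer per socket from code (on Linux the kernel caps this at net.core.rmem_max unless SO_RCVBUFFORCE is used); a minimal sketch:

    #include <sys/socket.h>

    /* Ask for a bigger receive buffer on socket s (value in bytes, as in the
       sysctl example above).  The kernel may clamp or double the value. */
    static int set_recv_buffer(int s, int bytes)
    {
        return setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
    }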
Since each packet is only 10 B, this shouldn't be a problem.
If you are still worried about packet loss, you could implement a protocol where your client waits for an ACK from the server, or else resends. Many protocols use such a feature, but this is only possible if timing allows it. For example, in streaming data it is not useful because there is no time to resend.
Or consider using TCP (if it is an option).
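If you do go the ACK-and-resend route, the client side might look roughly like this (a sketch only; the one-second timeout and five retries are arbitrary, and any reply from the server is treated as the ACK):

    #include <stddef.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    /* Send 'len' bytes on a connect()ed UDP socket and wait for a reply as the
       ACK; resend a few times before giving up.  Returns 0 on success. */
    static int send_with_ack(int s, const void *data, size_t len)
    {
        struct timeval tv = { 1, 0 };               /* 1 s receive timeout */
        setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        for (int attempt = 0; attempt < 5; attempt++) {
            send(s, data, len, 0);
            char ack[16];
            if (recv(s, ack, sizeof(ack), 0) > 0)   /* got the server's ACK */
                return 0;
            /* timed out: fall through and resend */
        }
        return -1;                                  /* give up after 5 tries */
    }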
Depiction of state transitions with NYET, NAK and PING packets
What special purpose does NYET serve when the next transaction could simply be avoided by a NAK packet from the device?
The reason for the introduction of the NYET handshake packet was bandwidth-utilization efficiency.
If a device responds with NYET, the host knows that the device would very likely NAK the next OUT transaction; in that case the whole frame time spent transmitting the data would be wasted, because the exact same data would have to be sent again.
That's why NAKing an OUT transaction wastes a lot of frame time: the OUT transaction occupies the bus without purpose, and it competes with other transactions/devices as well, taking frame time from them.
Imagine the protocol without the NYET handshake: the host would have to send the same whole block of data (i.e. up to 512 bytes for bulk endpoints) every time the device NAKs, just to inquire whether the device is ready.
If the host gets a NYET instead, it will start PINGing the device, asking if the device is ready to receive more data. A PING transaction is very short compared to a large data OUT transaction. Hence, if the device NAKs the PING, the host can use the rest of the frame for other transactions instead which leads to better utilization of the bus.
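As a rough sketch of the host-side behaviour (pseudo-C only; do_out_transaction() and do_ping_transaction() are hypothetical helpers, not a real host-controller API):

    /* Sending one 512-byte bulk OUT packet with the high-speed PING protocol. */
    for (;;) {
        int status = do_out_transaction(ep, data, 512);
        if (status == ACK)
            break;                      /* data accepted, endpoint ready for more */

        /* NYET: data accepted, but the endpoint cannot take the next packet yet.
           NAK:  data not accepted, it must be resent.
           Either way, poll with short PING transactions instead of resending
           512 bytes each time; other transfers can use the bus meanwhile. */
        while (do_ping_transaction(ep) == NAK)
            ;
        if (status == NYET)
            break;                      /* the NYET'd data was already accepted */
        /* status was NAK: loop around and resend the same data */
    }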
I have an embedded device (source) which is sending out a stream of (audio) data in chunks of 20 ms (= about 330 bytes) by means of a UDP packets. The network volume is thus fairly low at about 16kBps (practically somewhat more due to UDP/IP overhead). The device is running the lwIP stack (v1.3.2) and connects to a WiFi network using a WiFi solution from H&D Wireless (HDG104, WiFi G-mode). The destination (sink) is a Windows Vista PC which is also connected to the WiFi network using a USB WiFi dongle (WiFi G-mode). A program is running on the PC which allows me to monitor the amount of dropped packets. I am also running Wireshark to analyze the network traffic directly. No other clients are actively sending data over the network at this point.
When I send the data using broadcast or multicast, many packets are dropped, sometimes up to 15%. However, when I switch to using UDP unicast, the number of packets dropped is negligible (< 2%).
Using UDP I expect packets to be dropped (which is OK in my Audio application), but why do I see such a big difference in performance between Broadcast/Multicast and unicast?
My router is a WRT54GS (FW v7.50.2) and the PC (sink) is using a trendnet TEW-648UB network adapter, running in WiFi G-mode.
This looks like it is a well known WiFi issue:
Quoted from http://www.wi-fiplanet.com/tutorials/article.php/3433451
The 802.11 (Wi-Fi) standards specify support for multicasting as part of asynchronous services. An 802.11 client station, such as a wireless laptop or PDA (not an access point), begins a multicast delivery by sending multicast packets in 802.11 unicast data frames directed to only the access point. The access point responds with an 802.11 acknowledgement frame sent to the source station if no errors are found in the data frame.
If the client sending the frame doesn't receive an acknowledgement, then the client will retransmit the frame. With multicasting, the leg of the data path from the wireless client to the access point includes transmission error recovery. The 802.11 protocols ensure reliability between stations in both infrastructure and ad hoc configurations when using unicast data frame transmissions.
After receiving the unicast data frame from the client, the access point transmits the data (that the originating client wants to multicast) as a multicast frame, which contains a group address as the destination for the intended recipients. Each of the destination stations can receive the frame; however, they do not respond with acknowledgements. As a result, multicasting doesn't ensure a complete, reliable flow of data.
The lack of acknowledgments with multicasting means that some of the data your application is sending may not make it to all of the destinations, and there's no indication of a successful reception. This may be okay, though, for some applications, especially ones where it's okay to have gaps in data. For instance, the continual streaming of telemetry from a control valve monitor can likely miss status updates from time to time.
This article has more information:
http://hal.archives-ouvertes.fr/docs/00/08/44/57/PDF/RR-5947.pdf
One very interesting side-effect of the multicast implementation (at the WiFi MAC layer) is that as long as your receivers are wired, you will not experience any issues (due to the acknowledgement on the receiver side, which is really a unicast connection). However, with WiFi receivers (as in my case), packet loss is enormous and completely unacceptable for audio.
Multicast does not have ack packets and so there is no retransmission of lost packets. This makes perfect sense as there are many receivers and it's not like they can all reply at the same time (the air is shared like coax Ethernet). If they were all to send acks in sequence using some backoff scheme it would eat all your bandwidth.
UDP streaming with packet loss is a well-known challenge and is usually solved using some type of forward error correction. Recently a class of codes known as fountain codes, such as RaptorQ, has shown promise for the packet-loss problem, in particular when there are several unreliable sources for the data at the same time (example: multiple WiFi access points covering an area).
I am writing code for a USB device. Suppose the USB host starts a control read transfer to read some data from the device, and the amount of data requested (wLength in the Setup Packet) is a multiple of the Endpoint 0 max packet size. Then after the host has received all the data (in the form of several IN transactions with maximum-sized data packets), will it initiate another IN transaction to see if there is more data even though there can't be more?
Here's an example sequence of events that I am wondering about:
USB enumeration process: max packet size on endpoint 0 is reported to be 64.
SETUP-DATA-ACK transaction starts a control read transfer, wLength = 128.
IN-DATA-ACK transaction delivers first 64 bytes of data to host.
IN-DATA-ACK transaction delivers last 64 bytes of data to host.
IN-DATA-ACK with zero-length DATA packet? Does this transaction ever happen?
OUT-DATA-ACK transaction completes Status Phase of the transfer; transfer is over.
I tested this on my computer (Windows Vista, if it matters) and the answer was no: the host was smart enough to know that no more data can be received from the device, even though all the packets sent by the device were full (maximum size allowed on Endpoint 0). I'm wondering if there are any hosts that are not smart enough, and will try to perform another IN transaction and expect to receive a zero-length data packet.
I think I read the relevant parts of the USB 2.0 and USB 3.0 specifications from usb.org but I did not find this issue addressed. I would appreciate it if someone can point me to the right section in either of those documents.
I know that a zero-length packet can be necessary if the device chooses to send less data than the host requested in wLength.
I know that I could make my code flexible enough to handle either case, but I'm hoping I don't have to.
Thanks to anyone who can answer this question!
Read the USB specification carefully:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of
the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
So, in your case, when wLength == transfer size, the answer is NO, you don't need a ZLP.
In case wLength > transfer size and (transfer size % ep0 size) == 0, the answer is YES, you need a ZLP.
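In device firmware that rule boils down to one check at the end of the data stage; a minimal sketch (the variable names are made up for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    /* true if a zero-length packet must terminate the control-read data stage */
    static bool need_zlp(uint16_t wLength, uint16_t transfer_size, uint16_t ep0_size)
    {
        return transfer_size < wLength             /* sending less than requested */
            && (transfer_size % ep0_size) == 0;    /* and the last packet was full */
    }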
In general, USB uses a less-than-max-length packet to demarcate an end-of-transfer. So in the case of a transfer which is an integer multiple of max-packet-length, a ZLP is used for demarcation.
You see this in bulk pipes a lot. For example, if you have a 4096-byte transfer, that will be broken down into an integer number of max-length packets plus one zero-length packet. If the SW driver has a big enough receive buffer set up, higher-level SW receives the entire transfer at once, when the ZLP occurs.
Control transfers are a special case because they have the wLength field, so ZLP isn't strictly necessary.
But I'd strongly suggest SW be flexible to both, as you may see variations with different USB host silicon or low-level HCD drivers.
I would like to expand on MBR's answer. The USB specification 2.0, in section 5.5.3, says:
The Data stage of a control transfer from an endpoint to the host is
complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
When a Data stage is complete, the Host Controller advances to the
Status stage instead of continuing on with another data transaction.
If the Host Controller does not advance to the Status stage when the
Data stage is complete, the endpoint halts the pipe as was outlined in
Section 5.3.2. If a larger-than-expected data payload is received from
the endpoint, the IRP for the control transfer will be
aborted/retired.
I added emphasis to one of the sentences in that quote because it seems to specifically say what the device should do: it should "halt" the pipe if the host tries to continue the data phase after it was done, and it is done if all the requested data has been transmitted (i.e. the number of bytes transferred is greater than or equal to wLength). I think halting refers to sending a STALL packet.
In other words, the device does not need a zero-length packet in this situation and in fact the USB specification says it should not provide one.
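On the device side, that behaviour might look something like the sketch below (ep0_stall(), ep0_send() and the transfer-state variables are hypothetical names, assumed to be set up by the Setup-packet handler; this is not any specific USB stack's API):

    #include <stdint.h>

    void ep0_stall(void);                           /* hypothetical: STALL endpoint 0 */
    void ep0_send(const uint8_t *buf, uint16_t n);  /* hypothetical: queue IN data    */

    static const uint8_t *ep0_data;     /* data for the current control read */
    static uint16_t ep0_wLength;        /* wLength from the Setup packet     */
    static uint16_t ep0_sent;           /* bytes sent so far                 */
    #define EP0_MAX_PACKET 64

    /* Called for every IN token on endpoint 0 during the data stage. */
    void ep0_on_in_token(void)
    {
        if (ep0_sent >= ep0_wLength) {
            /* Data stage already complete: per section 5.5.3, don't answer with
               a ZLP here; halt (STALL) the pipe instead. */
            ep0_stall();
            return;
        }
        uint16_t chunk = ep0_wLength - ep0_sent;
        if (chunk > EP0_MAX_PACKET)
            chunk = EP0_MAX_PACKET;
        ep0_send(ep0_data + ep0_sent, chunk);
        ep0_sent += chunk;
    }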
You don't have to. (*)
The whole point of wLength is to tell the host the maximum number of bytes it should attempt to read (but it might read less!).
(*) I have seen devices that crash when IN/OUT requests were made at an incorrect time during control transfers (when debugging our host solution). So any host doing what you are worried about would have killed those devices and is hopefully not on the market.