WebUSB `USBTransferInResult`s seem to contain partial interrupt transfers

I'm using the WebUSB API in Chrome. I can pair with the device, claim an interface, and begin listening to an inbound interrupt endpoint that transfers three bytes every time I press a button, and three more when the button is released (it's a vendor-specific MIDI implementation).
The USBTransferInResult.data.buffer contains all of the bytes it should, except they are not grouped transfer-by-transfer. The bytes arrive one byte at a time, unless I do something that generates a burst of data, in which case there may be as many as three or four bytes in the same USBTransferInResult.
Note: The maximum packet size for this endpoint is 8. I've tried setting it to values like 1 and 256, with no effect.
If I concatenated all of the result buffers, I'd have the exact data I'm expecting, but surely the API should make each transfer (seemingly) atomic.
This could be the result of something funky that the vendor (Focusrite - it's a Novation product) does with their non-compliant MIDI implementation. I just assumed that the vendor would prefer to transfer each MIDI message as an atomic interrupt transfer (not three one-byte transfers in rapid succession), as it would simplify the driver and make it more robust. I cannot see the advantage of breaking these messages up.
Note: If I enable the experimental-usb-backend, my USB device stops appearing in the dialog (when requestDevice is invoked).
This is the code I'm testing it with:
let DEVICE = undefined;

const connect = async function() {
    /* Initialize the device, assign it to the global variable,
    claim Interface 1, then invoke `listen`. */
    const filters = [{vendorId: 0x1235, productId: 0x0018}];
    DEVICE = await navigator.usb.requestDevice({filters});
    await DEVICE.open();
    await DEVICE.selectConfiguration(1);
    await DEVICE.claimInterface(1);
    listen();
};

const listen = async function() {
    /* Listen for interrupt transfers from Endpoint 4, asking for
    up to 8 bytes each time, log each transfer (as a regular array
    of numbers), then listen again. */
    const result = await DEVICE.transferIn(4, 8);
    const data = new Uint8Array(
        result.data.buffer, result.data.byteOffset, result.data.byteLength
    );
    console.log(Array.from(data));
    listen();
};
// Note: There are a few lines of UI code here that provide a
// button for invoking the `connect` function above, and
// another button that invokes the `close` method of
// the USB device.
Given this issue is not reproducible without the USB device, I don't want to report it as a bug, unless I'm sure that it is one. I was hoping somebody here could help me.
Have I misunderstood the way the WebUSB API works?
Is it reasonable to assume that the vendor may have intended to break MIDI messages into individual bytes?

On reflection, the way this works may be intentional.
The USB MIDI spec is very complicated, as it seeks to accommodate complex MIDI setups, which can constitute entire networks in their own right. The device I'm hacking (the Novation Twitch DJ controller) has no MIDI connectivity, so it would have been much easier for the Novation engineers to just pass the MIDI messages over plain USB interrupt transfers.
As for the way it streams the MIDI bytes as soon as they're ready, I'm assuming this simplified the hardware; the stream is intended to be interpreted like bytecode. Each MIDI message begins with a status byte that indicates the number of data bytes that will follow it (analogous to an opcode, followed by some immediates).
Note: Status bytes also have a leading 1, while data bytes have a leading 0, so they are easy to tell apart (and SysEx messages use specific start and end bytes).
In the end, it was simple enough to use the status bytes to indicate when to instantiate a new message, and what type it should be. I then implemented a set of MIDI message classes (NoteOn, Control, SysEx etc) that each know when they have the right number of bytes (to simplify the logic for each individual message).
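To illustrate the framing logic on its own (separately from the message classes I actually used), here is a rough sketch in C as a language-neutral outline; handle_midi_message is just a placeholder, and SysEx and running status are left out for brevity:

/* Sketch: how many data bytes follow a given channel-message status byte. */
extern void handle_midi_message(const unsigned char *msg, int len);  /* hypothetical */

static int midi_data_length(unsigned char status)
{
    switch (status & 0xF0) {
    case 0xC0:                      /* program change   */
    case 0xD0:                      /* channel pressure */
        return 1;
    case 0x80: case 0x90:           /* note off / note on      */
    case 0xA0: case 0xB0:           /* poly pressure / control */
    case 0xE0:                      /* pitch bend              */
        return 2;
    default:
        return -1;                  /* 0xF0-0xFF system messages: handle separately */
    }
}

/* Feed bytes exactly as they arrive from the endpoint, one at a time. */
static void midi_feed(unsigned char byte)
{
    static unsigned char msg[3];
    static int have = 0, need = 0;

    if (byte & 0x80) {              /* status byte: start a new message */
        msg[0] = byte;
        have = 1;
        need = midi_data_length(byte);
    } else if (need > 0) {          /* data byte for the current message */
        msg[have++] = byte;
        if (have == need + 1) {
            handle_midi_message(msg, have);
            have = 0;
            need = 0;
        }
    }
}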

Related

Shared receive buffer for USB endpoints?

I'm developing a USB device driver for a microcontroller (Atmel/Microchip SAMD21, but I think the question is a general one). I need multiple endpoints for control & data, and the USB hardware uses per-endpoint descriptors to (among other things) locate buffers for input and output data.
Since IN data is polled at the host's discretion it makes sense that each endpoint has its own IN buffer, so that any endpoint's data (if it has any to send) is immediately available when polled.
But as far as incoming data from SETUP & OUT transactions is concerned, it occurs to me that I can save memory by configuring all endpoints to use a shared buffer. It seems wasteful for each endpoint to have its own buffer when, given the nature of USB transactions, only one such transaction can occur at a time.
Obviously this approach requires that transaction interrupts are handled sufficiently quickly that the shared buffer is freed and prepared for a new transaction in time for whatever the next transaction might be - but this is already a requirement for the control endpoint, where some SETUP transactions are immediately followed by an OUT.
So, assuming the timing is feasible, is there any other reason why such an approach wouldn't work?
Probably not.
Normally, the USB module on a microcontroller handles OUT packets by keeping track of which packet buffers it has written data to, and it waits for your firmware to say it is done processing the buffer before accepting more data from the computer and overwriting the buffer. If an endpoint has no buffers available to receive more data, but the computer sends an OUT packet to the endpoint, the USB module typically responds to the computer with a NAK packet, which tells the computer it should retry later. In this situation, your firmware can take pretty much as long as it wants to handle the OUT packets.
By having multiple endpoints configured to use the same buffer, you mess up this system. When you receive an OUT packet on any of your endpoints, the USB module would (probably) not know that multiple endpoints use the same buffer, so it would not issue NAK packets on your other OUT endpoints. If it receives another OUT packet right away, it would write it to the same buffer, overwriting the previous packet. Therefore, whenever you receive a packet, your code would have to rush as fast as it can to do something like copying the data out of that buffer, disabling other OUT endpoints, or reassigning buffers.
Even if you can actually get this to work, it means that your scheme to save a little bit of memory turns the servicing of USB events into a real-time task (i.e. a task that requires responses from your code in a few microseconds). If you want to add another real-time task to your system later, it will be very difficult, because you always have to be ready to be interrupted by your USB-handling code.
The SAMD21 has tons of memory (32 KB) so you probably don't need to worry about optimizing this part of it.
I agree with David's response. You didn't mention the speed of the device you are creating. A low-speed device would need just a few 8-byte buffers; a full-speed device, a few 64-byte buffers; a high-speed device, maybe eight 64-byte buffers, depending on your use. Even for a super-speed device, you're still only talking about a few 512-byte buffers.
I would create a ring buffer for each endpoint. This way you are not moving data around; you are simply using a pointer to an entry within a memory ring. A full-speed device with a control endpoint, an interrupt endpoint, and two bulk endpoints, with sixteen 64-byte entries per ring, still only needs 4 KB of RAM, 1/8th of the total.
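A rough sketch of what I mean, with illustrative sizes and names (nothing here is SAMD21-specific; the real driver would hand these entry addresses to the endpoint descriptors and NAK when the ring is full):

#include <stdint.h>

#define ENTRY_SIZE   64              /* full-speed max packet size        */
#define ENTRY_COUNT  16              /* entries per endpoint ring         */

typedef struct {
    uint8_t data[ENTRY_COUNT][ENTRY_SIZE];
    uint8_t len[ENTRY_COUNT];        /* bytes actually received per entry */
    volatile uint8_t head;           /* next entry the USB ISR will fill  */
    volatile uint8_t tail;           /* next entry the application reads  */
} ep_ring;

/* Transfer-complete interrupt: record the packet length and hand the next
   entry back to the USB module. (A real driver would check for a full ring
   and leave the endpoint NAKing instead of overwriting.) */
static uint8_t *ring_commit(ep_ring *r, uint8_t received)
{
    r->len[r->head] = received;
    r->head = (uint8_t)((r->head + 1) % ENTRY_COUNT);
    return r->data[r->head];
}

/* Main loop: return the oldest unread entry, or NULL if the ring is empty.
   The entry stays valid until ENTRY_COUNT more packets have arrived. */
static uint8_t *ring_read(ep_ring *r, uint8_t *len)
{
    uint8_t *entry;
    if (r->tail == r->head)
        return 0;
    entry = r->data[r->tail];
    *len  = r->len[r->tail];
    r->tail = (uint8_t)((r->tail + 1) % ENTRY_COUNT);
    return entry;
}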
However, I am not familiar with the SAMD21, so please check the specification to be sure this will work.

How to flush IN bulk endpoint buffer on device

I'd like to make sure that, on a device connected to Chrome (via WebUSB), the IN endpoint doesn't contain messages left over from a previous bulk transfer. I checked the WebUSB API:
https://wicg.github.io/webusb/
and I don't see any kind of flush function that would allow emptying the buffer. I was thinking about reading data until the device returns NAK - something like this:
/* #1 Make sure the IN endpoint contains no more data */
while (true) {
    let result = await device.transferIn(1, 6);
    if (result.data.byteLength === 0) {
        break;
    }
}
/* #2 Send request */
await device.transferOut(1, message);
/* #3 Receive the valid response */
let result = await device.transferIn(1, 6);
but unfortunately it looks like there is no good solution:
when there is no more data to read, transferIn() becomes a blocking call, so we cannot rely on simply awaiting transferIn()
when transferIn() is called in a promise with a timeout, we can end up with more than one promise waiting for incoming data (which is bad, since we don't know which promise will receive the data)
What would be the best approach for making sure the device IN endpoint contains no data?
The concept of an "IN bulk endpoint buffer" doesn't exist in the USB specification. Whether a device responds to an IN PID with DATA or NAK is entirely up to the device. The answer may be generated on the fly based on the device state, or be fed from an internal buffer. There is no way for the host to know that the buffer is "empty". This is something that has to be defined at a higher protocol layer between the host and device.
For example, the host or device may indicate the amount of data that is expected to be transferred. The host then knows how much data it can expect to read from the IN endpoint before the current operation is complete. This is how the USB Mass Storage protocol works.
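For instance, the Mass Storage Bulk-Only Transport prefixes every operation with a 31-byte Command Block Wrapper, whose dCBWDataTransferLength field tells the device exactly how many bytes the host will transfer in the data phase:

#include <stdint.h>

/* Command Block Wrapper from the Mass Storage Bulk-Only Transport spec. */
#pragma pack(push, 1)
typedef struct {
    uint32_t dCBWSignature;              /* 0x43425355 ("USBC")               */
    uint32_t dCBWTag;                    /* echoed back in the status wrapper */
    uint32_t dCBWDataTransferLength;     /* bytes expected in the data phase  */
    uint8_t  bmCBWFlags;                 /* bit 7: 1 = device-to-host (IN)    */
    uint8_t  bCBWLUN;                    /* logical unit number               */
    uint8_t  bCBWCBLength;               /* valid length of the command block */
    uint8_t  CBWCB[16];                  /* the (SCSI) command block itself   */
} usb_msc_cbw_t;
#pragma pack(pop)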
If the protocol between the host and device doesn't define these kinds of message boundaries, the best way to flush the buffer is not to try. Instead, always be reading from the IN endpoint and interpret data as it is received. This may include using setTimeout() to check whether a response to a particular request has been received within a given deadline. Data received that is not a response to any request can be discarded if it is uninteresting.

USB HID: why should I write "null" to the control pipe in the OUT endpoint interrupt

Digging around with HID reports, I ran into a strange problem with a USB HID device. I'm implementing an HID class device and have based my program on the HID USB example supplied by Keil. Some code has been changed in this project, and it seems to work fine with 32-byte input and 32-byte output reports. Somehow, after thousands of transfers, Endpoint 1 OUT would hang and become a bad pipe. I searched Google for tips, and one topic reminded me that we should write a zero-length packet after sending a packet whose length matches what is defined in the report descriptor. But that didn't work for me. Then I wrote a zero-length packet to the control pipe after receiving an OUT packet and, magically, it works! It has never hung, even after millions of transfers!
Here is my question: why does it work after writing a zero-length packet to the control pipe? The data transferred on the OUT pipe should have no relationship with the data on the control pipe. It confuses me!
If you transfer less data than the host expects and the last packet is exactly wMaxPacketSize bytes long, you must send a Zero Length Packet to indicate that the transfer is complete.
But it depends heavily on the host controller implementation; not all devices follow the specification to the letter, and some may stall.
Source:
When do USB Hosts require a zero-length IN packet at the end of a Control Read Transfer?

8051 UART, Receiving bytes serially

I have to send a file byte-by-byte from a computer (VB.NET) to a serially connected AT89S52.
Every byte sent triggers some job in the microcontroller, which takes some time.
Here is the relevant part of my C code for receiving bytes:
SCON = 0x50;
TMOD = 0x20;            // timer 1, mode 2, 8-bit reload
TH1 = 0xFD;             // reload value for 9600 baud
TR1 = 1;
TI = 1;

again:
while (RI != 0)
{
    P1 = SBUF;          // show data on LEDs
    RI = 0;
    receivedBytes++;
}
if (key1 == 0)
{
    goto exitreceive;   // break receiving
}
show_lcd_received_bytes(receivedBytes);
// here is one more loop
// with different duration for every byte
goto again;
And here is VB.NET code for sending bytes:
For a As Integer = 1 To 10
    For t As Integer = 0 To 255
        SerialPort1.Write(Chr(t))
    Next t
Next a
The problem is that the microcontroller has some job to do after every received byte; the VB.NET side doesn't know about that and sends bytes too fast, so the microcontroller only finishes processing a fraction of the bytes (about 10%).
I can put Sleep(20) in the VB loop and then things work, but that wastes a lot of time, because every byte needs a different amount of time to process, and the communication would be unacceptably slow.
Now, my question is whether the 8051 can expose some busy status on the UART which VB can read before sending, to decide whether or not to send the next byte.
Or how else should I set up the communication described?
I also tried receiving bytes with a serial interrupt on the microcontroller side, with the same results.
The hardware is surely OK, because I can send data to the computer just fine.
Your problem is architectural. Don't try to process the received data in the interrupt that handles byte Rx. Have your byte Rx interrupt only copy the received byte to a separate Rx data buffer, and have a background task do the actual processing of the incoming data without blocking the Rx interrupt handler. If you can't keep up due to an overall throughput issue, then RTS/CTS flow control is the appropriate mechanism. For example, when your Rx buffer gets 90% full, deassert the flow control signal to pause the transmit side.
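A rough Keil-style sketch of that structure, reusing the UART setup from the question; the buffer size, the P3.3 flow-control pin, and process_byte are illustrative assumptions, not a drop-in solution:

#include <reg52.h>                 /* Keil-style SFR definitions */

#define BUF_SIZE 64                /* illustrative ring size */

sbit RTS_PIN = P3^3;               /* illustrative flow-control output; polarity depends on wiring */

extern void process_byte(unsigned char b);   /* hypothetical slow per-byte job */

static unsigned char rx_buf[BUF_SIZE];
static volatile unsigned char rx_head = 0, rx_tail = 0;

/* Serial ISR: only copy the byte and manage the flow-control line. */
void serial_isr(void) interrupt 4
{
    unsigned char next;
    if (RI) {
        RI = 0;
        next = (rx_head + 1) % BUF_SIZE;
        if (next != rx_tail) {             /* drop the byte if the buffer is full */
            rx_buf[rx_head] = SBUF;
            rx_head = next;
        }
        if ((rx_head + BUF_SIZE - rx_tail) % BUF_SIZE > (BUF_SIZE * 9) / 10)
            RTS_PIN = 1;                   /* ~90% full: tell the PC to pause */
    }
    if (TI) TI = 0;                        /* transmit side not used in this sketch */
}

void main(void)
{
    SCON = 0x50; TMOD = 0x20; TH1 = 0xFD; TR1 = 1;   /* 9600 baud, as in the question */
    ES = 1; EA = 1;                        /* enable the serial interrupt */
    while (1) {
        if (rx_tail != rx_head) {          /* background task: process at its own pace */
            process_byte(rx_buf[rx_tail]);
            rx_tail = (rx_tail + 1) % BUF_SIZE;
            if ((rx_head + BUF_SIZE - rx_tail) % BUF_SIZE < BUF_SIZE / 2)
                RTS_PIN = 0;               /* buffer drained: let the PC resume */
        }
    }
}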
As TJD mentions, hardware flow control can be used to stop the PC from sending characters while the microcontroller is processing received bytes. In the past I have implemented hardware flow control by using an available port line as an output. The output needs to be connected to a TTL-to-RS-232 driver (if you are currently using RS-232 you may have an extra driver available). If you are using a USB virtual serial port or RS-422/485 you will need to implement software flow control. Typically a Control-S is sent to tell the PC to stop sending and a Control-Q to continue. To take full advantage of flow control you will most likely also need to implement a fully interrupt-driven FIFO to receive/send characters.
If you would like additional information concerning hardware flow control, check out http://electronics.stackexchange.com.
Blast from the past: I remember using break-out boxes and serial line tracers to debug this kind of stuff.
With serial communication, if you have all the pins/wires utilized, then there is flow control via RTS (Request To Send) and DTR (Data Terminal Ready), which are used to signal when it is OK to send more data. Do you have control over that in the device you are coding in C? In VB.NET, there are events used to receive these signals, or they can be queried using properties on the SerialPort object.
A lot of these answers suggest hardware flow control, but you also have the option of making your transmission more robust by using software flow control. Currently, your connection works, but if you start running at a higher baud rate or over a longer distance, or even just have a noisy connection, incorrect characters could be received, or characters could be dropped.
You could add a simple two-byte ACK sequence upon completion of whatever action is set to happen. It could look something like this:
Host sends command byte: <0x00>
Device echoes command byte: <0x00>
Device executes whatever action is needed
Device sends ACK/NAK byte (based on result)
This would allow you to see on the host side if communication is breaking down. The echoed character may mismatch what was sent which would alert you to an issue. Additionally, if a character is not received by the host within some timeout, the host can try retransmitting. Finally, the ACK/NAK gives you the option of returning a status, but most importantly it will let the host know that you've completed the operation and that it can send another command.
This can be extended to include a checksum to give the device a way to verify that the command received was valid (A simple logical inverse sent alongside the command byte would be sufficient).
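A sketch of what the device side of that exchange could look like; uart_get_byte, uart_put_byte and execute_command are hypothetical helpers, and 0x06/0x15 are simply the conventional ASCII ACK/NAK codes, not values prescribed above:

#define ACK 0x06
#define NAK 0x15

extern unsigned char uart_get_byte(void);         /* blocking receive (hypothetical)  */
extern void uart_put_byte(unsigned char b);       /* blocking transmit (hypothetical) */
extern int execute_command(unsigned char cmd);    /* returns nonzero on success       */

void command_loop(void)
{
    unsigned char cmd, check;

    for (;;) {
        cmd   = uart_get_byte();          /* command byte from the host           */
        check = uart_get_byte();          /* its logical inverse, as a checksum   */

        uart_put_byte(cmd);               /* echo so the host can spot corruption */

        if (check != (unsigned char)~cmd) {
            uart_put_byte(NAK);           /* bad checksum: ask the host to resend */
            continue;
        }
        uart_put_byte(execute_command(cmd) ? ACK : NAK);
    }
}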
The advantage to this solution is that it does not require extra lines or UART support on either end for hardware flow control.

When do USB Hosts require a zero-length IN packet at the end of a Control Read Transfer?

I am writing code for a USB device. Suppose the USB host starts a control read transfer to read some data from the device, and the amount of data requested (wLength in the Setup Packet) is a multiple of the Endpoint 0 max packet size. Then after the host has received all the data (in the form of several IN transactions with maximum-sized data packets), will it initiate another IN transaction to see if there is more data even though there can't be more?
Here's an example sequence of events that I am wondering about:
USB enumeration process: max packet size on endpoint 0 is reported to be 64.
SETUP-DATA-ACK transaction starts a control read transfer, wLength = 128.
IN-DATA-ACK transaction delivers first 64 bytes of data to host.
IN-DATA-ACK transaction delivers last 64 bytes of data to host.
IN-DATA-ACK with zero-length DATA packet? Does this transaction ever happen?
OUT-DATA-ACK transaction completes Status Phase of the transfer; transfer is over.
I tested this on my computer (Windows Vista, if it matters) and the answer was no: the host was smart enough to know that no more data can be received from the device, even though all the packets sent by the device were full (maximum size allowed on Endpoint 0). I'm wondering if there are any hosts that are not smart enough, and will try to perform another IN transaction and expect to receive a zero-length data packet.
I think I read the relevant parts of the USB 2.0 and USB 3.0 specifications from usb.org but I did not find this issue addressed. I would appreciate it if someone can point me to the right section in either of those documents.
I know that a zero-length packet can be necessary if the device chooses to send less data than the host requested in wLength.
I know that I could make my code flexible enough to handle either case, but I'm hoping I don't have to.
Thanks to anyone who can answer this question!
Read the USB specification carefully:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
So, in your case, when wLength == transfer size, the answer is NO, you don't need a ZLP.
When wLength > transfer size and (transfer size % ep0 max packet size) == 0, the answer is YES, you need a ZLP.
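Expressed as code, the device-side decision is roughly:

/* Does the data stage of a control read need a trailing zero-length packet?
   len: bytes the device will actually return; ep0_size: EP0 max packet size. */
int need_zlp(unsigned int len, unsigned int wLength, unsigned int ep0_size)
{
    return (len < wLength) && (len % ep0_size == 0);
}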
In general, USB uses a less-than-max-length packet to demarcate an end-of-transfer. So in the case of a transfer which is an integer multiple of max-packet-length, a ZLP is used for demarcation.
You see this in bulk pipes a lot. For example, if you have a 4096 byte transfer, that will be broken down into an integer number of max-length packets plus one zero-length-packet. If the SW driver has a big enough receive buffer set up, higher-level SW receives the entire transfer at once, when the ZLP occurs.
Control transfers are a special case because they have the wLength field, so ZLP isn't strictly necessary.
But I'd strongly suggest SW be flexible to both, as you may see variations with different USB host silicon or low-level HCD drivers.
I would like to expand on MBR's answer. The USB specification 2.0, in section 5.5.3, says:
The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
Has transferred exactly the amount of data specified during the Setup stage
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet
When a Data stage is complete, the Host Controller advances to the Status stage instead of continuing on with another data transaction. If the Host Controller does not advance to the Status stage when the Data stage is complete, the endpoint halts the pipe as was outlined in Section 5.3.2. If a larger-than-expected data payload is received from the endpoint, the IRP for the control transfer will be aborted/retired.
I added emphasis to one of the sentences in that quote because it seems to specifically say what the device should do: it should "halt" the pipe if the host tries to continue the data phase after it was done, and it is done if all the requested data has been transmitted (i.e. the number of bytes transferred is greater than or equal to wLength). I think halting refers to sending a STALL packet.
In other words, the device does not need a zero-length packet in this situation and in fact the USB specification says it should not provide one.
You don't have to. (*)
The whole point of wLength is to tell the host the maximum number of bytes it should attempt to read (but it might read less!).
(*) I have seen devices that crash when IN/OUT requests were made at the wrong time during control transfers (when debugging our host solution). So any host doing what you are worried about would have killed those devices and is hopefully not on the market.