I'm trying to send isochronous transfers to the microcontroller on an Arduino Due using the libusb 1.0 library and the libusbK driver, installed with Zadig 2.2.
Bulk transfers work perfectly, but when I try to initiate an isochronous transfer I get the error "not supported" (LIBUSB_ERROR_NOT_SUPPORTED). The way I understand it, libusb should support isochronous transfers on Windows now.
I'm using Visual Studio 2015.
Any ideas?
It could be one of two problems on the Arduino side. You should check:
Endpoint configuration.
USB descriptor configuration (the endpoint must be declared with the Isochronous transfer type)
For example:
===>Endpoint Descriptor<=== // <-------- This is the one I'm using.
bLength: 0x07
bDescriptorType: 0x05
bEndpointAddress: 0x81 -> Direction: IN - EndpointID: 1
bmAttributes: 0x01 -> Isochronous Transfer Type, Synchronization Type = No Synchronization, Usage Type = Data Endpoint
wMaxPacketSize: 0x0040 = 1 transactions per microframe, 0x40 max bytes
bInterval: 0x01
===>Endpoint Descriptor<===
bLength: 0x07
bDescriptorType: 0x05
bEndpointAddress: 0x02 -> Direction: OUT - EndpointID: 2
bmAttributes: 0x01 -> Isochronous Transfer Type, Synchronization Type = No Synchronization, Usage Type = Data Endpoint
wMaxPacketSize: 0x0040 = 1 transactions per microframe, 0x40 max bytes
bInterval: 0x01
Related
I'm playing around with the USB controller on the Raspberry Pi Pico. The end goal is for the Pico to be able to send keystrokes as a HID keyboard.
In any case, the RP2040 datasheet says that the setup packet is always at the start of the USB controller's DPSRAM (0x50100000). I do get an interrupt signaling that a setup packet has been received, and when I read the setup packet I get the following data:
0x80 - seems to mean device to host, standard type, recipient is device
0x06 - seems to be GET_DESCRIPTOR
0x00 - low byte of wValue, means index zero
0x01 - high byte of wValue, means desc type device
0x00 - low byte of wIndex
0x00 - high byte of wIndex, means index zero
0x00 - low byte of wLength
0x00 - high byte of wLength
The first six bytes are exactly what I would expect. But what does a wLength (the last two bytes) of 0 mean? The device descriptor is 18 bytes long, so I would have expected 0x12, 0x00.
Is a wLength of zero when requesting a descriptor valid or is it more likely that I'm doing something wrong? If it is valid, how would I respond? With a zero length packet?
I'm trying to learn about USB protocol by analyzing Wireshark output of sniffing on my keyboard. For example, consider this frame:
Frame 29335: 72 bytes on wire (576 bits), 72 bytes captured (576 bits) on interface usbmon1, id 0
Interface id: 0 (usbmon1)
Encapsulation type: USB packets with Linux header and padding (115)
Arrival Time: Jan 4, 2022 17:44:50.003878000 CET
[Time shift for this packet: 0.000000000 seconds]
Epoch Time: 1641314690.003878000 seconds
[Time delta from previous captured frame: 0.205081000 seconds]
[Time delta from previous displayed frame: 3.343982000 seconds]
[Time since reference or first frame: 342.817999000 seconds]
Frame Number: 29335
Frame Length: 72 bytes (576 bits)
Capture Length: 72 bytes (576 bits)
[Frame is marked: False]
[Frame is ignored: False]
[Protocols in frame: usb]
USB URB
[Source: 1.5.1]
[Destination: host]
URB id: 0xffff8cbe330fba80
URB type: URB_COMPLETE ('C')
URB transfer type: URB_INTERRUPT (0x01)
Endpoint: 0x81, Direction: IN
Device: 5
URB bus id: 1
Device setup request: not relevant ('-')
Data: present (0)
URB sec: 1641314690
URB usec: 3878
URB status: Success (0)
URB length [bytes]: 8
Data length [bytes]: 8
[Request in: 29167]
[Time from request: 3.343946000 seconds]
[bInterfaceClass: Unknown (0xffff)]
Unused Setup Header
Interval: 16
Start frame: 0
Copy of Transfer Flags: 0x00000204, No transfer DMA map, Dir IN
Number of ISO descriptors: 0
Leftover Capture Data: 0000290000000000
Here's related lsusb output:
> sudo lsusb -s 001:005 -vvvvv
Bus 001 Device 005: ID 046d:c312 Logitech, Inc. DeLuxe 250 Keyboard
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 1.10
bDeviceClass 0
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0x046d Logitech, Inc.
idProduct 0xc312 DeLuxe 250 Keyboard
bcdDevice 1.01
iManufacturer 1 LITEON Technology
iProduct 2 USB Multimedia Keyboard
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 0x0022
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xa0
(Bus Powered)
Remote Wakeup
MaxPower 70mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 1
bInterfaceClass 3 Human Interface Device
bInterfaceSubClass 1 Boot Interface Subclass
bInterfaceProtocol 1 Keyboard
iInterface 0
HID Device Descriptor:
bLength 9
bDescriptorType 33
bcdHID 1.10
bCountryCode 0 Not supported
bNumDescriptors 1
bDescriptorType 34 Report
wDescriptorLength 65
Report Descriptors:
** UNAVAILABLE **
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x0008 1x 8 bytes
bInterval 24
can't get debug descriptor: Resource temporarily unavailable
Device Status: 0x0000
(Bus Powered)
The "29" differs based on the key I press. How can I map it back to a specific key? Is there some more context needed in order to interpret this frame?
The USB payload represents scancodes (see here: https://www.win.tue.nl/~aeb/linux/kbd/scancodes-1.html). The interpretation requires a bit more context.
The eXtensible Host Controller Interface (xHCI) defines a register-level interface for interacting with USB on modern systems (https://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/extensible-host-controler-interface-usb-xhci.pdf). From what I've read, most computers (including ARM machines) use xHCI for their USB host controller. It generally takes the form of a PCI Express device called an xHC (in Intel's terminology).
PCI devices are accessed via MMIO and DMA. To interact with a PCI device, you write to RAM at conventional addresses specified by the ACPI tables (the MCFG table, specifically). Writes to these addresses go to the registers of the PCI device instead of RAM, which lets you control the device. PCI devices can also read/write RAM directly.
The xHC has interrupt transfer rings. The software (the OS) puts TDs at an appropriate depth on the interrupt transfer ring of the keyboard's interrupt endpoint. As stated in the xHCI specification (linked above):
If multiple Interrupt TDs are posted to an Interrupt endpoint Transfer Ring, the xHC should consume no more than one TD per ESIT.
Basically, you program the ESIT with a value. The ESIT tells the xHC not to consume a TD too often, which triggers transfers at specific intervals in time. The right interval is specified in the USB descriptor returned by the USB device.
Every time a transfer occurs, the software (OS) reads the values and determines if there was any change in the keys pressed. Keyboards are Human Interface Devices (HID), which are specified as part of the USB standard (https://wiki.osdev.org/USB_Human_Interface_Devices).
When asked to transfer data, a USB keyboard (HID) returns 8 bytes (as seen in your example). As stated on osdev.org:
This report must be requested by the software using interrupt transfers once every interval milliseconds, and the interval is defined in the interrupt IN descriptor of the USB keyboard. The USB keyboard report may be up to 8 bytes in size, although not all these bytes are used and it's OK to implement a proper implementation using only the first three or four bytes (and this is how I do it.) Just for completion's sake, however, I will describe the full report mechanism of the keyboard. Notice that the report structure defined below applies to the boot protocol only.
Reading the HID article on osdev.org will bring you a long way. The USB payload can thus be interpreted as the data the keyboard sent after the interval specified in its USB descriptor (in milliseconds) had passed. Every time new data comes in, it represents the key(s) pressed on the keyboard at the moment it was queried by the xHC, every ESIT or so.
I'd like to add a string to my HID project (to store information about the firmware revision). I've read about string descriptors at https://www.beyondlogic.org/usbnutshell/usb5.shtml and my understanding is that a configuration descriptor or report descriptor lists an index that points to a string. The string is stored somewhere else, and the host can then request it by that index via a GET_DESCRIPTOR (String) request.
I'm pretty mystified by the implementation, though. I've been trawling through the STM32F0x2 example libraries (available for download from STM, or duplicated here: https://github.com/caozoux/arm/blob/master/stm32/STM32F0x2_USB-FS-Device_Lib%20V1.0.0/Libraries/STM32_USB_Device_Library/Class/dfu/src/usbd_dfu_core.c ) and found this:
/* USB DFU device Configuration Descriptor */
const uint8_t usbd_dfu_CfgDesc[USB_DFU_CONFIG_DESC_SIZ] =
{
0x09, /* bLength: Configuration Descriptor size */
USB_CONFIGURATION_DESCRIPTOR_TYPE, /* bDescriptorType: Configuration */
USB_DFU_CONFIG_DESC_SIZ,
/* wTotalLength: Bytes returned */
0x00,
0x01, /*bNumInterfaces: 1 interface*/
0x01, /*bConfigurationValue: Configuration value*/
0x02, /*iConfiguration: Index of string descriptor describing the configuration*/
0xC0, /*bmAttributes: bus powered and Supports Remote Wakeup */
0x32, /*MaxPower 100 mA: this current is used for detecting Vbus*/
/* 09 */
which gives the index of iConfiguration as 0x02. I then searched all the files for another reference to 0x02 or a configuration string and found nothing. I expected to find some sort of array of strings that could be indexed by 0x02, or at least a configuration string. Possibly the example files are incomplete, but it feels more likely I'm just not searching for the right things.
My questions are, first is my basic assumption of how string descriptors work correct? And if so, how and where are the strings generally stored? Any links to example implementations would be super helpful as well!
After a brief look at your code, it looks like the code returning string descriptors to the USB host in response to a "Get Descriptor" request is here:
https://github.com/caozoux/arm/blob/e19fc5a/stm32/STM32F0x2_USB-FS-Device_Lib%20V1.0.0/Libraries/STM32_USB_Device_Library/Core/src/usbd_req.c#L313
I have a FullSpeed USB Device that sends a Report Descriptor, whose relevant Endpoint Descriptor declares a bInterval of 8, meaning 8ms.
The following report extract is obtained from a USB Descriptor Dumper when the device's driver is HidUsb:
Interface Descriptor: // +several attributes
------------------------------
0x04 bDescriptorType
0x03 bInterfaceClass (Human Interface Device Class)
0x00 bInterfaceSubClass
0x00 bInterfaceProtocol
0x00 iInterface
HID Descriptor: // +bLength, bCountryCode
------------------------------
0x21 bDescriptorType
0x0110 bcdHID
0x01 bNumDescriptors
0x22 bDescriptorType (Report descriptor)
0x00D6 bDescriptorLength
Endpoint Descriptor: // + bLength, bEndpointAddress, wMaxPacketSize
------------------------------
0x05 bDescriptorType
0x03 bmAttributes (Transfer: Interrupt / Synch: None / Usage: Data)
0x08 bInterval (8 frames)
After switching the driver to WinUSB so I can use the device, I repeatedly queue IN interrupt transfers using libusb, and time both the interval between two libusb calls and the duration of each call itself with this code:
auto end = std::chrono::high_resolution_clock::now();
double forTime, transferTime;
for (int i = 0; i < n; i++) {
    auto start = std::chrono::high_resolution_clock::now();
    forTime = (double)((start - end).count()) / 1000000; // ms elapsed outside the call
    <libusb_interrupt_transfer on IN interrupt endpoint>
    end = std::chrono::high_resolution_clock::now();
    std::cout << "for " << forTime << std::endl;
    transferTime = (double)((end - start).count()) / 1000000; // ms spent inside the call
    std::cout << "transfer " << transferTime << std::endl;
    std::cout << "sum " << transferTime + forTime << std::endl << std::endl;
}
Here's a sample of obtained values :
for 2.60266
transfer 5.41087
sum 8.04307 //~8
for 3.04287
transfer 5.41087
sum 8.01353 //~8
for 6.42174
transfer 9.65907
sum 16.0808 //~16
for 2.27422
transfer 5.13271
sum 7.87691 //~8
for 3.29928
transfer 4.68676
sum 7.98604 //~8
The sum values consistently stay very close to 8ms, except when the time elapsed before initiating a new interrupt transfer call is too long (the threshold appears to be between 6 and 6.5ms in my particular case), in which case the sum is equal to 16. I have once seen a "for" measure equal to 18ms, and the sum precisely equal to 24ms. Using an URB tracker (Microsoft Message Analyzer in my case), the time differences between Complete URB_FUNCTION_BULK_OR_INTERRUPT_TRANSFER messages are also multiples of 8ms - usually 8ms. In short, they match the "sum" measures.
So it is clear that the time elapsed between two returns of libusb interrupt transfer calls is a multiple of 8ms, which I assume is related to the bInterval value of 8 (FullSpeed -> ×1ms -> 8ms).
But now that I have, I hope, made clear what I'm talking about: where is that enforced? Despite research, I cannot find a clear explanation of how the bInterval value affects things.
Apparently, this is enforced by the driver.
Therefore, is it:
The driver forbids the request from firing until 8ms have passed. This sounds like the most reasonable option to me, but in my URB trace, Dispatch message events were raised several milliseconds before the request came back. This would mean the real time at which the data left the host is hidden from me/the message analyzer.
The driver hides the response from me and the analyzer until 8ms have passed since the last response.
If it is indeed handled by the driver, there is a lie somewhere in the log of exchanged messages shown to me. A response should come immediately after a request, yet this is not the case. So either the request is sent later than the displayed time, or the response comes earlier than what's displayed.
How does the enforcement of the bInterval work?
My ultimate goal is to disregard that bInterval value and poll the device more frequently than every 8ms (I have good reason to believe it can be polled as often as every 2ms, and a period of 8ms is unacceptable for its usage). But first I would like to know how the current limitation works, and whether what I'm seeking is possible at all, so I can understand what to study next (e.g. writing a custom WinUSB driver).
I have a FullSpeed USB Device
Careful: did you verify this? 8ms is the limit for low-speed USB devices - which many common mice or keyboards may still be using.
The 8ms scheduling is done inside the USB host driver (ehci/xhci), AFAIK. You could try to game this by releasing and reclaiming the interface - not tested, though. (Edit: won't work, see comment.)
A USB device cannot talk on its own, so it has to be the request that is delayed. Note that a device can also NAK an interrupt IN request when there is no new data available; this simply adds another bInterval ms to the timing.
writing a custom WinUSB driver
Not recommended - replacing a Windows-supplied driver is quite a hassle. Our libusb-win32 replacement for a USB CDC device breaks on every major Windows 10 upgrade - once the upgrade is finished, the device uses a COM port instead of libusb.
I started working on the LPC2148 with an XBee Series 2.
On the transmitting side I am using the LPC2148 with an XBee coordinator in API mode,
and on the Rx side I am using an XBee on a shield in router AT mode.
I want the XBee to activate the D3 pin, which could be used to turn on a relay on the Rx side.
The API frame is built by the C code below.
#define Delimeter 0x7E

void Init_UART1(void) // This function sets up UART1
{
    unsigned int Baud16;
    U1LCR = 0x83;                     // DLAB = 1
    Baud16 = (Fpclk / 16) / UART_BPS;
    U1DLM = Baud16 / 256;
    U1DLL = Baud16 % 256;
    U1LCR = 0x03;
}

void setRemoteState(char value)
{
    unsigned int sum;
    UART1_Write(Delimeter); // start byte
    UART1_Write(0);         // high part of length
    UART1_Write(0x10);      // low part of length (16 frame-data bytes)
    UART1_Write(0x17);      // frame type: Remote AT Command
    UART1_Write(0x00);      // frame id, 0 for no reply
    UART1_Write(0x00);      // 64-bit destination address:
    UART1_Write(0x00);      //   0x000000000000FFFF = broadcast
    UART1_Write(0x00);
    UART1_Write(0x00);
    UART1_Write(0x00);
    UART1_Write(0x00);
    UART1_Write(0xFF);
    UART1_Write(0xFF);
    UART1_Write(0xFF);      // 16-bit destination address:
    UART1_Write(0xFE);      //   0xFFFE = broadcast/unknown
    UART1_Write(0x02);      // apply changes immediately on remote
    UART1_Write('D');       // AT command "D3" (AD3 pin)
    UART1_Write('3');
    UART1_Write(value);     // command parameter
    sum = 0x17 + 0xFF + 0xFF + 0xFF + 0xFE + 0x02 + 'D' + '3' + value;
    UART1_Write(0xFF - (0xFF & sum)); // checksum over the frame-data bytes
    Delay(25);
}

void main(void)
{
    Init_UART1();
    LED1_ON();
    setRemoteState(0x5); // AD3 configured as digital output high
    Delay(25);
    LED1_OFF();
    setRemoteState(0x4); // AD3 configured as digital output low
    Delay(25);
}
I am not able to get any communication or data on my Rx side; the D3 pin output voltage stays low.
Please guide me on this point...
This program works fine on an Arduino using the Serial.write function.
Regards,
Vijay
Are you using the correct baud rate? Are you sure you've connected TX/RX correctly and haven't crossed them? If you have hardware flow control enabled, is the RTS signal into the XBee asserted? Is the XBee module powering up and receiving enough current?
If you monitor the XBee transmit signal on another device (a computer via FTDI's TTL-to-USB cable), are you seeing bytes at startup (I believe it sends a Modem Status frame during its startup)? If you monitor the LPC2148 transmit signal, are you seeing the byte stream you think you're sending (confirming that you're driving UART1 correctly)?
Can you tell if the XBee module is receiving your requests, perhaps by toggling ATD0 between high and low output and checking with an LED or scope? Do you have any hardware you can use to monitor the serial stream between the two devices to see if it's sending the bytes you think you're sending? Are you sure it's calculating the correct checksum (dump the bytes somehow and try running them through X-CTU to see if they work)?
If you're going to be doing a lot of communications between the LPC2148 and the XBee module, you might want to try porting this Open Source ANSI C XBee Host Library to the platform. It includes multiple layers of XBee frame processing that should reduce the amount of software you need to write.