How to interpret "0000290000000000" sent by my keyboard as USB payload? - usb

I'm trying to learn about the USB protocol by analyzing Wireshark captures of my keyboard's traffic. For example, consider this frame:
Frame 29335: 72 bytes on wire (576 bits), 72 bytes captured (576 bits) on interface usbmon1, id 0
Interface id: 0 (usbmon1)
Encapsulation type: USB packets with Linux header and padding (115)
Arrival Time: Jan 4, 2022 17:44:50.003878000 CET
[Time shift for this packet: 0.000000000 seconds]
Epoch Time: 1641314690.003878000 seconds
[Time delta from previous captured frame: 0.205081000 seconds]
[Time delta from previous displayed frame: 3.343982000 seconds]
[Time since reference or first frame: 342.817999000 seconds]
Frame Number: 29335
Frame Length: 72 bytes (576 bits)
Capture Length: 72 bytes (576 bits)
[Frame is marked: False]
[Frame is ignored: False]
[Protocols in frame: usb]
USB URB
[Source: 1.5.1]
[Destination: host]
URB id: 0xffff8cbe330fba80
URB type: URB_COMPLETE ('C')
URB transfer type: URB_INTERRUPT (0x01)
Endpoint: 0x81, Direction: IN
Device: 5
URB bus id: 1
Device setup request: not relevant ('-')
Data: present (0)
URB sec: 1641314690
URB usec: 3878
URB status: Success (0)
URB length [bytes]: 8
Data length [bytes]: 8
[Request in: 29167]
[Time from request: 3.343946000 seconds]
[bInterfaceClass: Unknown (0xffff)]
Unused Setup Header
Interval: 16
Start frame: 0
Copy of Transfer Flags: 0x00000204, No transfer DMA map, Dir IN
Number of ISO descriptors: 0
Leftover Capture Data: 0000290000000000
Here's related lsusb output:
> sudo lsusb -s 001:005 -vvvvv
Bus 001 Device 005: ID 046d:c312 Logitech, Inc. DeLuxe 250 Keyboard
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 1.10
bDeviceClass 0
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0x046d Logitech, Inc.
idProduct 0xc312 DeLuxe 250 Keyboard
bcdDevice 1.01
iManufacturer 1 LITEON Technology
iProduct 2 USB Multimedia Keyboard
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 0x0022
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xa0
(Bus Powered)
Remote Wakeup
MaxPower 70mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 1
bInterfaceClass 3 Human Interface Device
bInterfaceSubClass 1 Boot Interface Subclass
bInterfaceProtocol 1 Keyboard
iInterface 0
HID Device Descriptor:
bLength 9
bDescriptorType 33
bcdHID 1.10
bCountryCode 0 Not supported
bNumDescriptors 1
bDescriptorType 34 Report
wDescriptorLength 65
Report Descriptors:
** UNAVAILABLE **
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x0008 1x 8 bytes
bInterval 24
can't get debug descriptor: Resource temporarily unavailable
Device Status: 0x0000
(Bus Powered)
The "29" differs based on the key I press. How can I map it back to a specific key? Is there some more context needed in order to interpret this frame?

The USB payload contains keycodes (for background on keyboard codes, see: https://www.win.tue.nl/~aeb/linux/kbd/scancodes-1.html). The interpretation requires a bit more context.
The eXtensible Host Controller Interface (xHCI) defines a register-level interface for interacting with USB on modern systems (https://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/extensible-host-controler-interface-usb-xhci.pdf). From what I have read, most computers (including ARM computers) use xHCI for their USB host controller. In general, it takes the form of a PCI Express device called an xHC (in Intel's terminology).
PCI devices are controlled through MMIO and DMA. To interact with a PCI device, you write to RAM at conventional positions specified by the ACPI tables (the MCFG table, more specifically). Writes to these conventional positions are routed to the registers of the PCI device instead, which allows software to control the device. PCI devices also read/write RAM directly.
The xHC has interrupt transfer rings. The software (the OS) puts TDs (Transfer Descriptors) at an appropriate depth on the interrupt transfer ring of the keyboard's interrupt endpoint. As stated in the xHCI specification (linked above):
If multiple Interrupt TDs are posted to an Interrupt endpoint Transfer Ring, the xHC should consume no more than one TD per ESIT.
Basically, you program the ESIT (Endpoint Service Interval Time) with a value. The ESIT tells the xHC not to consume a TD too often, which triggers transfers at specific intervals. The right interval is specified in the endpoint descriptor returned by the USB device.
Every time a transfer occurs, the software (OS) reads the values and determines whether the set of pressed keys changed. Keyboards are Human Interface Devices (HID), which are specified as part of the USB standard (https://wiki.osdev.org/USB_Human_Interface_Devices).
When asked to transfer data, a USB keyboard (HID) returns 8 bytes (as seen in your example). As stated on osdev.org:
This report must be requested by the software using interrupt transfers once every interval milliseconds, and the interval is defined in the interrupt IN descriptor of the USB keyboard. The USB keyboard report may be up to 8 bytes in size, although not all these bytes are used and it's OK to implement a proper implementation using only the first three or four bytes (and this is how I do it.) Just for completion's sake, however, I will describe the full report mechanism of the keyboard. Notice that the report structure defined below applies to the boot protocol only.
Reading the HID article on osdev.org will take you a long way. The USB payload can thus be interpreted as the data sent by the keyboard after the interval (in milliseconds) specified in its endpoint descriptor has passed. Each new report reflects the key(s) pressed at the moment the xHC queried the keyboard, once every ESIT or so. In your capture, 00 00 29 00 00 00 00 00 is a boot-protocol keyboard report: byte 0 holds the modifier bitmap (none pressed), byte 1 is reserved, and bytes 2-7 hold up to six keycodes. The single keycode 0x29 maps to Escape in the HID Usage Tables (Keyboard/Keypad page).
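As an illustration, here is a minimal sketch in C that decodes such a boot-protocol report. The tiny lookup table is mine and covers only a handful of usages; the complete mapping lives in the HID Usage Tables specification.

#include <stdio.h>
#include <stdint.h>

/* A few entries from the HID Usage Tables (Keyboard/Keypad page 0x07);
 * the real table has far more entries. */
static const char *usage_name(uint8_t code) {
    switch (code) {
    case 0x04: return "A";
    case 0x28: return "Enter";
    case 0x29: return "Escape";
    case 0x2C: return "Space";
    default:   return "(not in this demo table)";
    }
}

/* Boot-protocol keyboard report: byte 0 = modifier bitmap,
 * byte 1 = reserved, bytes 2..7 = up to six pressed keycodes. */
static void decode_report(const uint8_t r[8]) {
    printf("modifiers: 0x%02x\n", r[0]);
    for (int i = 2; i < 8; i++)
        if (r[i])
            printf("key: 0x%02x (%s)\n", r[i], usage_name(r[i]));
}

int main(void) {
    /* "Leftover Capture Data: 0000290000000000" from the question */
    const uint8_t report[8] = {0x00, 0x00, 0x29, 0x00, 0x00, 0x00, 0x00, 0x00};
    decode_report(report);  /* prints: key: 0x29 (Escape) */
    return 0;
}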

Related

XBEE 3 Zigbee 3.0 sometimes sends wrong message content

I am currently working on an SPI connection between a microcontroller (mbed LPC1768) and an XBee 3 Zigbee 3.0. My goal is to send floats wirelessly between my mbed and my computer. I've got everything set up and received the data with another XBee device, which is connected to my computer. I am sending with unicast. It worked quite well, and I wanted to test the result by sending a sine wave and plotting it in Simulink.
Here is the sine curve:
As you can see it kind of works, but there are some large errors in the signal. These wrong values always appear at the same positions (when I send the same values). Later I read the message with XCTU and noticed that the received message was somehow "manipulated", but only for specific values.
Here is the message I sent with my mbed:
uint8_t Message[23] {0x7E, 0x0, 0x13, 0x10, 0x1, 0x0, 0x13, 0xA2, 0x0, 0x41, 0xC1, 0x80, 0xD5, 0xFF, 0xFE, 0x0, 0x0, 0xBB, 0xBE, 0xDD, 0x7D, 0x3F};
Notice that 0xBB is the "header" byte for the 4 float bytes. (The checksum is calculated later in the program.)
Here the received data frame field suddenly contains 5 bytes! The last value is the checksum.
I know that the receive packet format differs from the packet I am sending, but it shouldn't change the content of my message for specific values. Other values are received correctly with only 4 data bytes. What is the problem here? Sorry for my bad English.
I expected to send a sine wave without any errors, but some specific values are being changed somehow.
Try switching to API mode 1 (ATAP=1). With ATAP set to 2, the XBee "escapes" certain values with 0x7D.
Digi's documentation on escaped API operation describes the scheme: the bytes 0x7E, 0x7D, 0x11 and 0x13 are transmitted as 0x7D followed by the original byte XORed with 0x20.
In your example, the escaped values are:
0x7D 0x31 -> 0x11
0x7D 0x33 -> 0x13
0x7D 0x5E -> 0x7E
0x7D 0x5D -> 0x7D
So the packet came from 00 13 A2 00 41 C1 7E 38, and had the correct payload of BB BE DD 7D 3F.
I have yet to run into an application that needs API mode 2. Switch to ATAP=1, make sure your software knows it's running in "unescaped" mode, and it should resolve your issue.
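If you ever do need to stay in API mode 2, de-escaping on the receive side is straightforward. Here is a minimal sketch (the helper is hypothetical, not part of any XBee library):

#include <stddef.h>
#include <stdint.h>

/* De-escape an XBee API mode 2 byte stream: 0x7D marks an escape,
 * and the byte that follows is recovered by XORing it with 0x20.
 * Returns the number of bytes written to out. */
static size_t xbee_unescape(const uint8_t *in, size_t len, uint8_t *out) {
    size_t n = 0;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == 0x7D && i + 1 < len)
            out[n++] = in[++i] ^ 0x20;
        else
            out[n++] = in[i];
    }
    return n;
}

Applied to the received payload above, BB BE DD 7D 5D 3F de-escapes back to the expected BB BE DD 7D 3F.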

wLength is zero on get_descriptor setup packet

I'm playing around with the USB controller on the Raspberry Pi Pico. The end goal is for the Pico to be able to send keystrokes as a HID keyboard.
In any case, the RP2040 datasheet says that the setup packet is always at the start of the USB controller's DPSRAM (0x50100000). I do get an interrupt signaling that a setup packet has been received, and when I read the setup packet I get the following data:
0x80 - seems to mean device to host, standard type, recipient is device
0x06 - seems to be GET_DESCRIPTOR
0x00 - low byte of wValue, means index zero
0x01 - high byte of wValue, means desc type device
0x00 - low byte of wIndex
0x00 - high byte of wIndex, means index zero
0x00 - low byte of wLength
0x00 - high byte of wLength
The first six bytes are completely what I would expect. But what does a wLength (last two bytes) of 0 mean? The device descriptor seems to have a length of 18 bytes, so I would have expected it to be 0x12, 0x00.
Is a wLength of zero when requesting a descriptor valid or is it more likely that I'm doing something wrong? If it is valid, how would I respond? With a zero length packet?
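For reference, here is a sketch of the standard 8-byte setup packet layout (USB 2.0 spec, section 9.3) and how it could be read from the DPSRAM address mentioned above; the struct and function are mine, not from the SDK:

#include <stdint.h>

/* Standard USB setup packet; multi-byte fields are little-endian. */
struct usb_setup_packet {
    uint8_t  bmRequestType;  /* 0x80 = device-to-host, standard, device */
    uint8_t  bRequest;       /* 0x06 = GET_DESCRIPTOR */
    uint16_t wValue;         /* high byte: descriptor type, low byte: index */
    uint16_t wIndex;
    uint16_t wLength;        /* max number of bytes the host will accept back */
};

static struct usb_setup_packet read_setup(void) {
    /* Setup packets land at the start of the RP2040's USB DPSRAM. */
    const volatile uint8_t *d = (const volatile uint8_t *)0x50100000;
    struct usb_setup_packet p;
    p.bmRequestType = d[0];
    p.bRequest      = d[1];
    p.wValue        = (uint16_t)(d[2] | (d[3] << 8));
    p.wIndex        = (uint16_t)(d[4] | (d[5] << 8));
    p.wLength       = (uint16_t)(d[6] | (d[7] << 8));
    return p;
}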

STM32 USB Custom HID only 1 byte per transaction

I know that the maximum speed of a full-speed USB HID device is 64 kB/s, but on the oscilloscope I see transactions every 1 ms that contain only ONE byte. My HID report descriptor is listed below. What must I change to achieve 64 kB/s? Currently my bInterval = 0x01 (1 ms polling for the interrupt endpoint), but the actual speed is 65 bytes/s, because a report-ID byte is added to my 64-byte data. I don't think USB should split a single 65-byte (64+1) report into 65 single-byte packets. For the experiment I use report ID 1 (from STM32 to PC). On the PC side I use hidapi.dll to interact.
__ALIGN_BEGIN static uint8_t CUSTOM_HID_ReportDesc_FS[USBD_CUSTOM_HID_REPORT_DESC_SIZE] __ALIGN_END =
{
/* USER CODE BEGIN 0 */
USAGE_PAGE(USAGE_PAGE_UNDEFINED)
USAGE(USAGE_UNDEFINED)
COLLECTION(APPLICATION)
REPORT_ID(1)
USAGE(1)
LOGICAL_MIN(0)
LOGICAL_MAX(255)
REPORT_SIZE(8)
REPORT_COUNT(64)
INPUT(DATA | VARIABLE | ABSOLUTE)
REPORT_ID(2)
USAGE(2)
LOGICAL_MIN(0)
LOGICAL_MAX(255)
REPORT_SIZE(8)
REPORT_COUNT(64)
OUTPUT(DATA | VARIABLE | ABSOLUTE)
REPORT_ID(3)
USAGE(3)
LOGICAL_MIN(0)
LOGICAL_MAX(255)
REPORT_SIZE(8)
REPORT_COUNT(64)
OUTPUT(DATA | VARIABLE | ABSOLUTE)
REPORT_ID(4)
USAGE(4)
LOGICAL_MIN(0)
LOGICAL_MAX(255)
REPORT_SIZE(8)
REPORT_COUNT(64)
OUTPUT(DATA | VARIABLE | ABSOLUTE)
/* USER CODE END 0 */
0xC0 /* END_COLLECTION */
};
HID uses interrupt IN/OUT endpoints to convey reports. In USB, interrupt transfers are polled by the host, at full speed at most once per 1 ms frame. Each time the endpoint is polled, it may yield up to a 64-byte report (for low/full speed). That's probably where the 64 kB/s figure comes from; the actual limit is 1000 reports per second. Also note these limits are different for high-speed and super-speed devices.
The report descriptor is one thing; what you actually send as interrupt IN data is another. They should match, but nothing enforces this. You should look into the code that builds the interrupt IN transfer payload.
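For instance, with ST's usbd_customhid middleware (which the macros above suggest you are using), sending a full report for ID 1 would look roughly like this sketch; the buffer contents and the helper name are illustrative, only USBD_CUSTOM_HID_SendReport is from ST's stack:

#include <string.h>
#include "usbd_customhid.h"

extern USBD_HandleTypeDef hUsbDeviceFS;

/* Send one full report: report ID 1 followed by 64 data bytes.
 * Passing a length of 1 here would put exactly 1 byte on the wire. */
static void send_full_report(const uint8_t *data64)
{
    uint8_t report[65];
    report[0] = 1;                   /* matches REPORT_ID(1) above */
    memcpy(&report[1], data64, 64);
    USBD_CUSTOM_HID_SendReport(&hUsbDeviceFS, report, sizeof(report));
}

Also check the endpoint's wMaxPacketSize (CUSTOM_HID_EPIN_SIZE in ST's stack); if it is left at a small default, transfers get chopped into correspondingly small packets.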
Side note: if all you are interested in is sending arbitrary chunks of data, HID is probably not the relevant profile. Bulk endpoints look more appropriate (and you won't be limited by the interrupt endpoint polling rate).

How are USB peripherals' bIntervals enforced?

I have a FullSpeed USB Device that sends a Report Descriptor, whose relevant Endpoint Descriptor declares a bInterval of 8, meaning 8ms.
The following report extract is obtained from a USB Descriptor Dumper when the device's driver is HidUsb:
Interface Descriptor: // +several attributes
------------------------------
0x04 bDescriptorType
0x03 bInterfaceClass (Human Interface Device Class)
0x00 bInterfaceSubClass
0x00 bInterfaceProtocol
0x00 iInterface
HID Descriptor: // +bLength, bCountryCode
------------------------------
0x21 bDescriptorType
0x0110 bcdHID
0x01 bNumDescriptors
0x22 bDescriptorType (Report descriptor)
0x00D6 bDescriptorLength
Endpoint Descriptor: // + bLength, bEndpointAddress, wMaxPacketSize
------------------------------
0x05 bDescriptorType
0x03 bmAttributes (Transfer: Interrupt / Synch: None / Usage: Data)
0x08 bInterval (8 frames)
After switching the driver to WinUSB so I can use it, I repeatedly issue IN interrupt transfers using libusb, and time both the real time spent between two libusb calls and the time spent inside each call, using this code:
// Excerpt; assumes libusb is initialized, dev_handle is an open
// libusb_device_handle*, and 0x81 stands in for the interrupt IN endpoint.
unsigned char buf[8];
int transferred = 0;
auto end = std::chrono::high_resolution_clock::now();
for (int i = 0; i < n; i++) {
    auto start = std::chrono::high_resolution_clock::now();
    double forTime = (double)((start - end).count()) / 1000000;       // ms between calls
    libusb_interrupt_transfer(dev_handle, 0x81, buf, sizeof(buf), &transferred, 0);
    end = std::chrono::high_resolution_clock::now();
    std::cout << "for " << forTime << std::endl;
    double transferTime = (double)((end - start).count()) / 1000000;  // ms inside the call
    std::cout << "transfer " << transferTime << std::endl;
    std::cout << "sum " << transferTime + forTime << std::endl << std::endl;
}
Here's a sample of the obtained values:
for 2.60266
transfer 5.41087
sum 8.04307 //~8
for 3.04287
transfer 5.41087
sum 8.01353 //~8
for 6.42174
transfer 9.65907
sum 16.0808 //~16
for 2.27422
transfer 5.13271
sum 7.87691 //~8
for 3.29928
transfer 4.68676
sum 7.98604 //~8
The sum values consistently stay very close to 8 ms, except when the time elapsed before initiating a new interrupt transfer call is too long (the threshold appears to be between 6 and 6.5 ms in my particular case), in which case it's equal to 16. I once saw a "for" measure equal to 18 ms, with the sum precisely equal to 24 ms. Using a URB tracker (Microsoft Message Analyzer in my case), the time differences between Complete URB_FUNCTION_BULK_OR_INTERRUPT_TRANSFER messages are also multiples of 8 ms - usually 8 ms. In short, they match the "sum" measures.
So it is clear that the time elapsed between two returns of libusb_interrupt_transfer is a multiple of 8 ms, which I assume is related to the bInterval value of 8 (full speed -> 1 ms frames -> 8 ms).
But now that I have, I hope, made clear what I'm talking about: where is that enforced? Despite research, I cannot find a clear explanation of how the bInterval value affects things.
Apparently, this is enforced by the driver.
Therefore, is it that:
The driver forbids the request from firing until 8 ms have passed? This sounds like the most reasonable option to me, but in my URB trace, Dispatch message events were raised several milliseconds before the request came back. That would mean the real time the request left the host is hidden from me/the message analyzer.
Or the driver hides the response from me and the analyzer until 8 ms have passed since the last response?
If it is indeed handled by the driver, something shown in the log of exchanged messages is misleading. A response should come immediately after a request, yet this is not the case. So either the request is sent after the displayed time, or the response comes earlier than what's displayed.
How does the enforcement of bInterval work?
My ultimate goal is to disregard that bInterval value and poll the device more frequently than every 8 ms (I have good reason to believe it can be polled as often as every 2 ms, and a period of 8 ms is unacceptable for its usage). But first I would like to know how the current limitation works, and whether what I'm seeking is possible at all, so I can understand what to study next (e.g. writing a custom WinUSB driver).
> I have a FullSpeed USB Device
Careful: did you verify this? 8 ms is the limit for low-speed USB devices, which many common mice and keyboards may still be using.
The 8 ms scheduling is done inside the USB host controller driver (ehci/xhci), AFAIK. You could try to game this by releasing and reclaiming the interface - not tested, though. (Edit: won't work, see comment.)
A USB device cannot talk on its own, so it has to be the request that is delayed. Note that a device can also NAK an interrupt IN request when there is no new data available; this simply adds another bInterval milliseconds to the timing.
> writing a custom WinUSB driver
Not recommended: replacing a Windows-supplied driver is quite a hassle. Our libusb-win32 replacement for a USB CDC device breaks on every major Windows 10 upgrade - the device uses a COM port instead of libusb once the upgrade is finished.

Libusb error not supported

I'm trying to send isochronous transfers to the microcontroller on an Arduino Due using the libusb 1.0 library and the libusbK driver installed with Zadig 2.2.
Bulk transfers work perfectly, but when I try to initiate an isochronous transfer I get the error code "error not supported". The way I understand it, libusb should support isochronous transfers on Windows now.
I'm using Visual Studio 2015.
Any ideas?
It could be one of two problems on the Arduino side. You should check:
The endpoint configuration.
The USB descriptor configuration (the endpoint should be configured with the Isochronous transfer type).
For example:
===>Endpoint Descriptor<=== // <-------- This is the one I'm using.
bLength: 0x07
bDescriptorType: 0x05
bEndpointAddress: 0x81 -> Direction: IN - EndpointID: 1
bmAttributes: 0x01 -> Isochronous Transfer Type, Synchronization Type = No Synchronization, Usage Type = Data Endpoint
wMaxPacketSize: 0x0040 = 1 transactions per microframe, 0x40 max bytes
bInterval: 0x01
===>Endpoint Descriptor<===
bLength: 0x07
bDescriptorType: 0x05
bEndpointAddress: 0x02 -> Direction: OUT - EndpointID: 2
bmAttributes: 0x01 -> Isochronous Transfer Type, Synchronization Type = No Synchronization, Usage Type = Data Endpoint
wMaxPacketSize: 0x0040 = 1 transactions per microframe, 0x40 max bytes
bInterval: 0x01
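Also check the host side: libusb exposes isochronous transfers only through its asynchronous API; the synchronous convenience calls (libusb_bulk_transfer, libusb_interrupt_transfer) have no isochronous counterpart. A rough sketch of submitting an isochronous IN transfer, with the endpoint address and packet size taken from the descriptor above and an arbitrary packet count:

#include <stdint.h>
#include <libusb-1.0/libusb.h>

#define EP_ISO_IN   0x81  /* from the endpoint descriptor above */
#define NUM_PACKETS 8     /* arbitrary for this sketch */
#define PACKET_SIZE 64    /* wMaxPacketSize = 0x0040 */

static void iso_callback(struct libusb_transfer *xfer) {
    /* Check xfer->iso_packet_desc[i].status / .actual_length per packet. */
}

static int submit_iso_in(libusb_device_handle *handle, uint8_t *buf) {
    struct libusb_transfer *xfer = libusb_alloc_transfer(NUM_PACKETS);
    if (xfer == NULL)
        return LIBUSB_ERROR_NO_MEM;
    libusb_fill_iso_transfer(xfer, handle, EP_ISO_IN, buf,
                             NUM_PACKETS * PACKET_SIZE, NUM_PACKETS,
                             iso_callback, NULL, 1000 /* ms timeout */);
    libusb_set_iso_packet_lengths(xfer, PACKET_SIZE);
    return libusb_submit_transfer(xfer);  /* completes via libusb_handle_events() */
}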