I'd like to add a string to my HID project (to store information about the firmware revision). I've read about string descriptors (https://www.beyondlogic.org/usbnutshell/usb5.shtml) and my understanding is that a configuration descriptor or report descriptor lists an index that points to a string. The string itself is stored elsewhere, and the host can then request it by index via a "Get Descriptor" (string) request.
I'm pretty mystified by the implementation though. I've been trawling through the STM32F0x2 example libraries (available for download from ST, or duplicated here: https://github.com/caozoux/arm/blob/master/stm32/STM32F0x2_USB-FS-Device_Lib%20V1.0.0/Libraries/STM32_USB_Device_Library/Class/dfu/src/usbd_dfu_core.c) and found this:
/* USB DFU device Configuration Descriptor */
const uint8_t usbd_dfu_CfgDesc[USB_DFU_CONFIG_DESC_SIZ] =
{
  0x09,                               /* bLength: Configuration Descriptor size */
  USB_CONFIGURATION_DESCRIPTOR_TYPE,  /* bDescriptorType: Configuration */
  USB_DFU_CONFIG_DESC_SIZ,
                                      /* wTotalLength: Bytes returned */
  0x00,
  0x01,                               /* bNumInterfaces: 1 interface */
  0x01,                               /* bConfigurationValue: Configuration value */
  0x02,                               /* iConfiguration: Index of string descriptor describing the configuration */
  0xC0,                               /* bmAttributes: bus powered and Supports Remote Wakeup */
  0x32,                               /* MaxPower 100 mA: this current is used for detecting Vbus */
  /* 09 */
which gives the iConfiguration index as 0x02. I then searched all the files for another reference to 0x02 or to a configuration string and found nothing. I expected to find some sort of array of strings that could be indexed into by 0x02, or at least a configuration string. Possibly the example files are incomplete, but it feels more likely I'm just not searching for the right things.
My questions are: first, is my basic assumption of how string descriptors work correct? And if so, how and where are the strings generally stored? Any links to example implementations would be super helpful as well!
After a brief look at your code, it looks like the code returning string descriptors to the USB host in response to a "Get Descriptor" request is here:
https://github.com/caozoux/arm/blob/e19fc5a/stm32/STM32F0x2_USB-FS-Device_Lib%20V1.0.0/Libraries/STM32_USB_Device_Library/Core/src/usbd_req.c#L313
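To answer the second part of your question: the strings themselves are normally just constant arrays in the application's descriptor file, and the device converts the one matching the requested index into UTF-16LE when the host asks for it (index 0 is reserved for the language ID list). Here is a minimal hand-written sketch of what that handling boils down to; the names firmware_rev_str and get_string_descriptor are made up, not taken from the ST library:

#include <stdint.h>

static const char *firmware_rev_str = "FW 1.2.3";   /* plain ASCII, stored in flash */

/* Build a string descriptor for the given index into buf;
   returns the descriptor length in bytes, or 0 for an unknown index (-> STALL). */
static uint16_t get_string_descriptor(uint8_t index, uint8_t *buf)
{
    const char *s;

    switch (index) {
    case 0x02: s = firmware_rev_str; break;   /* the iConfiguration index above */
    default:   return 0;
    }

    uint16_t len = 2;                         /* bLength and bDescriptorType come first */
    buf[1] = 0x03;                            /* bDescriptorType = STRING */
    while (*s) {
        buf[len++] = (uint8_t)*s++;           /* UTF-16LE: ASCII code unit... */
        buf[len++] = 0x00;                    /* ...followed by a zero byte */
    }
    buf[0] = (uint8_t)len;                    /* bLength = total size in bytes */
    return len;
}

In the ST library, the Get Descriptor code linked above ends up calling string callbacks supplied by the application (typically in the application's usbd_desc.c), which is why nothing near the configuration descriptor itself references index 0x02.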
I know that the maximum speed of a USB HID device is 64 kbps, but on the oscilloscope I see transactions every 1 ms which contain only ONE byte. My HID report descriptor is listed below. What must I change to achieve 64 kbps? Currently my bInterval = 0x01 (1 ms polling for the interrupt endpoint), but the actual speed is 65 bytes/s, because a report-ID byte is added to my 64-byte data. I think USB should not divide a single 64+1 byte packet into 65 single-byte packets. For the experiment I use reportID = 1 (from STM32 to PC). On the PC side I use hidapi.dll to interact.
__ALIGN_BEGIN static uint8_t CUSTOM_HID_ReportDesc_FS[USBD_CUSTOM_HID_REPORT_DESC_SIZE] __ALIGN_END =
{
  /* USER CODE BEGIN 0 */
  USAGE_PAGE(USAGE_PAGE_UNDEFINED)
  USAGE(USAGE_UNDEFINED)
  COLLECTION(APPLICATION)
    REPORT_ID(1)
    USAGE(1)
    LOGICAL_MIN(0)
    LOGICAL_MAX(255)
    REPORT_SIZE(8)
    REPORT_COUNT(64)
    INPUT(DATA | VARIABLE | ABSOLUTE)

    REPORT_ID(2)
    USAGE(2)
    LOGICAL_MIN(0)
    LOGICAL_MAX(255)
    REPORT_SIZE(8)
    REPORT_COUNT(64)
    OUTPUT(DATA | VARIABLE | ABSOLUTE)

    REPORT_ID(3)
    USAGE(3)
    LOGICAL_MIN(0)
    LOGICAL_MAX(255)
    REPORT_SIZE(8)
    REPORT_COUNT(64)
    OUTPUT(DATA | VARIABLE | ABSOLUTE)

    REPORT_ID(4)
    USAGE(4)
    LOGICAL_MIN(0)
    LOGICAL_MAX(255)
    REPORT_SIZE(8)
    REPORT_COUNT(64)
    OUTPUT(DATA | VARIABLE | ABSOLUTE)
  /* USER CODE END 0 */
  0xC0 /* END_COLLECTION */
};
HID uses interrupt IN/OUT endpoints to convey reports. In USB Full Speed, interrupt endpoints are polled by the host at most once per 1 ms frame. Each time the endpoint is polled it can yield one report of up to 64 bytes (the Full Speed maximum packet size). That's probably where your 64 kB/s figure comes from; the actual limit is 1000 reports per second. Also note these limits are different for High Speed and SuperSpeed devices.
The report descriptor is one thing; what you actually send on the interrupt IN endpoint is another. They should match, but nothing enforces it. You should probably look into the code that builds the interrupt IN transfer payload.
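For illustration (this sketch is mine, not part of the original answer): assuming the firmware is built on the STM32Cube "Custom HID" class, which the __ALIGN_BEGIN / USBD_CUSTOM_HID_REPORT_DESC_SIZE names suggest, the whole 65-byte report should be handed to the stack in a single USBD_CUSTOM_HID_SendReport() call, roughly like this (hUsbDeviceFS is the usual CubeMX handle name and is an assumption here):

#include <string.h>
#include "usbd_customhid.h"

extern USBD_HandleTypeDef hUsbDeviceFS;    /* CubeMX-generated device handle (assumed) */

static uint8_t report[65];                 /* 1 report-ID byte + 64 data bytes */

void send_full_report(const uint8_t *data) /* data = 64 payload bytes */
{
    report[0] = 1;                         /* report ID 1: the INPUT report above */
    memcpy(&report[1], data, 64);

    /* queue the complete report on the interrupt IN endpoint in one call */
    USBD_CUSTOM_HID_SendReport(&hUsbDeviceFS, report, sizeof(report));
}

Also worth checking: the IN endpoint size used by the class (CUSTOM_HID_EPIN_SIZE in CubeMX projects) - if it is left at a tiny default, the stack only moves that many bytes per poll, which would match the single-byte transactions you see on the scope.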
Side note: if all you're really interested in is sending arbitrary chunks of data, HID is probably not the relevant class. Bulk endpoints look more appropriate (and you won't be limited by the interrupt endpoint polling rate).
I have a FullSpeed USB Device that sends a Report Descriptor, whose relevant Endpoint Descriptor declares a bInterval of 8, meaning 8ms.
The following report extract is obtained from a USB Descriptor Dumper when the device's driver is HidUsb:
Interface Descriptor: // +several attributes
------------------------------
0x04 bDescriptorType
0x03 bInterfaceClass (Human Interface Device Class)
0x00 bInterfaceSubClass
0x00 bInterfaceProtocol
0x00 iInterface
HID Descriptor: // +bLength, bCountryCode
------------------------------
0x21 bDescriptorType
0x0110 bcdHID
0x01 bNumDescriptors
0x22 bDescriptorType (Report descriptor)
0x00D6 bDescriptorLength
Endpoint Descriptor: // + bLength, bEndpointAddress, wMaxPacketSize
------------------------------
0x05 bDescriptorType
0x03 bmAttributes (Transfer: Interrupt / Synch: None / Usage: Data)
0x08 bInterval (8 frames)
After switching the driver to WinUSB so I can use it, I repeatedly issue IN interrupt transfers using libusb and time both the interval between two libusb calls and the time spent inside the call, using this code:
auto end = std::chrono::high_resolution_clock::now();
for (int i = 0; i < n; i++) {
    auto start = std::chrono::high_resolution_clock::now();
    double forTime = (double)((start - end).count()) / 1000000; // clock ticks (ns here) -> ms

    // <libusb_interrupt_transfer on IN interrupt endpoint>

    end = std::chrono::high_resolution_clock::now();
    std::cout << "for " << forTime << std::endl;
    double transferTime = (double)((end - start).count()) / 1000000; // clock ticks (ns here) -> ms
    std::cout << "transfer " << transferTime << std::endl;
    std::cout << "sum " << transferTime + forTime << std::endl << std::endl;
}
Here's a sample of obtained values :
for 2.60266
transfer 5.41087
sum 8.04307 //~8
for 3.04287
transfer 5.41087
sum 8.01353 //~8
for 6.42174
transfer 9.65907
sum 16.0808 //~16
for 2.27422
transfer 5.13271
sum 7.87691 //~8
for 3.29928
transfer 4.68676
sum 7.98604 //~8
The sum values consistently stay very close to 8 ms, except when the time elapsed before initiating a new interrupt transfer call is too long (the threshold appears to be between 6 and 6.5 ms in my case), in which case the sum is 16 ms. I once saw a "for" measure of 18 ms, with the sum precisely equal to 24 ms. Using a URB tracker (Microsoft Message Analyzer in my case), the time differences between Complete URB_FUNCTION_BULK_OR_INTERRUPT_TRANSFER messages are also multiples of 8 ms, usually 8 ms. In short, they match the "sum" measures.
So it is clear that the time elapsed between two returns of libusb interrupt transfer calls is a multiple of 8 ms, which I assume is related to the bInterval value of 8 (Full Speed -> x1 ms -> 8 ms).
But now that I have, I hope, made clear what I'm talking about: where is that enforced? Despite research, I cannot find a clear explanation of how the bInterval value affects things.
Apparently, this is enforced by the driver.
Therefore, is it:
1. The driver forbids the request from firing until 8 ms have passed? This sounds like the most reasonable option to me, but in my URB trace, Dispatch message events were raised several milliseconds before the request came back. This would mean the real time at which the request left the host is hidden from me/the message analyzer.
2. The driver hides the response from me and the analyzer until 8 ms have passed since the last response?
If it is indeed handled by the driver, then the log of exchanged messages is lying to me somewhere: a response should come immediately after a request, yet this is not the case. So either the request is sent later than the displayed time, or the response comes back earlier than what's displayed.
How does the enforcement of bInterval actually work?
My ultimate goal is to disregard that bInterval value and poll the device more frequently than every 8 ms (I have good reason to believe it can be polled as often as every 2 ms, and a period of 8 ms is unacceptable for its usage). But first I would like to know how the current limitation works, and whether what I'm seeking is possible at all, so I can understand what to study next (e.g. writing a custom WinUSB driver).
I have a FullSpeed USB Device
Careful: did you verify this? 8 ms is the limit for Low Speed USB devices, which many common mice and keyboards may still be using.
The 8 ms scheduling is done inside the USB host controller driver (ehci/xhci) AFAIK. You could try to get around this by releasing and reclaiming the interface - not tested, though. (Edit: won't work, see comment.)
A USB device cannot talk on its own, so it has to be the request that is delayed. Note that a device can also NAK an interrupt IN request when it has no new data available; this simply adds another bInterval milliseconds to the timing.
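As an aside (my addition, not part of the original answer): you cannot poll faster than bInterval allows, but you can avoid losing whole 8 ms slots (your 16 ms cases) by using libusb's asynchronous API and keeping an interrupt transfer permanently queued instead of issuing synchronous calls back to back. A rough sketch, where the endpoint address 0x81 and the 64-byte length are assumptions:

#include <stdio.h>
#include <libusb-1.0/libusb.h>

#define EP_IN 0x81                        /* assumed interrupt IN endpoint address */

static unsigned char buf[64];

/* Completion callback: handle the data, then resubmit immediately so a
   transfer is already pending the next time the host controller polls. */
static void LIBUSB_CALL on_complete(struct libusb_transfer *xfer)
{
    if (xfer->status == LIBUSB_TRANSFER_COMPLETED)
        printf("got %d bytes\n", xfer->actual_length);
    libusb_submit_transfer(xfer);         /* re-queue right away */
}

int poll_forever(libusb_device_handle *dev)
{
    struct libusb_transfer *xfer = libusb_alloc_transfer(0);
    libusb_fill_interrupt_transfer(xfer, dev, EP_IN, buf, sizeof(buf),
                                   on_complete, NULL, 0 /* no timeout */);
    if (libusb_submit_transfer(xfer) != 0)
        return -1;

    for (;;)                              /* event loop drives the callbacks */
        libusb_handle_events(NULL);       /* NULL = default libusb context */
}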
writing a custom WinUSB driver
Not recommended - replacing a Windows-supplied driver is quite a hassle. Our libusb-win32 replacement for a USB CDC device breaks on every major Windows 10 upgrade - the device reverts to a COM port instead of libusb once the upgrade is finished.
I'm trying to send isochronous transfers to the microcontroller on an Arduino Due using the libusb 1.0 library and the libusbK driver installed with Zadig 2.2.
Bulk transfers work perfectly, but when I try to initiate an isochronous transfer I get the error code "error not supported". The way I understand it, libusb should support isochronous transfers on Windows now.
I'm using Visual Studio 2015.
Any ideas?
There can be two problems on the Arduino side. You should check:
1. The endpoint configuration.
2. The USB descriptor configuration (the endpoint should be configured with the Isochronous transfer type).
For example:
===>Endpoint Descriptor<=== // <-------- This is the one I'm using.
bLength: 0x07
bDescriptorType: 0x05
bEndpointAddress: 0x81 -> Direction: IN - EndpointID: 1
bmAttributes: 0x01 -> Isochronous Transfer Type, Synchronization Type = No Synchronization, Usage Type = Data Endpoint
wMaxPacketSize: 0x0040 = 1 transactions per microframe, 0x40 max bytes
bInterval: 0x01
===>Endpoint Descriptor<===
bLength: 0x07
bDescriptorType: 0x05
bEndpointAddress: 0x02 -> Direction: OUT - EndpointID: 2
bmAttributes: 0x01 -> Isochronous Transfer Type, Synchronization Type = No Synchronization, Usage Type = Data Endpoint
wMaxPacketSize: 0x0040 = 1 transactions per microframe, 0x40 max bytes
bInterval: 0x01
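Assuming the descriptors are fixed, a second thing to check (my addition, not part of the answer above) is how the transfer is submitted on the host side: isochronous transfers only work through libusb's asynchronous API, there is no synchronous helper for them, and on Windows they also need a backend that supports them (libusbK, or WinUSB on Windows 8.1 and later, as far as I know). A rough sketch, reusing the IN endpoint 0x81 and 64-byte packets from the descriptor above:

#include <stdio.h>
#include <libusb-1.0/libusb.h>

#define ISO_EP_IN   0x81                  /* from the endpoint descriptor above */
#define NUM_PACKETS 8
#define PKT_SIZE    64

static unsigned char buf[NUM_PACKETS * PKT_SIZE];

static void LIBUSB_CALL iso_done(struct libusb_transfer *xfer)
{
    /* each isochronous packet carries its own status and length */
    for (int i = 0; i < xfer->num_iso_packets; i++) {
        struct libusb_iso_packet_descriptor *p = &xfer->iso_packet_desc[i];
        if (p->status == LIBUSB_TRANSFER_COMPLETED)
            printf("packet %d: %u bytes\n", i, p->actual_length);
    }
    libusb_submit_transfer(xfer);         /* keep the stream going */
}

int start_iso_stream(libusb_device_handle *dev)
{
    struct libusb_transfer *xfer = libusb_alloc_transfer(NUM_PACKETS);
    libusb_fill_iso_transfer(xfer, dev, ISO_EP_IN, buf, sizeof(buf),
                             NUM_PACKETS, iso_done, NULL, 0);
    libusb_set_iso_packet_lengths(xfer, PKT_SIZE);
    return libusb_submit_transfer(xfer);  /* then run libusb_handle_events() in a loop */
}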
Recently I've been working on some sample code for communication between a kernel driver module and user-space applications.
I have a question about the .read and .write methods in struct file_operations.
According to LDD3:
ssize_t read(struct file *filp, char __user *buff, size_t count, loff_t *offp);
ssize_t write(struct file *filp, const char __user *buff, size_t count, loff_t *offp);
For both methods, filp is the file pointer and count is the size
of the requested data transfer. The buff argument points to the user
buffer holding the data to be written or the empty buffer where the
newly read data should be placed. Finally, offp is a pointer to a
“long offset type” object that indicates the file position the user is
accessing.
I'm wondering: why do we need the loff_t *offp parameter, since the field filp->f_pos in the struct file already indicates the current read/write position?
And according to my observation, after read or write returns, the system automatically sets filp->f_pos to the value of *offp.
Thanks a lot!
These methods are also used for the pread/pwrite system calls, which use their own offset instead of the shared filp->f_pos. That's why offp is passed to the methods explicitly.
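As an illustration (not from the original answer), a minimal .read method that honours the offset it is given looks like this; the dev_buf device buffer is a made-up example:

#include <linux/fs.h>

static char dev_buf[256];                 /* hypothetical device data */

static ssize_t my_read(struct file *filp, char __user *buff,
                       size_t count, loff_t *offp)
{
    /* copies at most count bytes starting at *offp and advances *offp */
    return simple_read_from_buffer(buff, count, offp, dev_buf, sizeof(dev_buf));
}

For a plain read() the VFS passes a pointer tied to filp->f_pos and copies the updated value back afterwards (the behaviour you observed), while for pread() it passes a private offset; that is exactly why the method must use *offp rather than touching filp->f_pos directly.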
How to publish a stream using the librtmp library?
I read the librtmp man page; for publishing, RTMP_Write() is used.
I am doing it like this:
//Code
//Init RTMP code
RTMP *r;
char uri[]="rtmp://localhost:1935/live/desktop";
r= RTMP_Alloc();
RTMP_Init(r);
RTMP_SetupURL(r, (char*)uri);
RTMP_EnableWrite(r);
RTMP_Connect(r, NULL);
RTMP_ConnectStream(r,0);
Then, to respond to ping and other messages from the server, I use a thread like this:
//Thread
RTMPPacket packet = { 0 };
while (ThreadIsRunning && RTMP_IsConnected(r) && RTMP_ReadPacket(r, &packet))
{
    if (RTMPPacket_IsReady(&packet))
    {
        if (!packet.m_nBodySize)
            continue;

        RTMP_ClientPacket(r, &packet); // This takes care of handling ping/other messages
        RTMPPacket_Free(&packet);
    }
}
After this, I am stuck on how to use RTMP_Write() to publish a file to a Wowza media server.
In my own experience, streaming video data to an RTMP server is actually pretty simple on the librtmp side. The tricky part is to correctly packetize video/audio data and read it at the correct rate.
Assuming you are using FLV video files, as long as you can correctly isolate each tag in the file and send each one using one RTMP_Write call, you don't even need to handle incoming packets.
The tricky part is to understand how FLV files are made.
The official specification is available here: http://www.adobe.com/devnet/f4v.html
First, there's a 9-byte header. This header must not be sent to the server; it is only read to make sure the file really is FLV.
Then there is a stream of tags. Each tag has an 11-byte header that contains the tag type (video/audio/metadata), the body length, and the tag's timestamp, among other things.
The tag header can be described using this structure:
typedef struct __flv_tag {
uint8 type;
uint24_be body_length; /* in bytes, total tag size minus 11 */
uint24_be timestamp; /* milli-seconds */
uint8 timestamp_extended; /* timestamp extension */
uint24_be stream_id; /* reserved, must be "\0\0\0" */
/* body comes next */
} flv_tag;
The body length and timestamp are presented as 24-bit big-endian integers, with a supplementary byte to extend the timestamp to 32 bits if necessary (that's approximately around the 4-hour mark).
Once you have read the tag header, you can read the body itself as you now know its length (body_length).
After that there is a 32-bit big endian integer value that contains the complete length of the tag (11 bytes + body_length).
You must write the tag header + body + previous tag size in one RTMP_Write call (else it won't play).
Also, be careful to send packets at the nominal frame rate of the video, else playback will suffer greatly.
I have written a complete FLV file demuxer as part of my GPL project FLVmeta, which you can use as a reference.
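To make this concrete, here is a rough, untested sketch (mine, not from FLVmeta) of the read-one-tag / write-one-tag loop described above, assuming an already-connected RTMP session with write mode enabled; the function name send_flv_file is made up:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <librtmp/rtmp.h>

static int send_flv_file(RTMP *r, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    unsigned char flv_header[9 + 4];          /* FLV header + PreviousTagSize0: read, don't send */
    if (fread(flv_header, 1, sizeof(flv_header), f) != sizeof(flv_header) ||
        memcmp(flv_header, "FLV", 3) != 0) {
        fclose(f);
        return -1;
    }

    unsigned char tag_header[11];
    while (fread(tag_header, 1, sizeof(tag_header), f) == sizeof(tag_header)) {
        /* body_length: 24-bit big-endian integer at offset 1 of the tag header */
        uint32_t body_length = (tag_header[1] << 16) | (tag_header[2] << 8) | tag_header[3];
        uint32_t total = 11 + body_length + 4;     /* header + body + PreviousTagSize */

        unsigned char *buf = malloc(total);
        memcpy(buf, tag_header, 11);
        if (fread(buf + 11, 1, body_length + 4, f) != body_length + 4 ||
            RTMP_Write(r, (const char *)buf, total) <= 0) {
            free(buf);
            break;
        }
        free(buf);

        /* NOTE: a real implementation must pace these writes according to the
           tag timestamps (bytes 4..7 of the header), as explained above. */
    }

    fclose(f);
    return 0;
}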
In fact, RTMP_Write() seems to require that you already have the RTMP packet formed in buf.
RTMPPacket *pkt = &r->m_write;
...
pkt->m_packetType = *buf++;
So you cannot just push the FLV data there - you need to split it into packets first.
There is a nice function, RTMP_ReadPacket(), but it reads from the network socket.
I have the same problem as you, hope to have a solution soon.
Edit:
There are certain bugs in RTMP_Write(). I've made a patch and now it works. I'm going to publish that.