How can I interleave bulk transfers and interrupt probing?

I'm writing a huge bulk transfer to a real-time USB device, but I would like to receive some interrupts at the same time.
It looks like the bulk transfer stalls the interrupt probing: even with a perpetual interrupt-probing loop, the interrupts get "stuck". I receive a few interrupts after each bulk transfer, but not everything in the queue, and all the new interrupts (generated after the end of the bulk transfer) stay stuck.
In theory JavaScript is single-threaded, and the bulk transfer seems to run on the main thread despite its non-blocking-style API (it freezes the browser), so I don't even know what I'm supposed to do. Web Workers can't access the "chrome" object.
Is there some demo code for that somewhere? I can't find any non-trivial uses of the chrome.usb API on Google.
It's very difficult to post code for a custom device, but the relevant part is here: https://github.com/nraynaud/webgcode/blob/gh-pages/webapp/index.js#L110

Related

Can I poll my USB HID device without first sending a command

I was able to make a working USB HID stack on my "StartUSB for PIC" board for the 18F2550 microcontroller. I based it on one of the MLA libraries, which was made for the 18F45K50 (MLA 2018_11_26, hid_custom, picdem_fs_usb_k50.x), and converted it to work with the 18F2550 (there might have been easier ways, but I only learned to work with PIC about a month ago). On the host side, I'm using LibUsbDotNet (here too, there might be easier ways; the documentation on this library really sucks) on a Windows 10 machine.
I'm using the HID class at full speed, and everything seems to work, although I get some random errors on the host PC (see below). Doing one close/re-open cycle on the host side when the error occurs more or less solves it. Dirty, but it works, so I'm ignoring it for now.
Win32Error:Win32Error:GetOverlappedResult Ep 0x01
995:The I/O operation has been aborted because of either a thread exit or an application request.
I'm not an expert on USB (yet), but all the examples I'm seeing follow the same pattern: 1) first you send a command to the device, and 2) then you retrieve the answer from the device. I did some performance tests, and they show that I can do about 500 cycles/second. I think that is correct, because each cycle consists of sending the command and retrieving the answer, and each of those takes 1 ms (one full-speed USB frame).
But do I really need to send a command? Can't I just keep reading endlessly, so that when the device has something to say it sends the data in an IN transaction, and when it doesn't it stays silent, which creates a timeout on the host side? That would mean I could poll at 1000 cycles/second. Unfortunately, when I tried it by changing my implementation on the PIC, I got very weird results. I think I have issues with suspend mode. That brings me to another question: how can I make the device get out of suspend mode (meaning the device, not the host, should trigger this event)? I have searched the MLA library for commands such as "wakeup", "resume", ... but couldn't find anything.
So, to summarize, 2 questions:
Conceptual: Can I send data from device to host without it being requested by a command from the host?
For PIC experts: How can I have the device trigger a wakeup from suspend mode?
And indeed, the answer to the first question is yes.
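To illustrate, here is a minimal sketch of command-less polling on the host side. It uses the raw Win32 HID handle rather than LibUsbDotNet, so none of these names come from my project; devicePath and reportLen are assumed to come from the usual SetupDi/HidP_GetCaps enumeration, and error handling is trimmed:

    #include <windows.h>
    #include <cstdio>

    // Block on ReadFile until the device decides to send an IN report.
    // No command is written first; the device simply NAKs the periodic
    // IN polls until it has something to say.
    void PollHid(const wchar_t* devicePath, DWORD reportLen)
    {
        HANDLE dev = CreateFileW(devicePath, GENERIC_READ | GENERIC_WRITE,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 nullptr, OPEN_EXISTING, 0, nullptr);
        if (dev == INVALID_HANDLE_VALUE || reportLen > 65)
            return;

        BYTE report[65];   // report-ID byte + up to 64 data bytes (full speed)
        for (;;) {
            DWORD got = 0;
            if (!ReadFile(dev, report, reportLen, &got, nullptr))
                break;     // device unplugged or handle closed
            std::printf("IN report: %lu bytes, id=%u\n", got, (unsigned)report[0]);
        }
        CloseHandle(dev);
    }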
In the meantime, I found another link on the web that contains a Visual Studio C# implementation of a USB library, including all the source files.
If you're interested, this is the link.
This C# host implementation works like a charm. Without sending a command to the device, I get notified immediately when a button is pressed. Great!
It also proves that my earlier device implementation, based on the original Microchip MLA, is 100% correct. I stress-tested the implementation by sending a "toggle LED command" as fast as I could, and I reach 1000 commands/second. Again, great!
I think that LibUsbDotNet isn't that perfect after all. As I wrote above, I was getting rather unstable communication (Win32Error) with it. But with this implementation, I don't get a single error, even after running for half an hour at 1000 commands/second.
So for me, case closed.

Suspend operation of lwIP Raw API

I am working on a project using a Zynq (PicoZed devboard). The application runs bare-metal, uses lwIP TCP in RAW mode, and basically behaves like this:
Receive a batch of data via Ethernet, which is stored in RAM.
Process the batch of data.
Send back the processed data via Ethernet.
The problem is, I need to measure the execution time of the processing part. However, running lwIP in RAW mode forces me to call tcp_fasttmr() and tcp_slowtmr() every 250 ms and 500 ms respectively, which makes accurate measurement pretty hard. Whenever I don't call the tcp_tmr() functions for some time, I start repeatedly receiving error messages via UART ("unable to alloc pbuf in recv_handler"). They seem to be printed from some ISR related to error handling, but I cannot find the exact location.
My question is: how do I suspend the network functionality so that I don't need to call the tcp_tmr() functions periodically? I tried closing the connection, disabling the interface (netif_set_down()) and disabling the timer interrupt, but none of it seems to have any effect on my problem.
I don't know anything about that devboard or the microcontroller on it, but you should have an ethernetif.c (lwIP port) file which contains the processing of an Ethernet receive interrupt or similar. This should be calling the lwIP function netif->input with a packet to process.
Disabling the interface won't stop this behaviour; it will just stop the higher-level processing of the packet. If you are only timing the execution for debugging, you could try disabling the Ethernet receive interrupt and not calling tcp_tmr() until you have processed the packets, as in the sketch below.
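Something along these lines, assuming a Zynq-style port: XTime_GetTime() and COUNTS_PER_SECOND come from xtime_l.h, while eth_rx_irq_disable()/eth_rx_irq_enable() are placeholders for however your ethernetif.c port masks and unmasks the EMAC receive interrupt (typically an XScuGic_Disable/XScuGic_Enable pair):

    #include "xtime_l.h"      /* Zynq global timer: XTime, XTime_GetTime() */
    #include "xil_printf.h"

    /* Placeholders: use whatever your ethernetif.c port provides to
       mask/unmask the EMAC receive interrupt. */
    extern void eth_rx_irq_disable(void);
    extern void eth_rx_irq_enable(void);
    extern void process_batch(void);   /* the code being measured */

    void timed_processing(void)
    {
        XTime t0, t1;

        eth_rx_irq_disable();   /* no new frames -> no pbuf allocations */
        XTime_GetTime(&t0);

        process_batch();        /* no tcp_fasttmr()/tcp_slowtmr() needed here */

        XTime_GetTime(&t1);
        eth_rx_irq_enable();    /* resume input and the tcp_tmr() calls */

        /* COUNTS_PER_SECOND is defined by xtime_l.h */
        double elapsed_us = (double)(t1 - t0) * 1e6 / COUNTS_PER_SECOND;
        xil_printf("processing took %d us\r\n", (int)elapsed_us);
    }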

Is there any internal timeout in Microsoft UIAutomation?

I am using the UI Automation COM-to-.NET Adapter to read the contents of a target Google Chrome browser that plays Flash content on Windows 7. It works.
I succeeded in getting the content and elements. Everything works fine for some time, but after a few hours the elements become inaccessible.
(AutomationElement).FindAll() returns 0 children.
Is there any internal, undocumented timeout used by UIAutomation?
According to this IUIAutomation2 interface, there are two timeouts, but they are not accessible from the IUIAutomation interface, and IUIAutomation2 is supported only on Windows 8 (desktop apps only). So I believe there is some timeout.
I made a workaround that restarts the search and monitoring of elements from the root of the desktop tree, but the elements are still not available.
After some time (I'm not sure how much), the elements become available again.
My requirement is to read the values continuously, as fast as possible, but this behavior damages the whole architecture.
I read somewhere that there is a timeout of 3 minutes, but I'm not sure.
If there is a timeout, is it possible to change it?
Is it possible to restart something, or release/dispose something?
I can't find anything on MSDN.
Does anybody have any idea what is happening and how to resolve it?
Thanks for this nicely put question. I have a similar issue with a much different setup. I'm on Windows 7, using UIAutomationCore.dll directly from C# to test our application under development. After running my sequence of actions and event subscriptions and all the other things, I intermittently observe that the UIA interface stops working (after about 8-10 minutes in my case, but I'm using the UIA interface heavily).
Many different things failed, including dispatching the COM interface and sleeping at different places. The funny revelation was that I managed to run AccEvent.exe (part of the SDK, like inspect.exe) during the test and saw that events stopped flowing to AccEvent too. So it wasn't my client's interface that stopped; it was rather the COM server (or whatever UIAutomationCore does) that stopped responding.
As a solution (one that seems to work most of the time, or at least improve the situation a lot), I decided I should give the application-under-test some breathing room, since using UIA puts additional load on it. This could be smartly placed sleep points in your client, but instead of sleeping for a set time, I monitor the processor load of the application and wait until it settles down.
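For what it's worth, the load-monitoring idea looks roughly like this in plain Win32; the 200 ms sample window and 5% threshold are values I would tune by hand, not anything official:

    #include <windows.h>

    // Total kernel + user CPU time consumed by a process, in 100 ns units.
    static ULONGLONG CpuTime(HANDLE process)
    {
        FILETIME ftCreate, ftExit, ftKernel, ftUser;
        if (!GetProcessTimes(process, &ftCreate, &ftExit, &ftKernel, &ftUser))
            return 0;
        ULARGE_INTEGER k, u;
        k.LowPart = ftKernel.dwLowDateTime;  k.HighPart = ftKernel.dwHighDateTime;
        u.LowPart = ftUser.dwLowDateTime;    u.HighPart = ftUser.dwHighDateTime;
        return k.QuadPart + u.QuadPart;
    }

    // Block until the application-under-test uses less than ~5% of one
    // core, sampled over 200 ms windows, before issuing more UIA calls.
    void WaitUntilSettled(HANDLE process)
    {
        for (;;) {
            ULONGLONG before = CpuTime(process);
            Sleep(200);
            ULONGLONG delta = CpuTime(process) - before;
            if (delta < 100000ULL)   // 5% of 200 ms, in 100 ns units
                return;
        }
    }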
One of the intermittent errors I receive when the problem manifests itself is "... was unable to call any of the subscribers..", and my search turned up an MSDN page saying they have improved things in the CUIAutomation8 interface, but as this is Windows 8 specific, I haven't had the chance to try it yet.
I should also add that I reduced the number of calls to UIA by using more UI caching (FindAllBuildCache), since the lower the frequency of back-and-forth calls, the better it is for UIA; a sketch of the pattern follows below. Thanks to Guy's answer in another question: UI Automation events stop being received after a while monitoring an application and then restart after some time
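For reference, the caching pattern looks roughly like this against the raw COM interfaces (C++, error handling omitted; the two cached properties are just examples):

    #include <windows.h>
    #include <uiautomation.h>

    // One cross-process round trip fetches the children *and* the listed
    // properties; reading them back via get_Cached* involves no IPC.
    void FindChildrenCached(IUIAutomation* uia, IUIAutomationElement* root)
    {
        IUIAutomationCacheRequest* cache = nullptr;
        uia->CreateCacheRequest(&cache);
        cache->AddProperty(UIA_NamePropertyId);
        cache->AddProperty(UIA_ControlTypePropertyId);

        IUIAutomationCondition* all = nullptr;
        uia->CreateTrueCondition(&all);

        IUIAutomationElementArray* found = nullptr;
        root->FindAllBuildCache(TreeScope_Children, all, cache, &found);

        int count = 0;
        if (found) found->get_Length(&count);
        for (int i = 0; i < count; ++i) {
            IUIAutomationElement* el = nullptr;
            found->GetElement(i, &el);
            BSTR name = nullptr;
            el->get_CachedName(&name);   // served from the cache, no IPC
            /* ...use name... */
            SysFreeString(name);
            el->Release();
        }
        if (found) found->Release();
        all->Release();
        cache->Release();
    }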

USB CDC device stalling

I'm writing a simple virtual serial port device to replace an older serial port. By this point I'm able to enumerate the device and send/receive characters.
After a varying number of bulk OUT transmissions from the host to the device, the endpoint appears to give up and stop transferring data. On the PC side I receive a write error, and judging from a USBlyzer trace the music stops on a stall (USBD_STATUS_STALL_PID). However, my code never intentionally issues a STALL condition on that endpoint, and the status flag for having generated one never gets set.
Given the short amount of time elapsed (<300 µs) between issuing the request and the STALL, it would appear to be an invalid response of some sort and not a time-out. On the device side the output endpoint is ready to go, with data in the buffer and proper DATA0/1 synchronization, but nothing further ever happens.
Note that the device appears to work fine, even for long periods of time, until I start sending "large" quantities of data. As near as I can tell, the device enumeration/configuration also completes successfully. Oh, and the bulk IN endpoint continues to work just fine after this.
For the record I'm using the standard Windows usbser.sys driver and an XMega128A4U µP. I'm also seeing the same behaviour across multiple Windows Vista and 7 machines.
Any ideas what I'm doing wrong or what further tests I might run to narrow things down?
USBlyzer log,
USB CDC stack,
test project
For the record, this eventually turned out to be an oscillator problem. (Apparently the FLL's reference is always 1,024 Hz, even when the 1,000 Hz USB frames are chosen as the source. The slight clock error meant that a packet occasionally got rejected if it happened to contain one too many 1-bits in a row.)
I guess the moral of the story is to check the basics before assuming you've got a problem with the higher-level protocol. Also, in retrospect, a hardware USB analyzer would have been a worthwhile investment; the software alternatives mostly seem to spit out a generic error code, or nothing at all, when something goes awry.
Stalling of the OUT endpoint may happen on an overflow of the output buffer on the host side. Are you sure that the device does fetch the data it receives via the OUT endpoint, and if so, does it fetch the data at least as fast as data is sent to the device?
"Note that the device appears to work fine even for long periods of time until I start sending "large" quantities of data."
This seems to be a hint of an output-buffer overflow; see the sketch below.
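In sketch form, the device side has to keep up along these lines; the ep_out_*() and uart_*() names are placeholders for whatever your USB stack and UART driver provide, not actual XMega calls:

    #include <stdint.h>

    #define EP_OUT_SIZE 64   /* bulk OUT packet size */

    /* Placeholders for the USB stack / UART driver at hand. */
    extern bool    ep_out_has_data(void);
    extern uint8_t ep_out_read(uint8_t* dst);
    extern void    ep_out_rearm(void);
    extern uint8_t uart_tx_free(void);
    extern void    uart_send(const uint8_t* src, uint8_t len);

    /* Called from the main loop as often as possible. Accept a packet
       only when the UART can absorb all of it, and re-arm the endpoint
       immediately afterwards so the host's next OUT packet is ACKed
       instead of piling up in (and overflowing) host-side buffers. */
    void service_bulk_out(void)
    {
        if (ep_out_has_data() && uart_tx_free() >= EP_OUT_SIZE) {
            uint8_t buf[EP_OUT_SIZE];
            uint8_t len = ep_out_read(buf);  /* copy out of the endpoint buffer */
            ep_out_rearm();                  /* ready for the next OUT packet */
            uart_send(buf, len);
        }
    }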

HID input report queues on C8051F320

It seems that as soon as data is ready for the host (such as when I use WriteFile to send a command telling the HID to give back some data, such as the port value) and the IN packet ready bit is set, the host reads it (as confirmed by another USB interrupt) before ReadFile is ever called. ReadFile is later used to put this data into a buffer on the host. Is this the way it should happen? I would have expected the ReadFile call to cause the IN interrupt.
So here is my problem: I have a GUI and an HID that work well together. The HID can do I2C to another IC, and the GUI can tell the HID to do I2C just fine. Upon startup, the GUI reads data from the HID and gets a correct value (say, 0x49). Opening a second GUI to the same HID does the same initial data read and also gets the correct value (say, 0x49; it should be the same as the first GUI's read). Now, if I go to the first GUI and do an I2C read, the readback value is 0x49, which was the value the 2nd GUI had requested from the HID. It seems that the HID puts this value into the IN report queue of every open handle. Thus the 1st GUI incorrectly thinks that this stale report is the answer to its own request.
Per Jan Axelson's HID FAQ, "every open handle to the HID has its own report queue. Every report a device sends goes into all of the queues so multiple applications can read the same report." I believe this is my problem. How do I purge this queue and clear the endpoint before the 1st GUI makes its request, so that the correct value (which the HID does send, per the debugger) gets through? I tried HidD_FlushQueue, but it keeps returning False ("handle is invalid" errors, although the handle is valid judging by WriteFile/ReadFile succeeding with the same handle). Any ideas?
Thanks!
You might not like this suggestion, but one option would be to allow only one GUI at a time to have an open handle. Use your favorite resource-allocation lock mechanism and make the GUIs ask for the HID resource before opening the handle and using it; a minimal sketch follows.
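For example, with a Win32 named mutex (the mutex name is arbitrary; the GUIs just have to agree on it):

    #include <windows.h>

    // Try to become the single owner of the HID before opening a handle.
    // Returns the mutex on success (keep it for the lifetime of the HID
    // handle), or nullptr if another GUI instance already owns the device.
    HANDLE AcquireHidLock(void)
    {
        HANDLE lock = CreateMutexW(nullptr, FALSE, L"Global\\MyHidDeviceLock");
        if (!lock)
            return nullptr;
        if (WaitForSingleObject(lock, 0) != WAIT_OBJECT_0) {
            CloseHandle(lock);   // someone else holds the device
            return nullptr;
        }
        return lock;             // now open the HID handle and use it
    }

    void ReleaseHidLock(HANDLE lock)
    {
        ReleaseMutex(lock);      // call after closing the HID handle
        CloseHandle(lock);
    }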