It seems that as soon as data is ready for the host (for example, after I use WriteFile to send the HID a command asking it to return some data, such as a port value) and the IN-packet-ready bit is set, the host reads it (as confirmed by another USB interrupt) before ReadFile is ever called. ReadFile is later used to copy this data into a buffer on the host. Is this the way it should happen? I would have expected the ReadFile call to be what triggers the IN interrupt.
So here is my problem: I have a GUI and a HID that work well together. The HID can do I2C to another IC, and the GUI can tell the HID to do I2C just fine. On startup, the GUI reads data from the HID and gets a correct value (say, 0x49). Opening a second GUI to the same HID performs the same initial read and also gets the correct value (0x49, the same as the first GUI's read, as expected). Now, if I go back to the first GUI and do an I2C read, the readback value is 0x49, but it is the stale value that the second GUI had requested from the HID, not the result of my new read. It seems that the HID's reports go to the IN queues of every handle attached to it, so the first GUI incorrectly takes the leftover report as the answer to its own request.
Per Jan Axelson's HID FAQ, "every open handle to the HID has its own report queue. Every report a device sends goes into all of the queues so multiple applications can read the same report." I believe this is my problem. How do I purge the queue and clear the endpoint before the first GUI makes its request, so that the correct value (which the HID does send, per the debugger) gets through? I tried HidD_FlushQueue, but it keeps returning False with a "handle is invalid" error, even though the handle must be valid because WriteFile and ReadFile succeed with the same handle. Any ideas?
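For reference, the sequence I'm attempting looks roughly like this (a minimal sketch; the helper name and buffer layout are made up):

```c
#include <windows.h>
#include <hidsdi.h>   // HidD_FlushQueue; link with hid.lib

// Hypothetical helper: flush stale reports from THIS handle's queue,
// then send the command and read the fresh answer.
BOOL RequestFreshReport(HANDLE hHid,
                        BYTE *cmd, DWORD cmdLen,
                        BYTE *reply, DWORD replyLen)
{
    DWORD n = 0;

    // Drop reports that earlier requests (possibly triggered by another
    // application's handle) left sitting in this handle's report queue.
    if (!HidD_FlushQueue(hHid))
        return FALSE;   // this is the call failing with "handle is invalid"

    // First byte of each buffer is the report ID (0 if the device uses none).
    if (!WriteFile(hHid, cmd, cmdLen, &n, NULL))
        return FALSE;

    // Only reports queued after the flush should be returned here.
    return ReadFile(hHid, reply, replyLen, &n, NULL);
}
```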
Thanks!
You might not like this suggestion, but one option would be to allow only one GUI at a time to have an open handle. Use your favorite resource-locking mechanism and make each GUI acquire the HID resource before opening the handle and using it.
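A minimal sketch of that idea using a Win32 named mutex (the mutex name is arbitrary; both GUIs just have to agree on it):

```c
#include <windows.h>

// Acquire the machine-wide lock before calling CreateFile on the HID.
// Returns the mutex handle to release later, or NULL if another GUI
// currently owns the device.
HANDLE AcquireHidLock(DWORD timeoutMs)
{
    HANDLE hLock = CreateMutexA(NULL, FALSE, "Global\\MyHidGuiLock");
    if (hLock == NULL)
        return NULL;

    DWORD rc = WaitForSingleObject(hLock, timeoutMs);
    if (rc != WAIT_OBJECT_0 && rc != WAIT_ABANDONED) {
        CloseHandle(hLock);   // still owned by the other GUI
        return NULL;
    }
    return hLock;   // safe to open the HID handle now
}

// When finished: close the HID handle, then ReleaseMutex(hLock) and
// CloseHandle(hLock) so the other GUI can take its turn.
```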
I'm going through the process of coding up an xHC interface following the Intel xHCI guide. I'm at the point where I need to give the controller the memory buffer address for the command ring (using the CRCR_LO and CRCR_HI registers).
However, after I write these values, I am not able to read them back to verify. The documentation indicates that the value will be updated after the next doorbell, but that doesn't seem to happen either.
I do see the CRR bit go high (command ring running), so it's at least doing something. But I am also not getting command responses in my event ring (though I am getting port status change messages in the event ring).
Can someone clarify how this register works? Is there a way to verify the command ring buffer address?
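For reference, my write sequence boils down to this (a sketch; the MMIO mapping and helper are mine, the offsets are from the xHCI operational register map):

```c
#include <stdint.h>

#define CRCR_LO  0x18u   // CRCR offset in the operational registers
#define CRCR_HI  0x1Cu
#define CRCR_RCS 0x01u   // initial Ring Cycle State bit

// op  = virtual address of the xHC operational register block
// crb = physical address of the command ring, 64-byte aligned
static void set_command_ring(volatile uint8_t *op, uint64_t crb)
{
    *(volatile uint32_t *)(op + CRCR_LO) = (uint32_t)crb | CRCR_RCS;
    *(volatile uint32_t *)(op + CRCR_HI) = (uint32_t)(crb >> 32);
}
```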
With a control PC, I am addressing an R&S ESPI receiver to perform a frequency scan and return the measurement results, via the BAT-EMC control software with an NI GPIB-USB controller in between. My goal is to trace the binary measurement data (Definite Length Block Data per IEEE 488.2) sent to the control PC, to understand how the device decides on the byte size of each binary block it sends.
The trace shows that binary blocks are sent with no consistent pattern or rule!
E.g., running the same scan with the same frequency range and step twice may result in a different distribution of the measurement bytes across the binary blocks (and possibly a different total number of blocks), although the total amount of data delivered is the same.
Can anyone help me figure out how the device and the control software are communicating the measurement data?
PS: The NI trace at the GPIB-controller level does not show the control software specifying a byte size when querying for the next block, nor does the instrument send this information when it issues a service request asking to be queried for more available data (according to the trace).
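(For readers unfamiliar with the framing: each Definite Length Block is "#", then one digit giving how many length digits follow, then the length itself, then the payload, e.g. "#3256" announces 256 payload bytes. A minimal parser sketch in C:)

```c
#include <stdio.h>

// Parse an IEEE 488.2 Definite Length Block header from a byte stream:
//   '#' <n> <n digits giving the payload byte count> <payload>
// Returns the payload length, or -1 on a malformed header.
long parse_block_header(FILE *in)
{
    if (fgetc(in) != '#')
        return -1;

    int ndigits = fgetc(in) - '0';        // single digit, 1..9
    if (ndigits < 1 || ndigits > 9)
        return -1;

    long length = 0;
    for (int i = 0; i < ndigits; i++) {
        int c = fgetc(in);
        if (c < '0' || c > '9')
            return -1;
        length = length * 10 + (c - '0'); // accumulate the byte count
    }
    return length;                        // payload bytes follow
}
```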
Make sure that you are giving the instrument enough time to respond. Possibly you are sending commands from the PC that assert the ATN line and interrupt the response. You should be able to configure the instrument to send one result at a time: set it up as a talker/listener and have it send only one response per trigger. Then send the Group Execute Trigger (GET) and read the result off the bus. When it's done, measure how long that packet took to arrive. If you are sending triggers before the full response has been read, you will be terminating the output stream; I suspect this because the streams differ randomly.
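Something like this would isolate a single triggered response (NI-488.2 calls; the SCPI strings are placeholders for the receiver's actual commands):

```c
#include <ni488.h>   // NI-488.2 API

// board = GPIB interface index, pad = instrument's primary address
void single_triggered_read(int board, int pad)
{
    char buf[4096];
    int dev = ibdev(board, pad, 0, T10s, 1, 0); // 10 s timeout, assert EOI

    ibwrt(dev, "INIT:CONT OFF\n", 14L);  // one measurement per trigger
    ibtrg(dev);                          // Group Execute Trigger (GET)
    ibrd(dev, buf, (long)sizeof(buf));   // read the single response
    // ibcnt now holds the number of bytes actually read
}
```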
I’m just starting to learn GPIB so please write back what happened.
I was able to get a working USB HID stack on my "StartUSB for PIC" board with the 18F2550 microcontroller. I based it on one of the MLA examples, which was written for the 18F45K50 (MLA 2018_11_26, hid_custom, picdem_fs_usb_k50.x), and converted it to work with the 18F2550 (there may have been easier ways, but I only started working with PICs about a month ago). On the host side, I'm using LibUsbDotNet (again, there may be easier ways; the documentation on this library really sucks) on a Windows 10 machine.
I'm using the HID class at full speed, and everything seems to work. I do get some random errors on the host PC (see below), but doing one close/re-open cycle on the host side when the error appears more or less works around it. Dirty, but it works, so I'm ignoring it for now.
Win32Error:Win32Error:GetOverlappedResult Ep 0x01
995:The I/O operation has been aborted because of either a thread exit or an application request.
I'm not an expert on USB (yet), but all the examples I've seen are based on 1) first sending a command to the device and 2) then retrieving the answer from the device. I did some performance tests, and they indeed show that I can do about 500 cycles/second. I think that is correct: the command and the answer each take 1 msec (one full-speed frame), so a complete cycle takes 2 msec.
But do I really need to send a command? Can't I just keep reading endlessly, so that when the device has something to say it sends the data in an IN transaction, and when it doesn't it simply stays silent, which produces a timeout on the host side? That would mean I could poll at 1000 cycles/second. Unfortunately, when I tried changing my implementation on the PIC this way, I got very weird results. I think I have issues with suspend mode. Which brings me to another question: how can I make the device get itself out of suspend mode (i.e., the device, not the host, should trigger the resume)? I have searched the MLA library for functions like "wakeup", "resume", and so on, but couldn't find anything.
So, to summarize, 2 questions:
Conceptual: can I send data from the device to the host without it being requested by a command from the host?
For PIC experts: how can I have the device trigger a wakeup from suspend mode? (See the sketch after this list.)
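(On the second question, the 18F2550 datasheet suggests the device can drive resume signaling itself by toggling the RESUME bit; a rough, untested sketch, assuming the host has first enabled remote wakeup via SET_FEATURE:)

```c
#define _XTAL_FREQ 48000000UL  // assumed CPU clock, needed by __delay_ms()
#include <xc.h>

// Device-initiated remote wakeup on the PIC18F2550. Only legal if the
// host armed remote wakeup; register names per the datasheet.
void usb_remote_wakeup(void)
{
    UCONbits.SUSPND = 0;   // take the SIE out of suspend
    UCONbits.RESUME = 1;   // drive resume signaling upstream
    __delay_ms(5);         // hold it 1-15 ms per USB 2.0 §7.1.7.7
    UCONbits.RESUME = 0;   // release; the host continues the resume
}
```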
And indeed, the answer to the first question is Yes.
In the meantime, I found another link on the web that contains a Visual Studio C# implementation of a USB library including all the source files.
If you're interested, this is the link
This C# host implementation works like a charm. Without sending a command to the device, I get notified immediately when a button is pressed. Great!
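Conceptually, the host side reduces to a blocking read loop. A rough Win32/C equivalent of what that code does (a sketch, not its actual source):

```c
#include <windows.h>
#include <stdio.h>

// Keep a read pending forever; the device NAKs the interrupt IN endpoint
// until it has something to say, so ReadFile only returns on real data.
// reportLen must match the device's input report length (+1 for the ID).
void read_forever(HANDLE hHid, DWORD reportLen)
{
    BYTE buf[65];   // report ID byte + up to 64 payload bytes (full speed)
    DWORD got = 0;

    for (;;) {
        if (ReadFile(hHid, buf, reportLen, &got, NULL))
            printf("IN report: %lu bytes, payload[0]=0x%02X\n", got, buf[1]);
        // On an error/timeout, just loop and post the next read.
    }
}
```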
It also proves that my earlier device implementation, based on the original Microchip MLA, is 100% correct. I stress-tested it by sending a "toggle LED" command as fast as I could, and I reach 1000 commands/second. Again: great!
I think LibUsbDotNet isn't that perfect after all. As I wrote above, I got rather unstable communication (Win32Error) with it, but with this implementation I don't get a single error, even after running for half an hour at 1000 commands/second.
So for me, case closed.
I have built a simple PCI driver for reading and writing data to a PCI device. I have also added interrupt support, so when there is a PCI interrupt an ISR is called. This all seems to work.
I would like to inform an external application of the interrupt, but so far I haven't found a suitable mechanism. The interrupt could come at any time and depends on sensors connected to the PCI device.
I have found the following:
1 Event objects, which can be passed to the KMDF driver via read, write, or I/O-control requests (OVERLAPPED object).
2 Plug and Play notifications, which can be used by the driver (see the Toaster example code) to inform the app of PnP events.
A notification method would be ideal; however, it doesn't look like one exists for my particular use case.
There are at least two ways to achieve what you are looking for:
Inverted call model - send IOCTL(s) to the driver, which the driver keeps pending and completes as and when it needs to notify user mode about the event it is interested in. You can read more about this approach here.
Shared event handles - a user-mode application passes the event handle(s) to kernel mode using an IOCTL. The kernel takes a reference on the handle to ensure it remains valid when it needs to use it, and then signals the event when necessary. You can read more about this approach here.
The first approach is generally preferred, for reasons you will find in the linked articles. In particular, if your use case requires kernel mode not only to signal that an event occurred but also to send some data back to user mode, the second approach is not suitable, and you should focus on the first approach alone.
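A sketch of the user-mode half of the inverted call model (the IOCTL code and output layout are placeholders for whatever your driver defines; the device must be opened with FILE_FLAG_OVERLAPPED):

```c
#include <windows.h>
#include <winioctl.h>

// Placeholder IOCTL; must match the definition in the KMDF driver.
#define IOCTL_WAIT_FOR_SENSOR_EVENT \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

void notification_loop(HANDLE hDev)   // opened with FILE_FLAG_OVERLAPPED
{
    BYTE info[64];                    // data the driver copies back, if any
    DWORD got = 0;
    OVERLAPPED ov = { 0 };
    ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);

    for (;;) {
        // Pend the request; the driver holds it until the ISR/DPC decides
        // there is something to report, then completes it.
        if (!DeviceIoControl(hDev, IOCTL_WAIT_FOR_SENSOR_EVENT,
                             NULL, 0, info, sizeof(info), NULL, &ov) &&
            GetLastError() != ERROR_IO_PENDING)
            break;

        if (!GetOverlappedResult(hDev, &ov, &got, TRUE))  // block here
            break;

        // ... consume 'info' (e.g. which sensor fired), then re-pend ...
        ResetEvent(ov.hEvent);
    }
    CloseHandle(ov.hEvent);
}
```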
I'm writing a simple virtual serial port device to emulate an older serial port. At this point I'm able to enumerate the device and send/receive characters.
After a varying number of bulk OUT transfers from the host to the device, the endpoint appears to give up and stop transferring data. On the PC side I get a write error, and judging from a USBlyzer trace the music stops on a stall (USBD_STATUS_STALL_PID). However, my code never intentionally issues a STALL condition on that endpoint, and the status flag for having generated one never gets set.
Given the short time elapsed (<300 µs) between issuing the request and the STALL, it appears to be an invalid response of some sort rather than a timeout. On the device side the OUT endpoint is ready to go, with data in the buffer and proper DATA0/1 synchronization, but nothing further ever happens.
Note that the device appears to work fine even for long periods of time until I start sending "large" quantities of data. As near as I can tell the device enumeration/configuration also appears to complete successfully. Oh, and the bulk-in endpoint continues to work just fine after this.
For the record I'm using the standard Windows usbser.sys driver and an XMega128A4U µP. I'm also seeing the same behaviour across multiple Windows Vista and 7 machines.
Any ideas what I'm doing wrong or what further tests I might run to narrow things down?
USBlyzer log,
USB CDC stack,
test project
For the record this eventually turned out to be an oscillator problem. (Apparently the FLL's reference is always 1,024 Hz even when the 1,000 Hz USB frames are chosen. The slight clock error meant that a packet occasionally got rejected if it happened to contain one too many 1-bits in a row.)
I guess the moral of the story is to check the basics before assuming you've got a problem with the higher-level protocol. In retrospect, a hardware USB analyzer would also have been a worthwhile investment; the software alternatives mostly seem to spit out a generic error code, or nothing at all, when something goes awry.
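Concretely, the fix amounts to setting the DFLL comparator to freq/1024 rather than freq/1000, even when the USB start-of-frame is selected as the calibration reference. Roughly, using the usual AVR header names (a sketch from memory, not verbatim from my project):

```c
#include <avr/io.h>

// Calibrate the 32 MHz RC oscillator (run at 48 MHz for USB) against the
// USB start-of-frame. The comparator must assume a 1.024 kHz reference
// even though SOFs arrive at 1.000 kHz -- that was the bug.
static void dfll_init_usb_sof(void)
{
    uint16_t comp = (uint16_t)(48000000UL / 1024);  // 46875, not 48000

    OSC.DFLLCTRL    = OSC_RC32MCREF_USBSOF_gc;  // SOF as calibration source
    DFLLRC32M.COMP1 = (uint8_t)(comp & 0xFF);   // comparator, low byte
    DFLLRC32M.COMP2 = (uint8_t)(comp >> 8);     // comparator, high byte
    DFLLRC32M.CTRL  = DFLL_ENABLE_bm;           // start auto-calibration
}
```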
Stalling the OUT endpoint may happen when the buffer receiving the host's output data overflows. Are you sure that the device actually fetches the data it receives via the OUT endpoint, and if so, does it fetch the data at least as fast as it is sent to the device?
"Note that the device appears to work fine even for long periods of time until I start sending 'large' quantities of data."
This seems to be a hint of an overflow of the output buffer.