alsa - managing non-blocking stream - usb

Working with a USB audio device (it's a HID with multiple channels) that constantly outputs data.
What I'm hoping to achieve is to ignore the audio until a signal comes in from the device. At that point I would start monitoring the feed. A second signal from the device would indicate that I can go back to ignoring the data. I have opened said device in non-blocking mode so it won't interfere with other USB signals coming in.
This works fine except that when I start reading the data (via snd_pcm_readi) I get an EPIPE error indicating a buffer overrun. This can be fixed by calling snd_pcm_prepare every time, but I'm hoping there is a way to let the buffer empty while I'm ignoring it.
I've looked at snd_pcm_drain and snd_pcm_drop but these stop the PCM and I'd rather keep it open.
Suggestions?

To ignore buffer overruns, change the PCM device's software parameters: set the stop threshold to the same value as the boundary.
With that configuration, overruns will not cause the device to stop, but will let it continue to fill the buffer.
(Other errors will still stop the device; it would be hard to continue when a USB device is unplugged ...)
When an overrun has happened, the buffer will contain more data than can actually fit into it, i.e., snd_pcm_avail will report more available frames than the buffer size.
When you want to actually start recording, you should call snd_pcm_forward to throw away all those invalid frames.
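A sketch of both steps in C, assuming `pcm` is the already-opened non-blocking capture handle (error handling abbreviated):

```c
#include <alsa/asoundlib.h>

/* Configure the capture PCM so an overrun never stops it:
 * set the stop threshold equal to the boundary value. */
static int disable_xrun_stop(snd_pcm_t *pcm)
{
    snd_pcm_sw_params_t *sw;
    snd_pcm_uframes_t boundary;
    int err;

    snd_pcm_sw_params_alloca(&sw);
    if ((err = snd_pcm_sw_params_current(pcm, sw)) < 0)
        return err;
    if ((err = snd_pcm_sw_params_get_boundary(sw, &boundary)) < 0)
        return err;
    if ((err = snd_pcm_sw_params_set_stop_threshold(pcm, sw, boundary)) < 0)
        return err;
    return snd_pcm_sw_params(pcm, sw);
}

/* Later, when the start signal arrives: throw away everything that
 * accumulated while the stream was being ignored. */
static void skip_stale_frames(snd_pcm_t *pcm)
{
    snd_pcm_sframes_t avail = snd_pcm_avail(pcm);
    if (avail > 0)
        snd_pcm_forward(pcm, (snd_pcm_uframes_t)avail);
}
```

With this configuration the device never enters the XRUN state on overrun, so snd_pcm_readi should no longer return EPIPE after the stale frames are skipped.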

Related

Flickering and failing video streaming with uvc-gadget and g_webcam

I'm using this commit of uvc-gadget together with g_webcam as of 4.4.143 for Rockchip. This version of uvc-gadget only transmits a static MJPEG image (and is much better written than earlier sources of uvc-gadget).
I'm observing interesting behavior on the host laptop, which receives the stream from the gadget with guvcview: after a while, frames start to flicker like an old TV (V4L2_CORE: (jpeg decoder) error while decoding frame), and then the stream eventually breaks down on the host: V4L2_CORE: Could not grab image (select timeout): Resource temporarily unavailable. Underneath, the host continues polling ([75290.124695] uvcvideo: uvc_v4l2_poll); there is no error in either the host's dmesg or uvc-gadget on the device. In fact, after re-opening guvcview, streaming works again without an uvc-gadget restart, but soon crashes in the same way.
I'm using stock USB3.0 cable, which is both for streaming and powering the device. AFAIK, there is no source of noise that may result in such kind of strange flickering on physical level.
Additionally, I've noticed that with smaller USB packet sizes (going from 1024 down to 256) the stream survives longer (up to 50,000 frames or so), but it still eventually crashes.
Any idea what's going on here?
UPDATE
After I switched from an MJPEG-compressed to an uncompressed stream, there is no longer any flickering, but there is still always a loss of contact after several seconds: V4L2_CORE: Could not grab image (select timeout): Resource temporarily unavailable

usb hid: why should i write "null" to the control pipe in the out endpoint interrupt

While digging around with HID reports, I ran into a strange problem with a USB HID device. I'm implementing a HID-class device and have based my program on the HID USB example supplied by Keil. Some code has been changed in this project, and it seems to work fine with 32-byte input and 32-byte output reports. Somehow, after thousands of transfers, endpoint 1 OUT would hang and become a bad pipe. I searched Google for tips, and one topic reminded me that we should write a zero-length packet after sending a packet whose length matches what is defined in the report descriptor. But that didn't work for me. Then I wrote a zero-length packet to the control pipe after receiving an OUT packet and, magically, it works! It never hangs, even after millions of transfers!
Here is my question: why does it work after writing a zero-length packet to the control pipe? The data transfer on the OUT pipe should have no relationship to the data on the control pipe. It confuses me!
If the total amount of data you transfer is less than the expected payload size and the final packet is exactly wMaxPacketSize bytes long, you must send a zero-length packet to indicate that the transfer is complete.
But this depends heavily on the host controller implementation; not all devices follow the specification to the letter and may stall.
Source:
When do USB Hosts require a zero-length IN packet at the end of a Control Read Transfer?
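The rule can be written down as a small predicate (a sketch; `wMaxPacketSize` is the endpoint's maximum packet size from its descriptor). A short packet already terminates a transfer on its own, so a ZLP is only needed when the data ends exactly on a full-packet boundary and the host still expects more:

```c
#include <stdbool.h>
#include <stddef.h>

/* Returns true if a zero-length packet must follow the data:
 * the host expects more than was sent, but the last packet was
 * exactly wMaxPacketSize, so nothing marks the end of the transfer. */
static bool needs_zlp(size_t sent, size_t expected, size_t wMaxPacketSize)
{
    return sent < expected && sent % wMaxPacketSize == 0;
}
```

For example, sending 64 bytes on a 64-byte endpoint when the host asked for 128 requires a ZLP, while sending 32 bytes does not, because the 32-byte packet is itself short.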

USB CDC device stalling

I'm writing a simple virtual serial port device to replace an older serial port. At this point I'm able to enumerate the device and send/receive characters.
After a varying number of bulk-OUT transmissions from the host to the device, the endpoint appears to give up and stop transferring data. On the PC side I receive a write error, and judging from a USBlyzer trace, the music stops on a stall (USBD_STATUS_STALL_PID). However, my code never intentionally issues a STALL condition on that endpoint, and the status flag for having generated one never gets set.
Given the short amount of time elapsed (<300 µs) between issuing the request and the STALL it would appear to be an invalid response of some sort, and not a time-out. On the device side the output endpoint is ready to go, with data in the buffer and proper DATA0/1 synchronization, but nothing further ever happens.
Note that the device appears to work fine even for long periods of time until I start sending "large" quantities of data. As near as I can tell the device enumeration/configuration also appears to complete successfully. Oh, and the bulk-in endpoint continues to work just fine after this.
For the record I'm using the standard Windows usbser.sys driver and an XMega128A4U µP. I'm also seeing the same behaviour across multiple Windows Vista and 7 machines.
Any ideas what I'm doing wrong or what further tests I might run to narrow things down?
USBlyzer log,
USB CDC stack,
test project
For the record this eventually turned out to be an oscillator problem. (Apparently the FLL's reference is always 1,024 Hz even when the 1,000 Hz USB frames are chosen. The slight clock error meant that a packet occasionally got rejected if it happened to contain one too many 1-bits in a row.)
I guess the moral of the story is to check the basics before assuming you've got a problem with the higher-level protocol. Also in retrospect a hardware USB analyzer would have been a worthwhile investment, the software alternatives mostly seems to spit out a generic error code or nothing at all when something goes awry.
Stalling the OUT endpoint may happen on an overflow of the OUT-endpoint buffer. Are you sure that the device actually fetches the data it receives via the OUT endpoint, and if so, does it fetch the data at least as fast as data is sent to the device?
"Note that the device appears to work fine even for long periods of time until I start sending "large" quantities of data."
This seems to be a hint of an overflow of the output buffer.

Recording Audio on iPhone and Sending Over Network with NSOutputStream

I am writing an iPhone application that needs to record audio from the built-in microphone and then send that audio data to a server for processing.
The application uses a socket connection to connect to the server and Audio Queue Services to do the recording. What I am unsure of is when to actually send the data. Audio Queue Services fires a callback each time it has filled a buffer with some audio data. NSOutputStream fires an event each time it has space available.
My first thought was to send the data to the server on the Audio Queue callback. But it seems like this would run into a problem if the NSOutputStream does not have space available at that time.
Then I thought about buffering the data as it comes back from the Audio Queue and sending some each time the NSOutputStream fires a space-available event. But this would seem to have a problem too: if the sending to the server gets ahead of the audio recording, there will be nothing to write on the space-available event, so the event will not be fired again and the data transfer will effectively stall.
So what is the best way to handle this? Should I have a timer that fires repeatedly and checks whether there is space available and data that needs to be sent? Also, I think I will need to do some thread synchronization so that I can take chunks of data out of my buffer to send across the network while also adding chunks to the buffer as the recording proceeds, without risking mangling my buffer.
You could use a ring buffer to hold a certain number of audio frames and drop frames if the buffer exceeds a certain size. When your stream-has-space-available callback gets called, pull a frame off the ring buffer and send it.
CHDataStructures provides a few ring-buffer (which it calls “circular buffer”) classes.
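A minimal fixed-capacity ring buffer along those lines (a sketch in C, not the CHDataStructures API; the oldest frame is dropped when the buffer is full, so recording never blocks on a slow network):

```c
#include <stdlib.h>

#define RING_CAPACITY 32   /* max buffered audio frames (arbitrary choice) */

typedef struct {
    void  *frames[RING_CAPACITY];
    size_t head;   /* index of the oldest frame */
    size_t count;  /* number of frames currently stored */
} FrameRing;

/* Push a frame; if the ring is full, free and drop the oldest one. */
static void ring_push(FrameRing *r, void *frame)
{
    if (r->count == RING_CAPACITY) {
        free(r->frames[r->head]);
        r->head = (r->head + 1) % RING_CAPACITY;
        r->count--;
    }
    r->frames[(r->head + r->count) % RING_CAPACITY] = frame;
    r->count++;
}

/* Pop the oldest frame, or NULL if empty; called from the
 * stream's space-available callback. */
static void *ring_pop(FrameRing *r)
{
    if (r->count == 0)
        return NULL;
    void *frame = r->frames[r->head];
    r->head = (r->head + 1) % RING_CAPACITY;
    r->count--;
    return frame;
}
```

The Audio Queue callback calls ring_push; the space-available callback calls ring_pop and writes the frame if one is available. Because both sides touch the same structure, the push and pop still need to be serialized with a lock or a serial dispatch queue, as the question anticipates.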

HID input report queues on C8051F320

It seems that as soon as data is ready for the host (such as when I use WriteFile to send a command to the HID telling it to give back some data, such as the port value) and the IN packet ready bit is set, the host reads it (as confirmed by another USB interrupt) before ReadFile is ever called. ReadFile is later used to put this data into a buffer on the host. Is this the way it should happen? I would have expected the ReadFile call to cause the IN interrupt.
So here is my problem: I have a GUI and HID that work well together. The HID can do I2C to another IC, and the GUI can tell the HID to do I2C just fine. Upon startup, the GUI reads data from the HID and gets a correct value (say, 0x49). Opening a second GUI to the same HID does the same initial data read from the HID and gets the correct value (say, 0x49; it should be the same as the first GUI's read). Now, if I go to the first GUI, and do an I2C read, the readback value is 0x49, which was the value that the 2nd GUI had requested from the HID. It seems that the HID puts this value on the in endpoint for all devices attached to it. Thus the 1st GUI incorrectly thinks that this is the correct value.
Per Jan Axelson's HID FAQ, "every open handle to the HID has its own report queue. Every report a device sends goes into all of the queues so multiple applications can read the same report." I believe this is my problem. How do I purge the queue and clear the endpoint before the 1st GUI makes its request, so that the correct value (which the HID does send, per the debugger) gets through? I tried HidD_FlushQueue, but it keeps returning False with "handle is invalid" errors, even though the handle is valid per WriteFile/ReadFile succeeding with the same handle. Any ideas?
Thanks!
You might not like this suggestion, but one option would be to only allow one GUI at a time to have an open handle. Use your favorite resource allocation lock mechanism and make the GUIs ask for the HID resource before opening the handle and using it.
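One simple way to sketch that idea is an exclusive lock file (the path here is hypothetical; on Windows the same pattern is more commonly built on a named mutex via CreateMutex). The first GUI to create the lock owns the HID; the others must wait or bail out:

```c
#include <fcntl.h>
#include <unistd.h>

/* Try to become the single owner of the HID by creating a lock file
 * exclusively. Returns the lock fd, or -1 if another GUI instance
 * already holds it (errno == EEXIST). */
static int acquire_hid_lock(const char *lock_path)
{
    return open(lock_path, O_CREAT | O_EXCL | O_WRONLY, 0644);
}

/* Release ownership so the next GUI instance can acquire it. */
static void release_hid_lock(int fd, const char *lock_path)
{
    close(fd);
    unlink(lock_path);
}
```

One caveat of plain lock files is that they go stale if the owning process crashes; a kernel-owned object such as a named mutex (or flock on the open file) avoids that, since the OS releases it when the process dies.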