usbmon: data rate analysis - usb

I wrote a small program to evaluate the textual interface of the kernel module usbmon. It is supposed to calculate the data rate to or from a single bus device, so I only look at callback (C) and bulk output (Bo) events and add up their data length fields. Yet when I copy 100 MB of zeros to a USB storage device with dd, the byte count I get is consistently about 1.3 % higher than the amount copied. As soon as the device is mounted, usbmon also shows events even when no file copying is going on.
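Roughly, the summation looks like the minimal sketch below. The field order is assumed from the kernel's usbmon text-format documentation (URB tag, timestamp, event type, Type:Bus:Dev:Ep, status, data length), the bus file and device address are placeholders, and "C and Bo events" is read here as completed bulk transfers:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Text interface for bus 1; adjust "1u" to the bus being watched. */
        FILE *f = fopen("/sys/kernel/debug/usb/usbmon/1u", "r");
        if (!f) { perror("usbmon"); return 1; }

        unsigned long long total = 0;
        char line[512];
        while (fgets(line, sizeof line, f)) {
            char evt, xfer[3];
            unsigned bus, dev;
            unsigned long len;
            /* Skip URB tag and timestamp, then read: event type,
             * "Type:Bus:Dev:Ep", status word, data length. */
            if (sscanf(line, "%*s %*s %c %2[A-Za-z]:%u:%u:%*u %*s %lu",
                       &evt, xfer, &bus, &dev, &len) != 5)
                continue;
            if (bus != 1 || dev != 5)    /* placeholder bus/device address */
                continue;
            /* Completion events on the bulk endpoints of that device. */
            if (evt == 'C' &&
                (strcmp(xfer, "Bo") == 0 || strcmp(xfer, "Bi") == 0))
                total += len;
        }
        /* Reached only if the stream ends; a real tool would report
         * total / elapsed time periodically. */
        printf("%llu bytes counted\n", total);
        fclose(f);
        return 0;
    }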
Does somebody have an explanation for this overhead? Is it possible to get rid of it?

Related

labview daq-mx dynamic buffer size

I want to record a large amount of data in continuous mode, using a PCI-6110 and the DAQ Assistant VI. At this point I'm thinking about how to dynamically change the buffer size, but I'm not sure whether this is possible or how the acquired data will differ between different buffer sizes.
[LabVIEW block diagram]
At a high sample rate and a high number of samples, after I start the VI it sometimes returns a buffer overflow error and other times a not-enough-memory error. I'd like to know whether dynamically changing the buffer size is achievable and how it could be done, or at least how to determine a buffer size that is stable and won't overflow or throw errors during data acquisition.
For high-performance acquisitions, I recommend using the DAQmx API to configure the device to log directly to disk. NI calls this "Log to TDMS File" and more information is available here: TDMS Direct Integration in NI-DAQmx Logging.
With this approach, you can "stream data to disk reaching speeds up to 1.2 GB/s."
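For illustration, the same "log to TDMS" configuration via the NI-DAQmx C API looks roughly like the sketch below; the device name, channel, sample rate, buffer size and file path are placeholders, and error checking is omitted:

    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;

        DAQmxCreateTask("", &task);
        /* Placeholder device/channel and a +/-10 V range. */
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        /* Continuous acquisition; rate and buffer size are placeholders. */
        DAQmxCfgSampClkTiming(task, "", 1e6, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000000);
        /* Stream samples straight to a TDMS file instead of reading them
         * back into the application ("log only" mode). */
        DAQmxConfigureLogging(task, "C:\\data\\acq.tdms", DAQmx_Val_Log,
                              "AcquiredData", DAQmx_Val_OpenOrCreate);
        DAQmxStartTask(task);
        /* ... acquire for as long as required ... */
        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }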

how to prevent cpu usage from changing timing in labview?

I'm trying to write code in which, every 1 ms, a number incremented by one replaces the old number (something like a chronometer!).
The problem is that whenever CPU usage increases because of other programs running on the PC, this 1 ms interval also increases and the timing in my program changes.
Is there any way to prevent CPU load changes from affecting the timing in my program?
It sounds as though you are trying to generate an analogue output waveform with a digital-to-analogue converter card using software timing, where your software is responsible for determining what value should be output at any given time and updating the output accordingly.
This is OK for stationary or low-speed signals but you are trying to do it at 1 ms intervals, in other words to output 1000 samples per second or 1 ks/s. You cannot do this reliably on a desktop operating system - there are too many other processes going on which can use CPU time and block your program from running for many milliseconds (or even seconds, e.g. for network access).
Here are a few ways you could solve this:
Use buffered, hardware-clocked output if your analogue output device supports it. Instead of writing one sample at a time, you send the device a waveform or array of samples and it outputs them at regular intervals using a timing signal generated in hardware. Unfortunately, low-end DAQ devices often don't support hardware-clocked output.
Instead of expecting the loop that writes your samples to the AO to run every millisecond, read LabVIEW's Tick Count (ms) value in the loop and use that as an index into your array of samples: rather than trying to output every sample, your code now asks 'what time is it now, and therefore what should the output be?' (there is a sketch of this after these options). That won't give you a perfect signal out, but at least it should keep the correct frequency rather than being 'slowed down'; instead you will see glitches imposed on the signal whenever the loop can't keep up. This is easy to test and may well be adequate for your needs.
Use a real-time operating system instead of a desktop OS. In the case of LabVIEW this would mean using the Real-Time software module and either a National Instruments hardware device that supports RT, such as the CompactRIO series, or installing the RT OS on a dedicated PC if the hardware is compatible. This is not a cheap option, obviously (unless it's strictly for personal, home use). In any case you would need to have an RT-compatible driver for your output device.
Use your computer's sound output as the output device. LabVIEW has functions for buffered sound output and you should be able to get reliable results. You'll need to upsample your signal to one of the sound output's available sample rates, probably 44.1 kS/s. The drawbacks are that the output level is limited in range and not calibrated, and the output will probably be AC-coupled, so you can't produce a DC or very low-frequency signal. However, if the level is OK for what you want to connect it to, or you can add suitable signal conditioning, this could be a neat solution. If you need the output level to be calibrated, you could simultaneously measure it with your DAQ card and scale the sound waveform you're outputting to keep it correct.
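Here is a rough sketch of the second option's 'what time is it now?' logic, written in C purely for illustration; in a LabVIEW diagram the same structure would be a While Loop using Tick Count (ms) and Index Array. tick_count_ms(), write_ao() and the waveform array are placeholders:

    #include <stddef.h>
    #include <stdint.h>

    #define WAVEFORM_LEN 1000               /* one second of samples at 1 kS/s */

    extern uint32_t tick_count_ms(void);    /* placeholder: millisecond tick   */
    extern void     write_ao(double value); /* placeholder: write one sample   */
    extern double   waveform[WAVEFORM_LEN]; /* placeholder: precomputed wave   */

    void output_loop(volatile int *running)
    {
        uint32_t start = tick_count_ms();
        while (*running) {
            /* Ask "what time is it now?" and output whatever sample is due,
             * instead of assuming the loop ran exactly once per millisecond. */
            uint32_t elapsed = tick_count_ms() - start;
            size_t index = (size_t)(elapsed % WAVEFORM_LEN);
            write_ao(waveform[index]);
            /* Late iterations simply skip ahead: the frequency stays correct,
             * at the cost of glitches whenever the loop falls behind. */
        }
    }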
The answer to your question is "not on a desktop computer." This is why products like LabVIEW Real-Time and dedicated deterministic hardware exist: you need a computer built around dedication to a particular process in order to consistently serve that process. Every application in a regular Windows/Mac/Linux desktop system has the problem you are seeing of potentially being interrupted by other system processes, particularly in its UI layer.
There is no way to prevent CPU load changes from affecting the timing in your program unless the computer is running a real-time operating system on suitable hardware.
Without that, there is no reason to expect it to behave deterministically. Does your program really need to run at that pace?

USB performance issues

I am trying to transfer data over USB: packets of at most 64 bytes sent at a frequency of 4 kHz, which gives a bit rate of about 2 Mbit/s.
The software task that picks up this data runs at 2.5 kHz.
Ideally we never want packets to arrive slower than 2.5 kHz (so 2 kHz isn't good enough).
Is anyone aware of any generic limits on what USB can achieve?
We are running on a main board with a 1.33 GHz processor running QNX and a daughter board, a TWR-K60F120M Tower System running MQX.
Apart from the details of the system, is USB suited to this kind of data transfer, i.e., high frequency and short packet sizes?
Many Thanks for your help
MG
USB, even at its slowest spec (1.1), can transfer data at up to 12 Mbit/s, provided you use the proper transfer mode. USB processes 1000 "frames" per second. The frames contain control and data information, various portions of each frame are used for various purposes, and the total information content is therefore "multiplexed" amongst these competing requirements.
Low-speed devices use just a few bytes in a frame to send or receive their data; examples are modems, mice, keyboards, etc. The so-called full-speed devices (in USB 1.1) can approach the full 12 Mbit/s by using isochronous transfers, meaning they get a nice big chunk of each frame carved out for them and can send that much data (a fixed size) each time a frame comes along. This is the mode audio devices use to stream relatively data-intensive music to USB speakers, for example.
If you are able to do a little local buffering, you may be able to use isochronous mode to get your 64-byte packets sent at the 1 kHz frame rate, with 2 or 3 periods' worth (at 2.5 kHz) of data in each USB frame transfer. You'd want to reserve 64 x 3 = 192 bytes of data (plus maybe a few extra bytes of control info, such as how many chunks are present: 2 or 3?). Then, as the USB frames come by, you'd put your 2 or 3 chunks of data onto the wire, and the receiving end would get that data, albeit in a burstier way than smoothly at a precise 2.5 kHz rate. However, this way of transferring the data would more than keep up, even with USB 1.1, and would still use only a fraction of the total available USB bandwidth.
The problem, as I see it, is whether your system design can tolerate a data delivery rate that is "bursty"... in other words, instead of getting 64 bytes at a rate of 2.5 kHz, you'll be getting (on average) 160 bytes at a 1 kHz rate. You'll actually get something like alternating frames carrying three chunks (192 bytes) and two chunks (128 bytes): 3, 2, 3, 2, 3, ... per millisecond.
So I think that is the best you can do with USB: a somewhat bursty delivery of either 2 or 3 of your device's data packets per 1 ms USB frame.
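To picture the framing, the per-frame payload might look like the sketch below; the count byte and all names are illustrative, not taken from any particular USB stack:

    #include <stdint.h>

    #define PACKET_SIZE  64     /* one device packet                      */
    #define MAX_PACKETS   3     /* at most 3 packets accumulate per 1 ms  */

    /* Payload carried in each 1 ms isochronous frame: a count byte plus
     * two or three buffered 64-byte packets (1 + 192 = 193 bytes, well
     * within a full-speed isochronous endpoint's 1023-byte limit). */
    typedef struct {
        uint8_t count;                          /* 2 or 3 this frame */
        uint8_t data[MAX_PACKETS][PACKET_SIZE];
    } iso_frame_payload_t;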
I am not an expert in USB, but I have done some work with it, including debugging a device-to-host tunneling protocol which used USB "interrupts", so I have seen this kind of implementation on other systems, to solve the problem of matching the USB frame rate to the device's data rate.

WasapiLoopbackCapture to WaveOut

I'm using WasapiLoopbackCapture to capture the sound coming from my speakers, using onDataAvailable to send it to another device, and attempting to play the received data there using the WaveOut class and a BufferedWaveProvider, adding a sample to the provider every time data arrives from my client. I'm having problems getting the sound to play. The closest to working I've managed to get it is:
Not syncing the wave format between the client and the server, just sending the data and adding it to the provider. The problem is that this stutters badly, even though I checked the buffered size and it held 51 seconds. I even have to increase the buffer size, which eventually overflows anyway.
I tried syncing the wave format, and then I just get clicks, although the buffer size is no longer a problem. I also tried making sure that at least a second was stored in the buffer, but that had no effect.
If anyone could point me in the right direction that would be great.
Uncompressed audio takes up a lot of space on a network. On my machine the WasapiLoopbackCapture object produces 32-bit (IeeeFloat) stereo samples at 44100 samples per second, for around 2.7Mbit/sec total raw bandwidth. Once you factor in TCP packet overheads and so on, that's quite a lot of data you're transferring.
The first thing I would suggest, though, is that you plug in some profiling code at each step in the process to get an idea of where the delays are happening. How fast is data arriving from the capture device? How big are your packets? How long does it take to service each call to your OnDataAvailable event handler? How much data are you sending per second across the network? How fast is the data arriving at the client? Measure each stage and you'll get a much better idea of where the bottlenecks are.
Try building a simulated server that reads data from a wave file in various WaveFormats (channels, bits per sample and sample rate) and simulates sending that data across the network to the client. You might find that the problem goes away at lower bandwidth. And if bandwidth is the issue, compression might be the solution.
If you're using a single-threaded model, and servicing each OnDataAvailable event takes longer than the interval between events (i.e. one second divided by the expected number of OnDataAvailable calls per second), then there's going to be a data-loss issue. Multiple threads can help with this: one to get the data from the audio system, another to process and send it. But you can end up in the same position, losing data because you're not dealing with it quickly enough. When that happens it's handy to know about it, because it indicates a problem in the program. Find out when and where it happens; overflow in the input, processing or output buffers all have different likely causes and need different attention.
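To make the two-thread idea concrete, here is a rough, language-neutral sketch in C of a capture-side ring buffer with explicit overflow accounting; the size, locking strategy and names are assumptions, and in your case the write side would be fed from the onDataAvailable handler:

    #include <pthread.h>
    #include <stdint.h>

    #define RING_SIZE (1 << 20)              /* ~1 MB: a few seconds of audio */

    static uint8_t         ring[RING_SIZE];
    static size_t          head, tail;       /* write and read positions      */
    static unsigned long   overflow_bytes;   /* data dropped because full     */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called from the capture callback: copy what fits, count the rest. */
    void ring_write(const uint8_t *buf, size_t len)
    {
        pthread_mutex_lock(&lock);
        size_t free_space = RING_SIZE - (head - tail);
        size_t n = len < free_space ? len : free_space;
        for (size_t i = 0; i < n; i++)
            ring[(head + i) % RING_SIZE] = buf[i];
        head += n;
        overflow_bytes += len - n;           /* non-zero => consumer too slow */
        pthread_mutex_unlock(&lock);
    }

    /* Called from the sender thread: drain up to max bytes, return count. */
    size_t ring_read(uint8_t *out, size_t max)
    {
        pthread_mutex_lock(&lock);
        size_t avail = head - tail;
        size_t n = avail < max ? avail : max;
        for (size_t i = 0; i < n; i++)
            out[i] = ring[(tail + i) % RING_SIZE];
        tail += n;
        pthread_mutex_unlock(&lock);
        return n;
    }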

Logging 16-bit data to an SD card at the rate of 44 kHz

I am using the STM32F4 microcontroller with a microSD card. I am capturing analogue data via DMA.
I am using a double buffer, taking 1280 samples (10 x 128, i.e. ten FFTs' worth) at a time.
When one buffer is full I set a flag, then look at 128 samples at a time and run an FFT calculation on them. All of this is running well.
The data is being sampled at the rate I want and the FFT calculation gives what I would expect. If I just let the program run for one second, I see that it runs the FFT approximately 343 times (44,000 / 128).
But the problem is I would like to save 64 values from this FFT to the SD card.
I am using the HCC fat file system library.
On each loop of the FFT calculation I copy the 64 values into an array.
After every 10 calculations I write the contents of this array to the file and start again.
The array stores 640 float_32 values (10*64).
This works perfectly for a one-second test run. I get 22,000 values stored to the SD card.
But as I increase the run time I start losing samples, because it takes the SD card longer to write. I need the SD card to sustain about 88 kB/s (4 bytes x 64 values x 343 writes = 87,808 bytes per second) consistently. I have tried increasing the DMA buffer sample size and, correspondingly, the amount written each time, but it didn't seem to help.
I am using an 8 GB microSD card, Class 4. I formatted the SD card with the default FAT32 allocation unit size of 2048 bytes.
How should I organize the buffering of data to allow for this? I thought using fewer writes might help. Would a queue help? How would I implement this and would anyone have an example?
I saw that clifford had a similar problem and was using a queue: How can I use an SD card for logging 16-bit data at 48 ksamples/s?
In my case I got it to work by trying a large number of different cards - they vary a great deal. If I had enough RAM available for a longer buffer that would have worked too.
If you are not using an RTOS, the queue buffering option may not be available to you, or at least would be non-trivial to implement.
Using an RTOS queue, I suggest that you create a queue of messages, each of length 64*sizeof(float_32); the number of messages in the queue will be determined by the amount of card latency you need to deal with. A length of 343, for example, will sustain a card stall of one second and will require about 88 kB of RAM. The application then has a high-priority thread performing the FFT and placing data in the queue, while a low-priority thread takes data from the queue and writes it to the file.
You might improve performance further by accumulating multiple message blocks in your DMA buffer before initiating a write, and there may be some benefit in carefully selecting an optimum DMA buffer length.
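A rough sketch of that structure, using FreeRTOS as an example RTOS: the queue depth, stack sizes, priorities and the helper functions declared at the top are assumptions (the write helper would wrap the HCC library's write call):

    #include <stddef.h>
    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "queue.h"
    #include "task.h"

    #define FFT_BINS     64
    #define QUEUE_DEPTH 343   /* ~1 s of card stall at ~343 msgs/s, ~88 kB RAM */

    typedef struct { float bins[FFT_BINS]; } fft_msg_t;

    static QueueHandle_t fft_queue;

    /* Hypothetical helpers provided elsewhere in the application: */
    void wait_for_next_block(int16_t samples[128]);   /* blocks on the DMA IRQ */
    void fft_64(const int16_t in[128], float out[FFT_BINS]);
    void file_write_block(const void *p, size_t n);   /* wraps the file-system
                                                         library's write call  */

    /* High-priority producer: runs the FFT, never touches the SD card. */
    static void fft_task(void *arg)
    {
        (void)arg;
        int16_t   samples[128];
        fft_msg_t msg;
        for (;;) {
            wait_for_next_block(samples);
            fft_64(samples, msg.bins);
            /* Don't block if the queue is full; a real app would count drops. */
            xQueueSend(fft_queue, &msg, 0);
        }
    }

    /* Low-priority consumer: drains the queue and writes to the file. */
    static void sd_task(void *arg)
    {
        (void)arg;
        fft_msg_t msg;
        for (;;) {
            if (xQueueReceive(fft_queue, &msg, portMAX_DELAY) == pdPASS)
                file_write_block(msg.bins, sizeof msg.bins);
        }
    }

    void start_logging(void)
    {
        fft_queue = xQueueCreate(QUEUE_DEPTH, sizeof(fft_msg_t));
        xTaskCreate(fft_task, "fft", 512, NULL, tskIDLE_PRIORITY + 3, NULL);
        xTaskCreate(sd_task,  "sd",  512, NULL, tskIDLE_PRIORITY + 1, NULL);
    }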
Flash is very, very sensitive to overwrites. Writing 3 kB and then a further 3 kB may count as an overwrite of the first 4 kB block. In your case there's no good reason to want such small writes anyway. I'd advise 16 kB writes (64 frames/write * 64 samples/frame * 4 bytes/sample). You'd need 5 or 6 writes per second, which should be well within spec for any old SD card.
Now it's quite likely that you'd get another 1280 samples in while writing; you'll have to deal with that on another thread. That should be no problem, as the write should block without using CPU time (it's a low-level flash delay).
The most probable cause of the problem is the way you are interfacing with the card through the library.
SD cards over the SPI protocol (which I assume is being used here) are read and written in 512-byte sector units, with some SD commands making it possible to stream (to perform sequential sector access faster). An important element of the SD card SPI protocol is its various delays, during which you have to poll the card to find out whether you can start an operation (such as writing data to a sector).
You should read the library's API to discover how its writing process works. You will need to perform some regular action which ultimately polls the card to learn whether the write can continue. Some cards might require a set number of accesses before becoming ready for an operation; others might use timeouts for state transitions. It might not work well to call that function relatively rarely (such as once every 2-3 milliseconds), expecting the card to have become ready in the meantime: you have to keep nagging it, asking whether it has completed already.
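As an illustration of that polling, here is a minimal sketch of a "wait until the card is ready" helper over SPI; spi_transfer() and millis() are assumed platform functions, not part of any particular library:

    #include <stdbool.h>
    #include <stdint.h>

    extern uint8_t  spi_transfer(uint8_t out);   /* clock one byte, return reply */
    extern uint32_t millis(void);                /* millisecond tick             */

    /* A busy card drives its data-out line low, so dummy reads come back as
     * 0x00 until it is ready and returns 0xFF again.
     * Returns true once the card reports ready, false on timeout. */
    bool sd_wait_ready(uint32_t timeout_ms)
    {
        uint32_t start = millis();
        while ((uint32_t)(millis() - start) < timeout_ms) {
            if (spi_transfer(0xFF) == 0xFF)
                return true;
            /* Keep clocking continuously; polling only every few milliseconds
             * is exactly what stalls the write pipeline. */
        }
        return false;
    }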
This is just from my own experience with SD interfacing.