Recording data into files of N samples each with GNU Radio - gnuradio

I'm recording data with a USRP X310 using GNU Radio. I need to record and process data for days (possibly weeks) non-stop at a very high sampling rate, so I cannot store all the data on an HDD; I need to process it on the fly. I want to cut the continuous stream of data into chunks of N samples each and process these chunks with a Python program I have. One way of doing this, as I see it, is to store each chunk in a file on the HDD on the fly, so I can simultaneously access it with another program and process it.
Is there a way of doing this in GNU Radio?
Thank you!
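A minimal sketch of how such a chunking sink could look as a custom Python block, assuming complex float samples; the class name, chunk size, and file prefix below are placeholders, not anything from GNU Radio itself:

    import numpy as np
    from gnuradio import gr

    class chunk_file_sink(gr.sync_block):
        """Hypothetical sink block: writes every N complex samples to a new file."""

        def __init__(self, n_samples=1000000, prefix="/data/chunk_"):
            gr.sync_block.__init__(self,
                                   name="chunk_file_sink",
                                   in_sig=[np.complex64],
                                   out_sig=None)
            self.n = n_samples          # chunk length N (placeholder value)
            self.prefix = prefix        # output path prefix (placeholder)
            self.buf = np.empty(0, dtype=np.complex64)
            self.count = 0

        def work(self, input_items, output_items):
            # Append the new samples, then flush complete N-sample chunks to disk.
            self.buf = np.concatenate((self.buf, input_items[0]))
            while len(self.buf) >= self.n:
                chunk, self.buf = self.buf[:self.n], self.buf[self.n:]
                chunk.tofile("%s%06d.cfile" % (self.prefix, self.count))
                self.count += 1
            return len(input_items[0])

At the X310's higher sample rates a Python block like this may not keep up; the same buffering logic could be moved into a C++ block, but the structure stays the same.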

Related

Is there any way to automate the process in LabVIEW

I have an application built in LabVIEW for sensors. It takes files and gives output as .cal files. It takes the maximum pressure and temperature values and outputs the sensor data with minimum errors. I want to automate this process.
This application is very old and written in LabVIEW. Should I rebuild the whole application on another platform or in another language, or is there any other way to automate the error reduction and file output in LabVIEW itself?
I want to automate the whole process: I should be able to enter pressure and temperature, and it should give me the sensor value in the end.

usbmon: data rate analysis

I wrote a small program to evaluate the textual interface of the kernel module usbmon. It is supposed to calculate the data rate to or from a single bus device, so I only look at callback (C) and bulk output (Bo) events and add up the data length fields. Yet, when copying 100 MB of zeros with dd to a USB storage device, I get an overhead of roughly 1.3% (stable) when counting all the bytes together. As soon as the device is mounted, I can also see events via usbmon even when no file copying is in progress.
Does somebody have an explanation for this overhead? Is it possible to get rid of it?
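For reference, the kind of counting described above can be sketched roughly like this, counting only completed bulk-out transfers as a simplification; the monitor path, the bus/device numbers, and the exact field positions are assumptions based on the documented usbmon text format:

    # Rough sketch: sum usbmon data lengths for one device (bus 1, device 005 assumed).
    # Field layout per Documentation/usb/usbmon: tag, timestamp, event type,
    # "Type:Bus:Device:Endpoint" address word, status, data length, ...
    total = 0
    with open("/sys/kernel/debug/usb/usbmon/1u") as mon:   # path is an assumption
        for line in mon:
            fields = line.split()
            if len(fields) < 6:
                continue
            event, addr, length = fields[2], fields[3], fields[5]
            if event == "C" and addr.startswith("Bo:1:005"):
                total += int(length)        # completed bulk-out transfer
                print("running total: %d bytes" % total)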

How to prevent CPU usage from changing timing in LabVIEW?

I'm trying to write code in which, every 1 ms, a number is incremented by one and replaces the old value (something like a chronometer!).
The problem is that whenever CPU usage increases because of other programs running on the PC, this 1 ms interval also increases and the timing in my program changes!
Is there any way to prevent CPU load changes from affecting the timing in my program?
It sounds as though you are trying to generate an analogue output waveform with a digital-to-analogue converter card using software timing, where your software is responsible for determining what value should be output at any given time and updating the output accordingly.
This is OK for stationary or low-speed signals but you are trying to do it at 1 ms intervals, in other words to output 1000 samples per second or 1 ks/s. You cannot do this reliably on a desktop operating system - there are too many other processes going on which can use CPU time and block your program from running for many milliseconds (or even seconds, e.g. for network access).
Here are a few ways you could solve this:
Use buffered, hardware-clocked output if your analogue output device supports it. Instead of writing one sample at a time, you send the device a waveform or array of samples and it outputs them at regular intervals using a timing signal generated in hardware. Unfortunately, low-end DAQ devices often don't support hardware-clocked output.
Instead of expecting the loop that writes your samples to the AO to run every millisecond, read LabVIEW's Tick Count (ms) value in the loop and use that as an index into your array of samples: rather than trying to output every sample, your code will now ask 'what time is it now, and therefore what should the output be?' That won't give you a perfect signal out, but at least it should keep the correct frequency rather than being 'slowed down'; instead you will see glitches imposed on the signal whenever the loop can't keep up. This is easy to test and may well be adequate for your needs (a rough sketch of the idea appears after these options).
Use a real-time operating system instead of a desktop OS. In the case of LabVIEW this would mean using the Real-Time software module and either a National Instruments hardware device that supports RT, such as the CompactRIO series, or installing the RT OS on a dedicated PC if the hardware is compatible. This is not a cheap option, obviously (unless it's strictly for personal, home use). In any case you would need to have an RT-compatible driver for your output device.
Use your computer's sound output as the output device. LabVIEW has functions for buffered sound output and you should be able to get reliable results. You'll need to upsample your signal to one of the sound output's available sample rates, probably 44.1 ks/s. The drawbacks are that the output level is limited in range and is not calibrated, and will probably be AC-coupled so you can't output a DC or very low-frequency signal. However if the level is OK for what you want to connect it to, or you can add suitable signal conditioning, this could be a neat solution. If you need the output level to be calibrated you could simultaneously measure it with your DAQ card and scale the sound waveform you're outputting to keep it correct.
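To illustrate the second option above in ordinary code rather than LabVIEW, here is only a sketch of the idea; the waveform, sample rate, and write_output function are placeholders:

    import time
    import math

    # Placeholder waveform: one cycle of a sine, 1000 samples at 1 kS/s.
    SAMPLE_RATE = 1000  # samples per second
    waveform = [math.sin(2 * math.pi * i / SAMPLE_RATE) for i in range(SAMPLE_RATE)]

    def write_output(value):
        # Placeholder for the call that actually updates the analogue output.
        pass

    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        # Index by elapsed time instead of counting loop iterations, so
        # CPU-induced delays cause glitches but not a frequency error.
        index = int(elapsed * SAMPLE_RATE) % len(waveform)
        write_output(waveform[index])
        time.sleep(0.001)  # best-effort pacing; its accuracy is not relied upon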
The answer to your question is "not on a desktop computer." This is why products like LabVIEW Real-Time and dedicated deterministic hardware exist: you need a computer dedicated to a particular process in order to serve that process consistently. Every application on a regular Windows/Mac/Linux desktop system has the problem you are seeing of potentially being interrupted by other system processes, particularly in its UI layer.
There is no way to prevent CPU load changes from affecting the timing in your program unless the computer has a real-time clock.
If it doesn't have a real-time clock, there is no reason to expect it to behave deterministically. Do you need your program to run at that pace?

How do I edit GNU Radio's file sink output?

I recorded a signal with GNU Radio using a file sink block which outputs a raw binary file that can be analyzed or used as a source of input into GNU Radio.
I want to edit this raw file so that when I use it as a source inside GNU Radio it transmits my changed file instead of the original. For example: the signal is very long and repeats a pattern, so I want to edit the file to reduce the number of repetitions and save it back to the raw format to transmit with GNU Radio later.
I tried importing the file into Audacity as a raw file (selecting 32-bit float with 1 channel and 48 kHz as the sample rate). This lets me see the signal as audio data and I can even edit it, but I'm not sure it's saving correctly when I export it as raw data. Also, the time indices in Audacity seem to be way off; the signal should only be microseconds long, but Audacity shows it as a total of several seconds!
Anyone have any luck with editing the raw file sink output from GNU Radio?
I was able to consistently make this work. There seemed to be 3 things preventing this from working properly.
1) I was doing it wrong! I needed to output both the real and the imaginary parts to a 2-channel WAV file (a sketch of this conversion appears after this list).
2) Using a spectrum analyzer, I was able to see that Audacity was doing something really weird with the WAV file when you delete a section of audio, so to combat this I "silenced" the section of audio I wanted to delete instead.
3) There seems to be a bug with GNU Radio and the osmocom sink (yes, I have the latest versions of both, built from source). If you run your flow graph, start transmitting, and then stop the flow graph by clicking the red X in GNU Radio (kill the flow graph), my device (HackRF) keeps transmitting! If you then try to transmit a new file, or the same file again, it will not transmit that signal because the device is already trying to transmit something. To stop the device from transmitting, just close the block popup window that appears when you run the flow graph.
The third item might not be a bug, because I might have been stopping my flow graphs incorrectly to begin with, but in Michael Ossmann's tutorial on using the HackRF with GNU Radio he says to click the red X to properly shut down the flow graph and clean everything up; this appears NOT to be the case.
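For the first point, the conversion between GNU Radio's raw complex output and a 2-channel WAV can be sketched with NumPy and SciPy; the file names and the nominal sample rate are placeholders, and the file sink is assumed to have written complex 32-bit float samples:

    import numpy as np
    from scipy.io import wavfile

    NOMINAL_RATE = 48000  # placeholder rate just so Audacity will open the file

    # Raw file sink output: interleaved 32-bit floats == numpy complex64.
    iq = np.fromfile("capture.cfile", dtype=np.complex64)

    # Real part -> left channel, imaginary part -> right channel.
    stereo = np.column_stack((iq.real, iq.imag)).astype(np.float32)
    wavfile.write("capture.wav", NOMINAL_RATE, stereo)

    # ... edit capture.wav in Audacity, export it as a 32-bit float WAV
    #     (keeping 32-bit float avoids rescaling the sample values) ...

    rate, edited = wavfile.read("edited.wav")
    edited = edited.astype(np.float32)
    (edited[:, 0] + 1j * edited[:, 1]).astype(np.complex64).tofile("edited.cfile")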
In the gr-utils/octave folder of the GNU Radio source code there are several functions for Octave and MATLAB. Some of them allow you to read and write raw binary files of the corresponding data type.
For example, if your signal is made of float samples, you can use the read_float_binary function to import the samples stored by the file sink block into Octave/MATLAB. Then make your modifications to the signal and store it again using the write_float_binary function. The stored file can then be imported into your flowgraph using a file source block.
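If you would rather stay in Python than Octave/MATLAB, the same read/modify/write round trip can be done with NumPy; this is only an equivalent of the gr-utils functions, not those functions themselves, and the file names and indices below are made up:

    import numpy as np

    # File sink output for a float stream: raw 32-bit floats.
    samples = np.fromfile("original.dat", dtype=np.float32)

    # Example edit: drop one repetition of the pattern (indices are placeholders).
    start, stop = 100000, 150000
    edited = np.concatenate((samples[:start], samples[stop:]))

    # Write back in the same raw format so a file source block can read it.
    edited.tofile("edited.dat")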

Using NAudio, How do I get the amplitude and rhythm of an MP3 file?

The wife asked for a device to make the Xmas lights 'rock' with the best of music. I am going to use an Arduino micro-controller to control relays hooked up to the lights, sending down 6 signals from C# WinForms to turn them off and on. I want to use NAudio to separate out the amplitude and rhythm to drive the six signals: a specific frequency range for each, like an equalizer with six bars for the six signals, and the timing from the rhythm. I have seen the WPF demo, and the waveform seems like the answer. I want to know how to get those values in real time while the song is playing.
I'm thinking ...
1. Create a simple mp3 player and load all my songs.
2. Start the songs playing.
3. Sample the current dynamics of the song and turn them into integers that I can send to the appropriate channels on the Arduino micro-controller via USB.
I'm not sure how to capture the current sound information in real time and turn it into integer values for that moment. I can read the e.MaxSampleValues[0] values in real time while the song is playing, but I want to be able to distinguish which frequency range is active at that moment.
Any help or direction would be appreciated for this interesting project.
Thank you
Sounds like a fun signal processing project.
Using the NAudio.Wave.WasapiLoopbackCapture object you can get the audio data being produced by the sound card on the local computer. This lets you skip the 'create an MP3 player' step, although at the cost of a slight delay between sound and lights. To get better synchronization you can do the MP3 decoding yourself and pre-calculate the beat patterns and output states ahead of playback. That will let you adjust the delay between sending the outputs and playing the audio block those outputs were generated from, getting near-perfect synchronization between lights and music.
Once you have the samples, the next step is to use an FFT to find the frequency components. Fortunately NAudio includes a class to help with this: NAudio.Dsp.FastFourierTransform. (Thank you Mark!) Take the output of the FFT() function and sum the bins over the frequency ranges you want for each controlled light.
The next step is Beat Detection. There's an interesting article on this here. The main difference is that instead of doing energy detection on a stream of sample blocks you'll be using the data from your spectral analysis stage to feed the beat detection algorithm. Those ranges you summed become inputs into individual beat detection processors, giving you one output for each frequency range you defined. You might want to add individual scaling/threshold factors for each frequency group, with some sort of on-screen controls to adjust these for best effect.
At the end of the process you will have a stream of sample blocks, each with a set of output flags. Push the flags out to your Arduino and queue the samples to play, with a delay on either of those operations to achieve your synchronization.
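Since NAudio is a .NET library, here is the overall per-block processing written out in Python purely to illustrate the idea; the band edges, threshold factor, and history length are made-up numbers, and the NumPy FFT stands in for NAudio.Dsp.FastFourierTransform:

    import numpy as np

    SAMPLE_RATE = 44100
    BLOCK = 1024
    # Six made-up frequency bands (Hz), one per relay channel.
    BANDS = [(20, 60), (60, 250), (250, 500), (500, 2000), (2000, 6000), (6000, 16000)]
    THRESHOLD = 1.5          # beat = band energy exceeds 1.5x its recent average
    HISTORY_BLOCKS = 43      # roughly one second of history at 1024-sample blocks

    history = [[] for _ in BANDS]

    def block_to_flags(samples):
        """Return six on/off flags for one block of mono float samples."""
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        freqs = np.fft.rfftfreq(len(samples), 1.0 / SAMPLE_RATE)
        flags = []
        for i, (lo, hi) in enumerate(BANDS):
            energy = spectrum[(freqs >= lo) & (freqs < hi)].sum()
            avg = np.mean(history[i]) if history[i] else energy
            flags.append(energy > THRESHOLD * avg)       # simple beat detection
            history[i].append(energy)
            if len(history[i]) > HISTORY_BLOCKS:
                history[i].pop(0)
        return flags

    # Usage sketch: feed successive audio blocks, push the flags to the Arduino.
    # for block in audio_blocks:
    #     flags = block_to_flags(block)
    #     serial_port.write(bytes(int(f) for f in flags))   # placeholder transport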