I recorded a signal with GNU Radio using a file sink block which outputs a raw binary file that can be analyzed or used as a source of input into GNU Radio.
I want to edit this raw file so that when I use it as a source inside GNU Radio it transmits my changed file instead of the original. For example, the signal is very long and repeats a pattern; I want to edit the file to reduce the number of repetitions and save it back to the raw format to transmit with GNU Radio later.
I tried importing the file into Audacity as a raw file (selecting 32-bit float with 1 channel and 48k as the sample rate). This lets me see the signal as audio data and I can even edit it, but I'm not sure it's being saved correctly when I export it as raw data. Also, the time indices in Audacity seem to be way off; the signal should only be microseconds long, but Audacity shows it as several seconds in total!
Anyone have any luck with editing the raw file sink output from GNU Radio?
I was able to consistently make this work. There seemed to be 3 things preventing this from working properly.
1) I was doing it wrong! I needed to output both the real and the imaginary parts to a 2-channel WAV file (see the sketch below).
2) Using a spectrum analyzer, I could see that Audacity was doing something really weird to the WAV file when you delete a section of audio, so to work around this I "silenced" the section of audio I wanted to delete instead.
3) There seems to be a bug with GNU Radio and the osmocom Sink (yes, I have the latest version of both, built from source). If you run your flow graph, start transmitting, and then stop the flow graph by clicking the red X in GNU Radio (kill the flow graph), the device (my HackRF) keeps transmitting! If you then try to transmit a new file, or the same file again, the signal won't go out because the device is already busy transmitting. To stop the device from transmitting, just close the block popup window that appears when you run the flow graph.
The third item might not be a bug, because I might have been stopping my flow graphs incorrectly to begin with, but in Michael Ossmann's tutorial on using the HackRF with GNU Radio he says to click the red X to properly shut down the flow graph and clean everything up; this appears NOT to be the case.
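A rough Python sketch of the WAV round trip described above (untested as written; it assumes the capture is complex float32 I/Q from the file sink and that scipy is available; the file names and the 48k rate are placeholders):

    import numpy as np
    from scipy.io import wavfile

    SAMPLE_RATE = 48000  # only affects Audacity's time axis, not the samples

    # a file sink fed complex samples writes interleaved float32 I/Q pairs
    iq = np.fromfile("capture.cfile", dtype=np.complex64)

    # channel 0 = real (I), channel 1 = imaginary (Q)
    stereo = np.column_stack((iq.real, iq.imag)).astype(np.float32)
    wavfile.write("capture.wav", SAMPLE_RATE, stereo)  # 32-bit float WAV

    # ... edit capture.wav in Audacity (silence sections instead of
    # deleting them), then export as a 2-channel 32-bit float WAV ...

    rate, edited = wavfile.read("edited.wav")
    iq_out = (edited[:, 0] + 1j * edited[:, 1]).astype(np.complex64)
    iq_out.tofile("edited.cfile")  # ready for a file source block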
In the gr-utils/octave folder of the GNU Radio source code there are several functions for Octave and MATLAB. Some of them let you read and write raw binary files of the corresponding data type.
For example, if your signal is made of float samples you can use the read_float_binary function to import the samples stored by the file sink block into Octave/MATLAB. Then make your modifications to the signal and store it back again using the write_float_binary function. The stored file can then be imported into your flowgraph using a file source block.
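If you'd rather avoid Octave/MATLAB, numpy can do the same round trip; a minimal sketch, assuming float samples (the file names and the particular edit are placeholders):

    import numpy as np

    # read what the file sink wrote: raw float32 samples, no header
    samples = np.fromfile("capture.f32", dtype=np.float32)

    # example edit: keep only the first third to drop repeated patterns
    trimmed = samples[: len(samples) // 3]

    # write back in the same raw format for a file source block
    trimmed.tofile("edited.f32")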
I'm trying to tap the currently selected output audio device on macOS, so I basically have a pass-through listener that can monitor the audio stream currently being output without affecting it.
I want to copy this data to a ring buffer in real time so I can operate on it separately.
The combination of Apple docs and (outdated?) SO answers is confusing as to whether I need to write a hacky kernel extension, can use CoreAudio for this, or need to interface with the HAL.
I would like to work in Swift if possible.
Many thanks
(ps. I had been looking at this and this)
I don't know about kernel extensions - their use of special "call us" signing certificates or the necessity of turning off SIP discourages casual exploration.
However, you can use a combination of CoreAudio and HAL AudioServer plugins to do what you want, and you don't even need to write the plugin yourself; there are several open-source versions to choose from.
CoreAudio doesn't give you a way to record from (or "tap") output devices; you can only record from input devices. The way around this is to create a virtual "pass-through" device (an AudioServerPlugin), not associated with any hardware, that copies output through to input, then set this pass-through device as the default output and record from its input. I've done this using open-source AudioServer plugins like BackgroundMusic and BlackHole.
To tap/record from the resulting device you can simply add an AudioDeviceIOProc callback to it, or set the device as the kAudioOutputUnitProperty_CurrentDevice of a kAudioUnitSubType_HALOutput AudioUnit.
There are two problems with the above virtual pass through device approach:
you can't hear your output anymore, because it's being consumed by the pass-through device
changing the default output device will switch away from your pass-through device, and the tap will fall silent.
If 1. is a problem, then a simple workaround is to create a Multi-Output Device containing the pass-through device and a real output device, and set this as the default output device. Volume controls stop working, but you can still change the real output device's volume in Audio MIDI Setup.app.
For 2. you can add a listener to the default output device and update the multi-output device above when it changes.
You can do most of the above in Swift, although for stowing samples into the ring buffer from the buffer-delivery callbacks you'll have to use C or some other language that can respect the realtime audio rules (no locks, no memory allocation, etc.). You could maybe try AVAudioEngine to do the tap, but IIRC changing the input device is a vale of tears.
I need a way to send some data to the microcontroller through TRACE32. I've heard this is possible somehow, but I have no idea where to start.
What I am actually trying to do is run a piece of code on an AURIX TC297 microcontroller to do some measurements (runtime, RAM, etc.). This piece of code is a Kalman filter that needs as input a vector of structs that I have to send from the computer through TRACE32. Please help!
"A way to send some data to the ucontroller through Trace32" is a little bit vague. There are various possibilities depending on what your actually try to achieve and might also depend on the used CPU family and target OS. Anyhow one of the following might work:
Simply writing some raw data to the target memory can be achieved with the Data.Set command.
To transfer a big amount of data (or even a whole application) from a file to the target memory, the Data.LOAD commands might be the right choice, e.g. the Data.LOAD.Binary command for a raw binary file.
To set variables in your application or even initialize C-style data arrays, use the Var.Set command.
To write data to NOR flash or onchip flash memory you'll need the FLASH.AUTO command in addition to the previously mentioned commands (after declaring the flash memory to TRACE32).
To write data to a NAND, SPI or other serial flash memory you probably should use the FLASHFILE.Set command (after initialization of the FLASHFILE programming system).
To transfer data from TRACE32 to your target while the CPU is running, you might have to configure SYStem.MemAccess correctly and use the memory access class prefix "E", e.g. Data.Set E:<addr> <data> or Var.Set %E <expression>.
You can use FDX for a bidirectional data transfer between debugger and a running target application.
To enable the target application to open and read files on the computer running TRACE32, you have to compile your application with suitable semihosting code and initiate semihosting in TRACE32 with the TERM.GATE command.
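For instance, a minimal PRACTICE sketch combining a few of the options above (the load address, file name, and variable name are placeholders you'd adapt to your memory map and application):

    ; load a raw binary file with the input structs into target RAM
    Data.LOAD.Binary vector.bin 0x70000000

    ; update a variable of the running application via run-time memory
    ; access (meas_count is a placeholder for a variable in your code)
    Var.Set %E meas_count=32

    ; poke a single 32-bit word directly, again while the CPU runs
    Data.Set E:0x70000100 %Long 0x00000020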
I'm using a USB-6356 DAQ board to control an IC via SPI.
I'm using parts of the NI SPI Digital Waveform library to create the digital waveform, then a small wrapper VI to transmit the code.
My IC measures temperature on an RTD, and currently the controlling VI has a 'push for single measurement' style button.
When I push it, the temperature is returned by a series of other VIs running the SPI communication.
After some number of pushes (clicking the button very quickly makes this happen sooner, though not necessarily in fewer clicks), the VI generates error -200361, which is nominally a FIFO buffer overflow on the DAQ board.
It's unclear to me if that could actually be the cause of the problem, but I don't think so...
An NI guide describing this error for USB-600{0,8,9} devices looks promising, but following its suggestions didn't help me. I substituted 'DI.UsbXferReqCount' for the analog equivalent, since my read task is digital. Reading the default returned 4, so I switched the property node to write and set it to '1', but this made no difference.
I tried uninstalling the DAQ board using the Device Manager, unplugging and replugging, but this also didn't change anything.
My guess is that additional clock samples are generated after the end of the 'Finite Samples' portion of the Read and Write tasks, and that these might be adding blank data that overflows the buffer; but the temperatures returned don't indicate strange data, and I'd have assumed that if this were the case, my VIs would be unable to interpret the data read in as the correct temperature.
I've attached an image of the block diagram for the Transmit VI I'm using, but actually getting it to run would require an entire library of VIs.
The controlling VI is attached to a nearly identical forum post at NI forums.
I think the USB-6356 doesn't have an output buffer for digital signals. You can check this in NI MAX: if you select digital output, you may find that there are no parameters for samples; it only outputs a Boolean value (0 or 1) at a time.
You can also use the DAQ Assistant in LabVIEW: when you configure a digital output and select N Samples or Continuous Samples, then push the OK button, a dialog appears telling you there is no buffer for the lines you selected.
I've been looking at the NAudio demo application "Audio file playback". What I'm missing from this demo is a way to get hold of the samples while the audio file is being played.
I figured that it would somehow be possible to fill a BufferedWaveProvider with samples using a callback whenever new samples are needed, but I can't figure out how.
My other (non-preferred) idea is to make a special version of e.g. DirectSoundOut where I can get hold of the samples before they are written to the sound card.
Any ideas?
With audio file playback in NAudio you construct an audio pipeline, starting with your audio file and going through various transformations (e.g. changing volume) along the way before ending up at your output device. The NAudioDemo does in fact show how the samples can be accessed along the way by drawing a waveform (pre-volume adjustment) and by showing a volume meter (post-volume adjustment).
You could, for example, create an implementation of IWaveProvider or ISampleProvider and insert it into the pipeline. Then, in the Read method, you read from your source, and you can process, examine, or write the samples to disk before passing them on to the next stage in the pipeline. Look at AudioPlaybackPanel.CreateInputStream to see how this is done in the demo.
I am trying to record audio from a microphone/iSight camera on a Mac to an NSData object.
I have tried to do it using QTKit, but I found out that you can only save it as a .mov file.
But the fact is that I want to encode the audio into a FLAC file. Is that possible, or will I need to use another framework?
Thanks.
Grab the source for VLC (if you can deal with the GPL -- it has limitations on use that many find onerous) and have a read. It does transcoding, amongst other things.
Beyond that, one dead simple approach is to save as AIFF and then use a command line tool (via NSTask) to do the conversion.
Or you could just go with Apple Lossless -- it is open source these days.
Of course, this also raises the question: why do you need lossless compression when recording voice [low bandwidth in the first place] via a relatively sub-par microphone?