Get samples from a WAV file while audio is being played with NAudio

I've been looking at the NAudio demo application "Audio file playback". What I'm missing from this demo is a way to get hold of the samples while the audio file is being played.
I figured that it would somehow be possible to fill a BufferedWaveProvider with samples using a callback whenever new samples are needed, but I can't figure out how.
My other (non-preferred) idea is to make a special version of e.g. DirectSoundOut where I can get hold of the samples before they are written to the sound card.
Any ideas?

With audio file playback in NAudio you construct an audio pipeline, starting with your audio file and going through various transformations (e.g. changing volume) along the way before ending up at your output device. The NAudioDemo does in fact show how the samples can be accessed along the way by drawing a waveform (pre-volume adjustment) and by showing a volume meter (post-volume adjustment).
You could, for example, create an implementation of IWaveProvider or ISampleProvider and insert it into the pipeline. Then, in its Read method, read from your source and process, examine, or write the samples to disk before passing them on to the next stage in the pipeline. Look at AudioPlaybackPanel.CreateInputStream in the demo to see how this is done.
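For illustration, here is a minimal pass-through ISampleProvider sketch; the interface and the Read signature are NAudio's, while the SamplesRead event is just a hypothetical hook for whatever inspection you want to do:

```csharp
using System;
using NAudio.Wave;

// A minimal "tap" in the pipeline: reads from its source, exposes the
// samples, then passes them along unchanged.
public class SampleTapProvider : ISampleProvider
{
    private readonly ISampleProvider source;

    public SampleTapProvider(ISampleProvider source) => this.source = source;

    public WaveFormat WaveFormat => source.WaveFormat;

    // Hypothetical hook: (buffer, offset, samplesRead)
    public event Action<float[], int, int> SamplesRead;

    public int Read(float[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        // Examine, process, or persist buffer[offset..offset+read] here.
        SamplesRead?.Invoke(buffer, offset, read);
        return read;
    }
}
```

To use it, wrap your AudioFileReader (which implements ISampleProvider) in the tap and hand it to the output device, e.g. outputDevice.Init(new SampleToWaveProvider(tap)).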

Related

Is there a way to record to disk the mixed result of multiple concurrently playing JavaFX AudioClip sounds?

My program launches an arbitrary number of sounds streamed from FreeSound.org and plays them using javafx.scene.media.AudioClip instances.
I was trying to figure out whether it could be possible to capture the generated output to disk, from within the same program? Any pointers?
Instead of using AudioClip you could use SourceDataLine for playback. This class lets you read the audio data progressively, exposing it for handling. You would have to decode the bytes to PCM for each incoming line, sum the PCM from all the lines to merge them, and re-encode the result back to bytes for your output.
I suspect that with a little tweaking you could get AudioCue, a library I wrote, to work for you. It has an optional mixer that will handle multiple cue inputs. The inputs make use of SourceDataLine, and the mixer uses the logic I described. You would have to tweak the code to output to disk; I could probably help with that if this project is still live for you.
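The merge step itself is plain arithmetic on the decoded samples. Here is a sketch (in C# to match the other examples on this page; the same logic applies to Java SourceDataLine buffers), assuming each line has already been decoded to normalized floats of equal length:

```csharp
using System;

// Mix several decoded lines into one 16-bit PCM stream.
static short[] MixTo16Bit(float[][] decodedLines, int frameCount)
{
    var mixed = new short[frameCount];
    for (int i = 0; i < frameCount; i++)
    {
        float sum = 0f;
        foreach (var line in decodedLines)
            sum += line[i];                        // add the PCM from all lines
        sum = Math.Clamp(sum, -1f, 1f);            // prevent wrap-around clipping
        mixed[i] = (short)(sum * short.MaxValue);  // recode for output or disk
    }
    return mixed;
}
```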

How do I edit GNU Radio's file sink output?

I recorded a signal with GNU Radio using a file sink block which outputs a raw binary file that can be analyzed or used as a source of input into GNU Radio.
I want to edit this raw file so that when I use it as a source inside GNU Radio it transmits my changed file instead of the original. For example, the signal is very long and repeats a pattern; I want to edit the file to reduce the number of repetitions and save it back to the raw format to transmit with GNU Radio later.
I tried importing the file into Audacity as raw data (32-bit float, 1 channel, 48 kHz sample rate). This lets me see the signal as audio data and even edit it, but I'm not sure it's being saved correctly when I export it as raw data. Also, the time indices in Audacity seem to be way off; the signal should only be microseconds long, but Audacity shows it as a total of several seconds!
Anyone have any luck with editing the raw file sink output from GNU Radio?
I was able to make this work consistently. There seemed to be three things preventing it from working properly.
1) I was doing it wrong! I needed to output both the real and the imaginary components to a 2-channel WAV file.
2) Using a spectrum analyzer, I could see that Audacity does something really weird to the WAV file when you delete a section of audio, so to work around this I "silenced" the section I wanted to delete instead.
3) There seems to be a bug with GNU Radio and the osmocom sink (yes, I have the latest versions of both, built from source). If you run your flow graph, start transmitting, and then stop the flow graph by clicking the red X in GNU Radio (kill the flow graph), it keeps my device (a HackRF) transmitting! If you then try to transmit a new file, or the same file again, it will not transmit that signal because the device is already trying to transmit something. To stop the device from transmitting, just close the block popup window that appears when you run the flow graph.
The third item might not be a bug, because I might have been stopping my flow graphs incorrectly to begin with; but in Michael Ossmann's tutorial on using the HackRF with GNU Radio, he says to click the red X to properly shut down the flow graph and clean everything up, and this appears to NOT be the case.
In the gr-utils/octave folder of the GNU Radio source code there are several functions for Octave and Matlab. Some of them let you retrieve and store raw binary files of the corresponding data type.
For example, if your signal is made of float samples you can use the read_float_binary function to import the samples stored by the file sink block into Octave/Matlab. Then make your modifications to the signal and store it again using the write_float_binary function. The stored file can then be imported into your flowgraph using a file source block.
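If you don't have Octave/Matlab handy, the same round trip works in any language that can read raw 32-bit floats. A sketch in C#, where the file names and the kept sample range are placeholders:

```csharp
using System;
using System.IO;

// Read the file sink's raw 32-bit float samples, keep one repetition of
// the pattern, and write the result back for a file source block.
byte[] raw = File.ReadAllBytes("capture.f32");
var samples = new float[raw.Length / sizeof(float)];
Buffer.BlockCopy(raw, 0, samples, 0, raw.Length);

int keep = Math.Min(100_000, samples.Length);   // e.g. one period of the pattern
var trimmed = new float[keep];
Array.Copy(samples, trimmed, keep);

var outBytes = new byte[keep * sizeof(float)];
Buffer.BlockCopy(trimmed, 0, outBytes, 0, outBytes.Length);
File.WriteAllBytes("trimmed.f32", outBytes);
```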

Using NAudio, How do I get the amplitude and rhythm of an MP3 file?

The wife asked for a device to make the Xmas lights 'rock' with the best of music. I am going to use an Arduino microcontroller to control relays hooked up to the lights, sending six signals down from a C# WinForms app to turn them off and on. I want to use NAudio to extract the amplitude and rhythm to drive the six signals: a specific frequency range for each signal, like an equalizer with six bars, plus the timing from the rhythm. I have seen the WPF demo, and the waveform display seems like the answer. I want to know how to get those values in real time while the song is playing.
I'm thinking ...
1. Create a simple mp3 player and load all my songs.
2. Start the songs playing.
3. Sample the current dynamics of the song and convert them into integers that I can send to the appropriate channels on the Arduino microcontroller via USB.
I'm not sure how to capture the current sound information in real time and turn it into integer values for that moment. I can read the e.MaxSampleValues[0] values in real time while the song is playing, but I want to be able to distinguish which frequency range is active at that moment.
Any help or direction would be appreciated for this interesting project.
Thank you
Sounds like a fun signal processing project.
Using the NAudio.Wave.WasapiLoopbackCapture class you can get the audio data being produced by the sound card on the local computer. This lets you skip the 'create an MP3 player' step, although at the cost of a slight delay between sound and lights. To get better synchronization you can decode the MP3 yourself, pre-calculate the beat patterns and output states, and apply them during playback. This lets you adjust the delay between sending the outputs and playing the audio block those outputs were generated from, giving near-perfect synchronization between lights and music.
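A minimal capture sketch, assuming the loopback format is IEEE float (check capture.WaveFormat in practice):

```csharp
using System;
using NAudio.Wave;

// Capture whatever the sound card is rendering. WasapiLoopbackCapture
// normally delivers 32-bit IEEE float samples; verify capture.WaveFormat.
var capture = new WasapiLoopbackCapture();
capture.DataAvailable += (sender, e) =>
{
    int floatCount = e.BytesRecorded / sizeof(float);
    var samples = new float[floatCount];
    Buffer.BlockCopy(e.Buffer, 0, samples, 0, e.BytesRecorded);
    // Feed 'samples' into the FFT stage described below.
};
capture.StartRecording();
```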
Once you have the samples, the next step is to use an FFT to find the frequency components. Fortunately NAudio includes a class to help with this: NAudio.Dsp.FastFourierTransform. (Thank you Mark!) Take the output of the FFT() function and sum the bins in the frequency ranges you want for each controlled light.
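A sketch of that stage, assuming 1024-sample blocks and six equal-width bands (a real build would pick musically sensible band edges):

```csharp
using System;
using NAudio.Dsp;

// Window a block, run NAudio's FFT, and sum bin magnitudes into six bands.
// Assumes block.Length >= 1024.
static float[] BandEnergies(float[] block)
{
    const int m = 10;                 // 2^10 = 1024 samples per block
    int n = 1 << m;
    var data = new Complex[n];
    for (int i = 0; i < n; i++)
    {
        data[i].X = block[i] * (float)FastFourierTransform.HammingWindow(i, n);
        data[i].Y = 0f;
    }
    FastFourierTransform.FFT(true, m, data);

    var bands = new float[6];
    int binsPerBand = (n / 2) / bands.Length;
    for (int bin = 0; bin < n / 2; bin++)   // only the first half is meaningful
    {
        float mag = (float)Math.Sqrt(data[bin].X * data[bin].X + data[bin].Y * data[bin].Y);
        bands[Math.Min(bin / binsPerBand, bands.Length - 1)] += mag;
    }
    return bands;
}
```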
The next step is Beat Detection. There's an interesting article on this here. The main difference is that instead of doing energy detection on a stream of sample blocks you'll be using the data from your spectral analysis stage to feed the beat detection algorithm. Those ranges you summed become inputs into individual beat detection processors, giving you one output for each frequency range you defined. You might want to add individual scaling/threshold factors for each frequency group, with some sort of on-screen controls to adjust these for best effect.
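A toy version of one such per-band processor might look like this; the history length (~1 second of 1024-sample blocks at 44.1 kHz) and the threshold are assumptions to tune by ear:

```csharp
using System.Linq;

// Flag a beat when a band's instant energy exceeds a tunable multiple
// of its recent average energy.
class BandBeatDetector
{
    private readonly float[] history = new float[43];
    private int index;

    public float Threshold { get; set; } = 1.4f;   // per-band scaling knob

    public bool Update(float bandEnergy)
    {
        float average = history.Average();
        history[index] = bandEnergy;
        index = (index + 1) % history.Length;
        return average > 0f && bandEnergy > Threshold * average;
    }
}
```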
At the end of the process you will have a stream of sample blocks, each with a set of output flags. Push the flags out to your Arduino and queue the samples to play, with a delay on either of those operations to achieve your synchronization.

Objective-C play sound

I know how to play mp3 files and whatnot in Xcode iOS. But how do I play a certain frequency, like if I just wanted to emit a C# note for 25 seconds; how might I do that? (The synth isn't as important to me as just the pitch of the note.)
You need to generate the PCM audio waveform that corresponds to the note you want to play and store that into a sample buffer in memory. Then you send that buffer to the audio hardware.
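The synthesis itself is just a sine loop; this sketch is in C# to match the other examples on this page, but the same few lines translate directly to Objective-C/C (the 0.5 amplitude and 44.1 kHz rate are arbitrary choices):

```csharp
using System;

// Fill a buffer with a 25-second C#5 note (~554.37 Hz) at 44.1 kHz.
const int sampleRate = 44100;
const double frequency = 554.37;   // C#5; pick the octave you want
const double seconds = 25.0;

var buffer = new float[(int)(sampleRate * seconds)];
for (int i = 0; i < buffer.Length; i++)
    buffer[i] = 0.5f * (float)Math.Sin(2.0 * Math.PI * frequency * i / sampleRate);
// Hand 'buffer' to the audio API of your choice (Audio Queue, RemoteIO, Finch, ...).
```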
Here is a tutorial on generating waveforms of several types. The article goes into detail on the many aspects of a note you need to consider, including frequency, volume, waveform shape, sampling rate, and so on. The article comes with Flash source code, but you should have no problem taking the concepts and adapting them to iOS.
If you also need a library that you can use to play the generated buffers on iOS, then I recommend the open source Finch.
I hope this helps!
You can synthesize waveforms of your desired frequency and feed them to the callbacks of either the Audio Queue or the RemoteIO Audio Unit API.
Here is a short tutorial on some of the code needed to create sine wave tones for iOS in C.

When reading audio file with ExtAudioFile read, is it possible to read audio floats not consecutively?

I'm trying to draw a waveform from an MP3 file.
I've succeeded in extracting floats using the ExtAudioFileReadTest app provided with the Core Audio SDK documentation (link: http://stephan.bernsee.com/ExtAudioFileReadTest.zip), but it reads the floats consecutively.
The problem is that my audio file is very long (about 1 hour), so reading the floats consecutively takes a long time.
Is it possible to skip ahead in the audio file, read a small portion of the audio, then skip and read again?
Yes, use ExtAudioFileSeek() to seek to the desired sample frame. It has some bugs depending on what format you're using (or did on 10.6), but MP3 should be OK.