How to get audio sample data from an MP3 using NAudio

I want to decode an MP3 file into one large array of audio samples, and I want those samples to be floats.
NAudio.Wave.WaveStream pcm = NAudio.Wave.WaveFormatConversionStream.CreatePcmStream(new NAudio.Wave.Mp3FileReader(OFD.FileName));
So far I can get the PCM stream and play it back fine, but I don't know how to read the raw data out of the stream.

Use AudioFileReader. It implements ISampleProvider, so its Read method lets you read directly into a float array of samples.
Alternatively, use the ToSampleProvider method after your Mp3FileReader. You don't need WaveFormatConversionStream at all, since Mp3FileReader (and AudioFileReader) already decompresses the MP3 frames.
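A minimal sketch of the AudioFileReader approach (the buffer size and the List-based accumulation are my own choices; samples come back interleaved for stereo files):

```csharp
using System;
using System.Collections.Generic;
using NAudio.Wave;

static class Mp3Samples
{
    // Decode an entire MP3 into one float array of samples.
    static float[] ReadAllSamples(string path)
    {
        using (var reader = new AudioFileReader(path))
        {
            var samples = new List<float>();
            // One second's worth of samples per Read call.
            var buffer = new float[reader.WaveFormat.SampleRate * reader.WaveFormat.Channels];
            int read;
            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
            {
                for (int i = 0; i < read; i++)
                    samples.Add(buffer[i]);
            }
            return samples.ToArray(); // interleaved L/R if the file is stereo
        }
    }
}
```

Be aware that an hour-long MP3 decoded to floats can easily occupy hundreds of megabytes, so for long files consider processing the buffer inside the loop instead of accumulating everything.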

Related

Is it possible to set Windows.Media.SpeechSynthesis stream format as in SAPI 5.3?

I'm using Windows.Media.SpeechSynthesis (C++/WinRT) to convert text to an audio file. Previously I used SAPI, where it was possible to set the audio format when binding to a file via SPBindToFile(...) before speaking.
Is there any similar method in Windows.Media.SpeechSynthesis? It seems it is only possible to get a 16 kHz, 16-bit, mono wave stream. Is that right?
Does SpeechSynthesisStream already contain a real audio stream after speech synthesis, or does it hold some precalculated raw data, with the actual encoding happening when its data is accessed (playback on a device, or copying to another, non-speech-specific stream)?
Thank you!
I think it should be possible to control the speech synthesis stream format somehow.
The WinRT synthesis engines output 16 kHz, 16-bit mono data. There isn't any resampling layer to change the format.

Get samples from a wav file while audio is being played with NAudio

I've been looking at the NAudio demo application "Audio file playback". What I'm missing from this demo is a way to get hold of the samples while the audio file is being played.
I figured that it would somehow be possible to fill a BufferedWaveProvider with samples using a callback whenever new samples are needed, but I can't figure out how.
My other (non-preferred) idea is to make a special version of e.g. DirectSoundOut where I can get hold of the samples before they are written to the sound card.
Any ideas?
With audio file playback in NAudio you construct an audio pipeline, starting with your audio file and going through various transformations (e.g. changing volume) along the way before ending up at your output device. The NAudioDemo does in fact show how the samples can be accessed along the way by drawing a waveform (pre-volume adjustment) and by showing a volume meter (post-volume adjustment).
You could, for example, create an implementation of IWaveProvider or ISampleProvider and insert it into the pipeline. Then, in the Read method, you read from your source, and you can process, examine, or write the samples to disk before passing them on to the next stage in the pipeline. Look at AudioPlaybackPanel.CreateInputStream in the demo to see how this is done.
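One possible shape for such a pass-through provider (the SampleAvailable event is my own invention for illustration, not an NAudio API):

```csharp
using System;
using NAudio.Wave;

// A pass-through ISampleProvider that exposes samples as they flow
// through the playback pipeline, without altering them.
class TappedSampleProvider : ISampleProvider
{
    private readonly ISampleProvider source;

    // Raised with (buffer, offset, count) for each block read.
    public event Action<float[], int, int> SampleAvailable;

    public TappedSampleProvider(ISampleProvider source)
    {
        this.source = source;
    }

    public WaveFormat WaveFormat => source.WaveFormat;

    public int Read(float[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        // Examine, record, or visualize the samples before passing them on.
        SampleAvailable?.Invoke(buffer, offset, read);
        return read;
    }
}
```

You would wrap your file reader in this class and hand the wrapper to the output device, so every block the soundcard pulls also passes through your callback.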

Playback buffer of audio values - iOS

I'm trying to play back an NSArray of sample audio values that I have created in Objective-C. I've generated the values with no problem, but how do I go about playing them back through an iPhone?
Thanks
I would suggest converting the NSArray to an NSData object, then using the AVAudioPlayer method initWithData:error: (see here) to load as playable audio. AVAudioPlayer has the advantage of being extremely simple to use in relation to Audio Queue and Audio Unit methods.
How you go about converting the NSArray to NSData depends on the type of your samples (this SO post might give an idea of how it could be done, although the NSKeyedArchiver archiving process might mangle your samples). If your sample-generation process allows, I would suggest writing your samples straight into an NSData object and skipping such a conversion.
Copy the values out of the NSArray into a C array of appropriately scaled PCM samples. You can then convert that array to a WAV file by prepending a RIFF header and play the file with an AVAudioPlayer, or you can feed the PCM samples directly into the C array buffers of an Audio Queue or the RemoteIO Audio Unit in their audio callbacks.
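For reference, the RIFF header mentioned above is only 44 bytes for canonical 16-bit PCM. A sketch of building it (shown in C# for consistency with the NAudio answers on this page; the byte layout is identical in any language, so it translates directly to an NSMutableData on iOS):

```csharp
using System.IO;
using System.Text;

static class WavHeader
{
    // Build the 44-byte canonical WAV header for raw PCM data.
    static byte[] Build(int sampleRate, short channels, short bitsPerSample, int dataBytes)
    {
        short blockAlign = (short)(channels * bitsPerSample / 8);
        int byteRate = sampleRate * blockAlign;
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write(Encoding.ASCII.GetBytes("RIFF"));
            w.Write(36 + dataBytes);           // size of everything after this field
            w.Write(Encoding.ASCII.GetBytes("WAVE"));
            w.Write(Encoding.ASCII.GetBytes("fmt "));
            w.Write(16);                       // fmt chunk size for PCM
            w.Write((short)1);                 // audio format: 1 = linear PCM
            w.Write(channels);
            w.Write(sampleRate);
            w.Write(byteRate);
            w.Write(blockAlign);
            w.Write(bitsPerSample);
            w.Write(Encoding.ASCII.GetBytes("data"));
            w.Write(dataBytes);                // length of the PCM payload that follows
            return ms.ToArray();
        }
    }
}
```

Write the header followed by the raw little-endian PCM samples and the result is a playable .wav file.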

Convert from AIFF to AAC using Apple API only

I am creating a movie file using QTMovie from QTKit and everything's working nicely. The only problem I have is that the audio in the resulting MOV file is just the raw AIFF hence the file size is larger than I'd like. I've seen plenty about third party libraries capable of encoding to AAC but are there any Apple APIs which I can call to do this job? I don't mind converting the AIFF to AAC prior to adding it to my QTMovie or having the encoding done as part of writing the QTMovie to disk.
This was actually easily achievable using QTKit. I just needed to set QTMovieExportType to 'mpg4' and QTMovieExport to YES when calling writeToFile:withAttributes:.

When reading audio file with ExtAudioFile read, is it possible to read audio floats not consecutively?

I'm trying to draw a waveform from an MP3 file.
I've succeeded in extracting floats using the ExtAudioFileReadTest app provided with the Core Audio SDK documentation (link: http://stephan.bernsee.com/ExtAudioFileReadTest.zip), but it reads the floats consecutively.
The problem is that my audio file is very long (about an hour), so reading all the floats consecutively takes a long time.
Is it possible to skip through the audio file, read a small portion, then skip ahead and read again?
Yes, use ExtAudioFileSeek() to seek to the desired sample frame. It has some bugs depending on the format you're using (or did on 10.6), but MP3 should be OK.