Pop at the beginning of playback - NAudio

When playing a memory stream containing wav encoded audio, the playback starts with a sharp pop/crackle:
ms = new MemoryStream(File.ReadAllBytes(audio_filename));
[...]
dispose_audio();
sound_output = new DirectSoundOut();
IWaveProvider provider = new RawSourceWaveStream(ms, new WaveFormat());
sound_output.Init(provider);
sound_output.Play();
That pop/crackle does not occur when playing the wav file directly:
dispose_audio();
NAudio.Wave.WaveStream pcm = new WaveChannel32(new NAudio.Wave.WaveFileReader(audio_filename));
audio_stream = new BlockAlignReductionStream(pcm);
sound_output = new DirectSoundOut();
sound_output.Init(audio_stream);
sound_output.Play();
The same file is playing, but when the WAV data is stored in a memory stream first, there is a somewhat loud pop at the beginning of playback.
I am very much a newbie with NAudio and audio in general, so it's probably something silly, but I can't seem to figure it out.

You are playing the WAV file header as though it were audio. Instead of RawSourceWaveStream, you still need to use WaveFileReader, just pass in your memory stream.
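For example, a minimal sketch along those lines, reusing the ms and sound_output variables from the question:
ms.Position = 0; // rewind in case the stream has already been read
var reader = new WaveFileReader(ms); // parses and skips the RIFF header, so only sample data reaches the device
sound_output = new DirectSoundOut();
sound_output.Init(reader);
sound_output.Play();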

Related

ToneJs extract frequency data of signal

Hey, I'm making a synthesizer with ToneJs and want to draw the audio wave of the signal, basically the output of Tone.Destination.
I know you can get the frequency data of an audio file, for example like so:
const audioSrc = audioContext.createMediaElementSource(audioElement)
const analyser = audioContext.createAnalyser()
audioSrc.connect(analyser)
audioSrc.connect(audioContext.destination)
const frequencyData = new Uint8Array(analyser.frequencyBinCount)
but I can't get it to work with ToneJs using a Synth or Oscillator. I always get an array of 0s.
I tried several things, using Tone.Waveform() or Tone.Analyser(), but with no success. I don't think I really get the concept here. How do you connect the audio source to the Analyser/Waveform? And what is actually the audio source object here?
const waveform = new Tone.Waveform()
Tone.Destination.connect(waveform)
console.log(waveform.getValue())
Any help is appreciated!

Feeding m4s files directly into SourceBuffer doesn't work

I downloaded an init MP4 file (init.mp4) and a sequence of m4s files (like 744397965.m4s, 744397966.m4s, 744397967.m4s, ...) from the live stream http://vm2.dashif.org/livesim/testpic_2s/Manifest.mpd using Dash.js.
Then I tried to feed these files directly into a SourceBuffer bound to a video element, but no pictures are played and no error is thrown. Why?
Did you seek the video element to the earliest timestamp in the resultant buffer (if it is not zero) and then call play()?

MFT NAudio Resampling on the fly

I want to resample an audio file using NAudio and MFT on-the-fly.
For example, I have the following audio file:
File name: MyAudioFile.mp3
Duration: 10 sec
While this file is being played, I only want to resample the audio at the current position into WAV in the desired format.
So, if the length of "MyAudioFile.mp3" is 10 sec and the "current play position" is 2.5 sec, I want to resample only that portion of data into WAV format at a sampling rate of 48 kHz.
As the audio progresses further, again only the "current play position" must be resampled.
I tried the following code:
WaveStream reader = new MediaFoundationReaderRT([path of "MyAudioFile.mp3"]);
MemoryStream outMemStream = new MemoryStream(); // decode to a memory stream
RawSourceWaveStream rsws;
using (reader)
using (var resampler = new MediaFoundationResampler(reader, new WaveFormat(48000, reader.WaveFormat.Channels))) // target 48 kHz, as described above
{
    WaveFileWriter.CreateWaveFile(outMemStream, resampler);
    outMemStream.Position = 0; // rewind before wrapping the decoded data
    rsws = new RawSourceWaveStream(outMemStream, resampler.WaveFormat);
}
WaveChannel32 waveformInputStream = new WaveChannel32(rsws);
The resampling happens properly; however, it resamples the whole audio file, which takes time.
What I am looking for is to resample just the "current play position" of the audio and discard data for any other position.
Thanks! I'd appreciate it if you could provide a sample.
To resample on the fly, just pass the reader directly into MediaFoundationResampler. You will now have an ISampleProvider so you won't be able to use WaveChannel32, but really that is an obsolete class now, and you should be able to do anything you need with other ISampleProvider classes from NAudio.
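For illustration, here is a minimal sketch of that approach, assuming the standard MediaFoundationReader and DirectSoundOut for playback (the file name is taken from the question). The resampler only pulls data from the reader as the output device requests it, so only the audio around the current play position is resampled:
var reader = new MediaFoundationReader("MyAudioFile.mp3");
// resample to 48 kHz on the fly; the exact output format (16-bit PCM, same channel count) is an assumption
var resampler = new MediaFoundationResampler(reader, new WaveFormat(48000, reader.WaveFormat.Channels));
var output = new DirectSoundOut();
output.Init(resampler);
output.Play();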

Save to buffer instead of file when recording aac with Audio Queue Services

My goal is to record audio in AAC format and send it over a network connection as a stream.
I'm using Audio Queue Services and have based my code on the SpeakHere example. I know that for writing to a file it uses the AudioFileWritePackets() function.
Here's the callback function:
void MyInputBufferHandler(void* inUserData,
                          AudioQueueRef inAQ,
                          AudioQueueBufferRef inBuffer,
                          const AudioTimeStamp* inStartTime,
                          UInt32 inNumPackets,
                          const AudioStreamPacketDescription* inPacketDesc) {
    Recorder *aqr = (Recorder *)inUserData;
    if (inNumPackets > 0) {
        // write packets to file
        AudioFileWritePackets(aqr->mRecordFile, FALSE,
                              inBuffer->mAudioDataByteSize,
                              inPacketDesc, aqr->mRecordPacket,
                              &inNumPackets, inBuffer->mAudioData);
        aqr->mRecordPacket += inNumPackets;
    }
    // if we're not stopping, re-enqueue the buffer so that it gets filled again
    if ([aqr IsRunning])
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
At first I thought AudioFileWritePackets worked by directly writing the content of inBuffer->mAudioData. However, when I manually write just the contents of mAudioData to a file, the decoding doesn't seem to work.
On examining and comparing the raw data that AudioFileWritePackets writes to a file with the raw mAudioData I wrote myself, it seems a header gets attached to it.
Since I can't find out how AudioFileWritePackets() works internally, my question is: how can I write the recorded audio to a buffer (or stream, if you prefer to call it that) instead of a file, so that I can later send it over a network connection?
Again, in short, here's what I need: record audio in AAC format and stream it over a network connection.
Thanks! I've been searching my head blue...
P.S.: If you point me to existing projects, please make sure they are what I need. I really feel like I've looked at every possible project out there >.<
This should help.
Check the "HandleInputBuffer" method in "SpeechToTextModule.m".

Mimic file IO in a J2ME MIDlet using RMS

I want to be able to record audio and save it to persistent storage in my J2ME application. As I understand it, J2ME does not expose the handset's file system; instead it wants the developer to use the RMS system. I understand the idea behind RMS but cannot seem to think of the best way to implement audio recording with it. I have a continuous stream of bits from the audio input which must be saved:
1) Should I make a buffer and then periodically create a new record with the bytes in the buffer?
2) Should I put each sample in a new record?
3) Should I save the entire recording in a byte array and only write it to the RMS when recording stops?
Is there a better way to achieve this other than RMS?
Consider the code below and edit it as necessary; it should solve your problem by writing to the phone's file system directly:
// enumerate the available file system roots first (helper defined elsewhere)
getRoots();

FileConnection fc = null;
DataOutputStream dos = null;

// make sure the target directory exists
fc = (FileConnection) Connector.open("file:///E:/");
if (!fc.exists())
{
    fc.mkdir();
}
fc.close();

// create the output file and write the recorded bytes to it
fc = (FileConnection) Connector.open("file:///E:/test.wav");
if (!fc.exists())
{
    fc.create();
}
dos = fc.openDataOutputStream();
dos.write(recordedSoundArray);
dos.close();
fc.close();