Trouble writing a remote I/O render callback function - objective-c

I'm writing an iOS app that takes input from the microphone, runs it through a high-pass filter audio unit, and plays it back through the speakers. I've been able to do this successfully using the AUGraph API. I put two nodes in it: a Remote I/O unit and an effect audio unit (kAudioUnitType_Effect, kAudioUnitSubType_HighPassFilter), then connected the output scope of the io unit's input element to the effect unit's input, and the effect node's output to the input scope of the io unit's output element. But now I need to do some analysis based on the processed audio samples, so I need direct access to the buffer. This means (and please correct me if I'm wrong) that I can no longer use AUGraphConnectNodeInput to make the connection between the effect node's output and the io unit's output element; instead I have to attach a render callback function to the io unit's output element, so that I can access the buffer whenever the speakers need new samples.
I've done so, but I get a -50 error when I call the AudioUnitRender function in the render callback. I believe I have an ASBD mismatch between the two audio units, since I'm not doing anything about that in the render callback (and the AUGraph took care of it before).
Here's the code:
AudioController.h:
@interface AudioController : NSObject
{
AUGraph mGraph;
AudioUnit mEffects;
AudioUnit ioUnit;
}
@property (readonly, nonatomic) AudioUnit mEffects;
@property (readonly, nonatomic) AudioUnit ioUnit;
-(void)initializeAUGraph;
-(void)startAUGraph;
-(void)stopAUGraph;
@end
AudioController.mm:
@implementation AudioController
…
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
AudioController *THIS = (__bridge AudioController*)inRefCon;
AudioBuffer buffer;
AudioStreamBasicDescription fxOutputASBD;
UInt32 fxOutputASBDSize = sizeof(fxOutputASBD);
AudioUnitGetProperty([THIS mEffects], kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &fxOutputASBD, &fxOutputASBDSize);
buffer.mDataByteSize = inNumberFrames * fxOutputASBD.mBytesPerFrame;
buffer.mNumberChannels = fxOutputASBD.mChannelsPerFrame;
buffer.mData = malloc(buffer.mDataByteSize);
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
// TODO: turn on ARM and fix the memory problem
OSStatus result = AudioUnitRender([THIS mEffects], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
[THIS hasError:result:__FILE__:__LINE__];
memcpy(ioData, buffer.mData, buffer.mDataByteSize);
return noErr;
}
- (void)initializeAUGraph
{
OSStatus result = noErr;
// create a new AUGraph
result = NewAUGraph(&mGraph);
AUNode outputNode;
AUNode effectsNode;
AudioComponentDescription effects_desc;
effects_desc.componentType = kAudioUnitType_Effect;
effects_desc.componentSubType = kAudioUnitSubType_LowPassFilter;
effects_desc.componentFlags = 0;
effects_desc.componentFlagsMask = 0;
effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponentDescription output_desc;
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Add nodes to the graph to hold the AudioUnits
result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode );
[self hasError:result:__FILE__:__LINE__];
// Connect the effect node's output to the output node's input
// This is no longer the case, as I need to access the buffer
// result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, outputNode, 0);
[self hasError:result:__FILE__:__LINE__];
// Connect the output node's input scope's output to the effectsNode input
result = AUGraphConnectNodeInput(mGraph, outputNode, 1, effectsNode, 0);
[self hasError:result:__FILE__:__LINE__];
// open the graph AudioUnits
result = AUGraphOpen(mGraph);
[self hasError:result:__FILE__:__LINE__];
// Get a link to the effect AU
result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);
[self hasError:result:__FILE__:__LINE__];
// Same for io unit
result = AUGraphNodeInfo(mGraph, outputNode, NULL, &ioUnit);
[self hasError:result:__FILE__:__LINE__];
// Enable input on io unit
UInt32 flag = 1;
result = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));
[self hasError:result:__FILE__:__LINE__];
// Setup render callback struct
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &renderInput;
renderCallbackStruct.inputProcRefCon = (__bridge void*)self;
// Set a callback for the specified node's specified input
result = AUGraphSetNodeInputCallback(mGraph, outputNode, 0, &renderCallbackStruct);
[self hasError:result:__FILE__:__LINE__];
// Get fx unit's input current stream format...
AudioStreamBasicDescription fxInputASBD;
UInt32 sizeOfASBD = sizeof(AudioStreamBasicDescription);
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxInputASBD, &sizeOfASBD);
[self hasError:result:__FILE__:__LINE__];
// ...and set it on the io unit's input scope's output
result = AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
1,
&fxInputASBD,
sizeof(fxInputASBD));
[self hasError:result:__FILE__:__LINE__];
// Set fx unit's output sample rate, just in case
Float64 sampleRate = 44100.0;
result = AudioUnitSetProperty(mEffects,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Output,
0,
&sampleRate,
sizeof(sampleRate));
[self hasError:result:__FILE__:__LINE__];
// Once everything is set up call initialize to validate connections
result = AUGraphInitialize(mGraph);
[self hasError:result:__FILE__:__LINE__];
}
@end
As I said before, I'm getting a -50 error on the AudioUnitRender call, and I'm finding little to no documentation at all about it.
Any help will be much appreciated.
Thanks to Tim Bolstad (http://timbolstad.com/2010/03/14/core-audio-getting-started/) for providing an excellent starting point tutorial.

There are simpler working examples of using RemoteIO for just playing buffers of audio. Perhaps start with one of those first rather than a graph.

Check to make sure that you are actually making all of the necessary connections. It appears that you are initializing almost everything necessary, but if you simply want to pass audio through, you don't need the render callback function.
Now, if you want to apply the filter, you may need one, but even so, make sure that you are actually connecting the components together properly.
Here's a snippet from an app I'm working on:
AUGraphConnectNodeInput(graph, outputNode, kInputBus, mixerNode, kInputBus);
AUGraphConnectNodeInput(graph, mixerNode, kOutputBus, outputNode, kOutputBus);
This connects the input from the RemoteIO unit to a Multichannel Mixer unit, then connects the output from the mixer to the RemoteIO's output to the speaker.
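For comparison, a plain pass-through version of the graph in the question (no render callback at all) would just keep both node connections in place. This is only a sketch reusing the question's own node variables and bus numbers; error handling is omitted:
// mic -> io unit input element (bus 1) -> effect unit -> io unit output element (bus 0)
result = AUGraphConnectNodeInput(mGraph, outputNode, 1, effectsNode, 0); // io input out -> effect in
result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, outputNode, 0); // effect out -> io output in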

It looks to me like you're passing in the wrong audio unit to AudioUnitRender. I think you need to pass in ioUnit instead of mEffects. In any case, double check all of the parameters you're passing in to AudioUnitRender. When I see -50 returned it's because I botched one of them.
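For reference, here is a minimal sketch of the render-callback variant the question is attempting, with the callback pulling the effect unit's output straight into the ioData buffers supplied by the output unit. This is an assumption about the intended design rather than a confirmed fix, and it presumes the effect unit's output stream format matches the format expected on the io unit's output element:
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    AudioController *THIS = (__bridge AudioController *)inRefCon;
    // Pull processed samples from the effect unit's output element (bus 0) directly
    // into the buffers the output unit provided; no per-render malloc or memcpy.
    OSStatus result = AudioUnitRender([THIS mEffects], ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);
    // The processed samples are now available for analysis in ioData->mBuffers[...].
    return result;
}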

Related

Writing silent CMSampleBuffer causes crash: CrashIfClientProvidedBogusAudioBufferList

I'm recording video/audio using AVAssetWriter and want to be able to write silent sample buffers. I'm not that experienced in Core Audio, so I'm having trouble coming up with a working solution.
The idea is to keep recording video when an audio device is disconnected until it's reconnected. The problem is that AVFoundation somehow pushes the audio to the front, so the resulting movie file is massively out of sync.
My current implementation tries to create an empty/silent CMSampleBuffer to place in between segments where there's no audio device connected.
if (audioOutput == captureOutput && audioWriterInput.readyForMoreMediaData) {
if (needToFillAudioGap) {
CMTime temp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMItemCount numSamples = temp.value - lastAudioDisconnect.value;
OSStatus status;
CMBlockBufferRef bbuf = NULL;
CMSampleBufferRef sbuf = NULL;
int nchans = 2;
size_t buflen = numSamples * nchans * sizeof(float);
NSMutableData* data = [NSMutableData dataWithLength:buflen];
void* samples = [data mutableBytes];
status = CMBlockBufferCreateWithMemoryBlock(
kCFAllocatorDefault,
samples,
buflen,
kCFAllocatorNull,
NULL,
0,
buflen,
0,
&bbuf);
if (status != noErr) {
NSLog(#"CMBlockBufferCreateWithMemoryBlock error: %d", (int)status);
return;
}
CMBlockBufferRef blockBufferContiguous;
status = CMBlockBufferCreateContiguous(kCFAllocatorDefault,
bbuf,
kCFAllocatorNull,
NULL,
0,
buflen,
0,
&blockBufferContiguous);
CFRelease(bbuf);
if(status != noErr)
{
printf("CMBlockBufferCreateContiguous failed with error %d\n", (int)status);
return;
}
status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(kCFAllocatorDefault, blockBufferContiguous, CMSampleBufferGetFormatDescription(sampleBuffer), numSamples, lastAudioDisconnect, NULL, &sbuf);
CFRelease(blockBufferContiguous);
if (status != noErr) {
NSLog(#"CMSampleBufferCreate error: %d", (int)status);
return;
}
BOOL r = [audioWriterInput appendSampleBuffer:sbuf];
if (!r) {
NSLog(#"appendSampleBuffer error: %d", (int)status);
return;
}
CFRelease(sbuf);
NSLog(#"Filling Audio Gap");
needToFillAudioGap = false;
} else {
if ([audioWriterInput appendSampleBuffer:sampleBuffer])
lastAudioDisconnect = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
}
}
The sampleBuffer at the top is the first sample buffer after the audio device is reconnected. That tells me how long the gap I have to fill should be. lastAudioDisconnect always holds the presentation timestamp of the last audio sample buffer that was written.
With Guard Malloc enabled the program crashes with: CrashIfClientProvidedBogusAudioBufferList
EDIT:
With Guard Malloc disabled I am able to reconnect the audio device multiple times while recording, and when I stop recording, the gap is there without problems.
The problem then is that I only have a couple of minutes to stop the recording after reconnecting a device, because the AVAssetWriter randomly fails with error code 11800 (AVErrorUnknown).
The error is because the CMSampleBuffer you create is too long.
The created CMSampleBuffer should be the same length (contain the same number of samples) as the preceding sample buffers that are filled with live audio. You can create multiple silent buffers, if necessary, and deliver them at the same rate as the live audio buffers.
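As an illustration of that approach, here is a hedged sketch of a helper that builds one fixed-length silent buffer by reusing the format description of a live buffer; the function name is hypothetical and error handling is minimal. You would call it repeatedly, advancing the presentation timestamp by numSamples each time, instead of creating one huge buffer:
static CMSampleBufferRef CreateSilentAudioBuffer(CMSampleBufferRef liveBuffer, CMItemCount numSamples, CMTime pts)
{
    CMAudioFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(liveBuffer);
    const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
    size_t buflen = (size_t)numSamples * asbd->mBytesPerFrame;
    void *silence = calloc(1, buflen); // zeroed linear PCM is silence
    CMBlockBufferRef bbuf = NULL;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, silence, buflen,
        kCFAllocatorMalloc, // the block buffer takes ownership and frees the memory
        NULL, 0, buflen, 0, &bbuf);
    if (status != noErr) { free(silence); return NULL; }
    CMSampleBufferRef sbuf = NULL;
    status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(kCFAllocatorDefault, bbuf,
        fmt, numSamples, pts, NULL, &sbuf);
    CFRelease(bbuf);
    return (status == noErr) ? sbuf : NULL;
}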

How to avoid robotic sound when playing a remote audio buffer on iOS?

I'm trying to develop a VOIP application. The audio buffer fetched from RecordingCallBack is wrapped in an NSData and then sent to the remote side by GCDAsyncSocket.
The remote side gets the NSData, unwraps it back into an audio buffer, and then PlayingCallBack fetches that buffer.
My plan is working so far; it runs fine locally (the socket sends the data locally, and the buffer is played locally).
But when it runs on two devices (one real iPhone 4S, one simulator), the voice becomes strange and sounds robotic.
Is there any way to avoid the robotic sound effect?
Here are my AudioUnit settings:
#pragma mark - Init Methods
- (void)initAudioUint
{
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
checkStatus(status);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.0f; // FS
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
audioFormat.mChannelsPerFrame = 1; // mono
audioFormat.mFramesPerPacket = 1;
audioFormat.mBitsPerChannel = sizeof(short) * 8; // 16-bit
audioFormat.mBytesPerFrame = audioFormat.mBitsPerChannel / 8 * audioFormat.mChannelsPerFrame;
audioFormat.mBytesPerPacket = audioFormat.mBytesPerFrame * audioFormat.mFramesPerPacket;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
/*
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
// Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per frame, thus 2 bytes per frame).
// Practice learns the buffers used contain 512 frames, if this changes it will be fixed in processAudio.
tempBuffer.mNumberChannels = 1;
tempBuffer.mDataByteSize = 512 * 2;
tempBuffer.mData = malloc( 512 * 2 );
checkStatus(status);
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
// TODO: Allocate our own buffers if we want
*/
// Initialise
status = AudioUnitInitialize(audioUnit);
checkStatus(status);
conversionBuffer = (SInt16 *) malloc(1024 * sizeof(SInt16));
}
BTW, is there any way to set audioFormat.mFramesPerPacket > 1?
In my case, it prints an error if that parameter is > 1.
I was thinking about sending a buffer that contains multiple frames (to gain more time to play on the remote side); wouldn't that be better for VOIP than sending one frame per packet?
Since the audio sample rate clocks of the two devices won't be perfectly synchronized, you will have to handle buffer underflow and overflow due to slight sample rate mismatches, as well as network latency jitter.
Also note that the buffer size sent to the RemoteIO callback may not stay constant, so the two callbacks will have to be able to handle buffer size mismatches.
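One common way to handle those mismatches is to decouple the network from the render callback with a ring (jitter) buffer: the socket thread writes incoming samples, the playback callback reads whatever it needs, and silence is output when the ring runs dry. The sketch below is illustrative only (names and sizes are assumptions, single producer and single consumer); a production implementation would use a proper lock-free FIFO such as TPCircularBuffer:
#define kRingCapacity (44100 * 2)   // about 2 seconds of mono 16-bit samples
static SInt16 ring[kRingCapacity];
static volatile int32_t ringHead = 0, ringTail = 0; // head = write index, tail = read index

// Called from the socket thread with freshly received samples.
static void RingWrite(const SInt16 *src, int n) {
    for (int i = 0; i < n; i++) {
        int32_t next = (ringHead + 1) % kRingCapacity;
        if (next == ringTail) break;             // overflow: drop the remainder
        ring[ringHead] = src[i];
        ringHead = next;
    }
}

// Called from the playback render callback; fills dst with n samples.
static void RingRead(SInt16 *dst, int n) {
    for (int i = 0; i < n; i++) {
        if (ringTail == ringHead) { dst[i] = 0; continue; } // underflow: output silence
        dst[i] = ring[ringTail];
        ringTail = (ringTail + 1) % kRingCapacity;
    }
}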
I just resolved this problem!
You need to set the audio session property and make sure the two devices use the same buffer duration:
// set preferred buffer size
Float32 audioBufferSize = (set up the duration);
UInt32 size = sizeof(audioBufferSize);
result = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
size, &audioBufferSize);

Clicks and distortions in LAME-encoded MP3 file

I'm trying to encode raw PCM data from the microphone to MP3 using the AudioToolbox framework and LAME. And although everything seems to run fine, there is a problem with "clicks" and "distortions" present in the encoded stream.
I'm not sure that I set up the AudioQueue correctly, or that I process the encoded buffer in the right way...
My code to setup audio recording:
AudioStreamBasicDescription streamFormat;
memset(&streamFormat, 0, sizeof(AudioStreamBasicDescription));
streamFormat.mSampleRate = 44100;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger|kLinearPCMFormatFlagIsPacked;
streamFormat.mBitsPerChannel = 16;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBytesPerPacket = 2;
streamFormat.mBytesPerFrame = 2;
streamFormat.mFramesPerPacket = 1;
streamFormat.mReserved = 0;
AudioQueueNewInput(&streamFormat, InputBufferCallback, (__bridge void*)(self), nil, nil, 0, &mQueue);
UInt32 bufferByteSize = 44100;
memset(&mEncodedBuffer, 0, sizeof(mEncodedBuffer)); //mEncoded buffer is
//unsigned char [72000]
AudioQueueBufferRef buffer;
for (int i=0; i<3; i++) {
AudioQueueAllocateBuffer(mQueue, bufferByteSize, &buffer);
AudioQueueEnqueueBuffer(mQueue, buffer, 0, NULL);
}
AudioQueueStart(mQueue, nil);
Then the AudioQueue callback function calls lame_encode_buffer and writes the encoded buffer to a file:
void InputBufferCallback (void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription* inPacketDesc) {
memset(&mEncodedBuffer, 0, sizeof(mEncodedBuffer));
int encodedBytes = lame_encode_buffer(glf, (short*)inBuffer->mAudioData, NULL, inBuffer->mAudioDataByteSize, mEncodedBuffer, 72000);
//What I don't understand is that if I write the full 'encodedBytes' data, then there are A LOT of distortions and the original sound is seriously broken
NSData* data = [NSData dataWithBytes:mEncodedBuffer length:encodedBytes/2];
[mOutputFile writeData:data];
}
And when I afterwards try to play the file containing the LAME-encoded data with AVAudioPlayer, I clearly hear the original sound, but with clicks and distortions throughout.
Can anybody advise what's wrong here?
Your code does not appear to be paying attention to inNumPackets, which is the amount of actual audio data given to the callback.
Also, doing a long operation, such as running an encoder, inside an audio callback might not be fast enough and thus may violate response requirements. Any long function calls should be done outside the callback.
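A hedged sketch of what that might look like, reusing the question's own variables (glf, mEncodedBuffer, mOutputFile): the packet count is passed to lame_encode_buffer as the sample count (the format is 16-bit mono with one frame per packet), the full encoded output is written, and the buffer is re-enqueued. Ideally the PCM would be handed off to another queue and encoded there rather than inside the callback:
void InputBufferCallback (void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription* inPacketDesc) {
    // inNumPackets is the number of 16-bit mono samples in this buffer.
    int encodedBytes = lame_encode_buffer(glf, (short *)inBuffer->mAudioData, NULL,
                                          (int)inNumPackets, mEncodedBuffer, sizeof(mEncodedBuffer));
    if (encodedBytes > 0) {
        NSData *data = [NSData dataWithBytes:mEncodedBuffer length:encodedBytes]; // write everything LAME produced
        [mOutputFile writeData:data];
    }
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL); // hand the buffer back to the queue
}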

Error -50 from AudioUnitRender

I'm getting error -50 (invalid parameters) from AudioUnitRender in the following context. I'm using this Pitch Detector sample app as my starting point, and it works fine. The only major difference in my project is that I'm also using the Remote I/O unit for audio output. The audio output works fine. Here is my input callback and my initialization code (with error checking removed for brevity). I know it's a lot but error -50 really gives me very little information as to where the problem might be.
Input callback:
OSStatus inputCallback( void* inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
WBAudio* audioObject= (WBAudio*)inRefCon;
AudioUnit rioUnit = audioObject->m_audioUnit;
OSStatus renderErr;
UInt32 bus1 = 1;
renderErr = AudioUnitRender(rioUnit, ioActionFlags,
inTimeStamp, bus1, inNumberFrames, audioObject->m_inBufferList );
if (renderErr < 0) {
return renderErr; // breaks here
}
return noErr;
} // end inputCallback()
Initialization:
- (id) init {
self= [super init];
if( !self ) return nil;
OSStatus result;
//! Initialize a buffer list for rendering input
size_t bytesPerSample;
bytesPerSample = sizeof(SInt16);
m_inBufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
m_inBufferList->mNumberBuffers = 1;
m_inBufferList->mBuffers[0].mNumberChannels = 1;
m_inBufferList->mBuffers[0].mDataByteSize = 512*bytesPerSample;
m_inBufferList->mBuffers[0].mData = calloc(512, bytesPerSample);
//! Initialize an audio session to get buffer size
result = AudioSessionInitialize(NULL, NULL, NULL, NULL);
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
result = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory);
// Set preferred buffer size
Float32 preferredBufferSize = static_cast<float>(m_pBoard->m_uBufferSize) / m_pBoard->m_fSampleRate;
result = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(preferredBufferSize), &preferredBufferSize);
// Get actual buffer size
Float32 audioBufferSize;
UInt32 size = sizeof (audioBufferSize);
result = AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration, &size, &audioBufferSize);
result = AudioSessionSetActive(true);
//! Create our Remote I/O component description
AudioComponentDescription desc;
desc.componentType= kAudioUnitType_Output;
desc.componentSubType= kAudioUnitSubType_RemoteIO;
desc.componentFlags= 0;
desc.componentFlagsMask= 0;
desc.componentManufacturer= kAudioUnitManufacturer_Apple;
//! Find the corresponding component
AudioComponent outputComponent = AudioComponentFindNext(NULL, &desc);
//! Create the component instance
result = AudioComponentInstanceNew(outputComponent, &m_audioUnit);
//! Enable audio output
UInt32 flag = 1;
result = AudioUnitSetProperty( m_audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, kOutputBus, &flag, sizeof(flag));
//! Enable audio input
result= AudioUnitSetProperty( m_audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kInputBus, &flag, sizeof(flag));
//! Create our audio stream description
m_audioFormat.mSampleRate= m_pBoard->m_fSampleRate;
m_audioFormat.mFormatID= kAudioFormatLinearPCM;
m_audioFormat.mFormatFlags= kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
m_audioFormat.mFramesPerPacket= 1;
m_audioFormat.mChannelsPerFrame= 1;
m_audioFormat.mBitsPerChannel= 16;
m_audioFormat.mBytesPerPacket= 2;
m_audioFormat.mBytesPerFrame= 2;
//! Set the stream format
result = AudioUnitSetProperty( m_audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, kOutputBus, &m_audioFormat, sizeof(m_audioFormat));
result = AudioUnitSetProperty(m_audioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus, &m_audioFormat, sizeof(m_audioFormat));
//! Set the render callback
AURenderCallbackStruct renderCallbackStruct= {0};
renderCallbackStruct.inputProc= renderCallback;
renderCallbackStruct.inputProcRefCon= m_pBoard;
result = AudioUnitSetProperty(m_audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &renderCallbackStruct, sizeof(renderCallbackStruct));
//! Set the input callback
AURenderCallbackStruct inputCallbackStruct = {0};
inputCallbackStruct.inputProc= inputCallback;
inputCallbackStruct.inputProcRefCon= self;
result= AudioUnitSetProperty( m_audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Input, kOutputBus, &inputCallbackStruct, sizeof( inputCallbackStruct ) );
//! Initialize the unit
result = AudioUnitInitialize( m_audioUnit );
return self;
}
You are allocating the m_inBufferList as:
m_inBufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
This should be:
m_inBufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * numberOfBuffers); //numberOfBuffers in your case is 1
Maybe this can solve your problem.
Error -50 in the developer documentation means a parameter error; make sure you are passing the right parameters to AudioUnitRender. Check the stream format and your unit.
I agree with Kurt Pattyn that the allocation of m_inBufferList is incorrect, and that it might be the cause of the -50 bad param error.
Except I think that for a single buffer it should be
(AudioBufferList *)malloc(sizeof(AudioBufferList))
My evidence is the following sizes and the code from Adamson & Avila below.
(lldb) po sizeof(AudioBufferList)
24
(lldb) po sizeof(AudioBuffer)
16
(lldb) po offsetof(AudioBufferList, mBuffers[0])
8
According to Chris Adamson & Kevin Avila in Learning Core Audio:
// Allocate an AudioBufferList plus enough space for
// array of AudioBuffers
UInt32 propsize = offsetof(AudioBufferList, mBuffers[0]) +
(sizeof(AudioBuffer) * player->streamFormat.mChannelsPerFrame);
// malloc buffer lists
player->inputBuffer = (AudioBufferList *)malloc(propsize);
player->inputBuffer->mNumberBuffers = player->streamFormat.mChannelsPerFrame;
// Pre-malloc buffers for AudioBufferLists
for(UInt32 i =0; i< player->inputBuffer->mNumberBuffers ; i++) {
player->inputBuffer->mBuffers[i].mNumberChannels = 1;
player->inputBuffer->mBuffers[i].mDataByteSize = bufferSizeBytes;
player->inputBuffer->mBuffers[i].mData = malloc(bufferSizeBytes);
}
Last but not least, I just happened upon this code with the following comment : )
//credit to TheAmazingAudioEngine for an illustration of proper audiobufferlist allocation.
// Google leads to some really really bad allocation code...
[other code]
sampleABL = malloc(sizeof(AudioBufferList) + (bufferCnt-1)*sizeof(AudioBuffer));
https://github.com/zakk4223/CocoaSplit/blob/master/CocoaSplit/CAMultiAudio/CAMultiAudioPCMPlayer.m#L203
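Wrapping the offsetof-based sizing shown above into a small helper might look like this; it is only a sketch and the function name is illustrative:
// Allocate an AudioBufferList with numBuffers AudioBuffers, each with its own data block.
static AudioBufferList *AllocateAudioBufferList(UInt32 numBuffers, UInt32 channelsPerBuffer, UInt32 bytesPerBuffer) {
    size_t size = offsetof(AudioBufferList, mBuffers[0]) + numBuffers * sizeof(AudioBuffer);
    AudioBufferList *abl = (AudioBufferList *)calloc(1, size);
    abl->mNumberBuffers = numBuffers;
    for (UInt32 i = 0; i < numBuffers; i++) {
        abl->mBuffers[i].mNumberChannels = channelsPerBuffer;
        abl->mBuffers[i].mDataByteSize = bytesPerBuffer;
        abl->mBuffers[i].mData = calloc(1, bytesPerBuffer);
    }
    return abl;
}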

How do you add to an AudioBufferList with an AVAssetReader?

I have been working on reading in an audio asset using AVAssetReader so that I can later play back the audio with an AUGraph with an AudioUnit callback. I have the AUGraph and AudioUnit callback working, but it reads files from disk, and if the file is too big it takes up too much memory and crashes the app. So I am instead reading the asset directly, and only a limited amount at a time. I will then manage it as a double buffer and give the AUGraph what it needs when it needs it.
(Note: I would love to know if I can use Audio Queue Services and still use an AUGraph with an AudioUnit callback so memory is managed for me by the iOS frameworks.)
My problem is that I do not have a good understanding of arrays, structs and pointers in C. The part where I need help is taking the individual AudioBufferList, which holds a single AudioBuffer, and adding that data to another AudioBufferList which holds all of the data to be used later. I believe I need to use memcpy, but it is not clear how to use it or even how to initialize an AudioBufferList for my purposes. I am using MixerHost for reference, which is the sample project from Apple that reads in the file from disk.
I have uploaded my work in progress if you would like to load it up in Xcode. I've figured out most of what I need to get this done and once I have the data being collected all in one place I should be good to go.
Sample Project: MyAssetReader.zip
In the header you can see I declare the bufferList as a pointer to the struct.
@interface MyAssetReader : NSObject {
BOOL reading;
signed long sampleTotal;
Float64 totalDuration;
AudioBufferList *bufferList; // How should this be handled?
}
Then I allocate bufferList this way, largely borrowing from MixerHost...
UInt32 channelCount = [asset.tracks count];
if (channelCount > 1) {
NSLog(#"We have more than 1 channel!");
}
bufferList = (AudioBufferList *) malloc (
sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
);
if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}
// initialize the mNumberBuffers member
bufferList->mNumberBuffers = channelCount;
// initialize the mBuffers member to 0
AudioBuffer emptyBuffer = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
// set up the AudioBuffer structs in the buffer list
bufferList->mBuffers[arrayIndex] = emptyBuffer;
bufferList->mBuffers[arrayIndex].mNumberChannels = 1;
// How should mData be initialized???
bufferList->mBuffers[arrayIndex].mData = malloc(sizeof(AudioUnitSampleType));
}
Finally I loop through the reads.
int frameCount = 0;
CMSampleBufferRef nextBuffer;
while (assetReader.status == AVAssetReaderStatusReading) {
nextBuffer = [assetReaderOutput copyNextSampleBuffer];
AudioBufferList localBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(nextBuffer, NULL, &localBufferList, sizeof(localBufferList), NULL, NULL,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
// increase the total number of bytes
bufferList->mBuffers[0].mDataByteSize += localBufferList.mBuffers[0].mDataByteSize;
// carefully copy the data into the buffer list
memcpy(bufferList->mBuffers[0].mData + frameCount, localBufferList.mBuffers[0].mData, sizeof(AudioUnitSampleType));
// get information about duration and position
//CMSampleBufferGet
CMItemCount sampleCount = CMSampleBufferGetNumSamples(nextBuffer);
Float64 duration = CMTimeGetSeconds(CMSampleBufferGetDuration(nextBuffer));
Float64 presTime = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(nextBuffer));
if (isnan(duration)) duration = 0.0;
if (isnan(presTime)) presTime = 0.0;
//NSLog(#"sampleCount: %ld", sampleCount);
//NSLog(#"duration: %f", duration);
//NSLog(#"presTime: %f", presTime);
self.sampleTotal += sampleCount;
self.totalDuration += duration;
frameCount++;
free(nextBuffer);
}
I am unsure about the way I handle mDataByteSize and mData, especially with memcpy. Since mData is a void pointer, this is an extra tricky area.
memcpy(bufferList->mBuffers[0].mData + frameCount, localBufferList.mBuffers[0].mData, sizeof(AudioUnitSampleType));
In this line I think it should copy the data from localBufferList to a position in bufferList offset by the number of frames, so the pointer points where the data should be written. I have a couple of ideas on what I need to change to get this to work.
Since pointer arithmetic on a void pointer advances by one byte rather than by the size of an AudioUnitSampleType, I may also need to multiply the offset by sizeof(AudioUnitSampleType) to get the memcpy into the right position.
I may not be using malloc properly to prepare mData, but since I am not sure how many frames there will be, I am not sure what to do to initialize it.
Currently when I run this app it ends this function with an invalid pointer for bufferList.
I appreciate your help with making me better understand how to manage an AudioBufferList.
I've come up with my own answer. I decided to use an NSMutableData object which allows me to appendBytes from the CMSampleBufferRef after calling CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer to get an AudioBufferList.
[data appendBytes:localBufferList.mBuffers[0].mData length:localBufferList.mBuffers[0].mDataByteSize];
Once the read loop is done I have all of the data in my NSMutableData object. I then create and populate the AudioBufferList this way.
audioBufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
if (NULL == audioBufferList) {
NSLog (#"*** malloc failure for allocating audioBufferList memory");
[data release];
return;
}
audioBufferList->mNumberBuffers = 1;
audioBufferList->mBuffers[0].mNumberChannels = channelCount;
audioBufferList->mBuffers[0].mDataByteSize = [data length];
audioBufferList->mBuffers[0].mData = (AudioUnitSampleType *)malloc([data length]);
if (NULL == audioBufferList->mBuffers[0].mData) {
NSLog (#"*** malloc failure for allocating mData memory");
[data release];
return;
}
memcpy(audioBufferList->mBuffers[0].mData, [data mutableBytes], [data length]);
[data release];
I'd appreciate a little code review on how I use malloc to create the struct and populate it. I am getting an EXC_BAD_ACCESS error sporadically, but I cannot pinpoint where it is yet. Since I am using malloc on the struct, I should not have to retain it anywhere. I do call free to release the child elements within the struct, and finally the struct itself, everywhere that I use malloc.
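For what it's worth, a hedged sketch of the matching cleanup for the malloc'd list above (the function name is illustrative): free each buffer's mData first, then the list itself, once nothing else references the samples:
static void FreeAudioBufferList(AudioBufferList *abl) {
    if (abl == NULL) return;
    for (UInt32 i = 0; i < abl->mNumberBuffers; i++) {
        free(abl->mBuffers[i].mData);
        abl->mBuffers[i].mData = NULL;
    }
    free(abl);
}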