I'm getting error -50 (invalid parameters) from AudioUnitRender in the following context. I'm using this Pitch Detector sample app as my starting point, and it works fine. The only major difference in my project is that I'm also using the Remote I/O unit for audio output; the audio output works fine. Here are my input callback and my initialization code (with error checking removed for brevity). I know it's a lot, but error -50 gives me very little information about where the problem might be.
Input callback:
OSStatus inputCallback(void *inRefCon,
                       AudioUnitRenderActionFlags *ioActionFlags,
                       const AudioTimeStamp *inTimeStamp,
                       UInt32 inBusNumber,
                       UInt32 inNumberFrames,
                       AudioBufferList *ioData) {
    WBAudio *audioObject = (WBAudio *)inRefCon;
    AudioUnit rioUnit = audioObject->m_audioUnit;
    OSStatus renderErr;
    UInt32 bus1 = 1;

    renderErr = AudioUnitRender(rioUnit, ioActionFlags,
                                inTimeStamp, bus1, inNumberFrames,
                                audioObject->m_inBufferList);
    if (renderErr < 0) {
        return renderErr; // breaks here
    }
    return noErr;
} // end inputCallback()
Initialization:
- (id)init {
    self = [super init];
    if (!self) return nil;

    OSStatus result;

    //! Initialize a buffer list for rendering input
    size_t bytesPerSample = sizeof(SInt16);
    m_inBufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
    m_inBufferList->mNumberBuffers = 1;
    m_inBufferList->mBuffers[0].mNumberChannels = 1;
    m_inBufferList->mBuffers[0].mDataByteSize = 512 * bytesPerSample;
    m_inBufferList->mBuffers[0].mData = calloc(512, bytesPerSample);

    //! Initialize an audio session to get buffer size
    result = AudioSessionInitialize(NULL, NULL, NULL, NULL);

    UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
    result = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                     sizeof(audioCategory), &audioCategory);

    // Set preferred buffer size
    Float32 preferredBufferSize = static_cast<float>(m_pBoard->m_uBufferSize) / m_pBoard->m_fSampleRate;
    result = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                                     sizeof(preferredBufferSize), &preferredBufferSize);

    // Get actual buffer size
    Float32 audioBufferSize;
    UInt32 size = sizeof(audioBufferSize);
    result = AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                                     &size, &audioBufferSize);

    result = AudioSessionSetActive(true);

    //! Create our Remote I/O component description
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    //! Find the corresponding component
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &desc);

    //! Create the component instance
    result = AudioComponentInstanceNew(outputComponent, &m_audioUnit);

    //! Enable audio output
    UInt32 flag = 1;
    result = AudioUnitSetProperty(m_audioUnit, kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output, kOutputBus, &flag, sizeof(flag));

    //! Enable audio input
    result = AudioUnitSetProperty(m_audioUnit, kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input, kInputBus, &flag, sizeof(flag));

    //! Create our audio stream description
    m_audioFormat.mSampleRate = m_pBoard->m_fSampleRate;
    m_audioFormat.mFormatID = kAudioFormatLinearPCM;
    m_audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    m_audioFormat.mFramesPerPacket = 1;
    m_audioFormat.mChannelsPerFrame = 1;
    m_audioFormat.mBitsPerChannel = 16;
    m_audioFormat.mBytesPerPacket = 2;
    m_audioFormat.mBytesPerFrame = 2;

    //! Set the stream format
    result = AudioUnitSetProperty(m_audioUnit, kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input, kOutputBus,
                                  &m_audioFormat, sizeof(m_audioFormat));
    result = AudioUnitSetProperty(m_audioUnit, kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output, kInputBus,
                                  &m_audioFormat, sizeof(m_audioFormat));

    //! Set the render callback
    AURenderCallbackStruct renderCallbackStruct = {0};
    renderCallbackStruct.inputProc = renderCallback;
    renderCallbackStruct.inputProcRefCon = m_pBoard;
    result = AudioUnitSetProperty(m_audioUnit, kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global, kOutputBus,
                                  &renderCallbackStruct, sizeof(renderCallbackStruct));

    //! Set the input callback
    AURenderCallbackStruct inputCallbackStruct = {0};
    inputCallbackStruct.inputProc = inputCallback;
    inputCallbackStruct.inputProcRefCon = self;
    result = AudioUnitSetProperty(m_audioUnit, kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Input, kOutputBus,
                                  &inputCallbackStruct, sizeof(inputCallbackStruct));

    //! Initialize the unit
    result = AudioUnitInitialize(m_audioUnit);

    return self;
}
You are allocating the m_inBufferList as:
m_inBufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
This should be:
m_inBufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * numberOfBuffers); //numberOfBuffers in your case is 1
Maybe this can solve your problem.
Error -50 in the developer documentation means a parameter error: make sure you are passing the right parameters to AudioUnitRender. Check the stream format and your unit.
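If it helps, here is a hedged diagnostic sketch (reusing the member names from the question's code) that reads back the stream format actually set on the input element, so you can compare it against the AudioBufferList you hand to AudioUnitRender:
// Diagnostic sketch only: read back the ASBD on the output scope of the input
// element (bus 1, the microphone side) and check it against the 16-bit mono
// buffer allocated for m_inBufferList.
AudioStreamBasicDescription asbd = {0};
UInt32 asbdSize = sizeof(asbd);
OSStatus err = AudioUnitGetProperty(m_audioUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Output, // output scope of the input element
                                    1,                      // bus 1 = microphone input
                                    &asbd,
                                    &asbdSize);
if (err == noErr) {
    NSLog(@"input element format: %f Hz, %u ch, %u bits, %u bytes/frame",
          asbd.mSampleRate,
          (unsigned)asbd.mChannelsPerFrame,
          (unsigned)asbd.mBitsPerChannel,
          (unsigned)asbd.mBytesPerFrame);
}
If the bytes-per-frame reported here do not match what the buffer list was sized for, that mismatch alone can produce -50.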
I agree with Kurt Pattyn that the allocation of m_inBufferList is incorrect and might be the cause of the -50 bad-parameter error.
Except I think that for a single buffer it should be
(AudioBufferList *)malloc(sizeof(AudioBufferList))
My evidence is the following sizes and the code from Adamson & Avila below.
(lldb) po sizeof(AudioBufferList)
24
(lldb) po sizeof(AudioBuffer)
16
(lldb) po offsetof(AudioBufferList, mBuffers[0])
8
According to Chris Adamson & Kevin Avila in Learning Core Audio:
// Allocate an AudioBufferList plus enough space for
// array of AudioBuffers
UInt32 propsize = offsetof(AudioBufferList, mBuffers[0]) +
                  (sizeof(AudioBuffer) * player->streamFormat.mChannelsPerFrame);

// malloc buffer lists
player->inputBuffer = (AudioBufferList *)malloc(propsize);
player->inputBuffer->mNumberBuffers = player->streamFormat.mChannelsPerFrame;

// Pre-malloc buffers for AudioBufferLists
for (UInt32 i = 0; i < player->inputBuffer->mNumberBuffers; i++) {
    player->inputBuffer->mBuffers[i].mNumberChannels = 1;
    player->inputBuffer->mBuffers[i].mDataByteSize = bufferSizeBytes;
    player->inputBuffer->mBuffers[i].mData = malloc(bufferSizeBytes);
}
Last but not least, I just happened upon this code with the following comment : )
//credit to TheAmazingAudioEngine for an illustration of proper audiobufferlist allocation.
// Google leads to some really really bad allocation code...
[other code]
sampleABL = malloc(sizeof(AudioBufferList) + (bufferCnt-1)*sizeof(AudioBuffer));
https://github.com/zakk4223/CocoaSplit/blob/master/CocoaSplit/CAMultiAudio/CAMultiAudioPCMPlayer.m#L203
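Applied to the original question, a minimal sketch of the allocation (using the same offsetof sizing; the 512-frame, SInt16, mono values are taken from the question's code) might look like this:
// Sketch only: allocate an AudioBufferList holding one mono 16-bit buffer of 512 frames,
// sized with offsetof so the header plus the mBuffers array fit exactly.
UInt32 numberOfBuffers = 1;
UInt32 bytesPerSample  = sizeof(SInt16);
UInt32 frames          = 512;

size_t listSize = offsetof(AudioBufferList, mBuffers[0])
                + sizeof(AudioBuffer) * numberOfBuffers;

AudioBufferList *inBufferList = (AudioBufferList *)malloc(listSize);
inBufferList->mNumberBuffers = numberOfBuffers;
inBufferList->mBuffers[0].mNumberChannels = 1;
inBufferList->mBuffers[0].mDataByteSize   = frames * bytesPerSample;
inBufferList->mBuffers[0].mData           = calloc(frames, bytesPerSample);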
I am trying to read the ARGB pixel data from a PNG image asset in my iOS app.
I am using CGDataProvider to get a CFDataRef as described here:
http://developer.apple.com/library/ios/#qa/qa1509/_index.html
It works perfectly the first time I use it on a certain image. But the second time I use it on THE SAME image, it returns a CFDataRef of length 0.
Maybe I am not releasing something? Why would it do that?
- (GLuint)initWithCGImage:(CGImageRef)newImageSource
{
    CGDataProviderRef dataProvider;
    CFDataRef dataRef;
    GLuint t;

    @try {
        // NSLog(@"initWithCGImage");
        // report_memory2();
        CGFloat widthOfImage = CGImageGetWidth(newImageSource);
        CGFloat heightOfImage = CGImageGetHeight(newImageSource);
        // pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
        // CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
        // CGSize scaledImageSizeToFitOnGPU = [GPUImageOpenGLESContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
        GLubyte *imageData = NULL;
        //CFDataRef dataFromImageDataProvider;
        // stbi stbiClass;
        int x;
        int y;
        int comp;

        dataProvider = CGImageGetDataProvider(newImageSource);
        dataRef = CGDataProviderCopyData(dataProvider);
        const unsigned char *bytesRef = CFDataGetBytePtr(dataRef);
        // NSUInteger length = CFDataGetLength(dataRef);
        //CGDataProviderRelease(dataProvider);
        //dataProvider = nil;

        /*
        UIImage *tmpImage = [UIImage imageWithCGImage:newImageSource];
        NSData *data2 = UIImagePNGRepresentation(tmpImage);
        // if (data2==NULL)
        //     data2 = UIImageJPEGRepresentation(tmpImage, 1);
        unsigned char *bytes = (unsigned char *)[data2 bytes];
        NSUInteger length = [data2 length];
        */

        // stbiClass.img_buffer = bytes;
        // stbiClass.buflen = length;
        // stbiClass.img_buffer_original = bytes;
        // stbiClass.img_buffer_end = bytes + length;
        // unsigned char *data = stbi_load_main(&stbiClass, &x, &y, &comp, 0);
        //unsigned char *data = bytesRef;

        x = widthOfImage;
        y = heightOfImage;
        comp = CGImageGetBitsPerPixel(newImageSource) / 8;

        int textureWidth = [self CalcPow2:x];
        int textureHeight = [self CalcPow2:y];

        unsigned char *scaledData = [self scaleImageWithParams:@{@"x": @(x), @"y": @(y), @"comp": @(comp), @"targetX": @(textureWidth), @"targetY": @(textureHeight)}
                                                       andData:(unsigned char *)bytesRef];
        //CFRelease(dataRef);
        // dataRef = nil;
        // free(data);

        glGenTextures(1, &t);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, t);

        GLint format = (comp > 3) ? GL_RGBA : GL_RGB;
        imageData = scaledData;

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, format, textureWidth, textureHeight, 0, format, GL_UNSIGNED_BYTE, imageData);
        //GLenum err = glGetError();
    }
    @finally
    {
        CGDataProviderRelease(dataProvider);
        // CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(dataRef);
    }
    return t;
}
The second time this is called on a CGImageRef that originates from [UIImage imageNamed:Path], with the same Path as the first time, I get a dataRef of length 0.
It works the first time, though.
I have found one big issue with the code I posted and fixed it.
First of all, I was getting crashes even when I didn't load the same image twice but simply loaded more images. Since the issue is memory-related, it failed in all sorts of weird ways.
The problem with the code is that I am calling CGDataProviderRelease(dataProvider). I am using the data provider of newImageSource, but I didn't create this data provider, so I shouldn't release it.
You only need to release things you created, retained, or copied.
Apart from that, my app sometimes crashed due to low memory, but after fixing this I was able to use the "economy" approach where I allocate and release as early as possible.
Currently I can't see anything else wrong with this specific code.
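As a rough illustration of that Create/Copy ownership rule applied to this snippet (a sketch only, reusing the question's variable names):
// dataProvider comes from a "Get" function, so we do NOT own it and must not release it.
CGDataProviderRef dataProvider = CGImageGetDataProvider(newImageSource);

// dataRef comes from a "Copy" function, so we DO own it and release it with CFRelease
// (not CGImageRelease, which is meant for CGImageRef objects).
CFDataRef dataRef = CGDataProviderCopyData(dataProvider);
const unsigned char *bytesRef = CFDataGetBytePtr(dataRef);

// ... use bytesRef ...

CFRelease(dataRef);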
I am trying to develop a VoIP application. The audio buffer fetched in the recording callback is wrapped in an NSData and sent to the remote side with GCDAsyncSocket; the remote side receives the NSData, unwraps it back into an audio buffer, and the playback callback then fetches that buffer.
My plan works so far and runs fine locally (the socket sends the data to the local machine and the buffer is played back locally).
But when it runs on two devices (one real iPhone 4S, one simulator), the voice sounds strange, like a robotic sound.
Is there any way to avoid the robotic sound effect?
Here are my AudioUnit settings:
#pragma mark - Init Methods
- (void)initAudioUnit
{
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
checkStatus(status);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.0f; // FS
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
audioFormat.mChannelsPerFrame = 1; // mono
audioFormat.mFramesPerPacket = 1;
audioFormat.mBitsPerChannel = sizeof(short) * 8; // 16-bit
audioFormat.mBytesPerFrame = audioFormat.mBitsPerChannel / 8 * audioFormat.mChannelsPerFrame;
audioFormat.mBytesPerPacket = audioFormat.mBytesPerFrame * audioFormat.mFramesPerPacket;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
/*
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
// Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per frame, thus 2 bytes per frame).
// Practice learns the buffers used contain 512 frames, if this changes it will be fixed in processAudio.
tempBuffer.mNumberChannels = 1;
tempBuffer.mDataByteSize = 512 * 2;
tempBuffer.mData = malloc( 512 * 2 );
checkStatus(status);
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
// TODO: Allocate our own buffers if we want
*/
// Initialise
status = AudioUnitInitialize(audioUnit);
checkStatus(status);
conversionBuffer = (SInt16 *) malloc(1024 * sizeof(SInt16));
}
By the way, is there any way to set audioFormat.mFramesPerPacket > 1? In my case it prints an error if the parameter is > 1.
I was thinking about sending a buffer that contains multiple frames (to give the remote side more time to play); wouldn't that be better for VoIP than sending one frame per packet?
Since the audio sample rate clocks of the two devices won't be perfectly synchronized, you will have to handle buffer underflow and overflow due to slight sample rate mismatches, as well as network latency jitter.
Also note that the buffer size sent to the RemoteIO callback may not stay constant, so the two callbacks will have to be able to handle buffer size mismatches.
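Not from the question itself, but as a hedged illustration of that kind of buffering, here is a minimal C sketch of a circular (ring) buffer sitting between the network receive path and the playback callback; names and the capacity are placeholders, and it handles underflow by filling with silence (a production version would also need proper thread safety and overflow handling):
// Minimal single-producer/single-consumer ring buffer for SInt16 samples (sketch only).
#define RING_CAPACITY 8192   // placeholder size; tune for your latency target

typedef struct {
    SInt16 data[RING_CAPACITY];
    volatile UInt64 writeIndex;  // total samples written
    volatile UInt64 readIndex;   // total samples read
} RingBuffer;

// Called from the network thread when a packet of decoded samples arrives.
static void RingBufferWrite(RingBuffer *rb, const SInt16 *samples, UInt32 count) {
    for (UInt32 i = 0; i < count; i++) {
        rb->data[rb->writeIndex % RING_CAPACITY] = samples[i];
        rb->writeIndex++;
    }
}

// Called from the playback render callback; pads with silence on underflow,
// so a buffer-size mismatch between the two devices does not break playback.
static void RingBufferRead(RingBuffer *rb, SInt16 *out, UInt32 count) {
    for (UInt32 i = 0; i < count; i++) {
        if (rb->readIndex < rb->writeIndex) {
            out[i] = rb->data[rb->readIndex % RING_CAPACITY];
            rb->readIndex++;
        } else {
            out[i] = 0;   // underflow: not enough received audio yet
        }
    }
}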
I just resolved this problem!
You need to set the audio session property so that both devices use the same buffer duration:
// set preferred buffer size
Float32 audioBufferSize = (set up the duration);
UInt32 size = sizeof(audioBufferSize);
result = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                                 size, &audioBufferSize);
I have an audio app in which I need to capture mic samples and encode them to MP3 with ffmpeg.
First, I configure the audio:
/**
 * We need to specify the format we want to work with.
 * We use linear PCM because it is uncompressed and we work on raw data.
 * For more information, check the documentation.
 *
 * We want 16 bits (2 bytes, i.e. one short) per frame/packet at 8 kHz.
 */
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate       = SAMPLE_RATE;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel   = audioFormat.mChannelsPerFrame * sizeof(SInt16) * 8;
audioFormat.mBytesPerPacket   = audioFormat.mChannelsPerFrame * sizeof(SInt16);
audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);
The recording callback is:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    NSLog(@"Log record: %lu", inBusNumber);
    NSLog(@"Log record: %lu", inNumberFrames);
    NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

    // the data gets rendered here
    AudioBuffer buffer;

    // a variable where we check the status
    OSStatus status;

    /**
     This is the reference to the object who owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

    /**
     At this point we define the number of channels, which is mono
     for the iPhone. The number of frames is usually 512 or 1024.
     */
    buffer.mDataByteSize = inNumberFrames * sizeof(SInt16);  // sample size
    buffer.mNumberChannels = 1;                              // one channel
    buffer.mData = malloc(inNumberFrames * sizeof(SInt16));  // buffer size

    // we put our buffer into a bufferlist array for rendering
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // render input and check for error
    status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    [audioProcessor hasError:status:__FILE__:__LINE__];

    // process the bufferlist in the audio processor
    [audioProcessor processBuffer:&bufferList];

    // clean up the buffer
    free(bufferList.mBuffers[0].mData);

    //NSLog(@"RECORD");
    return noErr;
}
The values I see are:
inBusNumber = 1
inNumberFrames = 1024
inTimeStamp = 80444304 // always the same inTimeStamp, which is strange
However, the frame size I need to encode MP3 is 1152. How can I configure that?
If I buffer, that implies a delay, but I would like to avoid this because it is a real-time app. With this configuration, each buffer ends up with trailing garbage samples: 1152 - 1024 = 128 bad samples. All samples are SInt16.
You can configure the number of frames per slice an AudioUnit will use with the property kAudioUnitProperty_MaximumFramesPerSlice. However, I think the best solution in your case is to buffer the incoming audio to a ring buffer and then signal your encoder that audio is available. Since you're transcoding to MP3 I'm not sure what real-time means in this case.
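As a rough sketch of that buffering idea (the names are made up; 1152 is the MP3 frame size from the question): accumulate the 1024-frame callbacks into a holding buffer and hand the encoder a chunk whenever at least 1152 samples are available.
// Sketch only: accumulate incoming SInt16 samples until a full 1152-sample MP3 frame is ready.
// Not thread-safe as written; a ring buffer between the audio and encoder threads is better in practice.
#define MP3_FRAME_SAMPLES 1152

static SInt16 pending[MP3_FRAME_SAMPLES * 4];   // holding buffer (size chosen arbitrarily)
static UInt32 pendingCount = 0;

// Call this from the recording callback with the freshly rendered samples.
static void AccumulateAndEncode(const SInt16 *samples, UInt32 count) {
    memcpy(pending + pendingCount, samples, count * sizeof(SInt16));
    pendingCount += count;

    while (pendingCount >= MP3_FRAME_SAMPLES) {
        // EncodeMP3Frame() is a placeholder for your ffmpeg/LAME encode call.
        // EncodeMP3Frame(pending, MP3_FRAME_SAMPLES);
        pendingCount -= MP3_FRAME_SAMPLES;
        memmove(pending, pending + MP3_FRAME_SAMPLES, pendingCount * sizeof(SInt16));
    }
}
The extra latency is at most one partial MP3 frame (under 1152 samples), which is usually acceptable even for near-real-time use.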
I'm writing an iOS app that takes input from the microphone, runs it through a high-pass filter audio unit, and plays it back through the speakers. I've been able to do this successfully using the AUGraph API. In it, I put two nodes: a Remote I/O unit and an effect audio unit (kAudioUnitType_Effect, kAudioUnitSubType_HighPassFilter), and connected the output scope of the io unit's input element to the effect unit's input, and the effect node's output to the input scope of the io unit's output element. But now I need to do some analysis of the processed audio samples, so I need direct access to the buffer. This means (and please correct me if I'm wrong) that I can no longer use AUGraphConnectNodeInput to make the connection between the effect node's output and the io unit's output element, and instead have to attach a render callback function to the io unit's output element, so that I can access the buffer whenever the speakers need new samples.
I've done so, but I get a -50 error when I call the AudioUnitRender function in the render callback. I believe I have a case of ASBDs mismatch between the two audio units, since I'm not doing anything about that in the render callback (and the AUGraph took care of it before).
Here's the code:
AudioController.h:
@interface AudioController : NSObject
{
    AUGraph mGraph;
    AudioUnit mEffects;
    AudioUnit ioUnit;
}

@property (readonly, nonatomic) AudioUnit mEffects;
@property (readonly, nonatomic) AudioUnit ioUnit;

- (void)initializeAUGraph;
- (void)startAUGraph;
- (void)stopAUGraph;

@end
AudioController.mm:
@implementation AudioController
…

static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    AudioController *THIS = (__bridge AudioController *)inRefCon;

    AudioBuffer buffer;

    AudioStreamBasicDescription fxOutputASBD;
    UInt32 fxOutputASBDSize = sizeof(fxOutputASBD);
    AudioUnitGetProperty([THIS mEffects], kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &fxOutputASBD, &fxOutputASBDSize);

    buffer.mDataByteSize = inNumberFrames * fxOutputASBD.mBytesPerFrame;
    buffer.mNumberChannels = fxOutputASBD.mChannelsPerFrame;
    buffer.mData = malloc(buffer.mDataByteSize);

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    //TODO: enable ARM and fix the memory problem
    OSStatus result = AudioUnitRender([THIS mEffects], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    [THIS hasError:result:__FILE__:__LINE__];

    memcpy(ioData, buffer.mData, buffer.mDataByteSize);

    return noErr;
}
- (void)initializeAUGraph
{
OSStatus result = noErr;
// create a new AUGraph
result = NewAUGraph(&mGraph);
AUNode outputNode;
AUNode effectsNode;
AudioComponentDescription effects_desc;
effects_desc.componentType = kAudioUnitType_Effect;
effects_desc.componentSubType = kAudioUnitSubType_LowPassFilter;
effects_desc.componentFlags = 0;
effects_desc.componentFlagsMask = 0;
effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponentDescription output_desc;
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Add nodes to the graph to hold the AudioUnits
result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode );
[self hasError:result:__FILE__:__LINE__];
// Connect the effect node's output to the output node's input
// This is no longer the case, as I need to access the buffer
// result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, outputNode, 0);
[self hasError:result:__FILE__:__LINE__];
// Connect the output node's input scope's output to the effectsNode input
result = AUGraphConnectNodeInput(mGraph, outputNode, 1, effectsNode, 0);
[self hasError:result:__FILE__:__LINE__];
// open the graph AudioUnits
result = AUGraphOpen(mGraph);
[self hasError:result:__FILE__:__LINE__];
// Get a link to the effect AU
result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);
[self hasError:result:__FILE__:__LINE__];
// Same for io unit
result = AUGraphNodeInfo(mGraph, outputNode, NULL, &ioUnit);
[self hasError:result:__FILE__:__LINE__];
// Enable input on io unit
UInt32 flag = 1;
result = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));
[self hasError:result:__FILE__:__LINE__];
// Setup render callback struct
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &renderInput;
renderCallbackStruct.inputProcRefCon = (__bridge void*)self;
// Set a callback for the specified node's specified input
result = AUGraphSetNodeInputCallback(mGraph, outputNode, 0, &renderCallbackStruct);
[self hasError:result:__FILE__:__LINE__];
// Get fx unit's input current stream format...
AudioStreamBasicDescription fxInputASBD;
UInt32 sizeOfASBD = sizeof(AudioStreamBasicDescription);
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxInputASBD, &sizeOfASBD);
[self hasError:result:__FILE__:__LINE__];
// ...and set it on the io unit's input scope's output
result = AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
1,
&fxInputASBD,
sizeof(fxInputASBD));
[self hasError:result:__FILE__:__LINE__];
// Set fx unit's output sample rate, just in case
Float64 sampleRate = 44100.0;
result = AudioUnitSetProperty(mEffects,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Output,
0,
&sampleRate,
sizeof(sampleRate));
[self hasError:result:__FILE__:__LINE__];
// Once everything is set up call initialize to validate connections
result = AUGraphInitialize(mGraph);
[self hasError:result:__FILE__:__LINE__];
}
@end
As I said before, I'm getting a -50 error on the AudioUnitRender call, and I'm finding little to no documentation at all about it.
Any help will be much appreciated.
Thanks to Tim Bolstad (http://timbolstad.com/2010/03/14/core-audio-getting-started/) for providing an excellent starting point tutorial.
There are simpler working examples of using RemoteIO for just playing buffers of audio. Perhaps start with one of those first rather than a graph.
Check to make sure that you are actually making all of the necessary connections. It appears that you are initializing most everything necessary, but if you simply want to passthrough audio you don't need the render callback function.
Now, if you want to do the filter, you may need one, but even so, make sure that you are actually connecting the components together properly.
Here's a snippet from an app I'm working on:
AUGraphConnectNodeInput(graph, outputNode, kInputBus, mixerNode, kInputBus);
AUGraphConnectNodeInput(graph, mixerNode, kOutputBus, outputNode, kOutputBus);
This connects the input from the RemoteIO unit to a Multichannel Mixer unit, then connects the output from the mixer to the RemoteIO's output to the speaker.
It looks to me like you're passing in the wrong audio unit to AudioUnitRender. I think you need to pass in ioUnit instead of mEffects. In any case, double check all of the parameters you're passing in to AudioUnitRender. When I see -50 returned it's because I botched one of them.
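If it helps narrow this down, here is a hedged diagnostic sketch (reusing the names from the question's render callback; unitToRender is a placeholder for whichever unit you decide to pull from) that logs each argument just before AudioUnitRender so a bad one stands out:
// Diagnostic sketch only: log the parameters that commonly trigger -50.
AudioUnit unitToRender = [THIS ioUnit];   // or [THIS mEffects], per the suggestion above

AudioStreamBasicDescription outASBD = {0};
UInt32 outASBDSize = sizeof(outASBD);
AudioUnitGetProperty(unitToRender, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &outASBD, &outASBDSize);

NSLog(@"render: bus=%u frames=%u bytes/frame=%u buffer bytes=%u",
      (unsigned)inBusNumber,
      (unsigned)inNumberFrames,
      (unsigned)outASBD.mBytesPerFrame,
      (unsigned)bufferList.mBuffers[0].mDataByteSize);

OSStatus result = AudioUnitRender(unitToRender, ioActionFlags, inTimeStamp,
                                  inBusNumber, inNumberFrames, &bufferList);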
Every time the audio unit's callback function is called, I get a different number of samples, which is strange because NSLog(@"%ld", inNumberFrames); always gives me 512.
When I do this:
NSLog(@"%li", strlen((const char *)(&bufferList)->mBuffers[0].mData));
I get numbers such as 50, 20, 19, 160, 200, 1, ...
which is strange.
On each callback, I should get the full buffer of 512 samples, no?
I know I'm not getting all of the needed samples because if I input a 2 kHz sine, I get a zero-crossing count of about 600, and if I input nothing, I get the same.
To retrieve the data I do:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    //NSLog(@"%ld", inNumberFrames);
    buffer.mData = malloc(inNumberFrames * 2);

    // Put buffer in a AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    OSStatus status;
    status = AudioUnitRender(audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    //checkStatus(status);

    NSLog(@"%li", strlen((const char *)(&bufferList)->mBuffers[0].mData));
    int16_t *q = (int16_t *)(&bufferList)->mBuffers[0].mData;
    for (int i = 0; i < strlen((const char *)(&bufferList)->mBuffers[0].mData); i++)
    {
        ....
Any help with this will save me days!
Thanks.
A strlen() of the audio sample buffer is not related to the number of samples. Instead, look at the number of frames given as the callback function's parameter.
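For example, a sketch of walking the samples by frame count rather than by strlen() (which stops at the first zero byte it finds), using the variables from the question's callback and assuming 16-bit mono samples:
// Sketch: iterate over the rendered buffer using inNumberFrames, not strlen().
SInt16 *samples = (SInt16 *)bufferList.mBuffers[0].mData;
for (UInt32 i = 0; i < inNumberFrames; i++) {
    SInt16 s = samples[i];
    // ... process sample s (e.g., count zero crossings) ...
}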