RemoteIO callback function is giving an unstable buffer list - Objective-C

Every time the audio unit's callback function is called, I get a different number of samples, which is strange because NSLog(@"%ld", inNumberFrames); always gives me 512.
When I do this:
NSLog(@"%li", strlen((const char *)bufferList.mBuffers[0].mData));
I get numbers such as 50, 20, 19, 160, 200, 1, ...
which is strange.
On each callback I should get the full buffer of 512 samples, no?
I know I don't get all the needed samples, because if I input a 2 kHz sine wave I count about 600 zero crossings, and if I input nothing I get the same.
To retrieve the data I do:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    //NSLog(@"%ld", inNumberFrames);
    buffer.mData = malloc(inNumberFrames * 2);

    // Put the buffer in an AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    OSStatus status;
    status = AudioUnitRender(audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    //checkStatus(status);

    NSLog(@"%li", strlen((const char *)bufferList.mBuffers[0].mData));
    int16_t *q = (int16_t *)bufferList.mBuffers[0].mData;
    for (int i = 0; i < strlen((const char *)bufferList.mBuffers[0].mData); i++)
    {
        ....
Any help with this will save me days!
Thanks.

A strlen() of the audio sample buffer is not related to the number of samples: strlen() just counts bytes up to the first zero byte, and raw audio data is full of zero bytes (any zero-valued sample, for instance), so the numbers it returns are meaningless here. Instead, look at the number of frames given by the callback function's inNumberFrames parameter.
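For illustration, here is a minimal sketch of walking the rendered samples, assuming the 16-bit mono format set up in the question; the loop bound comes from inNumberFrames, never from strlen():

int16_t *samples = (int16_t *)bufferList.mBuffers[0].mData;
for (UInt32 i = 0; i < inNumberFrames; i++) {
    int16_t sample = samples[i]; // one 16-bit PCM sample
    // ... count zero crossings, copy the sample out, etc.
}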

Related

Using RemoteIO gives unreasonable buffer values and sizes

Using the callback function on the iPhone, I am trying to get the microphone input signal.
After many problems I have discovered this:
when I input a pure sine wave to the buffer (on the Mac simulator) I can see the signal, but then it becomes lower and lower until it reaches zero.
I was thinking this is related to Apple's bug where the number of samples per buffer on the Mac is 471 instead of 1024. Can I solve this bug somehow?
This is my callback:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2; // * sizeof(SInt16) ?
    buffer.mData = NULL;

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    OSStatus status;
    status = AudioUnitRender(audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);

    int16_t *q = (int16_t *)bufferList.mBuffers[0].mData;
    // here I print q, which is good for 4 seconds - I can see the pure sine -
    // then it drops to zero while the sine wave is still in the air
EDIT:
This is not happening on the device, only on the Mac!
I am pretty sure it's related to the bug where the Mac sees 471 samples in the buffer!

How to configure the frame size using AudioUnit.framework on iOS

I have an audio app in which I need to capture mic samples to encode into MP3 with ffmpeg.
First I configure the audio:
/**
 * We need to specify the format we want to work with.
 * We use Linear PCM because it's uncompressed and we work on raw data.
 * For more information, check the documentation.
 *
 * We want 16-bit, 2-byte (short) samples per packet/frame at 8 kHz.
 */
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = SAMPLE_RATE;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = audioFormat.mChannelsPerFrame*sizeof(SInt16)*8;
audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame*sizeof(SInt16);
audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame*sizeof(SInt16);
The recording callback is:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    NSLog(@"Log record: %lu", inBusNumber);
    NSLog(@"Log record: %lu", inNumberFrames);
    NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

    // the data gets rendered here
    AudioBuffer buffer;

    // a variable where we check the status
    OSStatus status;

    /**
     This is the reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

    /**
     At this point we define the number of channels, which is mono
     for the iPhone. The number of frames is usually 512 or 1024.
     */
    buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // sample size
    buffer.mNumberChannels = 1; // one channel
    buffer.mData = malloc(inNumberFrames * sizeof(SInt16)); // buffer size

    // we put our buffer into a bufferlist array for rendering
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // render input and check for error
    status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    [audioProcessor hasError:status:__FILE__:__LINE__];

    // process the bufferlist in the audio processor
    [audioProcessor processBuffer:&bufferList];

    // clean up the buffer
    free(bufferList.mBuffers[0].mData);
    //NSLog(@"RECORD");
    return noErr;
}
With this data:
inBusNumber = 1
inNumberFrames = 1024
inTimeStamp = 80444304 // always the same inTimeStamp, which is strange
However, the frame size I need to encode MP3 is 1152. How can I configure it?
If I do buffering, that implies a delay, but I would like to avoid that because it is a real-time app. If I use this configuration as-is, each buffer leaves me with trash trailing samples: 1152 - 1024 = 128 bad samples. All samples are SInt16.
You can configure the number of frames per slice an AudioUnit will use with the kAudioUnitProperty_MaximumFramesPerSlice property. However, I think the best solution in your case is to buffer the incoming audio in a ring buffer and then signal your encoder when enough audio is available. Since you're transcoding to MP3, I'm not sure what real time means in this case.
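As an illustration, here is a minimal sketch (plain C, callable from Objective-C) of accumulating the 1024-frame callbacks into 1152-frame chunks. feedEncoder and encodeMP3Frame are hypothetical names, and a real app should hand the finished chunk to an encoder thread rather than encode on the audio thread:

#include <string.h>

#define kMP3FrameSize 1152

extern void encodeMP3Frame(const SInt16 *frame, UInt32 count); // hypothetical encoder entry point

static SInt16 gAccumulator[kMP3FrameSize * 2]; // room for a partial frame plus one callback's worth
static UInt32 gAccumulated = 0;

// Called from the recording callback with the freshly rendered samples.
void feedEncoder(const SInt16 *samples, UInt32 count) {
    memcpy(gAccumulator + gAccumulated, samples, count * sizeof(SInt16));
    gAccumulated += count;
    while (gAccumulated >= kMP3FrameSize) {
        encodeMP3Frame(gAccumulator, kMP3FrameSize);
        gAccumulated -= kMP3FrameSize;
        // shift the remainder down to the front of the accumulator
        memmove(gAccumulator, gAccumulator + kMP3FrameSize,
                gAccumulated * sizeof(SInt16));
    }
}

The added latency is at most 1151 samples (about 144 ms at 8 kHz), which is the unavoidable cost of matching a 1024-frame capture size to a 1152-frame codec frame size.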

Error -50 from AudioUnitRender

I'm getting error -50 (invalid parameters) from AudioUnitRender in the following context. I'm using this Pitch Detector sample app as my starting point, and it works fine. The only major difference in my project is that I'm also using the Remote I/O unit for audio output. The audio output works fine. Here are my input callback and my initialization code (with error checking removed for brevity). I know it's a lot, but error -50 really gives very little information as to where the problem might be.
Input callback:
OSStatus inputCallback(void *inRefCon,
                       AudioUnitRenderActionFlags *ioActionFlags,
                       const AudioTimeStamp *inTimeStamp,
                       UInt32 inBusNumber,
                       UInt32 inNumberFrames,
                       AudioBufferList *ioData) {
    WBAudio *audioObject = (WBAudio *)inRefCon;
    AudioUnit rioUnit = audioObject->m_audioUnit;
    OSStatus renderErr;
    UInt32 bus1 = 1;
    renderErr = AudioUnitRender(rioUnit, ioActionFlags,
                                inTimeStamp, bus1, inNumberFrames, audioObject->m_inBufferList);
    if (renderErr < 0) {
        return renderErr; // breaks here
    }
    return noErr;
} // end inputCallback()
Initialization:
- (id) init {
    self = [super init];
    if (!self) return nil;

    OSStatus result;

    //! Initialize a buffer list for rendering input
    size_t bytesPerSample;
    bytesPerSample = sizeof(SInt16);
    m_inBufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
    m_inBufferList->mNumberBuffers = 1;
    m_inBufferList->mBuffers[0].mNumberChannels = 1;
    m_inBufferList->mBuffers[0].mDataByteSize = 512 * bytesPerSample;
    m_inBufferList->mBuffers[0].mData = calloc(512, bytesPerSample);

    //! Initialize an audio session to get buffer size
    result = AudioSessionInitialize(NULL, NULL, NULL, NULL);

    UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
    result = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory);

    // Set preferred buffer size
    Float32 preferredBufferSize = static_cast<float>(m_pBoard->m_uBufferSize) / m_pBoard->m_fSampleRate;
    result = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(preferredBufferSize), &preferredBufferSize);

    // Get actual buffer size
    Float32 audioBufferSize;
    UInt32 size = sizeof(audioBufferSize);
    result = AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration, &size, &audioBufferSize);

    result = AudioSessionSetActive(true);

    //! Create our Remote I/O component description
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    //! Find the corresponding component
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &desc);

    //! Create the component instance
    result = AudioComponentInstanceNew(outputComponent, &m_audioUnit);

    //! Enable audio output
    UInt32 flag = 1;
    result = AudioUnitSetProperty(m_audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, kOutputBus, &flag, sizeof(flag));

    //! Enable audio input
    result = AudioUnitSetProperty(m_audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kInputBus, &flag, sizeof(flag));

    //! Create our audio stream description
    m_audioFormat.mSampleRate = m_pBoard->m_fSampleRate;
    m_audioFormat.mFormatID = kAudioFormatLinearPCM;
    m_audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    m_audioFormat.mFramesPerPacket = 1;
    m_audioFormat.mChannelsPerFrame = 1;
    m_audioFormat.mBitsPerChannel = 16;
    m_audioFormat.mBytesPerPacket = 2;
    m_audioFormat.mBytesPerFrame = 2;

    //! Set the stream format
    result = AudioUnitSetProperty(m_audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, kOutputBus, &m_audioFormat, sizeof(m_audioFormat));
    result = AudioUnitSetProperty(m_audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &m_audioFormat, sizeof(m_audioFormat));

    //! Set the render callback
    AURenderCallbackStruct renderCallbackStruct = {0};
    renderCallbackStruct.inputProc = renderCallback;
    renderCallbackStruct.inputProcRefCon = m_pBoard;
    result = AudioUnitSetProperty(m_audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &renderCallbackStruct, sizeof(renderCallbackStruct));

    //! Set the input callback
    AURenderCallbackStruct inputCallbackStruct = {0};
    inputCallbackStruct.inputProc = inputCallback;
    inputCallbackStruct.inputProcRefCon = self;
    result = AudioUnitSetProperty(m_audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Input, kOutputBus, &inputCallbackStruct, sizeof(inputCallbackStruct));

    //! Initialize the unit
    result = AudioUnitInitialize(m_audioUnit);

    return self;
}
You are allocating the m_inBufferList as:
m_inBufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
This should be:
m_inBufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * numberOfBuffers); //numberOfBuffers in your case is 1
Maybe this can solve your problem.
Error -50 in the developer documentation means a parameter error;
make sure you pass the right parameters to AudioUnitRender.
Check the stream format and your unit.
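For instance, here is a minimal sketch of reading back the format actually in effect on the output scope of bus 1 (where mic input is delivered), which is one way to spot a -50 mismatch; asbd and asbdSize are only illustrative names:

AudioStreamBasicDescription asbd;
UInt32 asbdSize = sizeof(asbd);
AudioUnitGetProperty(m_audioUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1 /* input bus */, &asbd, &asbdSize);
NSLog(@"sr=%.0f ch=%u bits=%u bytes/frame=%u",
      asbd.mSampleRate, (unsigned)asbd.mChannelsPerFrame,
      (unsigned)asbd.mBitsPerChannel, (unsigned)asbd.mBytesPerFrame);

If what comes back differs from the format your buffer list was sized for, AudioUnitRender will reject the call.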
I agree with Kurt Pattyn that the allocation of m_inBufferList is incorrect and might be the cause of the -50 bad param error.
Except I think that for a single buffer it should be
(AudioBufferList *)malloc(sizeof(AudioBufferList))
My evidence is the following sizes and the code from Adamson & Avila below.
(lldb) po sizeof(AudioBufferList)
24
(lldb) po sizeof(AudioBuffer)
16
(lldb) po offsetof(AudioBufferList, mBuffers[0])
8
According to Chris Adamson & Kevin Avila in Learning Core Audio:
// Allocate an AudioBufferList plus enough space for
// an array of AudioBuffers
UInt32 propsize = offsetof(AudioBufferList, mBuffers[0]) +
    (sizeof(AudioBuffer) * player->streamFormat.mChannelsPerFrame);

// malloc buffer lists
player->inputBuffer = (AudioBufferList *)malloc(propsize);
player->inputBuffer->mNumberBuffers = player->streamFormat.mChannelsPerFrame;

// Pre-malloc buffers for AudioBufferLists
for (UInt32 i = 0; i < player->inputBuffer->mNumberBuffers; i++) {
    player->inputBuffer->mBuffers[i].mNumberChannels = 1;
    player->inputBuffer->mBuffers[i].mDataByteSize = bufferSizeBytes;
    player->inputBuffer->mBuffers[i].mData = malloc(bufferSizeBytes);
}
Last but not least, I just happened upon this code with the following comment :)
//credit to TheAmazingAudioEngine for an illustration of proper audiobufferlist allocation.
// Google leads to some really really bad allocation code...
[other code]
sampleABL = malloc(sizeof(AudioBufferList) + (bufferCnt-1)*sizeof(AudioBuffer));
https://github.com/zakk4223/CocoaSplit/blob/master/CocoaSplit/CAMultiAudio/CAMultiAudioPCMPlayer.m#L203

RemoteIO: how to play it?

I managed to get the record callback and deal with the buffer data.
Now I want to play that data.
I have the playback callback, but I just can't find anywhere how to play these buffers.
Callback:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        // ...
Now what?
How would I play that?
Have a look at this example; it helped me out a lot when trying to deal with the data. It's a working app that plays back anything you speak into the microphone. It uses two callbacks: one records the data and places it in a global audio buffer, and a second pulls that data back out in the playback callback.
http://www.stefanpopp.de/2011/capture-iphone-microphone/
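In case it helps, here is a minimal sketch of that two-callback pattern, assuming 16-bit mono PCM on both buses. gRing, gWritePos, and gReadPos are hypothetical names, and a production app should use a proper lock-free ring buffer (e.g. TPCircularBuffer from TheAmazingAudioEngine) rather than this naive one:

#define kRingSize 32768 // in samples

static SInt16 gRing[kRingSize];
static volatile UInt32 gWritePos = 0, gReadPos = 0;

// The recording callback appends each rendered sample:
//     gRing[gWritePos] = sample;
//     gWritePos = (gWritePos + 1) % kRingSize;
// and the playback callback drains the ring into ioData:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        if (gReadPos != gWritePos) {              // samples available
            out[i] = gRing[gReadPos];
            gReadPos = (gReadPos + 1) % kRingSize;
        } else {
            out[i] = 0;                           // underrun: play silence
        }
    }
    return noErr;
}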

Removing Last Buffer Played From Memory

I have an Objective-C audio app based on Audio Units, and my problem is that when I play a sound, stop it, then play another sound, I hear the last buffer from the first sound. I think I should just free ioData or something, but all my attempts have failed.
static OSStatus playbackCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    RemoteIOPlayer *remoteIOplayer = (__bridge RemoteIOPlayer *)inRefCon;
    for (int i = 0; i < ioData->mNumberBuffers; i++)
    {
        AudioBuffer buffer = ioData->mBuffers[i];
        UInt32 *frameBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames; j++)
        {
            frameBuffer[j] = [[remoteIOplayer inMemoryAudioFile] getNextPacket:inBusNumber];
        }
    }
    return noErr;
}
Please help :) Thanks.
OK, it's just about uninitializing the AUGraph and initializing it again after the user stops.
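A minimal sketch of that stop/restart sequence, assuming the app holds its AUGraph in a variable called graph (a hypothetical name); the idea is that uninitializing the graph drops whatever audio was still buffered, which is what was leaking into the next sound:

#import <AudioToolbox/AudioToolbox.h>

void restartGraph(AUGraph graph) {
    AUGraphStop(graph);          // stop rendering
    AUGraphUninitialize(graph);  // tear down the units, flushing stale buffers
    AUGraphInitialize(graph);    // re-allocate resources in a clean state
    AUGraphStart(graph);         // start again for the next sound
}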