Using RemoteIO gives unreasonable buffer values and sizes - Objective-C

Using the render callback on the iPhone, I am trying to get the microphone input signal.
After many problems, I have discovered this:
When I feed a pure sine wave into the buffer (on the Mac simulator) I can see the signal, but then it becomes lower and lower until it reaches zero.
I think this is related to Apple's bug where the number of samples per buffer on the Mac simulator is 471 instead of 1024. Can I solve this somehow?
This is my callback:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2; // i.e. inNumberFrames * sizeof(SInt16)?
    buffer.mData = NULL;                       // NULL lets AudioUnitRender point this at its own internal buffer

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    OSStatus status;
    status = AudioUnitRender(audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);

    int16_t *q = (int16_t *)bufferList.mBuffers[0].mData;
    // here I print q, which looks good for about 4 seconds - I can see the pure sine -
    // and then it drops to zero while the sine wave is still playing
    return noErr;
}
EDIT:
This is not happening on the device, only on the Mac simulator!
I am pretty sure it is related to the bug where the simulator sees 471 samples in the buffer!
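One thing worth trying (a sketch only, not a confirmed fix) is asking the audio session for a preferred I/O buffer duration before starting the unit; on a device this usually makes inNumberFrames roughly sampleRate * duration, though the simulator is known to ignore many audio session preferences, so the 471-frame buffers may persist there:

// a minimal sketch, assuming the audio session has already been initialized
Float32 preferredDuration = 1024.0f / 44100.0f;   // ~23 ms, i.e. roughly 1024 frames at 44.1 kHz
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(preferredDuration), &preferredDuration);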

Related

How to configure the framesize using AudioUnit.framework on iOS

I have an audio app in which I need to capture mic samples and encode them into MP3 with ffmpeg.
First configure the audio:
/**
 * We need to specify the format we want to work with.
 * We use Linear PCM because it is uncompressed and we work on raw data.
 * For more information, see Apple's Core Audio documentation.
 *
 * We want 16-bit, 2-byte (short) samples per packet/frame at 8 kHz.
 */
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = SAMPLE_RATE;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = audioFormat.mChannelsPerFrame*sizeof(SInt16)*8;
audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame*sizeof(SInt16);
audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame*sizeof(SInt16);
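For context, a minimal sketch of how such a format is typically applied to the RemoteIO unit (the audioUnit variable and the kInputBus constant are assumptions here, not part of the original code): input has to be enabled on the input element (bus 1), and the client format goes on the output scope of that element, which is the side your callback sees:

#define kInputBus 1   // assumed name for the RemoteIO input element

UInt32 enableInput = 1;
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, kInputBus,
                     &enableInput, sizeof(enableInput));

AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, kInputBus,
                     &audioFormat, sizeof(audioFormat));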
The recording callback is:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    NSLog(@"Log record: %lu", inBusNumber);
    NSLog(@"Log record: %lu", inNumberFrames);
    NSLog(@"Log record: %lu", (UInt32)inTimeStamp); // note: this casts the pointer, not the timestamp value

    // the data gets rendered here
    AudioBuffer buffer;

    // a variable where we check the status
    OSStatus status;

    /**
     This is the reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

    /**
     At this point we define the number of channels, which is mono
     for the iPhone. The number of frames is usually 512 or 1024.
     */
    buffer.mDataByteSize = inNumberFrames * sizeof(SInt16);  // sample size
    buffer.mNumberChannels = 1;                              // one channel
    buffer.mData = malloc(inNumberFrames * sizeof(SInt16));  // buffer size

    // we put our buffer into a bufferlist array for rendering
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // render input and check for error
    status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    [audioProcessor hasError:status:__FILE__:__LINE__];

    // process the bufferlist in the audio processor
    [audioProcessor processBuffer:&bufferList];

    // clean up the buffer
    free(bufferList.mBuffers[0].mData);

    //NSLog(@"RECORD");
    return noErr;
}
With data:
inBusNumber = 1
inNumberFrames = 1024
inTimeStamp = 80444304 // always the same inTimeStamp, which is strange
However, the frame size I need to encode MP3 is 1152. How can I configure that?
If I do buffering, that implies a delay, but I would like to avoid that because it is a real-time app. If I use this configuration, each buffer gets trailing garbage samples: 1152 - 1024 = 128 bad samples. All samples are SInt16.
You can configure the number of frames per slice an AudioUnit will use with the property kAudioUnitProperty_MaximumFramesPerSlice. However, I think the best solution in your case is to buffer the incoming audio to a ring buffer and then signal your encoder that audio is available. Since you're transcoding to MP3 I'm not sure what real-time means in this case.
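As a rough illustration of the ring-buffer idea (a sketch only; the names such as gRing and RingReadFrame are hypothetical, not from the original code), the recording callback appends whatever it receives and the encoder pulls fixed 1152-sample frames once enough data has accumulated:

#define kMP3FrameSize 1152                  // samples per MPEG-1 Layer III frame
#define kRingCapacity (kMP3FrameSize * 8)   // arbitrary capacity, a few frames deep

static SInt16 gRing[kRingCapacity];         // hypothetical global ring buffer
static size_t gWritePos  = 0;
static size_t gReadPos   = 0;
static size_t gAvailable = 0;

// called from the recording callback with the freshly rendered samples
static void RingWrite(const SInt16 *samples, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        gRing[gWritePos] = samples[i];
        gWritePos = (gWritePos + 1) % kRingCapacity;
        if (gAvailable < kRingCapacity) gAvailable++;
    }
}

// called from the encoder side; copies one 1152-sample frame if enough data is buffered
static BOOL RingReadFrame(SInt16 *outFrame)
{
    if (gAvailable < kMP3FrameSize) return NO;   // not enough data yet, keep waiting
    for (size_t i = 0; i < kMP3FrameSize; i++) {
        outFrame[i] = gRing[gReadPos];
        gReadPos = (gReadPos + 1) % kRingCapacity;
    }
    gAvailable -= kMP3FrameSize;
    return YES;
}

In real code the shared counters would need to be made thread-safe (for example with a lock-free FIFO such as TPCircularBuffer), because the render callback runs on a real-time thread; the extra latency this adds is at most one MP3 frame's worth of audio.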

remoteIO how to play it?

I managed to get the record callback and deal with the buffer data.
Now I want to play that data.
I have the playback callback, but I just can't find anywhere how to play these buffers.
Callback:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
Now what?
How would I play that?
Have a look at this example; it helped me out a lot when trying to deal with the data. It is a working app that plays back anything you speak into the microphone. It uses two callbacks: one for recording the data and placing it in a global audio buffer, and a second for getting that data back out in the playback callback.
http://www.stefanpopp.de/2011/capture-iphone-microphone/
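The key point is that in the playback callback you fill ioData->mBuffers[i].mData yourself; whatever you write there is what gets played. A minimal sketch (the fetchRecordedSamples helper is hypothetical, standing in for however you stored the recorded data):

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        SInt16 *out = (SInt16 *)ioData->mBuffers[i].mData;
        UInt32 samplesNeeded = ioData->mBuffers[i].mDataByteSize / sizeof(SInt16);

        // hypothetical helper that copies previously recorded samples into out
        // and returns how many it actually had available
        UInt32 got = fetchRecordedSamples(out, samplesNeeded);

        // pad with silence if we ran out of recorded data
        for (UInt32 j = got; j < samplesNeeded; j++) {
            out[j] = 0;
        }
    }
    return noErr;
}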

Removing Last Buffer Played From Memory

I have an Objective-C audio app based on Audio Units, and my problem is that when I play a sound, stop it, and then play another sound, I hear the last buffer from the first sound. I think I should just free ioData or something, but all my attempts have failed.
static OSStatus playbackCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    RemoteIOPlayer *remoteIOplayer = (__bridge RemoteIOPlayer *)inRefCon;
    for (int i = 0; i < ioData->mNumberBuffers; i++)
    {
        AudioBuffer buffer = ioData->mBuffers[i];
        UInt32 *frameBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames; j++)
        {
            frameBuffer[j] = [[remoteIOplayer inMemoryAudioFile] getNextPacket:inBusNumber];
        }
    }
    return noErr;
}
Please help :) Thanks.
OK, it is just about uninitializing the AUGraph and initializing it again after the user stops.
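In other words, something along these lines when the user hits stop (a sketch, assuming the graph is kept in a variable such as processingGraph; re-initializing the graph discards the units' internal render buffers, so the stale audio is gone on the next start):

// hypothetical stop handler
- (void)stopPlayback
{
    AUGraphStop(processingGraph);          // stop rendering
    AUGraphUninitialize(processingGraph);  // tear down the units' internal buffers
    AUGraphInitialize(processingGraph);    // re-allocate fresh, empty buffers
    // call AUGraphStart(processingGraph) again when the next sound should play
}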

remoteIO callback function is giving an unstable buffer list

Every time the audio unit's callback function is called, I get a different number of samples, which is strange because NSLog(@"%ld", inNumberFrames); always gives me 512.
When I do this:
NSLog(@"%li", strlen((const char *)bufferList.mBuffers[0].mData));
I get numbers such as 50, 20, 19, 160, 200, 1, ...
which is strange.
On each callback I should get the full buffer of 512 samples, no?
I know I don't get all the needed samples, because if I input a 2 kHz sine I get about 600 zero crossings, and if I input nothing I get the same.
To retrieve the data I do:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    //NSLog(@"%ld", inNumberFrames);
    buffer.mData = malloc(inNumberFrames * 2);

    // Put the buffer in an AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    OSStatus status;
    status = AudioUnitRender(audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    //checkStatus(status);

    NSLog(@"%li", strlen((const char *)bufferList.mBuffers[0].mData));
    int16_t *q = (int16_t *)bufferList.mBuffers[0].mData;
    for (int i = 0; i < strlen((const char *)bufferList.mBuffers[0].mData); i++)
    {
        ....
Any help with this will save me days!
Thanks.
A strlen() of the audio sample buffer is not related to the number of samples. Instead, look at the number of frames given as the callback function's parameter.
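To spell that out: strlen() just scans for the first zero byte, and raw PCM audio is full of zero bytes (every silent or near-zero sample), so the numbers it returns are meaningless here. A minimal sketch of looping over the real sample count instead:

int16_t *q = (int16_t *)bufferList.mBuffers[0].mData;
for (UInt32 i = 0; i < inNumberFrames; i++)   // inNumberFrames, not strlen()
{
    // process q[i] - e.g. count zero crossings, copy it out, etc.
}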

core audio callback, inTimeStamp at the beginning or end of recording

In Core Audio, when the recordingCallback is called:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
does inTimeStamp refer to the time when the audio began to be received, or the time when the audio finished being received?
Is it
X, where X is the time when recording of the buffer began,
or
X + the buffer length?
Thank you,
nonono
The timestamp is for the time when the buffer was captured, specifically the bus time of the system (see this thread on the CoreAudio mailing list for details). So it would refer to the time in the first sample of the buffer, not the last sample.
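So, given a known sample rate, you can derive the end-of-buffer time from the start time (a sketch, assuming a 44.1 kHz stream; mSampleTime is the field of the AudioTimeStamp that corresponds to the first frame of the buffer):

Float64 sampleRate  = 44100.0;                                                    // assumed stream sample rate
Float64 bufferStart = inTimeStamp->mSampleTime / sampleRate;                      // time of the first frame, in seconds
Float64 bufferEnd   = (inTimeStamp->mSampleTime + inNumberFrames) / sampleRate;   // just past the last frame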