Tempo of MIDI File Using CoreMidi - objective-c

I was wondering how to find the tempo of a MIDI file using the CoreMIDI framework. As I understand it, the MusicSequence type is used for opening a MIDI file. It contains a number of tracks, including a tempo track of type MusicTrack. Upon inspecting MusicTrack, there doesn't seem to be any parameter or method for actually getting the tempo. I got the following code from this site:
- (void)openMidiFile {
    MusicSequence sequence;
    NewMusicSequence(&sequence);
    NSURL *midiFileURL = [[NSBundle mainBundle] URLForResource:@"bach-invention-01" withExtension:@"mid"];
    MusicSequenceFileLoad(sequence, (__bridge CFURLRef)midiFileURL, 0,
                          kMusicSequenceLoadSMF_ChannelsToTracks); // needs to change later

    MusicTrack tempoTrack;
    MusicSequenceGetTempoTrack(sequence, &tempoTrack);

    MusicEventIterator iterator;
    NewMusicEventIterator(tempoTrack, &iterator);

    Boolean hasNext = YES;
    MusicTimeStamp timestamp = 0;
    MusicEventType eventType = 0;
    const void *eventData = NULL;
    UInt32 eventDataSize = 0;

    // Run the loop
    MusicEventIteratorHasCurrentEvent(iterator, &hasNext);
    while (hasNext) {
        MusicEventIteratorGetEventInfo(iterator,
                                       &timestamp,
                                       &eventType,
                                       &eventData,
                                       &eventDataSize);
        // Process each event here
        printf("Event found! type: %d\n", (int)eventType); // tempo occurs when eventType is 3
        printf("Event data: %d\n", (int)eventData);        // data for tempo?

        MusicEventIteratorNextEvent(iterator);
        MusicEventIteratorHasCurrentEvent(iterator, &hasNext);
    }
}

Each eventType has a corresponding structure for its data, described in MusicPlayer.h.
You are probably looking for events of type kMusicEventType_ExtendedTempo, which will have data of type ExtendedTempoEvent, which is just:
/*!
    @struct     ExtendedTempoEvent
    @discussion Specifies the value for a tempo in beats per minute.
*/
typedef struct ExtendedTempoEvent
{
    Float64 bpm;
} ExtendedTempoEvent;
So your code might be:
MusicEventIteratorGetEventInfo(iterator,
                               &timestamp,
                               &eventType,
                               &eventData,
                               &eventDataSize);
if (eventType == kMusicEventType_ExtendedTempo &&
    eventDataSize == sizeof(ExtendedTempoEvent)) {
    ExtendedTempoEvent *tempoEvent = (ExtendedTempoEvent *)eventData;
    Float64 tempo = tempoEvent->bpm;
    NSLog(@"Tempo is %g", tempo);
}
Keep in mind: a MIDI file may have more than one tempo in it. You can use the event timestamps to find out when it changes tempo.

Related

CoreAudio AudioQueue callback function never called, no errors reported

I am trying to do a simple playback-from-a-file functionality, and it appears that my callback function is never called. It doesn't really make sense, because all of the OSStatus results come back 0 and the other numbers all appear correct as well (like the output packets read pointer from AudioFileReadPackets).
Here is the setup:
OSStatus stat;
stat = AudioFileOpenURL(
    (CFURLRef)urlpath, kAudioFileReadPermission, 0, &aStreamData->aFile
);
UInt32 dsze = 0;
stat = AudioFileGetPropertyInfo(
    aStreamData->aFile, kAudioFilePropertyDataFormat, &dsze, 0
);
stat = AudioFileGetProperty(
    aStreamData->aFile, kAudioFilePropertyDataFormat, &dsze, &aStreamData->aDescription
);
stat = AudioQueueNewOutput(
    &aStreamData->aDescription, bufferCallback, aStreamData, NULL, NULL, 0, &aStreamData->aQueue
);
aStreamData->pOffset = 0;
for (int i = 0; i < NUM_BUFFERS; i++) {
    stat = AudioQueueAllocateBuffer(
        aStreamData->aQueue, aStreamData->aDescription.mBytesPerPacket, &aStreamData->aBuffer[i]
    );
    bufferCallback(aStreamData, aStreamData->aQueue, aStreamData->aBuffer[i]);
}
stat = AudioQueuePrime(aStreamData->aQueue, 0, NULL);
stat = AudioQueueStart(aStreamData->aQueue, NULL);
(Not shown is where I'm checking the value of stat in between the functions, it just comes back normal.)
And the callback function:
void bufferCallback(void *uData, AudioQueueRef queue, AudioQueueBufferRef buffer) {
    UInt32 bread = 0;
    UInt32 pread = buffer->mAudioDataBytesCapacity / player->aStreamData->aDescription.mBytesPerPacket;
    OSStatus stat;
    stat = AudioFileReadPackets(
        player->aStreamData->aFile, false, &bread, NULL, player->aStreamData->pOffset, &pread, buffer->mAudioData
    );
    buffer->mAudioDataByteSize = bread;
    stat = AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    player->aStreamData->pOffset += pread;
}
Where aStreamData is my user data struct (typedefed so I can use it as a class property) and player is a static instance of the controlling Objective-C class. If any other code is wanted please let me know. I am a bit at my wit's end. Printing any of the numbers involved here yields the correct result, including functions in bufferCallback when I call it myself in the allocate loop. It just never gets called thereafter. The start up method returns and nothing happens.
Also, anecdotally, I am using a peripheral device (an MBox Pro 3) to play the sound, which CoreAudio only boots up when it is about to output; i.e. if I start iTunes or something, the speakers pop faintly and there is an LED that goes from blinking to solid. The device boots up like it does, so CA is definitely doing something. (Also, I've of course tried it with the onboard MacBook sound, sans the device.)
I've read other solutions to problems that sound similar and they don't work, such as using multiple buffers, which I am doing now and which doesn't appear to make any difference.
I basically assume I am doing something obviously wrong somehow but not sure what it could be. I've read the relevant documentation, looked at the available code examples and scoured the net a bit for answers and it appears that this is all I need to do and it should just go.
At the very least, is there anything else I can do to investigate?
My first answer was not good enough, so I compiled a minimal example that will play a 2 channel, 16 bit wave file.
The main difference from your code is that I made a property listener listening for play start and stop events.
As for your code, it seems legit at first glance. Two things I will point out, though:
1. It seems you are allocating buffers with TOO SMALL a buffer size. I have noticed that AudioQueues won't play if the buffers are too small, which seems to fit your problem.
2. Have you verified the properties returned?
Back to my code example:
Everything is hard coded, so it is not exactly good coding practice, but it shows how you can do it.
AudioStreamTest.h
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

uint32_t bufferSizeInSamples;
AudioFileID file;
UInt32 currentPacket;
AudioQueueRef audioQueue;
AudioQueueBufferRef buffer[3];
AudioStreamBasicDescription audioStreamBasicDescription;

@interface AudioStreamTest : NSObject

- (void)start;
- (void)stop;

@end
AudioStreamTest.m
#import "AudioStreamTest.h"
@implementation AudioStreamTest
- (id)init
{
    self = [super init];
    if (self) {
        bufferSizeInSamples = 441;
        file = NULL;
        currentPacket = 0;

        audioStreamBasicDescription.mBitsPerChannel = 16;
        audioStreamBasicDescription.mBytesPerFrame = 4;
        audioStreamBasicDescription.mBytesPerPacket = 4;
        audioStreamBasicDescription.mChannelsPerFrame = 2;
        audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
        audioStreamBasicDescription.mFramesPerPacket = 1;
        audioStreamBasicDescription.mReserved = 0;
        audioStreamBasicDescription.mSampleRate = 44100;
    }
    return self;
}
- (void)start {
    AudioQueueNewOutput(&audioStreamBasicDescription, AudioEngineOutputBufferCallback, (__bridge void *)(self), NULL, NULL, 0, &audioQueue);
    AudioQueueAddPropertyListener(audioQueue, kAudioQueueProperty_IsRunning, AudioEnginePropertyListenerProc, NULL);
    AudioQueueStart(audioQueue, NULL);
}

- (void)stop {
    AudioQueueStop(audioQueue, YES);
    AudioQueueRemovePropertyListener(audioQueue, kAudioQueueProperty_IsRunning, AudioEnginePropertyListenerProc, NULL);
}

void AudioEngineOutputBufferCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    if (file == NULL) return;

    UInt32 bytesRead = bufferSizeInSamples * 4;
    UInt32 packetsRead = bufferSizeInSamples;
    AudioFileReadPacketData(file, false, &bytesRead, NULL, currentPacket, &packetsRead, inBuffer->mAudioData);
    inBuffer->mAudioDataByteSize = bytesRead;
    currentPacket += packetsRead;

    if (bytesRead == 0) {
        AudioQueueStop(inAQ, false);
    }
    else {
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }
}
void AudioEnginePropertyListenerProc(void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID) {
    //We are only interested in the property kAudioQueueProperty_IsRunning
    if (inID != kAudioQueueProperty_IsRunning) return;

    //Get the status of the property
    UInt32 isRunning = false;
    UInt32 size = sizeof(isRunning);
    AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);

    if (isRunning) {
        currentPacket = 0;

        NSString *fileName = @"/Users/roy/Documents/XCodeProjectsData/FUZZ/03.wav";
        NSURL *fileURL = [[NSURL alloc] initFileURLWithPath:fileName];
        AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, 0, &file);

        for (int i = 0; i < 3; i++) {
            AudioQueueAllocateBuffer(audioQueue, bufferSizeInSamples * 4, &buffer[i]);

            UInt32 bytesRead = bufferSizeInSamples * 4;
            UInt32 packetsRead = bufferSizeInSamples;
            AudioFileReadPacketData(file, false, &bytesRead, NULL, currentPacket, &packetsRead, buffer[i]->mAudioData);
            buffer[i]->mAudioDataByteSize = bytesRead;
            currentPacket += packetsRead;

            AudioQueueEnqueueBuffer(audioQueue, buffer[i], 0, NULL);
        }
    }
    else {
        if (file != NULL) {
            AudioFileClose(file);
            file = NULL;

            for (int i = 0; i < 3; i++) {
                AudioQueueFreeBuffer(audioQueue, buffer[i]);
                buffer[i] = NULL;
            }
        }
    }
}
- (void)dealloc {
    AudioQueueDispose(audioQueue, true);
    audioQueue = NULL;
    [super dealloc];
}

@end
Lastly, I want to include some research I have done today to test the robustness of AudioQueues.
I have noticed that if you make the AudioQueue buffers too small, it won't play at all. That made me play around a bit to see why it is not playing.
If I try a buffer size that can hold only 150 samples, I get no sound at all.
If I try a buffer size that can hold 175 samples, it plays the whole song through, but with a lot of distortion. 175 samples amounts to a tad less than 4 ms of audio.
AudioQueue keeps asking for new buffers as long as you keep supplying buffers. That is regardless of AudioQueue actually playing your buffers or not.
If you supply a buffer with size 0, the buffer will be lost and an error kAudioQueueErr_BufferEmpty is returned for that queue enqueue request. You will never see AudioQueue ask you to fill that buffer again. If this happened for the last queue you have posted, AudioQueue will stop asking you to fill any more buffers. In that case you will not hear any more audio for that session.
To see why AudioQueue is not playing anything with smaller buffer sizes, I made a test to see if my callback is called at all even when there is no sound. The answer is that the callback gets called all the time, as long as AudioQueue is playing and needs data.
So if you keep feeding buffers to the queue, no buffer is ever lost. It doesn't happen. Unless there is an error, of course.
So why is no sound playing?
I tested to see if 'AudioQueueEnqueueBuffer()' returned any errors. It did not. No other errors within my play routine either. The data returned from reading from file is also good.
Everything is normal, buffers are good, data re-enqueued is good, there is just no sound.
So my last test was to slowly increase buffer size till I could hear anything. I finally heard faint and sporadic distortion.
Then it came to me...
It seems that the problem is that the system tries to keep the stream in sync with time, so if you enqueue audio and the time for the audio you wanted to play has passed, it will just skip that part of the buffer. If the buffer size becomes too small, more and more data is dropped or skipped until the audio system is in sync again, which is never if the buffer size is too small. (You can hear this as distortion if you choose a buffer size that is barely large enough to support continuous play.)
If you think about it, it is the only way the audio queue can work, but it is a good realisation when you are clueless like me and "discover" how it really works.
I decided to take a look at this again and was able to solve it by making the buffers larger. I've accepted the answer by @RoyGal since it was their suggestion, but I wanted to provide the actual code that works, since I guess others are having the same problem (the question has a few favorites that aren't me at the moment).
One thing I tried was making the packet size larger:
aData->aDescription.mFramesPerPacket = 512; // or some other number
aData->aDescription.mBytesPerPacket = (
    aData->aDescription.mFramesPerPacket * aData->aDescription.mBytesPerFrame
);
This does NOT work: it causes AudioQueuePrime to fail with an AudioConverterNew returned -50 message. I guess it wants mFramesPerPacket to be 1 for PCM.
(I also tried setting the kAudioQueueProperty_DecodeBufferSizeFrames property which didn't seem to do anything. Not sure what it's for.)
The solution seems to be to only allocate the buffer(s) with the specified size:
AudioQueueAllocateBuffer(
    aData->aQueue,
    aData->aDescription.mBytesPerPacket * N_BUFFER_PACKETS / N_BUFFERS,
    &aData->aBuffer[i]
);
And the size has to be sufficiently large. I found the magic number is:
mBytesPerPacket * 1024 / N_BUFFERS
(Where N_BUFFERS is the number of buffers and should be > 1 or playback is choppy.)
Here is an MCVE demonstrating the issue and solution:
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AudioToolbox/AudioQueue.h>
#import <AudioToolbox/AudioFile.h>

#define N_BUFFERS 2
#define N_BUFFER_PACKETS 1024

typedef struct AStreamData {
    AudioFileID aFile;
    AudioQueueRef aQueue;
    AudioQueueBufferRef aBuffer[N_BUFFERS];
    AudioStreamBasicDescription aDescription;
    SInt64 pOffset;
    volatile BOOL isRunning;
} AStreamData;
void printASBD(AudioStreamBasicDescription* desc) {
    printf("mSampleRate = %d\n", (int)desc->mSampleRate);
    printf("mBytesPerPacket = %d\n", desc->mBytesPerPacket);
    printf("mFramesPerPacket = %d\n", desc->mFramesPerPacket);
    printf("mBytesPerFrame = %d\n", desc->mBytesPerFrame);
    printf("mChannelsPerFrame = %d\n", desc->mChannelsPerFrame);
    printf("mBitsPerChannel = %d\n", desc->mBitsPerChannel);
}
void bufferCallback(
    void *vData, AudioQueueRef aQueue, AudioQueueBufferRef aBuffer
) {
    AStreamData* aData = (AStreamData*)vData;
    UInt32 bRead = 0;
    UInt32 pRead = (
        aBuffer->mAudioDataBytesCapacity / aData->aDescription.mBytesPerPacket
    );
    OSStatus stat;
    stat = AudioFileReadPackets(
        aData->aFile, false, &bRead, NULL, aData->pOffset, &pRead, aBuffer->mAudioData
    );
    if(stat != 0) {
        printf("AudioFileReadPackets returned %d\n", stat);
    }
    if(pRead == 0) {
        aData->isRunning = NO;
        return;
    }
    aBuffer->mAudioDataByteSize = bRead;
    stat = AudioQueueEnqueueBuffer(aQueue, aBuffer, 0, NULL);
    if(stat != 0) {
        printf("AudioQueueEnqueueBuffer returned %d\n", stat);
    }
    aData->pOffset += pRead;
}
AStreamData* beginPlayback(NSURL* path) {
    static AStreamData* aData;
    aData = malloc(sizeof(AStreamData));

    OSStatus stat;
    stat = AudioFileOpenURL(
        (CFURLRef)path, kAudioFileReadPermission, 0, &aData->aFile
    );
    printf("AudioFileOpenURL returned %d\n", stat);

    UInt32 dSize = 0;
    stat = AudioFileGetPropertyInfo(
        aData->aFile, kAudioFilePropertyDataFormat, &dSize, 0
    );
    printf("AudioFileGetPropertyInfo returned %d\n", stat);

    stat = AudioFileGetProperty(
        aData->aFile, kAudioFilePropertyDataFormat, &dSize, &aData->aDescription
    );
    printf("AudioFileGetProperty returned %d\n", stat);
    printASBD(&aData->aDescription);

    stat = AudioQueueNewOutput(
        &aData->aDescription, bufferCallback, aData, NULL, NULL, 0, &aData->aQueue
    );
    printf("AudioQueueNewOutput returned %d\n", stat);

    aData->pOffset = 0;
    for(int i = 0; i < N_BUFFERS; i++) {
        // change YES to NO for stale playback
        if(YES) {
            stat = AudioQueueAllocateBuffer(
                aData->aQueue,
                aData->aDescription.mBytesPerPacket * N_BUFFER_PACKETS / N_BUFFERS,
                &aData->aBuffer[i]
            );
        } else {
            stat = AudioQueueAllocateBuffer(
                aData->aQueue,
                aData->aDescription.mBytesPerPacket,
                &aData->aBuffer[i]
            );
        }
        printf(
            "AudioQueueAllocateBuffer returned %d for aBuffer[%d] with capacity %d\n",
            stat, i, aData->aBuffer[i]->mAudioDataBytesCapacity
        );
        bufferCallback(aData, aData->aQueue, aData->aBuffer[i]);
    }

    UInt32 numFramesPrepared = 0;
    stat = AudioQueuePrime(aData->aQueue, 0, &numFramesPrepared);
    printf("AudioQueuePrime returned %d with %d frames prepared\n", stat, numFramesPrepared);

    stat = AudioQueueStart(aData->aQueue, NULL);
    printf("AudioQueueStart returned %d\n", stat);

    UInt32 pSize = sizeof(UInt32);
    UInt32 isRunning;
    stat = AudioQueueGetProperty(
        aData->aQueue, kAudioQueueProperty_IsRunning, &isRunning, &pSize
    );
    printf("AudioQueueGetProperty returned %d\n", stat);
    aData->isRunning = !!isRunning;

    return aData;
}
void endPlayback(AStreamData* aData) {
    OSStatus stat = AudioQueueStop(aData->aQueue, NO);
    printf("AudioQueueStop returned %d\n", stat);
}

NSString* getPath() {
    // change NO to YES and enter path to hard code
    if(NO) {
        return @"";
    }
    char input[512];
    printf("Enter file path: ");
    scanf("%[^\n]", input);
    return [[NSString alloc] initWithCString:input encoding:NSASCIIStringEncoding];
}
int main(int argc, const char* argv[]) {
    NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
    NSURL* path = [NSURL fileURLWithPath:getPath()];
    AStreamData* aData = beginPlayback(path);
    if(aData->isRunning) {
        do {
            printf("Queue is running...\n");
            [NSThread sleepForTimeInterval:1.0];
        } while(aData->isRunning);
        endPlayback(aData);
    } else {
        printf("Playback did not start\n");
    }
    [pool drain];
    return 0;
}

MIDISend: to play a musical note on iPhone

I am trying to generate a musical note that will play through the iPhone speakers using Objective-C and MIDI. I have the code below but it is not doing anything. What am I doing wrong?
MIDIPacketList packetList;
packetList.numPackets = 1;
MIDIPacket* firstPacket = &packetList.packet[0];
firstPacket->timeStamp = 0; // send immediately
firstPacket->length = 3;
firstPacket->data[0] = 0x90;
firstPacket->data[1] = 80;
firstPacket->data[2] = 120;
MIDIPacketList pklt = packetList;
MIDISend(MIDIGetSource(0), MIDIGetDestination(0), &pklt);
You've got three problems:
1. Declaring a MIDIPacketList doesn't allocate memory or initialize the structure.
2. You're passing the result of MIDIGetSource (which returns a MIDIEndpointRef) as the first parameter to MIDISend, where it is expecting a MIDIPortRef instead. (You probably ignored a compiler warning about this. Never ignore compiler warnings.)
3. Sending a MIDI note in iOS doesn't make any sound. If you don't have an external MIDI device connected to your iOS device, you need to set up something with CoreAudio that will generate sounds. That's beyond the scope of this answer.
So this code will run, but it won't make any sounds unless you've got external hardware:
// Look to see if there's anything that will actually play MIDI notes
NSLog(@"There are %lu destinations", MIDIGetNumberOfDestinations());

// Prepare MIDI Interface Client/Port for writing MIDI data:
MIDIClientRef midiclient = 0;
MIDIPortRef midiout = 0;
OSStatus status;
status = MIDIClientCreate(CFSTR("Test client"), NULL, NULL, &midiclient);
if (status) {
    NSLog(@"Error trying to create MIDI Client structure: %d", (int)status);
}
status = MIDIOutputPortCreate(midiclient, CFSTR("Test port"), &midiout);
if (status) {
    NSLog(@"Error trying to create MIDI output port: %d", (int)status);
}

Byte buffer[128];
MIDIPacketList *packetlist = (MIDIPacketList *)buffer;
MIDIPacket *currentpacket = MIDIPacketListInit(packetlist);
NSInteger messageSize = 3; // Note On is a three-byte message
Byte msg[3] = {0x90, 80, 120};
MIDITimeStamp timestamp = 0;
currentpacket = MIDIPacketListAdd(packetlist, sizeof(buffer), currentpacket, timestamp, messageSize, msg);
MIDISend(midiout, MIDIGetDestination(0), packetlist);

CoreMIDI track accessing time signature information

I'm designing a rhythm game which is driven by a MIDI track. The MIDI messages trigger the release of on-screen elements. I am loading the MIDI data from a file and then playing it using a MusicSequence and MusicPlayer.
I understand that MIDI files contain time and key signature information as meta messages at the start of the file. However I've not found a way to retrieve this information from either the MusicPlayer or the MusicSequence.
The information I need is the number of seconds it takes to play a quaver, crotchet, etc. I would expect this to be affected by the time signature and the MusicPlayer play-rate scalar value.
It looks like this information can be found in the CoreAudio clock, but I've not been able to work out how this is accessed for a particular music sequence.
Are there any CoreAudio experts out there who know how to do this?
You need to get the tempo track of the midi file and then iterate through it to get the tempo(s).
To get the sequence length you need to find the longest track:
- (MusicTimeStamp)getSequenceLength:(MusicSequence)sequence {
    UInt32 tracks;
    MusicTimeStamp len = 0.0f;
    if (MusicSequenceGetTrackCount(sequence, &tracks) != noErr)
        return len;
    for (UInt32 i = 0; i < tracks; i++) {
        MusicTrack track = NULL;
        MusicTimeStamp trackLen = 0;
        UInt32 trackLenLen = sizeof(trackLen);
        MusicSequenceGetIndTrack(sequence, i, &track);
        MusicTrackGetProperty(track, kSequenceTrackProperty_TrackLength, &trackLen, &trackLenLen);
        if (len < trackLen)
            len = trackLen;
    }
    return len;
}
// - get the tempo track:
OSStatus result = noErr;
MusicTrack tempoTrack;
result = MusicSequenceGetTempoTrack(sequence, &tempoTrack);
if (noErr != result) { [self printErrorMessage:@"MusicSequenceGetTempoTrack" withStatus:result]; }

MusicEventIterator iterator = NULL;
NewMusicEventIterator(tempoTrack, &iterator);

MusicTimeStamp timestamp = 0;
MusicEventType eventType = 0;
const void *eventData = NULL;
UInt32 eventDataSize = 0;
MusicEventIteratorGetEventInfo(iterator, &timestamp, &eventType, &eventData, &eventDataSize);

How to get array of float audio data from AudioQueueRef in iOS?

I'm working on getting audio into the iPhone in a form where I can pass it to a (C++) analysis algorithm. There are, of course, many options: the AudioQueue tutorial at trailsinthesand gets things started.
The audio callback, though, gives an AudioQueueRef, and I'm finding Apple's documentation thin on this side of things. There are built-in methods to write to a file, but nothing where you actually peer inside the packets to see the data.
I need data. I don't want to write anything to a file, which is what all the tutorials — and even Apple's convenience I/O objects — seem to be aiming at. Apple's AVAudioRecorder (infuriatingly) will give you levels and write the data, but not actually give you access to it. Unless I'm missing something...
How to do this? In the code below there is inBuffer->mAudioData which is tantalizingly close but I can find no information about what format this 'data' is in or how to access it.
AudioQueue Callback:
void AudioInputCallback(void *inUserData,
                        AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime,
                        UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    static int count = 0;
    RecordState *recordState = (RecordState *)inUserData;
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
    ++count;
    printf("Got buffer %d\n", count);
}
And the code to write the audio to a file:
OSStatus status = AudioFileWritePackets(recordState->audioFile,
                                        false,
                                        inBuffer->mAudioDataByteSize,
                                        inPacketDescs,
                                        recordState->currentPacket,
                                        &inNumberPacketDescriptions,
                                        inBuffer->mAudioData); // THIS! This is what I want to look inside of.
if (status == 0)
{
    recordState->currentPacket += inNumberPacketDescriptions;
}
// so you don't have to hunt them all down when you decide to switch to float:
#define AUDIO_DATA_TYPE_FORMAT SInt16
// the actual sample-grabbing code:
int sampleCount = inBuffer->mAudioDataBytesCapacity / sizeof(AUDIO_DATA_TYPE_FORMAT);
AUDIO_DATA_TYPE_FORMAT *samples = (AUDIO_DATA_TYPE_FORMAT*)inBuffer->mAudioData;
Then you have your (in this case SInt16) array samples which you can access from samples[0] to samples[sampleCount-1].
The above solution did not work for me; I was getting the wrong sample data itself (an endian issue). In case someone is getting wrong sample data in the future, I hope this helps you:
- (void)feedSamplesToEngine:(UInt32)audioDataBytesCapacity audioData:(void *)audioData {
    int sampleCount = audioDataBytesCapacity / sizeof(SAMPLE_TYPE);
    SAMPLE_TYPE *samples = (SAMPLE_TYPE *)audioData;
    //SAMPLE_TYPE *sample_le = (SAMPLE_TYPE *)malloc(sizeof(SAMPLE_TYPE) * sampleCount); // for swapping endians
    std::string shorts;
    double power = pow(2, 10);
    for (int i = 0; i < sampleCount; i++)
    {
        SAMPLE_TYPE sample_le = (0xff00 & (samples[i] << 8)) | (0x00ff & (samples[i] >> 8)); // endianness issue
        char dataInterim[30];
        sprintf(dataInterim, "%f ", sample_le / power); // normalize it
        shorts.append(dataInterim);
    }
}

Triggering UI code from Audio Unit Render Proc on iOS

I have a Multichannel Mixer audio unit playing back audio files in an iOS app, and I need to figure out how to update the app's UI and perform a reset when the render callback hits the end of the longest audio file (which is set up to run on bus 0). As my code below shows, I am trying to use KVO to achieve this, using the boolean variable tapesUnderway (the NSAutoreleasePool is necessary as this Objective-C code is running outside of its normal domain, see http://www.cocoabuilder.com/archive/cocoa/57412-nscfnumber-no-pool-in-place-just-leaking.html).
static OSStatus tapesRenderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    SoundBufferPtr sndbuf = (SoundBufferPtr)inRefCon;
    UInt32 bufferFrames = sndbuf[inBusNumber].numFrames;
    AudioUnitSampleType *in = sndbuf[inBusNumber].data;

    // These mBuffers are the output buffers and are empty; these two lines are just setting the references to them (via outA and outB)
    AudioUnitSampleType *outA = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
    AudioUnitSampleType *outB = (AudioUnitSampleType *)ioData->mBuffers[1].mData;

    UInt32 sample = sndbuf[inBusNumber].sampleNum;

    // --------------------------------------------------------------
    // Set the start time here
    if (inBusNumber == 0 && !tapesFirstRenderPast)
    {
        printf("Tapes first render past\n");
        tapesStartSample = inTimeStamp->mSampleTime;
        tapesFirstRenderPast = YES; // MAKE SURE TO RESET THIS ON SONG RESTART
        firstPauseSample = tapesStartSample;
    }

    // --------------------------------------------------------------
    // Now process the samples
    for (UInt32 i = 0; i < inNumberFrames; ++i)
    {
        if (inBusNumber == 0)
        {
            // ------------------------------------------------------
            // Bus 0 is the backing track, and is always playing back
            outA[i] = in[sample++];
            outB[i] = in[sample++]; // For stereo set desc.SetAUCanonical to (2, true) and increment samples in both output calls

            lastSample = inTimeStamp->mSampleTime + (Float64)i; // Set the last played sample in order to compensate for pauses

            // ------------------------------------------------------
            // Use this logic to mark end of tune
            if (sample >= (bufferFrames * 2) && !tapesEndPast)
            {
                // USE KVO TO NOTIFY METHOD OF VALUE CHANGE
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                FuturesEPMedia *futuresMedia = [FuturesEPMedia sharedFuturesEPMedia];
                NSNumber *boolNo = [[NSNumber alloc] initWithBool:NO];
                [futuresMedia setValue:boolNo forKey:@"tapesUnderway"];
                [boolNo release];
                [pool release];
                tapesEndPast = YES;
            }
        }
        else
        {
            // ------------------------------------------------------
            // The other buses are the open sections, and are synched through the tapesSectionTimes array
            Float64 sectionTime = tapesSectionTimes[inBusNumber] * kGraphSampleRate; // Section time in samples
            Float64 currentSample = inTimeStamp->mSampleTime + (Float64)i;

            if (!isPaused && !playFirstRenderPast)
            {
                pauseGap += currentSample - firstPauseSample;
                playFirstRenderPast = YES;
                pauseFirstRenderPast = NO;
            }

            if (currentSample > (tapesStartSample + sectionTime + pauseGap) && sample < (bufferFrames * 2))
            {
                outA[i] = in[sample++];
                outB[i] = in[sample++];
            }
            else
            {
                outA[i] = 0;
                outB[i] = 0;
            }
        }
    }
    sndbuf[inBusNumber].sampleNum = sample;
    return noErr;
}
At the moment when this variable is changed it triggers a method in self, but this leads to an unacceptable delay (20-30 seconds) when executed from this render callback (I am thinking because it is Objective-C code running in the high priority audio thread?). How do I effectively trigger such a change without the delay? (The trigger will change a pause button to a play button and call a reset method to prepare for the next play.)
Thanks
Yes. Don't use Objective-C code in the render thread, since it's high priority. Instead, store state in memory (a pointer or a struct), then set up a timer on the main thread to poll (check) the value(s) in memory. The timer need not be anywhere near as fast as the render thread and will be very accurate.
Try this.
Global :
BOOL FlgTotalSampleTimeCollected = NO;
Float64 HigestSampleTime = 0;
Float64 TotalSampleTime = 0;
In -(OSStatus)setUpAUFilePlayer:
AudioStreamBasicDescription fileASBD;
// get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyDataFormat,
                                &propSize, &fileASBD),
           "couldn't get file's data format");

UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyAudioDataPacketCount,
                                &propsize, &nPackets),
           "AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");

Float64 sTime = nPackets * fileASBD.mFramesPerPacket;
if (HigestSampleTime < sTime)
{
    HigestSampleTime = sTime;
}
In the render callback:
if (*actionFlags & kAudioUnitRenderAction_PreRender)
{
    if (!THIS->FlgTotalSampleTimeCollected)
    {
        [THIS setFlgTotalSampleTimeCollected:TRUE];
        [THIS setTotalSampleTime:(inTimeStamp->mSampleTime + THIS->HigestSampleTime)];
    }
}
else if (*actionFlags & kAudioUnitRenderAction_PostRender)
{
    if (inTimeStamp->mSampleTime > THIS->TotalSampleTime)
    {
        NSLog(@"inTimeStamp->mSampleTime: %f", inTimeStamp->mSampleTime);
        NSLog(@"audio completed");
        [THIS callAudioCompletedMethodHere];
    }
}
This worked for me.
Test on a device.