MIDISend: playing a musical note on iPhone (Objective-C)

I am trying to generate a musical note that will play through the iPhone speakers using Objective-C and MIDI. I have the code below but it is not doing anything. What am I doing wrong?
MIDIPacketList packetList;
packetList.numPackets = 1;
MIDIPacket* firstPacket = &packetList.packet[0];
firstPacket->timeStamp = 0; // send immediately
firstPacket->length = 3;
firstPacket->data[0] = 0x90;
firstPacket->data[1] = 80;
firstPacket->data[2] = 120;
MIDIPacketList pklt=packetList;
MIDISend(MIDIGetSource(0), MIDIGetDestination(0), &pklt);

You've got three problems:
1. Declaring a MIDIPacketList doesn't allocate memory or initialize the structure.
2. You're passing the result of MIDIGetSource (which returns a MIDIEndpointRef) as the first parameter to MIDISend, where it is expecting a MIDIPortRef instead. (You probably ignored a compiler warning about this. Never ignore compiler warnings.)
3. Sending a MIDI note in iOS doesn't make any sound by itself. If you don't have an external MIDI device connected to your iOS device, you need to set up something with CoreAudio that will generate sounds. That's beyond the scope of this answer.
So this code will run, but it won't make any sounds unless you've got external hardware:
//Look to see if there's anything that will actually play MIDI notes
NSLog(#"There are %lu destinations", MIDIGetNumberOfDestinations());
// Prepare MIDI Interface Client/Port for writing MIDI data:
MIDIClientRef midiclient = 0;
MIDIPortRef midiout = 0;
OSStatus status;
status = MIDIClientCreate(CFSTR("Test client"), NULL, NULL, &midiclient);
if (status) {
    NSLog(@"Error trying to create MIDI Client structure: %d", (int)status);
}
status = MIDIOutputPortCreate(midiclient, CFSTR("Test port"), &midiout);
if (status) {
    NSLog(@"Error trying to create MIDI output port: %d", (int)status);
}
Byte buffer[128];
MIDIPacketList *packetlist = (MIDIPacketList *)buffer;
MIDIPacket *currentpacket = MIDIPacketListInit(packetlist);
NSInteger messageSize = 3; //Note On is a three-byte message
Byte msg[3] = {0x90, 80, 120};
MIDITimeStamp timestamp = 0;
currentpacket = MIDIPacketListAdd(packetlist, sizeof(buffer), currentpacket, timestamp, messageSize, msg);
MIDISend(midiout, MIDIGetDestination(0), packetlist);
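If the goal is to actually hear the note on the device itself, one option (not covered above, so treat this as my own illustration) is to drive a MIDI-capable audio unit directly instead of sending CoreMIDI packets. A minimal sketch, assuming iOS with AudioToolbox linked and an active audio session; error checking omitted, and note the AUGraph API has since been deprecated in favour of AVAudioEngine:
// Build a graph: Apple Sampler (a MusicDevice) -> RemoteIO output.
// (An active AVAudioSession is also needed on iOS; omitted here.)
AUGraph graph;
AUNode samplerNode, ioNode;
AudioUnit samplerUnit;
NewAUGraph(&graph);
AudioComponentDescription samplerDesc = { kAudioUnitType_MusicDevice,
    kAudioUnitSubType_Sampler, kAudioUnitManufacturer_Apple, 0, 0 };
AudioComponentDescription ioDesc = { kAudioUnitType_Output,
    kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple, 0, 0 };
AUGraphAddNode(graph, &samplerDesc, &samplerNode);
AUGraphAddNode(graph, &ioDesc, &ioNode);
AUGraphOpen(graph);
AUGraphConnectNodeInput(graph, samplerNode, 0, ioNode, 0);
AUGraphNodeInfo(graph, samplerNode, NULL, &samplerUnit);
AUGraphInitialize(graph);
AUGraphStart(graph);
// Same Note On as before, but sent straight into the sampler, so it is audible:
MusicDeviceMIDIEvent(samplerUnit, 0x90, 80, 120, 0);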

How do I create an Inter App MIDI In port

I want to program an inter-app MIDI In port in my Arranger app that can be accessed by other MIDI apps. I would very much appreciate some sample code. I built a virtual MIDI In port like this, but how do I make it visible to other apps:
MIDIClientRef virtualMidi;
result = MIDIClientCreate(CFSTR("Virtual Client"), MyMIDINotifyProc, NULL, &virtualMidi);
You need to use MIDIDestinationCreate, which creates an endpoint that will be visible to other MIDI clients. You need to provide a MIDIReadProc callback that will be notified when a MIDI event arrives at your MIDI destination. You may create another MIDI input port as well, with the same callback, which you can connect from within your own program to an external MIDI source.
Here is an example (in C++):
void internalCreate(CFStringRef name)
{
OSStatus result = noErr;
result = MIDIClientCreate( name , nullptr, nullptr, &m_client );
if (result != noErr) {
qDebug() << "MIDIClientCreate() err:" << result;
return;
}
result = MIDIDestinationCreate ( m_client, name, MacMIDIReadProc, (void*) this, &m_endpoint );
if (result != noErr) {
qDebug() << "MIDIDestinationCreate() err:" << result;
return;
}
result = MIDIInputPortCreate( m_client, name, MacMIDIReadProc, (void *) this, &m_port );
if (result != noErr) {
qDebug() << "MIDIInputPortCreate() error:" << result;
return;
}
}
Another example, in Objective-C, from symplesynth:
- (id)initWithName:(NSString*)newName
{
PYMIDIManager* manager = [PYMIDIManager sharedInstance];
MIDIEndpointRef newEndpoint;
OSStatus error;
SInt32 newUniqueID;
// This makes sure that we don't get notified about this endpoint until after
// we're done creating it.
[manager disableNotifications];
MIDIDestinationCreate ([manager midiClientRef], (CFStringRef)newName, midiReadProc, self, &newEndpoint);
// This code works around a bug in OS X 10.1 that causes
// new sources/destinations to be created without unique IDs.
error = MIDIObjectGetIntegerProperty (newEndpoint, kMIDIPropertyUniqueID, &newUniqueID);
if (error == kMIDIUnknownProperty) {
newUniqueID = PYMIDIAllocateUniqueID();
MIDIObjectSetIntegerProperty (newEndpoint, kMIDIPropertyUniqueID, newUniqueID);
}
MIDIObjectSetIntegerProperty (newEndpoint, CFSTR("PYMIDIOwnerPID"), [[NSProcessInfo processInfo] processIdentifier]);
[manager enableNotifications];
self = [super initWithMIDIEndpointRef:newEndpoint];
ioIsRunning = NO;
return self;
}
Ports can't be discovered from the API, but sources and destinations can. You want to create a MIDISource or MIDIDestination so that MIDI clients can call MIDIGetNumberOfDestinations/MIDIGetDestination or MIDIGetNumberOfSources/MIDIGetSource and discover it.
FYI, there is no need to do what you are planning to do on macOS because the IAC driver already does it. If this is for iOS, these are the steps to follow:
1. Create at least one MIDI client.
2. Create a MIDIInputPort with a read block for I/O.
3. Use MIDIPortConnectSource to attach the input port to every MIDI source of interest. From then on, every MIDI message received by those sources will arrive at your read block.
4. If you want to resend this data to a different destination, you'll also need to have created a MIDIOutputPort. Use MIDISend with that port to the desired MIDI destination. (A sketch of these steps follows below.)
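A rough sketch of those steps (my own illustration, not code from the answer; it assumes iOS 9 or later for the block-based CoreMIDI calls, and the client/port names are placeholders):
#include <CoreMIDI/CoreMIDI.h>
MIDIClientRef client = 0;
MIDIPortRef inPort = 0, outPort = 0;
MIDIEndpointRef virtualDest = 0;
MIDIClientCreate(CFSTR("Arranger"), NULL, NULL, &client);
// 1 & 2. A virtual destination other apps can discover and send to,
// plus an input port for connecting to existing sources.
MIDIDestinationCreateWithBlock(client, CFSTR("Arranger In"), &virtualDest,
    ^(const MIDIPacketList *pktlist, void *srcConnRefCon) {
        // handle MIDI arriving from other apps here
    });
MIDIInputPortCreateWithBlock(client, CFSTR("Arranger input port"), &inPort,
    ^(const MIDIPacketList *pktlist, void *srcConnRefCon) {
        // handle MIDI arriving from connected sources here
    });
// 3. Attach the input port to every MIDI source of interest.
for (ItemCount i = 0; i < MIDIGetNumberOfSources(); i++) {
    MIDIPortConnectSource(inPort, MIDIGetSource(i), NULL);
}
// 4. An output port, if the data should be forwarded somewhere else.
MIDIOutputPortCreate(client, CFSTR("Arranger output port"), &outPort);
// ... later: MIDISend(outPort, MIDIGetDestination(0), somePacketList);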

Webm (VP8 / Opus) file read and write back

I am trying to develop a WebRTC simulator in C/C++. For media handling, I plan to use libav. I am thinking of the steps below to realize media exchange between two WebRTC simulators. Say I have two simulators, A and B:
1. Read media at A from an input webm file using the av_read_frame API. I assume I will get the encoded media (audio/video) data; am I correct here?
2. Send the encoded media data to simulator B over a UDP socket.
3. Simulator B receives the media data on the UDP socket as RTP packets.
4. Simulator B extracts the audio/video data from the just-received RTP packet. I assume the extracted media data at simulator B is still the encoded data only (am I correct here?). I do not want to decode it; I want to write it to a file. Later I will play the file to check that I have done everything right.
To simplify the problem, let's take the UDP socket part out. Then my question reduces to: read data from a webm input file, get the encoded media, prepare the packet, and write it to an output file using av_interleaved_write_frame or any other appropriate API. I want to do all of this using libav.
Is there any example code I can refer to, or can somebody please guide me in developing it?
I am trying with a test program. As a first step, my aim is to read from a file and write to an output file. I have the code below, but it is not working properly.
//#define _AUDIO_WRITE_ENABLED_
#include "libavutil/imgutils.h"
#include "libavutil/samplefmt.h"
#include "libavformat/avformat.h"
static AVPacket pkt;
static AVFormatContext *fmt_ctx = NULL;
static AVFormatContext *av_format_context = NULL;
static AVOutputFormat *av_output_format = NULL;
static AVCodec *video_codec = NULL;
static AVStream *video_stream = NULL;
static AVCodec *audio_codec = NULL;
static AVStream *audio_stream = NULL;
static const char *src_filename = NULL;
static const char *dst_filename = NULL;
int main (int argc, char **argv)
{
int ret = 0;
int index = 0;
if (argc != 3)
{
printf("Usage: ./webm input_video_file output_video_file \n");
exit(0);
}
src_filename = argv[1];
dst_filename = argv[2];
printf("Source file = %s , Destination file = %s\n", src_filename, dst_filename);
av_register_all();
/* open input file, and allocate format context */
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0)
{
fprintf(stderr, "Could not open source file %s\n", src_filename);
exit(1);
}
/* retrieve stream information */
if (avformat_find_stream_info(fmt_ctx, NULL) < 0)
{
fprintf(stderr, "Could not find stream information\n");
exit(2);
}
av_output_format = av_guess_format(NULL, dst_filename, NULL);
if(!av_output_format)
{
fprintf(stderr, "Could not guess output file format\n");
exit(3);
}
av_output_format->audio_codec = AV_CODEC_ID_VORBIS;
av_output_format->video_codec = AV_CODEC_ID_VP8;
av_format_context = avformat_alloc_context();
if(!av_format_context)
{
fprintf(stderr, "Could not allocation av format context\n");
exit(4);
}
av_format_context->oformat = av_output_format;
strcpy(av_format_context->filename, dst_filename);
video_codec = avcodec_find_encoder(av_output_format->video_codec);
if (!video_codec)
{
fprintf(stderr, "Codec not found\n");
exit(5);
}
video_stream = avformat_new_stream(av_format_context, video_codec);
if (!video_stream)
{
fprintf(stderr, "Could not alloc stream\n");
exit(6);
}
avcodec_get_context_defaults3(video_stream->codec, video_codec);
video_stream->codec->codec_id = AV_CODEC_ID_VP8;
video_stream->codec->codec_type = AVMEDIA_TYPE_VIDEO;
video_stream->time_base = (AVRational) {1, 30};
video_stream->codec->width = 640;
video_stream->codec->height = 480;
video_stream->codec->pix_fmt = PIX_FMT_YUV420P;
video_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
video_stream->codec->bit_rate = 400000;
video_stream->codec->gop_size = 10;
video_stream->codec->max_b_frames=1;
#ifdef _AUDIO_WRITE_ENABLED_
audio_codec = avcodec_find_encoder(av_output_format->audio_codec);
if (!audio_codec)
{
fprintf(stderr, "Codec not found audio codec\n");
exit(5);
}
audio_stream = avformat_new_stream(av_format_context, audio_codec);
if (!audio_stream)
{
fprintf(stderr, "Could not alloc stream for audio\n");
exit(6);
}
avcodec_get_context_defaults3(audio_stream->codec, audio_codec);
audio_stream->codec->codec_id = AV_CODEC_ID_VORBIS;
audio_stream->codec->codec_type = AVMEDIA_TYPE_AUDIO;
audio_stream->time_base = (AVRational) {1, 30};
audio_stream->codec->sample_rate = 8000;
audio_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
#endif
if(!(av_output_format->flags & AVFMT_NOFILE))
{
if (avio_open(&av_format_context->pb, dst_filename, AVIO_FLAG_WRITE) < 0)
{
fprintf(stderr, "Could not open '%s'\n", dst_filename);
}
}
/* Before avformat_write_header set the stream */
avformat_write_header(av_format_context, NULL);
/* initialize packet, set data to NULL, let the demuxer fill it */
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
pkt.stream_index = video_stream->index;
ret = av_read_frame(fmt_ctx, &pkt);
while (ret >= 0)
{
index++;
//pkt.stream_index = video_avstream->index;
if(pkt.stream_index == video_stream->index)
{
printf("Video: Read cycle %d, bytes read = %d, pkt stream index=%d\n", index, pkt.size, pkt.stream_index);
av_write_frame(av_format_context, &pkt);
}
#ifdef _AUDIO_WRITE_ENABLED_
else if(pkt.stream_index == audio_stream->index)
{
printf("Audio: Read cycle %d, bytes read = %d, pkt stream index=%d\n", index, pkt.size, pkt.stream_index);
av_write_frame(av_format_context, &pkt);
}
#endif
av_free_packet(&pkt);
ret = av_read_frame(fmt_ctx, &pkt);
}
av_write_trailer(av_format_context);
/** Exit procedure starts */
avformat_close_input(&fmt_ctx);
avformat_free_context(av_format_context);
return 0;
}
When I execute this program, it outputs "codec not found". I am not sure what is going wrong; can somebody please help?
The "codec not found" issue was resolved by separately building libvpx 1.4. I am still struggling to read from the source file and write to a destination file.
EDIT 1: After modifying the code, I am able to write only the video to a file, though some errors are still present.
EDIT 2: With the modified code (2nd round), I see that video frames are written properly. For audio frames I added code under the macro _AUDIO_WRITE_ENABLED_, but if I enable this macro the program crashes. Can somebody point out what is wrong in the audio write part (the code under the macro _AUDIO_WRITE_ENABLED_)?
I am not fully answering your question, but I hope we will get to the final solution eventually. When I tried to run your code, I got this error "time base not set".
Time base and other header specs are part of the codec context. This is how I specify them for writing into a file (vStream is an AVStream):
#if LIBAVCODEC_VER_AT_LEAST(53, 21)
avcodec_get_context_defaults3(rc->vStream->codec, AVMEDIA_TYPE_VIDEO);
#else
avcodec_get_context_defaults2(rc->vStream->codec, AVMEDIA_TYPE_VIDEO);
#endif
#if LIBAVCODEC_VER_AT_LEAST(54, 25)
vStream->codec->codec_id = AV_CODEC_ID_VP8;
#else
vStream->codec->codec_id = CODEC_ID_VP8;
#endif
vStream->codec->codec_type = AVMEDIA_TYPE_VIDEO;
vStream->codec->time_base = (AVRational) {1, 30};
vStream->codec->width = 640;
vStream->codec->height = 480;
vStream->codec->pix_fmt = PIX_FMT_YUV420P;
EDIT: I ran your program in Valgrind and it segfaults on av_write_frame. It looks like the time_base and other specs for the output are not set properly.
Add the specs before avformat_write_header(), before it is too late.
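For the narrower goal in the question (copy the already-encoded packets from one webm file to another without decoding), a stream-copy loop may be simpler than filling in encoder contexts by hand. A sketch along the lines of FFmpeg's standard remuxing approach, assuming a libavformat new enough to have AVCodecParameters; error handling trimmed:
#include <libavformat/avformat.h>
/* Stream-copy sketch: copy every packet from src to dst with no re-encoding.
   On older libav/FFmpeg versions, call av_register_all() before using this. */
int remux(const char *src, const char *dst)
{
    AVFormatContext *in = NULL, *out = NULL;
    AVPacket pkt;
    unsigned int i;
    if (avformat_open_input(&in, src, NULL, NULL) < 0) return -1;
    if (avformat_find_stream_info(in, NULL) < 0) return -1;
    avformat_alloc_output_context2(&out, NULL, NULL, dst);
    for (i = 0; i < in->nb_streams; i++) {
        AVStream *os = avformat_new_stream(out, NULL);
        /* copy codec parameters instead of configuring an encoder by hand */
        avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar);
        os->codecpar->codec_tag = 0;
    }
    if (!(out->oformat->flags & AVFMT_NOFILE))
        avio_open(&out->pb, dst, AVIO_FLAG_WRITE);
    avformat_write_header(out, NULL);
    while (av_read_frame(in, &pkt) >= 0) {
        AVStream *is = in->streams[pkt.stream_index];
        AVStream *os = out->streams[pkt.stream_index];
        /* rescale timestamps from the input time base to the output one */
        av_packet_rescale_ts(&pkt, is->time_base, os->time_base);
        pkt.pos = -1;
        av_interleaved_write_frame(out, &pkt);
        av_packet_unref(&pkt);
    }
    av_write_trailer(out);
    avformat_close_input(&in);
    if (!(out->oformat->flags & AVFMT_NOFILE)) avio_closep(&out->pb);
    avformat_free_context(out);
    return 0;
}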

CoreAudio AudioQueue callback function never called, no errors reported

I am trying to implement simple playback from a file, and it appears that my callback function is never called. It doesn't really make sense, because all of the OSStatus values come back 0 and the other numbers all appear correct as well (like the packets-read output from AudioFileReadPackets).
Here is the setup:
OSStatus stat;
stat = AudioFileOpenURL(
(CFURLRef)urlpath, kAudioFileReadPermission, 0, &aStreamData->aFile
);
UInt32 dsze = 0;
stat = AudioFileGetPropertyInfo(
aStreamData->aFile, kAudioFilePropertyDataFormat, &dsze, 0
);
stat = AudioFileGetProperty(
aStreamData->aFile, kAudioFilePropertyDataFormat, &dsze, &aStreamData->aDescription
);
stat = AudioQueueNewOutput(
&aStreamData->aDescription, bufferCallback, aStreamData, NULL, NULL, 0, &aStreamData->aQueue
);
aStreamData->pOffset = 0;
for(int i = 0; i < NUM_BUFFERS; i++) {
stat = AudioQueueAllocateBuffer(
aStreamData->aQueue, aStreamData->aDescription.mBytesPerPacket, &aStreamData->aBuffer[i]
);
bufferCallback(aStreamData, aStreamData->aQueue, aStreamData->aBuffer[i]);
}
stat = AudioQueuePrime(aStreamData->aQueue, 0, NULL);
stat = AudioQueueStart(aStreamData->aQueue, NULL);
(Not shown is where I'm checking the value of stat in between the functions, it just comes back normal.)
And the callback function:
void bufferCallback(void *uData, AudioQueueRef queue, AudioQueueBufferRef buffer) {
UInt32 bread = 0;
UInt32 pread = buffer->mAudioDataBytesCapacity / player->aStreamData->aDescription.mBytesPerPacket;
OSStatus stat;
stat = AudioFileReadPackets(
player->aStreamData->aFile, false, &bread, NULL, player->aStreamData->pOffset, &pread, buffer->mAudioData
);
buffer->mAudioDataByteSize = bread;
stat = AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
player->aStreamData->pOffset += pread;
}
Where aStreamData is my user data struct (typedefed so I can use it as a class property) and player is a static instance of the controlling Objective-C class. If any other code is wanted please let me know. I am a bit at my wit's end. Printing any of the numbers involved here yields the correct result, including functions in bufferCallback when I call it myself in the allocate loop. It just never gets called thereafter. The start up method returns and nothing happens.
Also, anecdotally, I am using a peripheral device (an MBox Pro 3) to play the sound, which CoreAudio only boots up when it is about to output. I.e., if I start iTunes or something, the speakers pop faintly and an LED goes from blinking to solid. The device boots up as it normally does, so CoreAudio is definitely doing something. (Also, I've of course tried it with the onboard MacBook sound, sans the device.)
I've read other solutions to problems that sound similar, and they don't work. Stuff like using multiple buffers, which I am doing now and which doesn't appear to make any difference.
I basically assume I am doing something obviously wrong somehow but not sure what it could be. I've read the relevant documentation, looked at the available code examples and scoured the net a bit for answers and it appears that this is all I need to do and it should just go.
At the very least, is there anything else I can do to investigate?
My first answer was not good enough, so I compiled a minimal example that will play a 2 channel, 16 bit wave file.
The main difference from your code is that I made a property listener listening for play start and stop events.
As for your code, it seems legit at first glance. Two things I will point out, though:
1. It seems you are allocating buffers with TOO SMALL a buffer size. I have noticed that AudioQueues won't play if the buffers are too small, which seems to fit your problem.
2. Have you verified the properties returned?
Back to my code example:
Everything is hard coded, so it is not exactly good coding practice, but it shows how you can do it.
AudioStreamTest.h
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
uint32_t bufferSizeInSamples;
AudioFileID file;
UInt32 currentPacket;
AudioQueueRef audioQueue;
AudioQueueBufferRef buffer[3];
AudioStreamBasicDescription audioStreamBasicDescription;
@interface AudioStreamTest : NSObject
- (void)start;
- (void)stop;
@end
AudioStreamTest.m
#import "AudioStreamTest.h"
@implementation AudioStreamTest
- (id)init
{
self = [super init];
if (self) {
bufferSizeInSamples = 441;
file = NULL;
currentPacket = 0;
audioStreamBasicDescription.mBitsPerChannel = 16;
audioStreamBasicDescription.mBytesPerFrame = 4;
audioStreamBasicDescription.mBytesPerPacket = 4;
audioStreamBasicDescription.mChannelsPerFrame = 2;
audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
audioStreamBasicDescription.mFramesPerPacket = 1;
audioStreamBasicDescription.mReserved = 0;
audioStreamBasicDescription.mSampleRate = 44100;
}
return self;
}
- (void)start {
AudioQueueNewOutput(&audioStreamBasicDescription, AudioEngineOutputBufferCallback, (__bridge void *)(self), NULL, NULL, 0, &audioQueue);
AudioQueueAddPropertyListener(audioQueue, kAudioQueueProperty_IsRunning, AudioEnginePropertyListenerProc, NULL);
AudioQueueStart(audioQueue, NULL);
}
- (void)stop {
AudioQueueStop(audioQueue, YES);
AudioQueueRemovePropertyListener(audioQueue, kAudioQueueProperty_IsRunning, AudioEnginePropertyListenerProc, NULL);
}
void AudioEngineOutputBufferCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
if (file == NULL) return;
UInt32 bytesRead = bufferSizeInSamples * 4;
UInt32 packetsRead = bufferSizeInSamples;
AudioFileReadPacketData(file, false, &bytesRead, NULL, currentPacket, &packetsRead, inBuffer->mAudioData);
inBuffer->mAudioDataByteSize = bytesRead;
currentPacket += packetsRead;
if (bytesRead == 0) {
AudioQueueStop(inAQ, false);
}
else {
AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
}
void AudioEnginePropertyListenerProc (void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID) {
//We are only interested in the property kAudioQueueProperty_IsRunning
if (inID != kAudioQueueProperty_IsRunning) return;
//Get the status of the property
UInt32 isRunning = false;
UInt32 size = sizeof(isRunning);
AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);
if (isRunning) {
currentPacket = 0;
NSString *fileName = @"/Users/roy/Documents/XCodeProjectsData/FUZZ/03.wav";
NSURL *fileURL = [[NSURL alloc] initFileURLWithPath: fileName];
AudioFileOpenURL((__bridge CFURLRef) fileURL, kAudioFileReadPermission, 0, &file);
for (int i = 0; i < 3; i++){
AudioQueueAllocateBuffer(audioQueue, bufferSizeInSamples * 4, &buffer[i]);
UInt32 bytesRead = bufferSizeInSamples * 4;
UInt32 packetsRead = bufferSizeInSamples;
AudioFileReadPacketData(file, false, &bytesRead, NULL, currentPacket, &packetsRead, buffer[i]->mAudioData);
buffer[i]->mAudioDataByteSize = bytesRead;
currentPacket += packetsRead;
AudioQueueEnqueueBuffer(audioQueue, buffer[i], 0, NULL);
}
}
else {
if (file != NULL) {
AudioFileClose(file);
file = NULL;
for (int i = 0; i < 3; i++) {
AudioQueueFreeBuffer(audioQueue, buffer[i]);
buffer[i] = NULL;
}
}
}
}
-(void)dealloc {
    AudioQueueDispose(audioQueue, true);
    audioQueue = NULL;
    [super dealloc]; // call super last (omit this line under ARC)
}
@end
Lastly, I want to include some research I have done today to test the robustness of AudioQueues.
I have noticed that if you make too small AudioQueue buffers, it won't play at all. That made me play around a bit to see why it is not playing.
If I try a buffer size that can hold only 150 samples, I get no sound at all.
If I try a buffer size that can hold 175 samples, it plays the whole song through, but with a lot of distortion. 175 samples amounts to a tad less than 4 ms of audio.
AudioQueue keeps asking for new buffers as long as you keep supplying buffers. That is regardless of AudioQueue actually playing your buffers or not.
If you supply a buffer with size 0, the buffer will be lost and an error kAudioQueueErr_BufferEmpty is returned for that queue enqueue request. You will never see AudioQueue ask you to fill that buffer again. If this happened for the last queue you have posted, AudioQueue will stop asking you to fill any more buffers. In that case you will not hear any more audio for that session.
To see why AudioQueues is not playing anything with smaller buffer sizes, I made a test to see if my callback is called at all even when there is no sound. The answer is that the callback gets called all the time, as long as the AudioQueue is playing and needs data.
So if you keep feeding buffers to the queue, no buffer is ever lost. It doesn't happen. Unless there is an error, of course.
So why is no sound playing?
I tested to see if 'AudioQueueEnqueueBuffer()' returned any errors. It did not. No other errors within my play routine either. The data returned from reading from file is also good.
Everything is normal, buffers are good, data re-enqueued is good, there is just no sound.
So my last test was to slowly increase buffer size till I could hear anything. I finally heard faint and sporadic distortion.
Then it came to me...
It seems that the problem lies with that the system tries to keep the stream in sync with time so if you enqueue audio, and the time for the audio you wanted to play has passed, it will just skip that part of the buffer. If the buffer size becomes too small, more and more data is dropped or skipped until the audio system is in sync again. Which is never if the buffer size is too small. (You can hear this as distortion if you chose a buffer size that is barely large enough to support continuous play.)
If you think about it, it is the only way the audio queue can work, but it is a good realisation when you are clueless like me and "discover" how it really works.
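One practical takeaway (my own rule of thumb, not something from Apple's documentation): size the buffers from a target duration rather than a packet count. A small sketch, valid for constant-bit-rate formats such as linear PCM where mBytesPerFrame is non-zero:
#include <math.h>
// Bytes needed to hold `seconds` of audio for a constant-bit-rate (e.g. linear PCM) format.
static UInt32 bufferSizeForDuration(const AudioStreamBasicDescription *asbd, Float64 seconds)
{
    return (UInt32)ceil(asbd->mSampleRate * seconds * asbd->mBytesPerFrame);
}
// e.g. 0.25 s of 44.1 kHz 16-bit stereo: 44100 * 0.25 * 4 = 44100 bytes per buffer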
I decided to take a look at this again and was able to solve it by making the buffers larger. I've accepted the answer by @RoyGal since it was their suggestion, but I wanted to provide the actual code that works, since I guess others are having the same problem (the question has a few favorites that aren't me at the moment).
One thing I tried was making the packet size larger:
aData->aDescription.mFramesPerPacket = 512; // or some other number
aData->aDescription.mBytesPerPacket = (
aData->aDescription.mFramesPerPacket * aData->aDescription.mBytesPerFrame
);
This does NOT work: it causes AudioQueuePrime to fail with an AudioConverterNew returned -50 message. I guess it wants mFramesPerPacket to be 1 for PCM.
(I also tried setting the kAudioQueueProperty_DecodeBufferSizeFrames property which didn't seem to do anything. Not sure what it's for.)
The solution seems to be to only allocate the buffer(s) with the specified size:
AudioQueueAllocateBuffer(
aData->aQueue,
aData->aDescription.mBytesPerPacket * N_BUFFER_PACKETS / N_BUFFERS,
&aData->aBuffer[i]
);
And the size has to be sufficiently large. I found the magic number is:
mBytesPerPacket * 1024 / N_BUFFERS
(Where N_BUFFERS is the number of buffers and should be > 1 or playback is choppy.)
Here is an MCVE demonstrating the issue and solution:
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AudioToolbox/AudioQueue.h>
#import <AudioToolbox/AudioFile.h>
#define N_BUFFERS 2
#define N_BUFFER_PACKETS 1024
typedef struct AStreamData {
AudioFileID aFile;
AudioQueueRef aQueue;
AudioQueueBufferRef aBuffer[N_BUFFERS];
AudioStreamBasicDescription aDescription;
SInt64 pOffset;
volatile BOOL isRunning;
} AStreamData;
void printASBD(AudioStreamBasicDescription* desc) {
printf("mSampleRate = %d\n", (int)desc->mSampleRate);
printf("mBytesPerPacket = %d\n", desc->mBytesPerPacket);
printf("mFramesPerPacket = %d\n", desc->mFramesPerPacket);
printf("mBytesPerFrame = %d\n", desc->mBytesPerFrame);
printf("mChannelsPerFrame = %d\n", desc->mChannelsPerFrame);
printf("mBitsPerChannel = %d\n", desc->mBitsPerChannel);
}
void bufferCallback(
void *vData, AudioQueueRef aQueue, AudioQueueBufferRef aBuffer
) {
AStreamData* aData = (AStreamData*)vData;
UInt32 bRead = 0;
UInt32 pRead = (
aBuffer->mAudioDataBytesCapacity / aData->aDescription.mBytesPerPacket
);
OSStatus stat;
stat = AudioFileReadPackets(
aData->aFile, false, &bRead, NULL, aData->pOffset, &pRead, aBuffer->mAudioData
);
if(stat != 0) {
printf("AudioFileReadPackets returned %d\n", stat);
}
if(pRead == 0) {
aData->isRunning = NO;
return;
}
aBuffer->mAudioDataByteSize = bRead;
stat = AudioQueueEnqueueBuffer(aQueue, aBuffer, 0, NULL);
if(stat != 0) {
printf("AudioQueueEnqueueBuffer returned %d\n", stat);
}
aData->pOffset += pRead;
}
AStreamData* beginPlayback(NSURL* path) {
static AStreamData* aData;
aData = malloc(sizeof(AStreamData));
OSStatus stat;
stat = AudioFileOpenURL(
(CFURLRef)path, kAudioFileReadPermission, 0, &aData->aFile
);
printf("AudioFileOpenURL returned %d\n", stat);
UInt32 dSize = 0;
stat = AudioFileGetPropertyInfo(
aData->aFile, kAudioFilePropertyDataFormat, &dSize, 0
);
printf("AudioFileGetPropertyInfo returned %d\n", stat);
stat = AudioFileGetProperty(
aData->aFile, kAudioFilePropertyDataFormat, &dSize, &aData->aDescription
);
printf("AudioFileGetProperty returned %d\n", stat);
printASBD(&aData->aDescription);
stat = AudioQueueNewOutput(
&aData->aDescription, bufferCallback, aData, NULL, NULL, 0, &aData->aQueue
);
printf("AudioQueueNewOutput returned %d\n", stat);
aData->pOffset = 0;
for(int i = 0; i < N_BUFFERS; i++) {
// change YES to NO for stale playback
if(YES) {
stat = AudioQueueAllocateBuffer(
aData->aQueue,
aData->aDescription.mBytesPerPacket * N_BUFFER_PACKETS / N_BUFFERS,
&aData->aBuffer[i]
);
} else {
stat = AudioQueueAllocateBuffer(
aData->aQueue,
aData->aDescription.mBytesPerPacket,
&aData->aBuffer[i]
);
}
printf(
"AudioQueueAllocateBuffer returned %d for aBuffer[%d] with capacity %d\n",
stat, i, aData->aBuffer[i]->mAudioDataBytesCapacity
);
bufferCallback(aData, aData->aQueue, aData->aBuffer[i]);
}
UInt32 numFramesPrepared = 0;
stat = AudioQueuePrime(aData->aQueue, 0, &numFramesPrepared);
printf("AudioQueuePrime returned %d with %d frames prepared\n", stat, numFramesPrepared);
stat = AudioQueueStart(aData->aQueue, NULL);
printf("AudioQueueStart returned %d\n", stat);
UInt32 pSize = sizeof(UInt32);
UInt32 isRunning;
stat = AudioQueueGetProperty(
aData->aQueue, kAudioQueueProperty_IsRunning, &isRunning, &pSize
);
printf("AudioQueueGetProperty returned %d\n", stat);
aData->isRunning = !!isRunning;
return aData;
}
void endPlayback(AStreamData* aData) {
OSStatus stat = AudioQueueStop(aData->aQueue, NO);
printf("AudioQueueStop returned %d\n", stat);
}
NSString* getPath() {
// change NO to YES and enter path to hard code
if(NO) {
return @"";
}
char input[512];
printf("Enter file path: ");
scanf("%[^\n]", input);
return [[NSString alloc] initWithCString:input encoding:NSASCIIStringEncoding];
}
int main(int argc, const char* argv[]) {
NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
NSURL* path = [NSURL fileURLWithPath:getPath()];
AStreamData* aData = beginPlayback(path);
if(aData->isRunning) {
do {
printf("Queue is running...\n");
[NSThread sleepForTimeInterval:1.0];
} while(aData->isRunning);
endPlayback(aData);
} else {
printf("Playback did not start\n");
}
[pool drain];
return 0;
}

CoreMIDI track accessing time signature information

I'm designing a rhythm game which is driven by a MIDI track. The MIDI messages trigger the release of on-screen elements. I am loading the MIDI data from a file and then playing it using a MusicSequence and MusicPlayer.
I understand that MIDI files contain time and key signature information as meta messages at the start of the file. However I've not found a way to retrieve this information from either the MusicPlayer or the MusicSequence.
The information I need is the number of seconds it takes to play a quaver, crotchet, etc. I would expect this to be affected by the time signature and the MusicPlayerPlayRateScalar value.
It looks like this information can be found in the CoreAudio clock, but I've not been able to work out how it is accessed for a particular music sequence.
Are there any CoreAudio experts out there who know how to do this?
You need to get the tempo track of the midi file and then iterate through it to get the tempo(s).
To get the sequence length you need to find the longest track:
- (MusicTimeStamp)getSequenceLength:(MusicSequence)sequence {
UInt32 tracks;
MusicTimeStamp len = 0.0f;
if (MusicSequenceGetTrackCount(sequence, &tracks) != noErr)
return len;
for (UInt32 i = 0; i < tracks; i++) {
MusicTrack track = NULL;
MusicTimeStamp trackLen = 0;
UInt32 trackLenLen = sizeof(trackLen);
MusicSequenceGetIndTrack(sequence, i, &track);
MusicTrackGetProperty(track, kSequenceTrackProperty_TrackLength, &trackLen, &trackLenLen);
if (len < trackLen)
len = trackLen;
}
return len;
}
// - get the tempo track:
OSStatus result = noErr;
MusicTrack tempoTrack;
result = MusicSequenceGetTempoTrack(sequence, &tempoTrack);
if (noErr != result) {[self printErrorMessage: @"MusicSequenceGetTempoTrack" withStatus: result];}
MusicEventIterator iterator = NULL;
NewMusicEventIterator(tempoTrack, &iterator);
MusicTimeStamp timestamp = 0;
MusicEventType eventType = 0;
const void *eventData = NULL;
UInt32 eventDataSize = 0;
MusicEventIteratorGetEventInfo(iterator, &timestamp, &eventType, &eventData, &eventDataSize);
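The iterator above is created but never advanced; a continuation sketch (my own, not part of the original answer) that walks the tempo track, pulls the BPM out of each extended tempo event, and converts it into seconds per note value (divide further by the MusicPlayer's play-rate scalar if you have changed it):
// Walk the tempo track; extended tempo events carry beats per minute.
Boolean hasEvent = false;
MusicEventIteratorHasCurrentEvent(iterator, &hasEvent);
while (hasEvent) {
    MusicEventIteratorGetEventInfo(iterator, &timestamp, &eventType, &eventData, &eventDataSize);
    if (eventType == kMusicEventType_ExtendedTempo) {
        const ExtendedTempoEvent *tempoEvent = (const ExtendedTempoEvent *)eventData;
        Float64 bpm = tempoEvent->bpm;
        Float64 secondsPerCrotchet = 60.0 / bpm;                 // quarter note
        Float64 secondsPerQuaver   = secondsPerCrotchet / 2.0;   // eighth note
        NSLog(@"Tempo at beat %f: %f BPM (crotchet = %f s, quaver = %f s)",
              timestamp, bpm, secondsPerCrotchet, secondsPerQuaver);
    }
    MusicEventIteratorNextEvent(iterator);
    MusicEventIteratorHasCurrentEvent(iterator, &hasEvent);
}
DisposeMusicEventIterator(iterator);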

How to get array of float audio data from AudioQueueRef in iOS?

I'm working on getting audio into the iPhone in a form where I can pass it to a (C++) analysis algorithm. There are, of course, many options: the AudioQueue tutorial at trailsinthesand gets things started.
The audio callback, though, gives an AudioQueueRef, and I'm finding Apple's documentation thin on this side of things. There are built-in methods to write to a file, but nothing that lets you actually peer inside the packets to see the data.
I need data. I don't want to write anything to a file, which is what all the tutorials — and even Apple's convenience I/O objects — seem to be aiming at. Apple's AVAudioRecorder (infuriatingly) will give you levels and write the data, but not actually give you access to it. Unless I'm missing something...
How to do this? In the code below there is inBuffer->mAudioData which is tantalizingly close but I can find no information about what format this 'data' is in or how to access it.
AudioQueue Callback:
void AudioInputCallback(void *inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumberPacketDescriptions,
const AudioStreamPacketDescription *inPacketDescs)
{
static int count = 0;
RecordState* recordState = (RecordState*)inUserData;
AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
++count;
printf("Got buffer %d\n", count);
}
And the code to write the audio to a file:
OSStatus status = AudioFileWritePackets(recordState->audioFile,
false,
inBuffer->mAudioDataByteSize,
inPacketDescs,
recordState->currentPacket,
&inNumberPacketDescriptions,
inBuffer->mAudioData); // THIS! This is what I want to look inside of.
if(status == 0)
{
recordState->currentPacket += inNumberPacketDescriptions;
}
// so you don't have to hunt them all down when you decide to switch to float:
#define AUDIO_DATA_TYPE_FORMAT SInt16
// the actual sample-grabbing code:
int sampleCount = inBuffer->mAudioDataBytesCapacity / sizeof(AUDIO_DATA_TYPE_FORMAT);
AUDIO_DATA_TYPE_FORMAT *samples = (AUDIO_DATA_TYPE_FORMAT*)inBuffer->mAudioData;
Then you have your (in this case SInt16) array samples which you can access from samples[0] to samples[sampleCount-1].
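Since the question asks for floats, here is a short follow-on sketch (mine, assuming the queue was configured for signed 16-bit PCM as above; also note that you may want mAudioDataByteSize, the bytes actually recorded, rather than mAudioDataBytesCapacity when computing sampleCount):
// Convert the SInt16 samples into normalized floats in [-1.0, 1.0) for the analysis code.
float *floatSamples = (float *)malloc(sampleCount * sizeof(float));
for (int i = 0; i < sampleCount; i++) {
    floatSamples[i] = samples[i] / 32768.0f;
}
// ... pass floatSamples and sampleCount to the C++ analysis routine, then free(floatSamples).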
The above solution did not work for me; I was getting the wrong sample data itself (an endianness issue). In case someone is getting wrong sample data in the future, I hope this helps you:
-(void)feedSamplesToEngine:(UInt32)audioDataBytesCapacity audioData:(void *)audioData {
int sampleCount = audioDataBytesCapacity / sizeof(SAMPLE_TYPE);
SAMPLE_TYPE *samples = (SAMPLE_TYPE*)audioData;
//SAMPLE_TYPE *sample_le = (SAMPLE_TYPE *)malloc(sizeof(SAMPLE_TYPE)*sampleCount );//for swapping endians
std::string shorts;
double power = pow(2,10);
for(int i = 0; i < sampleCount; i++)
{
SAMPLE_TYPE sample_le = (0xff00 & (samples[i] << 8)) | (0x00ff & (samples[i] >> 8)) ; //Endianess issue
char dataInterim[30];
sprintf(dataInterim,"%f ", sample_le/power); // normalize it.
shorts.append(dataInterim);
}