Can't set multiple musical instruments in iOS using MusicPlayer and AUGraph - objective-c

I have a MusicPlayer that holds a MusicSequence containing 3 MusicTracks. I have set up an AUGraph with 3 AUSampler Nodes plugged into a multichannel mixer, which in turn is connected to an output node.
I am using a SoundFont, and would like my 3 different MusicTracks to play on 3 different musical instruments, as described here. However, the code I've got doesn't work - it plays only one of the parts.
I create the AUGraph as follows:
NewAUGraph (&_processingGraph);
AUNode samplerNode, samplerNodeTwo, samplerNodeThree, ioNode, mixerNode;
AudioComponentDescription cd = {};
cd.componentManufacturer = kAudioUnitManufacturer_Apple;
//----------------------------------------
// 1. Add 3 Sampler unit nodes to the graph
//----------------------------------------
cd.componentType = kAudioUnitType_MusicDevice;
cd.componentSubType = kAudioUnitSubType_Sampler;
AUGraphAddNode (self.processingGraph, &cd, &samplerNode);
AUGraphAddNode (self.processingGraph, &cd, &samplerNodeTwo);
AUGraphAddNode (self.processingGraph, &cd, &samplerNodeThree);
//-----------------------------------
// 2. Add a Mixer unit node to the graph
//-----------------------------------
cd.componentType = kAudioUnitType_Mixer;
cd.componentSubType = kAudioUnitSubType_MultiChannelMixer;
AUGraphAddNode (self.processingGraph, &cd, &mixerNode);
//--------------------------------------
// 3. Add the Output unit node to the graph
//--------------------------------------
cd.componentType = kAudioUnitType_Output;
cd.componentSubType = kAudioUnitSubType_RemoteIO; // Output to speakers
AUGraphAddNode (self.processingGraph, &cd, &ioNode);
//---------------
// Open the graph
//---------------
AUGraphOpen (self.processingGraph);
//-----------------------------------------------------------
// Obtain the mixer unit instance from its corresponding node
//-----------------------------------------------------------
AUGraphNodeInfo (
self.processingGraph,
mixerNode,
NULL,
&mixerUnit
);
//--------------------------------
// Set the bus count for the mixer
//--------------------------------
UInt32 numBuses = 3;
AudioUnitSetProperty(mixerUnit,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&numBuses,
sizeof(numBuses));
//------------------
// Connect the nodes
//------------------
AUGraphConnectNodeInput (self.processingGraph, samplerNode, 0, mixerNode, 0);
AUGraphConnectNodeInput (self.processingGraph, samplerNodeTwo, 0, mixerNode, 1);
AUGraphConnectNodeInput (self.processingGraph, samplerNodeThree, 0, mixerNode, 2);
// Connect the mixer unit to the output unit
AUGraphConnectNodeInput (self.processingGraph, mixerNode, 0, ioNode, 0);
// Obtain references to all of the audio units from their nodes
AUGraphNodeInfo (self.processingGraph, samplerNode, NULL, &_samplerUnit);
AUGraphNodeInfo (self.processingGraph, samplerNodeTwo, NULL, &_samplerUnitTwo);
AUGraphNodeInfo (self.processingGraph, samplerNodeThree, NULL, &_samplerUnitThree);
AUGraphNodeInfo (self.processingGraph, ioNode, NULL, &_ioUnit);
I then load the 3 instruments (preset IDs 0, 1 and 2 in the SoundFont) as follows, passing in the SoundFont's 'bankURL':
// Load the first instrument
AUSamplerBankPresetData bpdata;
bpdata.bankURL = (__bridge CFURLRef) bankURL;
bpdata.bankMSB = kAUSampler_DefaultMelodicBankMSB;
bpdata.bankLSB = kAUSampler_DefaultBankLSB;
bpdata.presetID = (UInt8) 0;
AudioUnitSetProperty(self.samplerUnit,
kAUSamplerProperty_LoadPresetFromBank,
kAudioUnitScope_Global,
0,
&bpdata,
sizeof(bpdata));
// Load the second instrument
AUSamplerBankPresetData bpdataTwo;
bpdataTwo.bankURL = (__bridge CFURLRef) bankURL;
bpdataTwo.bankMSB = kAUSampler_DefaultMelodicBankMSB;
bpdataTwo.bankLSB = kAUSampler_DefaultBankLSB;
bpdataTwo.presetID = (UInt8) 1;
AudioUnitSetProperty(self.samplerUnitTwo,
kAUSamplerProperty_LoadPresetFromBank,
kAudioUnitScope_Global,
0,
&bpdataTwo,
sizeof(bpdataTwo));
// Load the third instrument
AUSamplerBankPresetData bpdataThree;
bpdataThree.bankURL = (__bridge CFURLRef) bankURL;
bpdataThree.bankMSB = kAUSampler_DefaultMelodicBankMSB;
bpdataThree.bankLSB = kAUSampler_DefaultBankLSB;
bpdataThree.presetID = (UInt8) 2;
AudioUnitSetProperty(self.samplerUnitThree,
kAUSamplerProperty_LoadPresetFromBank,
kAudioUnitScope_Global,
0,
&bpdataThree,
sizeof(bpdataThree));
Finally, I set the AUSampler nodes to be used by each MusicTrack as follows:
//-------------------------------------------------
// Set the AUSampler nodes to be used by each track
//-------------------------------------------------
MusicTrack track, trackTwo, trackThree;
MusicSequenceGetIndTrack(testSequence, 0, &track);
MusicSequenceGetIndTrack(testSequence, 1, &trackTwo);
MusicSequenceGetIndTrack(testSequence, 2, &trackThree);
AUNode samplerNode, samplerNodeTwo, samplerNodeThree;
AUGraphGetIndNode (self.processingGraph, 0, &samplerNode);
AUGraphGetIndNode (self.processingGraph, 1, &samplerNodeTwo);
AUGraphGetIndNode (self.processingGraph, 2, &samplerNodeThree);
MusicTrackSetDestNode(track, samplerNode);
MusicTrackSetDestNode(trackTwo, samplerNodeTwo);
MusicTrackSetDestNode(trackThree, samplerNodeThree);
However, when I then play the MusicPlayer, I hear only a single part playing. The problem arises when trying to use different instruments - when I use a single instrument with the standard MusicPlayer setup (instead of editing the AUGraph as above), it works fine.
Does anyone have any idea what I'm doing wrong?

I've found the solution. Before loading the instruments from the SoundFont, the following line is needed:
MusicSequenceSetAUGraph(testSequence, self.processingGraph);
As long as this line runs before the instruments are loaded from the SoundFont and before the various MusicTracks are assigned their AUSampler nodes, it seems to work - all parts play on different instruments, as desired. This answer to a related question helped me figure this out.
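For reference, a minimal sketch of the working call order, using the names from the question (error checking omitted; musicPlayer is assumed to be the MusicPlayer from the question's setup):
// 1. Build, connect, and open the AUGraph (as shown above).
// 2. Attach the sequence to the graph BEFORE loading presets
//    or assigning track destinations:
MusicSequenceSetAUGraph(testSequence, self.processingGraph);
// 3. Load a SoundFont preset into each AUSampler
//    (kAUSamplerProperty_LoadPresetFromBank, as shown above).
// 4. Point each MusicTrack at its sampler node:
MusicTrackSetDestNode(track, samplerNode);
MusicTrackSetDestNode(trackTwo, samplerNodeTwo);
MusicTrackSetDestNode(trackThree, samplerNodeThree);
// 5. Initialize and start the graph, then start the player:
AUGraphInitialize(self.processingGraph);
AUGraphStart(self.processingGraph);
MusicPlayerStart(musicPlayer);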

I had exactly the same issue: all tracks played with the first SoundFont instrument.
I followed your solution, but it didn't work at first. Eventually I resolved the problem.
As you mentioned, the order in which these functions are called really matters. The calling sequence should be:
.....
MusicSequenceSetAUGraph(s, _processingGraph);
.......
MusicTrackSetDestNode(track[i], samplerNodes[i]);
......
[self loadFromDLSOrSoundFont];
......
MusicPlayerStart(p);
This works in my project.
BTW, thanks for sharing your code. It really helped :)

Related

<Vulkan> Use rendered vkImage as Texture

I want to use a vkImage rendered in a previous render pass as a texture for the composite operation in a fragment shader. From here I learned that vkCmdPipelineBarrier is used to wait for the GPU to finish a rendering operation, and I wrote the code below. It works well on Snapdragon devices, but not on Mali-G52, where a write-after-write error still happens intermittently. Is this code not enough? Any suggestions?
vkCmdEndRenderPass(cb);
vkCmdBeginRenderPass(cb, &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
VkViewport viewport = vks::initializers::viewport((float)offscreenPass.width, (float)offscreenPass.height, 0.0f, 1.0f);
vkCmdSetViewport(cb, 0, 1, &viewport);
VkRect2D scissor = vks::initializers::rect2D(offscreenPass.width, offscreenPass.height, 0, 0);
vkCmdSetScissor(cb, 0, 1, &scissor);
// https://github.com/KhronosGroup/Vulkan-Samples/blob/master/samples/performance/pipeline_barriers/pipeline_barriers.cpp
VkImageMemoryBarrier imageMemoryBarrier = vks::initializers::imageMemoryBarrier();
imageMemoryBarrier.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imageMemoryBarrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
imageMemoryBarrier.srcAccessMask = 0;
imageMemoryBarrier.dstAccessMask = 0;
imageMemoryBarrier.image = offscreenPass.color[drawframe].image;
imageMemoryBarrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
imageMemoryBarrier.subresourceRange.baseMipLevel = 0;
imageMemoryBarrier.subresourceRange.levelCount = 1;
imageMemoryBarrier.subresourceRange.baseArrayLayer = 0;
imageMemoryBarrier.subresourceRange.layerCount = 1;
vkCmdPipelineBarrier(
cb,
VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
0, 0, nullptr, 0, nullptr, 1, &imageMemoryBarrier);
imageMemoryBarrier.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imageMemoryBarrier.newLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL;
imageMemoryBarrier.image = offscreenPass.depth.image;
imageMemoryBarrier.srcAccessMask = 0;
imageMemoryBarrier.dstAccessMask = 0;
vkCmdPipelineBarrier(
cb,
VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
0, 0, nullptr, 0, nullptr, 1, &imageMemoryBarrier);
I have tried every pattern written here.
If you want to synchronize render passes then your pipeline barrier must be outside of the render pass in the command stream. I.e. it must be after the vkCmdEndRenderPass() of the first pass, and before the vkCmdBeginRenderPass() of the second pass. Pipeline barriers issued inside a render pass, as you are currently doing, are used for synchronization only within the current subpass.
Also, try to avoid:
srcStage=VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT
dstStage=VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT
... for pipeline barriers when you only consume the output of the first pass as a fragment shader input in the second. This is overly conservative and needlessly serializes execution of the geometry processing too. In this case, you should use:
srcStage=VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT
dstStage=VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
... which allows the non-dependent vertex shading and binning for the second pass to run in parallel to the first pass.
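Here is a minimal sketch of that placement, reusing the variables from the question. Two assumptions on my part: the first pass's finalLayout leaves the image in VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, and I've used COLOR_ATTACHMENT_OUTPUT rather than BOTTOM_OF_PIPE as the source stage, which is slightly more precise and lets srcAccessMask make the attachment writes available:
vkCmdEndRenderPass(cb); // end the first (offscreen) pass

VkImageMemoryBarrier barrier = {0};
barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT; // make pass-1 writes available
barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;            // ...and visible to sampling
barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL; // assumed finalLayout of pass 1
barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image = offscreenPass.color[drawframe].image;
barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
barrier.subresourceRange.baseMipLevel = 0;
barrier.subresourceRange.levelCount = 1;
barrier.subresourceRange.baseArrayLayer = 0;
barrier.subresourceRange.layerCount = 1;

vkCmdPipelineBarrier(cb,
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, // producer: attachment writes
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,         // consumer: fragment shader reads
    0, 0, NULL, 0, NULL, 1, &barrier);

vkCmdBeginRenderPass(cb, &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE); // second pass
Note also that oldLayout = VK_IMAGE_LAYOUT_UNDEFINED, as in the question's code, allows the transition to discard the rendered contents, which is another reason the composite could read garbage.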
Self solved.
The difference in sampler2D precision between Adreno and Mali caused this issue. I can read the correct data using "precision highp sampler2D".

Vulkan validation error when I try to reset a commandPool after vkQueueWaitIdle

I have a small Vulkan program that runs a compute shader in a loop.
There is only one commandBuffer that is allocated from the only commandPool I have.
After the commandBuffer is built, I submit it to the queue and wait for it to complete with vkQueueWaitIdle. It does indeed wait for a while on that line of code. After that, I call vkResetCommandPool, which should reset all commandBuffers allocated from that pool (there is only one anyway).
...
vkEndCommandBuffer(commandBuffer);
{
VkSubmitInfo info = {};
info.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
info.commandBufferCount = 1;
info.pCommandBuffers = &commandBuffer;
vkQueueSubmit(queue, 1, &info, VK_NULL_HANDLE);
}
vkQueueWaitIdle(queue);
vkResetCommandPool(device, commandPool, VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT);
When it tries to reset the commandPool, the validation layer gives me the following error:
VUID-vkResetCommandPool-commandPool-00040(ERROR / SPEC): msgNum: -1254218959
- Validation Error: [ VUID-vkResetCommandPool-commandPool-00040 ]
Object 0: handle = 0x20d2ce0b718, type = VK_OBJECT_TYPE_COMMAND_BUFFER; |
MessageID = 0xb53e2331 |
Attempt to reset command pool with VkCommandBuffer 0x20d2ce0b718[] which is in use.
The Vulkan spec states: All VkCommandBuffer objects allocated from commandPool must not be in the pending state
(https://vulkan.lunarg.com/doc/view/1.2.176.1/windows/1.2-extensions/vkspec.html#VUID-vkResetCommandPool-commandPool-00040)
Objects: 1
[0] 0x20d2ce0b718, type: 6, name: NULL
But I don't understand why, since I'm already waiting with vkQueueWaitIdle. According to the documentation, once the commandBuffer is done executing it should go to the invalid state, and I should be able to reset it.
Here's the relevant surrounding code:
VkCommandBufferBeginInfo beginInfo = {};
beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
beginInfo.pInheritanceInfo = nullptr;
for (i64 i = 0; i < numIterations; i++)
{
vkBeginCommandBuffer(commandBuffer, &beginInfo);
vkCmdBindPipeline(commandBuffer, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
vkCmdBindDescriptorSets(commandBuffer, VK_PIPELINE_BIND_POINT_COMPUTE, pipelineLayout,
0, 2, descriptorSets, 0, nullptr);
uniforms.start = i * numThreads;
vkCmdUpdateBuffer(commandBuffer, unifsBuffer, 0, sizeof(uniforms), &uniforms);
vkCmdPipelineBarrier(commandBuffer,
VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, 0,
0, nullptr,
1, &memBarriers[0],
0, nullptr);
vkCmdDispatch(commandBuffer, numThreads, 1, 1);
vkCmdPipelineBarrier(commandBuffer,
VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 0,
0, nullptr,
1, &memBarriers[1],
0, nullptr);
VkBufferCopy copyInfo = {};
copyInfo.srcOffset = 0;
copyInfo.dstOffset = 0;
copyInfo.size = sizeof(i64) * numThreads;
vkCmdCopyBuffer(commandBuffer,
buffer, stagingBuffer, 1, &copyInfo);
vkEndCommandBuffer(commandBuffer);
{
VkSubmitInfo info = {};
info.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
info.commandBufferCount = 1;
info.pCommandBuffers = &commandBuffer;
vkQueueSubmit(queue, 1, &info, VK_NULL_HANDLE);
}
vkQueueWaitIdle(queue);
vkResetCommandPool(device, commandPool, VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT);
i64* result;
vkMapMemory(device, stagingBufferMem, 0, sizeof(i64) * numThreads, 0, (void**)&result);
for (int i = 0; i < numThreads; i++)
{
if (result[i]) {
auto res = result[i];
vkUnmapMemory(device, stagingBufferMem);
return res;
}
}
vkUnmapMemory(device, stagingBufferMem);
}
I have found my problem. In vkCmdDispatch, I thought the parameters specified the global size (the total number of compute shader invocations), but they actually specify the number of workgroups. Therefore I was dispatching more threads than I intended, my buffer wasn't big enough, and the threads were writing out of bounds.
I don't think the validation layer was giving me the right hints, though.
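For reference, a minimal sketch of the corrected dispatch (localSize is hypothetical and must match the local_size_x declared in the compute shader):
// numThreads = total invocations wanted; dispatch workgroups, not invocations
const uint32_t localSize = 64; // must equal local_size_x in the shader
uint32_t groupCount = (numThreads + localSize - 1) / localSize; // round up
vkCmdDispatch(commandBuffer, groupCount, 1, 1);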

How to change from one musicSequence to another without time delay

I'm playing a MIDI sequence via MusicPlayer, loaded from a MIDI file, and I want to change the sequence to another one during playback.
When I try this:
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
it stops the playback. So I start it back again and set the time with
MusicPlayerSetTime(_player, currentTime);
so it plays again where the previous sequence stopped, but there is a little delay.
I've tried adding to currentTime the interval between stopping and restarting, which I measured by taking the time before stopping and after starting again. But there is still a delay.
I was wondering if there is an alternative to stopping -> changing sequence -> starting again.
You definitely need to manage the AUSamplers if you are adding and removing tracks or switching sequences. It is probably cleaner to dispose of an AUSampler and create a new one for each new track, but it is also possible to 'recycle' AUSamplers - that just means you will need to keep track of them.
Managing AUSamplers means that when an instance is no longer in use (for example, when you delete or replace a MusicTrack), you need to disconnect it from the AUMixer instance, remove its node from the AUGraph instance, and then update the AUGraph.
There are lots of ways to handle all this. For convenience in keeping track of each AUSampler instance's bus number, loaded sound font, and some other state, I use an NSObject subclass named SamplerAudioUnit to contain all the needed properties and methods. Same for MusicTracks - I have a Track class - but this may not be needed in your project.
The gist, though, is that AUSamplers need to be managed for performance and memory. If an instance is no longer being used, it should be removed and its AUMixer bus input freed up.
BTW - I checked the docs and there is apparently no technical limit to the number of mixer busses, but the number does need to be specified.
// this is not cut and paste code - just an example of managing the AUSampler instance
- (OSStatus)deleteTrack:(Track*) trackObj
{
OSStatus result = noErr;
// turn off MP if playing
BOOL MPstate = [self isPlaying];
if (MPstate){
MusicPlayerStop(player);
}
//-disconnect node from mixer + update list of mixer buses
SamplerAudioUnit * samplerObj = trackObj.sampler;
UInt32 busNumber = samplerObj.busNumber;
result = AUGraphDisconnectNodeInput(graph, mixerNode, busNumber);
if (result) {[self printErrorMessage: #"AUGraphDisconnectNodeInput" withStatus: result];}
[self clearMixerBusState: busNumber]; // routine that keeps track of available busses
result = MusicSequenceDisposeTrack(sequence, trackObj.track);
if (result) {[self printErrorMessage: #"MusicSequenceDisposeTrack" withStatus: result];}
// remove AUSampler node
result = AUGraphRemoveNode(graph, samplerObj.samplerNode);
if (result) {[self printErrorMessage: #"AUGraphRemoveNode" withStatus: result];}
result = AUGraphUpdate(graph, NULL);
if (result) {[self printErrorMessage: #"AUGraphUpdate" withStatus: result];}
samplerObj = nil;
trackObj = nil;
if (MPstate){
MusicPlayerStart(player);
}
// CAShow(graph);
// CAShow(sequence);
return result;
}
Because
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
will still cause the player to stop, a small break is still audible.
So instead of replacing the MusicSequence, I went ahead and changed the contents of its tracks instead, which doesn't cause any breaks:
MusicTrack currentTrack;
MusicTrack currentTrack2;
MusicSequenceGetIndTrack(musicSequence, 0, &currentTrack);
MusicSequenceGetIndTrack(musicSequence, 1, &currentTrack2);
MusicTrackClear(currentTrack, 0, _trackLen);
MusicTrackClear(currentTrack2, 0, _trackLen);
MusicSequence tmpSequence;
switch (number) {
case 0:
tmpSequence = musicSequence1;
break;
case 1:
tmpSequence = musicSequence2;
break;
case 2:
tmpSequence = musicSequence3;
break;
case 3:
tmpSequence = musicSequence4;
break;
default:
tmpSequence = musicSequence1;
break;
}
MusicTrack tmpTrack;
MusicTrack tmpTrack2;
MusicSequenceGetIndTrack(tmpSequence, 0, &tmpTrack);
MusicSequenceGetIndTrack(tmpSequence, 1, &tmpTrack2);
MusicTimeStamp trackLen = 0;
UInt32 trackLenSize = sizeof(trackLen);
MusicTrackGetProperty(tmpTrack, kSequenceTrackProperty_TrackLength, &trackLen, &trackLenSize);
_trackLen = trackLen;
MusicTrackCopyInsert(tmpTrack, 0, _trackLen, currentTrack, 0);
MusicTrackCopyInsert(tmpTrack2, 0, _trackLen, currentTrack2, 0);
No disconnection of nodes, no updating the graph, no stopping the player.

How does kAudioUnitSubType_NBandEQ work? Or equalizing using DSP formulas with Novocaine?

I'm trying to make a 10-band equalizer, and the kAudioUnitSubType_NBandEQ audio unit seems like the way to go, but Apple's documentation doesn't cover how to set it up or configure it.
I've already connected the nodes but it errors out when I try to connect the EQNode with the iONode (output): https://gist.github.com/2295463
How do I turn the effect into a working 10-band equalizer?
Update:
A working DSP formula with Novocaine is also a solution, any ideas! Those DSP formulas are quite complicated.
Update2:
I prefer a working DSP formula with Novocaine since that'd be much cleaner/smaller than programming Audio Nodes.
Update3:
"The Multitype EQ unit(of subtype kAudioUnitSubType_NBandEQ) provides an equalizer that can be configured as any one of the types described in “Mutitype EQ Unit Filter Types” (page 68)."
Source: http://developer.apple.com/library/ios/DOCUMENTATION/AudioUnit/Reference/AudioUnit_Framework/AudioUnit_Framework.pdf
But still no example.
IMPORTANT Update (17/05): I recommend everyone use my DSP class, which I released on GitHub: https://github.com/bartolsthoorn/NVDSP It'll probably save you quite some work, and it will make developing an n-band equalizer or any other kind of audio filter a breeze.
I'm the creator of Novocaine, and I've used it to make a 200-some-odd band EQ using vDSP.
I'm considering switching over to the NBandEQ audio unit, but I have a working solution with vDSP_deq22.
vDSP_deq22 filters data one sample at a time with a 2nd-order IIR filter. You can find 2nd-order Butterworth coefficients on musicdsp.org, or more generally by Googling; Matlab will also calculate them for you, if you have access to a copy. I used musicdsp.org. You'd create 10 vDSP_deq22 filters, run your audio through each one, multiply each band by its specified gain, and then sum the outputs of all the filters in the bank into your output audio.
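For what it's worth, here is a rough sketch of a single band of that approach. The struct and names are my own, not Novocaine API; note that vDSP_deq22 expects 2 extra history samples at the start of both its input and output arrays, which is how the filter carries state across buffers:
#include <Accelerate/Accelerate.h>

// One band of a hypothetical 10-band filter bank.
// coeffs = {b0, b1, b2, a1, a2} from a biquad design (e.g. musicdsp.org).
typedef struct {
    float coeffs[5];
    float gain; // linear gain for this band
} EQBand;

// `in` and `tmp` must each hold n + 2 floats: the first 2 entries are the
// previous input/output samples (the filter's history between buffers).
static void processBand(const EQBand *band, const float *in,
                        float *tmp, float *out, vDSP_Length n)
{
    vDSP_deq22(in, 1, band->coeffs, tmp, 1, n);            // 2nd-order IIR filter
    vDSP_vsma(tmp + 2, 1, &band->gain, out, 1, out, 1, n); // out += gain * band
}

// Caller: zero `out`, then call processBand() once per band and play `out`.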
A 10-band equalizer can be configured as below:
converterNode -> eqNode -> converterNode -> ioNode
Please refer to the sample code below. Here I have used the iPodEQ unit; replace the iPodEQ unit's format specification with that of the 10-band equalizer.
AUNode outputNode;
AUNode iPodTimeNode;
AUNode converterNode;
AUNode converterNode2;
AudioUnit converterAU;
AudioUnit converterAU2;
printf("create client format ASBD\n");
// client format audio going into the converter
mClientFormat.SetCanonical(1, false);
mClientFormat.mSampleRate = kGraphSampleRate;
mClientFormat.Print();
printf("create output format ASBD\n");
CAStreamBasicDescription localOutput;
localOutput.SetAUCanonical(2, false);
localOutput.mSampleRate = kGraphSampleRate;
// output format
mOutputFormat.SetCanonical(1, false);
mOutputFormat.mSampleRate = kGraphSampleRate;
mOutputFormat.Print();
OSStatus result = noErr;
printf("-----------\n");
printf("new AUGraph\n");
// create a new AUGraph
result = NewAUGraph(&mGraph);
if (result) { printf("NewAUGraph result %ld %08X %4.4s\n", result,
(unsigned int)result, (char*)&result); return; }
// create three CAComponentDescription for the AUs we want in the graph
// output unit
CAComponentDescription output_desc(kAudioUnitType_Output,
kAudioUnitSubType_GenericOutput,
kAudioUnitManufacturer_Apple);
// iPodTime unit
CAComponentDescription iPodTime_desc(kAudioUnitType_FormatConverter,
kAudioUnitSubType_AUiPodTimeOther,
kAudioUnitManufacturer_Apple);
// AU Converter
CAComponentDescription converter_desc(kAudioUnitType_FormatConverter,
kAudioUnitSubType_AUConverter,
kAudioUnitManufacturer_Apple);
printf("add nodes\n");
// create a node in the graph that is an AudioUnit, using the supplied
// AudioComponentDescription to find and open that unit
result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
result = AUGraphAddNode(mGraph, &iPodTime_desc, &iPodTimeNode);
result = AUGraphAddNode(mGraph, &converter_desc, &converterNode);
result = AUGraphAddNode(mGraph, &converter_desc, &converterNode2);
// connect a node's output to a node's input
// converter -> iPodTime ->converter-> output
result = AUGraphConnectNodeInput(mGraph, converterNode2, 0, iPodTimeNode, 0);
result = AUGraphConnectNodeInput(mGraph, iPodTimeNode, 0, converterNode, 0);
result = AUGraphConnectNodeInput(mGraph, converterNode, 0, outputNode, 0);
// open the graph -- AudioUnits are open but not initialized
// (no resource allocation occurs here)
result = AUGraphOpen(mGraph);
// grab audio unit instances from the nodes
result = AUGraphNodeInfo(mGraph, converterNode, NULL, &converterAU);
result = AUGraphNodeInfo(mGraph, converterNode2, NULL, &converterAU2);
result = AUGraphNodeInfo(mGraph, iPodTimeNode, NULL, &mIPodTime);
result = AUGraphNodeInfo(mGraph, outputNode, NULL, &mOutputUnit);
//Get EQ unit format
UInt32 size ;
CAStreamBasicDescription eqDesc;
AudioUnitGetProperty(mIPodTime, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &eqDesc, &size);
eqDesc.Print();
// setup render callback struct
AURenderCallbackStruct rcbs;
rcbs.inputProc = &renderInput;
rcbs.inputProcRefCon = &mUserData;
printf("set AUGraphSetNodeInputCallback\n");
// set a callback for the specified node's specified input bus (bus 1)
result = AUGraphSetNodeInputCallback(mGraph, converterNode2, 0, &rcbs);
//SetFormat
result = AudioUnitSetProperty(converterAU2, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 0, &mClientFormat, sizeof(mClientFormat));
result = AudioUnitSetProperty(converterAU2, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 0, &eqDesc, sizeof(eqDesc));
printf("set converter input bus %d client kAudioUnitProperty_StreamFormat\n", 0);
// set the input stream format, this is the format of the audio
// for the converter input bus (bus 1)
result = AudioUnitSetProperty(converterAU, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 0, &localOutput, sizeof(localOutput));
// in an au graph, each nodes output stream format (including sample rate)
// needs to be set explicitly this stream format is propagated to its
// destination's input stream format
printf("set converter output kAudioUnitProperty_StreamFormat\n");
// set the output stream format of the converter
result = AudioUnitSetProperty(converterAU, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 0, &mOutputFormat, sizeof(mOutputFormat));
result = AudioUnitSetProperty(mOutputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 0, &mOutputFormat, sizeof(mOutputFormat));
// set the output stream format of the iPodTime unit
result = AudioUnitSetProperty(mIPodTime, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 0, &localOutput, sizeof(localOutput));
printf("AUGraphInitialize\n");
// add a render notification, this is a callback that the graph will call every time the graph renders
// the callback will be called once before the graph's render operation, and once after the render operation is complete
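Coming back to the NBandEQ unit the question asked about: once you have its AudioUnit instance (eqUnit below, obtained via AUGraphNodeInfo), a minimal configuration sketch looks like this. The constants come from AudioUnitParameters.h, per-band parameter IDs are the base constant plus the band index, and the band frequencies here are just the usual octave centers; note the unit is only available on iOS 5 and later:
UInt32 numBands = 10;
AudioUnitSetProperty(eqUnit, kAUNBandEQProperty_NumberOfBands,
                     kAudioUnitScope_Global, 0, &numBands, sizeof(numBands));
Float32 freqs[10] = {32, 64, 125, 250, 500, 1000, 2000, 4000, 8000, 16000};
for (UInt32 i = 0; i < numBands; i++) {
    AudioUnitSetParameter(eqUnit, kAUNBandEQParam_FilterType + i,
                          kAudioUnitScope_Global, 0, kAUNBandEQFilterType_Parametric, 0);
    AudioUnitSetParameter(eqUnit, kAUNBandEQParam_Frequency + i,
                          kAudioUnitScope_Global, 0, freqs[i], 0);
    AudioUnitSetParameter(eqUnit, kAUNBandEQParam_Gain + i,
                          kAudioUnitScope_Global, 0, 0.0f /* dB */, 0);
    AudioUnitSetParameter(eqUnit, kAUNBandEQParam_BypassBand + i,
                          kAudioUnitScope_Global, 0, 0 /* 0 = band enabled */, 0);
}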

Setting up effect audio units for CoreAudio

I'm trying to set up a high-pass filter, but AUGraphStart gives me -10863 when I try. I cannot find much documentation at all. Here is my attempt to set up the filter:
- (void)initializeAUGraph{
AUNode outputNode;
AUNode mixerNode;
AUNode effectNode;
NewAUGraph(&mGraph);
// Create AudioComponentDescriptions for the AUs we want in the graph
// mixer component
AudioComponentDescription mixer_desc;
mixer_desc.componentType = kAudioUnitType_Mixer;
mixer_desc.componentSubType = kAudioUnitSubType_AU3DMixerEmbedded;
mixer_desc.componentFlags = 0;
mixer_desc.componentFlagsMask = 0;
mixer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// output component
AudioComponentDescription output_desc;
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
//effect component
AudioComponentDescription effect_desc;
effect_desc.componentType = kAudioUnitType_Effect;
effect_desc.componentSubType = kAudioUnitSubType_HighPassFilter;
effect_desc.componentFlags = 0;
effect_desc.componentFlagsMask = 0;
effect_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Add nodes to the graph to hold our AudioUnits
AUGraphAddNode(mGraph, &output_desc, &outputNode);
AUGraphAddNode(mGraph, &mixer_desc, &mixerNode);
AUGraphAddNode(mGraph, &effect_desc, &effectNode);
// Connect the nodes
AUGraphConnectNodeInput(mGraph, mixerNode, 0, effectNode, 0);
AUGraphConnectNodeInput(mGraph, effectNode, 0, outputNode, 0);
//Open Graph
AUGraphOpen(mGraph);
// Get a link to the mixer AU
AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
// Get a link to the effect AU
AUGraphNodeInfo(mGraph, effectNode, NULL, &mEffect);
//Setup buses
size_t numbuses = track_count;
UInt32 size = sizeof(numbuses);
AudioUnitSetProperty(mMixer, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &numbuses, size);
//Setup Stream Format Data
AudioStreamBasicDescription desc;
size = sizeof(desc);
// Setup Stream Format
desc.mSampleRate = kGraphSampleRate;
desc.mFormatID = kAudioFormatLinearPCM;
desc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
desc.mBitsPerChannel = sizeof(AudioSampleType) * 8; // AudioSampleType == 16 bit signed ints
desc.mChannelsPerFrame = 1;
desc.mFramesPerPacket = 1;
desc.mBytesPerFrame = sizeof(AudioSampleType);
desc.mBytesPerPacket = desc.mBytesPerFrame;
// Loop through and setup a callback for each source you want to send to the mixer.
for (int i = 0; i < numbuses; ++i) {
// Setup render callback struct
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &renderInput;
renderCallbackStruct.inputProcRefCon = self;
// Connect the callback to the mixer input channel
AUGraphSetNodeInputCallback(mGraph, mixerNode, i, &renderCallbackStruct);
// Apply Stream Data
AudioUnitSetProperty(mMixer, kAudioUnitProperty_StreamFormat,kAudioUnitScope_Input,i,&desc,size);
AudioUnitSetParameter(mMixer, k3DMixerParam_Distance, kAudioUnitScope_Input, i, rand() % 6, 0);
rotation[i] = rand() % 360;
rotation_speed[i] = rand() % 5;
AudioUnitSetParameter(mMixer, k3DMixerParam_Azimuth, kAudioUnitScope_Input, i, rotation[i], 0);
AudioUnitSetParameter(mMixer, k3DMixerParam_Elevation, kAudioUnitScope_Input, i, 30, 0);
}
// Reset stream fromat data to 0
memset (&desc, 0, sizeof (desc));
// Setup output stream format
desc.mSampleRate = kGraphSampleRate;
// Apply Stream Data to Output
AudioUnitSetProperty(mEffect,kAudioUnitProperty_StreamFormat,kAudioUnitScope_Input,0,&desc,size);
AudioUnitSetProperty(mEffect,kAudioUnitProperty_StreamFormat,kAudioUnitScope_Output,0,&desc,size);
AudioUnitSetProperty(mMixer,kAudioUnitProperty_StreamFormat,kAudioUnitScope_Output,0,&desc,size);
//All done so initialise
AUGraphInitialize(mGraph);
}
It works when I remove the high pass filter. How do I get the filter working?
Thank you.
PS: Is the 3D elevation supposed to do nothing?
If you still have the problem: add a converter unit between the mixer node and the effect node, set the converter's input format to the mixer's output format, and set its output format to the format you get from AudioUnitGetProperty on the effect node's input.
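A rough sketch of that suggestion against the question's graph (mGraph, mixerNode, effectNode, mMixer, mEffect and the `desc` stream format are the question's; everything else is illustrative). Note this replaces the direct mixer -> effect connection:
AudioComponentDescription conv_desc;
conv_desc.componentType = kAudioUnitType_FormatConverter;
conv_desc.componentSubType = kAudioUnitSubType_AUConverter;
conv_desc.componentFlags = 0;
conv_desc.componentFlagsMask = 0;
conv_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode converterNode;
AudioUnit converterUnit;
AUGraphAddNode(mGraph, &conv_desc, &converterNode);
// mixer -> converter -> effect (instead of mixer -> effect)
AUGraphConnectNodeInput(mGraph, mixerNode, 0, converterNode, 0);
AUGraphConnectNodeInput(mGraph, converterNode, 0, effectNode, 0);
AUGraphNodeInfo(mGraph, converterNode, NULL, &converterUnit);
// Converter input: the custom PCM format the mixer outputs.
AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &desc, sizeof(desc));
// Converter output: whatever the effect unit natively expects on its input.
AudioStreamBasicDescription effectFormat;
UInt32 fmtSize = sizeof(effectFormat);
AudioUnitGetProperty(mEffect, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &effectFormat, &fmtSize);
AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &effectFormat, sizeof(effectFormat));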
Not all Audio Units that are available on OS X are available on iOS; in fact, only a few are. According to the documentation below, the high-pass filter effect is not supported on iOS: http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/UsingSpecificAudioUnits/UsingSpecificAudioUnits.html#//apple_ref/doc/uid/TP40009492-CH17-SW1
"The iPod EQ unit (subtype kAudioUnitSubType_AUiPodEQ) is the only effect unit provided in iOS 4."
Note that it refers to iOS 4; I am unable to find any documentation on this for later versions of iOS.
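One way to avoid guessing is to check at runtime whether a given effect unit is registered on the device's iOS version before building the graph; AudioComponentFindNext returns NULL when no matching unit exists (a sketch using the question's high-pass description):
AudioComponentDescription effect_desc;
effect_desc.componentType = kAudioUnitType_Effect;
effect_desc.componentSubType = kAudioUnitSubType_HighPassFilter;
effect_desc.componentFlags = 0;
effect_desc.componentFlagsMask = 0;
effect_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// NULL means this effect unit is not available on this OS version.
AudioComponent comp = AudioComponentFindNext(NULL, &effect_desc);
if (comp == NULL) {
    // fall back, e.g. to the iPod EQ unit mentioned above
}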