Using a Filter Audio Unit Effect in iOS 5 - objective-c

I'm trying to use a RemoteIO connection to route the audio input through the built-in filter effect (iOS 5 only) and then back out of the hardware. I can make it route straight from the input to the output, but I can't get the filter to work. I'm not sure whether it's the filter audio unit or the routing that I've got wrong.
This bit is just my attempt at setting up the filter and changing the routing so that the data is processed by it.
Any help is appreciated.
// ******* BEGIN FILTER ********
NSLog(@"Begin filter");
// Creates Audio Component Description - Output Filter
AudioComponentDescription filterCompDesc;
filterCompDesc.componentType = kAudioUnitType_Effect;
filterCompDesc.componentSubType = kAudioUnitSubType_LowPassFilter;
filterCompDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
filterCompDesc.componentFlags = 1;
filterCompDesc.componentFlagsMask = 1;
// Create Filter Unit
AudioUnit lpFilterUnit;
AudioComponent filterComponent = AudioComponentFindNext(NULL, &filterCompDesc);
setupErr = AudioComponentInstanceNew(filterComponent, &lpFilterUnit);
NSAssert(setupErr == noErr, @"No instance of filter");
AudioUnitElement bus2 = 2;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitSubType_LowPassFilter, kAudioUnitScope_Output, bus2, &oneFlag, sizeof(oneFlag));
AudioUnitElement bus3 = 3;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitSubType_LowPassFilter, kAudioUnitScope_Input, bus3, &oneFlag, sizeof(oneFlag));
// ******** END FILTER ******** //
AudioUnitConnection hardInToLP;
hardInToLP.sourceAudioUnit = remoteIOunit;
hardInToLP.sourceOutputNumber = 1;
hardInToLP.destInputNumber = 3;
setupErr = AudioUnitSetProperty(
    remoteIOunit,                      // connection destination
    kAudioUnitProperty_MakeConnection, // property key
    kAudioUnitScope_Input,             // destination scope
    bus3,                              // destination element
    &hardInToLP,                       // connection definition
    sizeof(hardInToLP)
);
AudioUnitConnection LPToHardOut;
LPToHardOut.sourceAudioUnit = lpFilterUnit;
LPToHardOut.sourceOutputNumber = 1;
LPToHardOut.destInputNumber = 3;
setupErr = AudioUnitSetProperty(
    remoteIOunit,                      // connection destination
    kAudioUnitProperty_MakeConnection, // property key
    kAudioUnitScope_Input,             // destination scope
    bus3,                              // destination element
    &hardInToLP,                       // connection definition
    sizeof(hardInToLP)
);
/*
// Sets up the Audio Units Connection - new instance called connection
AudioUnitConnection connection;
// Connect Audio Input's out to Audio Out's in
connection.sourceAudioUnit = remoteIOunit;
connection.sourceOutputNumber = bus1;
connection.destInputNumber = bus0;
setupErr = AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, bus0, &connection, sizeof(connection));
*/
NSAssert(setupErr == noErr, @"No RIO connection");

A couple things going on here:
You're gonna help yourself a lot if you do an assert (or some sort of check-error-and-log-it) after every call that can return an OSStatus. That way you'll figure out how far you're getting. Probably also want to log the actual OSStatus value when it's != noErr, and then look it up (start in "Audio Unit Component Services Reference" in Xcode documentation viewer).
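For example, a small CheckError() helper along these lines works well (a sketch; it prints the status as a four-character code when it looks like one, which is how most Core Audio errors are defined; CFSwapInt32HostToBig comes with CoreFoundation):
#include <ctype.h>
#include <stdio.h>
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;
    char errorString[20] = {0};
    // see if the status looks like a four-character code
    *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error);
    if (isprint(errorString[1]) && isprint(errorString[2]) &&
        isprint(errorString[3]) && isprint(errorString[4])) {
        errorString[0] = errorString[5] = '\'';
    } else {
        // no, format it as a plain integer
        sprintf(errorString, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
}
// usage:
// CheckError(AudioComponentInstanceNew(filterComponent, &lpFilterUnit), "couldn't open filter");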
After you create the filter AudioUnit, I don't get what you're doing with the AudioUnitSetProperty() calls. The second parameter should be the name of a property (something that starts with kAudioUnitProperty...). That's almost certainly returning an error right there.
remoteIOunit only has two buses, and they have special meanings: bus 1 is input from the mic, bus 0 is output to the hardware. Trying to connect to RemoteIO's input scope bus 3 is probably going to be another error.
Suggest you roll back to when you had audio pass-through working. That would mean you had just remoteIO, and a connection from output scope / bus 1 to input scope / bus 0.
Then create the filter unit. Change your connections so you connect:
remoteIO output scope bus 1 to filter input scope bus 0
filter output scope bus 0 to remoteIO input scope bus 0
The other thing that's going to be a problem is that all these iOS 5 filters seem to want to use floating-point LPCM formats, which is not the canonical format your other units will default to. You may have to get the stream format from the filter unit (input or output scope are probably the same?) and then set that as the format that remoteIO output scope / bus 1 produces and remoteIO input scope / bus 0 accepts. Another option would be to introduce AUConverter units before and after the filter unit.
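Putting those suggestions together, a rough sketch of the rewiring might look like this (untested, reusing the variable names from the question; error checks omitted):
// remoteIO output scope / bus 1 (mic) --> filter input scope / bus 0
AudioUnitConnection micToFilter;
micToFilter.sourceAudioUnit = remoteIOunit;
micToFilter.sourceOutputNumber = 1;
micToFilter.destInputNumber = 0;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitProperty_MakeConnection,
    kAudioUnitScope_Input, 0, &micToFilter, sizeof(micToFilter));
// filter output scope / bus 0 --> remoteIO input scope / bus 0 (speaker)
AudioUnitConnection filterToOut;
filterToOut.sourceAudioUnit = lpFilterUnit;
filterToOut.sourceOutputNumber = 0;
filterToOut.destInputNumber = 0;
setupErr = AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_MakeConnection,
    kAudioUnitScope_Input, 0, &filterToOut, sizeof(filterToOut));
// match stream formats: ask the filter what it wants (float LPCM) and
// impose that on remoteIO's mic output and speaker input
AudioStreamBasicDescription filterFormat;
UInt32 asbdSize = sizeof(filterFormat);
setupErr = AudioUnitGetProperty(lpFilterUnit, kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Output, 0, &filterFormat, &asbdSize);
setupErr = AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Output, 1, &filterFormat, sizeof(filterFormat));
setupErr = AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Input, 0, &filterFormat, sizeof(filterFormat));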

The first answer given here just saved me a lot more frustration. Nowhere does the Apple documentation tell you that the stream formats for the effect units require floating point. I couldn't figure out why it kept failing to play my audio properly until I read this post. I followed the advice above and retrieved the stream format from the low-pass filter unit, and used that to set up the two converter units that I created (i.e. set the output format of the pre-filter converter, and the input format of the post-filter converter). Once I did that and connected all the nodes together, it started working as expected.
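For reference, the converter setup described above looks roughly like this (a sketch, not the poster's actual code; filterFormat is assumed to be the stream format retrieved from the low-pass unit):
AudioComponentDescription convDesc = {0};
convDesc.componentType = kAudioUnitType_FormatConverter;
convDesc.componentSubType = kAudioUnitSubType_AUConverter;
convDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent convComp = AudioComponentFindNext(NULL, &convDesc);
AudioUnit preConverter, postConverter;
AudioComponentInstanceNew(convComp, &preConverter);
AudioComponentInstanceNew(convComp, &postConverter);
// pre-filter converter: converts whatever comes in to the filter's float format
AudioUnitSetProperty(preConverter, kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Output, 0, &filterFormat, sizeof(filterFormat));
// post-filter converter: accepts the filter's float format on its input
AudioUnitSetProperty(postConverter, kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Input, 0, &filterFormat, sizeof(filterFormat));
// then connect: remoteIO (out, bus 1) -> preConverter -> filter -> postConverter -> remoteIO (in, bus 0)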

I'm trying to use a low-pass filter, and when doing as suggested (i.e. setting the format) I keep getting an error, "the operation could not be completed". What in this code is faulty?
After retrieving the lowpassUnit I also check for errors but there are none.
result = AudioUnitSetProperty(lowpassUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &stereoStreamFormat, sizeof(stereoStreamFormat));
if (noErr != result)
{
    NSLog(@"%@", [NSError errorWithDomain:NSOSStatusErrorDomain code:result userInfo:nil]);
    return;
}
PS: If anyone knows of proper audio unit documentation, please share, as the official documentation is really lacking.

Related

RX Fifo1 for CAN is not generating an interrupt callback (basically it is not receiving the data)

There are two types of messages on the CAN bus: broadcast messages and default messages. Currently I'm using FIFO0 for both message types (which works perfectly fine), but I would like to use FIFO1 specifically for broadcast messages. Below is my initialization code:
uint8 BspCan_RxFilterConfig(uint32 filterId, uint32 filterMask, uint8 filterBankId, uint8 enableFlag, uint8 fifoAssignment)
{
    ///\todo Add method for calculating filter on the fly
    CAN_FilterTypeDef canBusFilterConfig;
    FunctionalState filterEnableFlag = ENABLE;

    if (enableFlag == 0)
    {
        filterEnableFlag = DISABLE;
    }
    else
    {
        filterEnableFlag = ENABLE;
    }

    /* Define filter used to determine if the application needs to handle a message on the CAN bus
       or if it should ignore it. If the selected rx FIFO is changed, the rx functions in this
       module must also be updated. Using mask mode with all bits set to "don't care". */
    canBusFilterConfig.FilterBank = filterBankId;           //Identification of which of the filter banks to define
    canBusFilterConfig.FilterMode = CAN_FILTERMODE_IDMASK;  //Filter messages based on a specific id rather than a list
    canBusFilterConfig.FilterScale = CAN_FILTERSCALE_32BIT; //32-bit width: filter applies to full range of std id, extended id, IDE, and RTR bits
    canBusFilterConfig.FilterIdHigh = (0xFFFF0000 & filterId) >> 16; //For upper 16 bits, dominant bit is expected (logic 0)
    canBusFilterConfig.FilterIdLow = 0x0000FFFF & filterId;          //For lower 16 bits, dominant bit is expected (logic 0)
    canBusFilterConfig.FilterMaskIdHigh = (0xFFFF0000 & filterMask) >> 16; //Upper 16 bits are don't care
    canBusFilterConfig.FilterMaskIdLow = 0x0000FFFF & filterMask;          //Lower 16 bits are don't care
    //canBusFilterConfig.FilterFIFOAssignment = CAN_FILTER_FIFO0;  //Sets which rx FIFO the filter settings apply to
    canBusFilterConfig.FilterActivation = filterEnableFlag;
    canBusFilterConfig.SlaveStartFilterBank = 1; //Bank for the defined filter. Arbitrary value.

    if (fifoAssignment == 0)
    {
        canBusFilterConfig.FilterFIFOAssignment = CAN_FILTER_FIFO0;
    }
    else
    {
        canBusFilterConfig.FilterFIFOAssignment = CAN_FILTER_FIFO1;
    }

    //Only fails if the CAN peripheral is not in the ready or listening state
    if (HAL_CAN_ConfigFilter(&gCanBusH, &canBusFilterConfig) != HAL_OK)
    {
        return(ERR_CAN_INIT_FAILED);
    }
    else
    {
        return(SZW_NO_ERROR);
    }
} //end BspCan_RxFilterConfig
When initializing, FIFO0 works perfectly but FIFO1 doesn't. If I just initialize FIFO1 for both types of messages, it doesn't generate the interrupt. What am I doing wrong here? How do I initialize FIFO1 to make it work and generate the interrupt? I also tried without using digital filters; still no luck.
Thanks in advance,
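One thing worth checking, though it isn't visible in the posted code: in the ST HAL, each receive FIFO's interrupt has to be activated separately, and FIFO1 raises its own callback on its own interrupt vector. A sketch of what that would look like (the CAN1_RX1_IRQn name is an assumption and is device-dependent):
/* FIFO1 uses a separate interrupt vector from FIFO0; enable it in the NVIC */
HAL_NVIC_SetPriority(CAN1_RX1_IRQn, 1, 0);
HAL_NVIC_EnableIRQ(CAN1_RX1_IRQn);
/* activate the FIFO1 pending-message notification (FIFO0's flag does not cover it) */
if (HAL_CAN_ActivateNotification(&gCanBusH, CAN_IT_RX_FIFO1_MSG_PENDING) != HAL_OK)
{
    /* handle error */
}

/* FIFO1 messages then arrive through their own callback, not the FIFO0 one */
void HAL_CAN_RxFifo1MsgPendingCallback(CAN_HandleTypeDef *hcan)
{
    CAN_RxHeaderTypeDef rxHeader;
    uint8_t rxData[8];
    HAL_CAN_GetRxMessage(hcan, CAN_RX_FIFO1, &rxHeader, rxData);
    /* handle the broadcast message */
}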

How can I establish UART communication between 2 STM32s and produce a PWM signal

Edit: I solved the UART communication problem, but I have a new problem: getting a PWM signal after receiving the transmitted data. I can blink an LED and drive a relay with the transmitted data, but I could not produce a PWM signal.
maps(120, 1, 1, 250, RxData[4]);
ADC_Left = Yx;
__HAL_TIM_SET_COMPARE(&htim2, TIM_CHANNEL_1, ADC_Left);
I used the __HAL_TIM_SET_COMPARE macro, but it doesn't work. I can observe ADC_Left's value in the debugger, but the PWM output does not change.
I am trying to set up UART communication between 2 STM32s. I know there are several related topics, but my question is focused on a different issue.
I am reading 2 ADC values on one STM32, which only transmits them; the other one only receives these 2 ADC values. To do this:
MX_USART1_UART_Init();
__HAL_UART_ENABLE_IT(&huart1, UART_IT_RXNE); // Interrupt Enable
__HAL_UART_ENABLE_IT(&huart1, UART_IT_TC);
char TxData1[10];
..............
TxData1[0] = 0xEA;
TxData1[1] = wData.Byte_1;
TxData1[2] = wData.Byte_2;
TxData1[3] = wData.Byte_3;
TxData1[4] = wData.Right_Adc_Val;
TxData1[5] = wData.Left_Adc_Val;
TxData1[6] = wData.Byte_6;
for (uint8_t i = 1; i < 7; i++)
{
    wData.Checksum = wData.Checksum + TxData1[i];
}
wData.Checksum_H = (wData.Checksum >> 8)&0xFF;
wData.Checksum_L = (wData.Checksum)&0xFF;
TxData1[7] = wData.Checksum_H;
TxData1[8] = wData.Checksum_L;
TxData1[9] = 0xAE;
HAL_UART_Transmit_IT(&huart1,(uint8_t*) &TxData1,10);
............
This block sends them; I can observe the values on the debug screen and via the TTL module's Tx/Rx pins. On the receiving side:
MX_USART1_UART_Init();
__HAL_UART_ENABLE_IT(&huart1, UART_IT_RXNE); // Interrupt Enable
__HAL_UART_ENABLE_IT(&huart1, UART_IT_TC);
char RxData[10];
while (1)
{
    HAL_UART_Receive_IT(&huart1, (uint8_t*) &RxData, 10);
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1)
    {
        HAL_UART_Receive_IT(&huart1, (uint8_t*) &RxData, 10);
    }
}
There is no problem up to here, but when I read index 0 of RxData, it gives EA. Of course, it should give EA. But when the ADC data changes, the whole ordering changes: RxData[0] gives meaningless data, and the ADC values jump all over the RxData array.
The data must always stay at the same indices. How can I receive these data stably? For example:
RxData[0] = EA
.
.
RxData[4] = the ADC value, and so on.
Edit: I tried other UART modes; DMA (in circular mode) and direct mode were used. I can't receive even 1 byte with DMA.
In your example code, you have an extra & that needs to be removed from both the transmit and receive HAL method calls. Example:
HAL_UART_Transmit_IT(&huart1, (uint8_t*) &TxData1, 10); // incorrect: extra &
HAL_UART_Transmit_IT(&huart1, (uint8_t*) TxData1, 10);  // correct
To avoid this type of error in the future, I recommend not using the cast; try something like the following:
uint8_t TxData1[10];
...
HAL_UART_Transmit_IT(&huart1, TxData1, sizeof(TxData1));
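The receive side needs the same treatment (a sketch, reusing the names from the question):
uint8_t RxData[10];
...
HAL_UART_Receive_IT(&huart1, RxData, sizeof(RxData));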

Problems connecting to the input pins of GMFBridge Sink Filter

I am experiencing a strange problem when trying to use the GMFBridge filter with the output of an Euresys UxH264 card.
I am trying to integrate this card into our solution, that relies on GMFBridge to handle the ability of continuous capture to multiple files, performing muxing and file-splitting without having to stop the capture graph.
This card captures video and audio from analog inputs. It provides a DirectShow filter exposing both a raw stream of the video input and a hardware-encoded H.264 stream. The audio stream is provided as an uncompressed stream only.
When I attempt to directly connect any of the output pins of the Euresys source filters to the input pins of the GMFBridge Sink, they get rejected, with the code VFW_E_NO_ALLOCATOR. (In the past I have successfully connected both H.264 and raw audio streams to the bridge).
Grasping at straws, I plugged in a pair of SampleGrabber filters between the Euresys card filters and the bridge sink filter, and, just like that, the connections between the sample grabbers and the sink were accepted.
However, I am not getting any packets on the other side of the bridge (the muxing graph). I inspected the running capture graph with GraphStudioNext, and somehow the sample grabbers appear detached from my graph, even though I got successful confirmations when I connected them to the source filter!
Here's the source code creating the graph.
void EuresysSourceBox::BuildGraph() {
    HRESULT hRes;
    CComPtr<IGraphBuilder> pGraph;
    COM_CALL(pGraph.CoCreateInstance(CLSID_FilterGraph));

#ifdef REGISTER_IN_ROT
    _rotEntry1 = FilterTools::RegisterGraphInROT(IntPtr(pGraph), "euresys graph");
#endif

    // [*Video Source*]
    String^ filterName = "Ux H.264 Visual Source";
    Guid category = _GUIDToGuid((GUID)AM_KSCATEGORY_CAPTURE);
    FilterMonikerList^ videoSourceList = FilterTools::GetFilterMonikersByName(category, filterName);

    CComPtr<IBaseFilter> pVideoSource;
    int monikerIndex = config->BoardIndex; // a filter instance will be retrieved for every installed board
    clr_scoped_ptr<CComPtr<IMoniker>>^ ppVideoSourceMoniker = videoSourceList[monikerIndex];
    COM_CALL((*ppVideoSourceMoniker->get())->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pVideoSource));
    COM_CALL(pGraph->AddFilter(pVideoSource, L"VideoSource"));
    // [Video Source]

    // [*Audio Source*]
    filterName = "Ux H.264 Audio Encoder";
    FilterMonikerList^ audioSourceList = FilterTools::GetFilterMonikersByName(category, filterName);
    CComPtr<IBaseFilter> pAudioSource;
    clr_scoped_ptr<CComPtr<IMoniker>>^ ppAudioSourceMoniker = audioSourceList[monikerIndex];
    COM_CALL((*ppAudioSourceMoniker->get())->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pAudioSource));
    COM_CALL(pGraph->AddFilter(pAudioSource, L"AudioSource"));

    CComPtr<IPin> pVideoCompressedOutPin(FilterTools::GetPin(pVideoSource, "Encoded"));
    CComPtr<IPin> pAudioOutPin(FilterTools::GetPin(pAudioSource, "Audio"));

    CComPtr<IBaseFilter> pSampleGrabber;
    COM_CALL(pSampleGrabber.CoCreateInstance(CLSID_SampleGrabber));
    COM_CALL(pGraph->AddFilter(pSampleGrabber, L"SampleGrabber"));
    CComPtr<IPin> pSampleGrabberInPin(FilterTools::GetPin(pSampleGrabber, "Input"));
    COM_CALL(pGraph->ConnectDirect(pVideoCompressedOutPin, pSampleGrabberInPin, NULL)); // DOES NOT FAIL!!

    CComPtr<IBaseFilter> pSampleGrabber2;
    COM_CALL(pSampleGrabber2.CoCreateInstance(CLSID_SampleGrabber));
    COM_CALL(pGraph->AddFilter(pSampleGrabber2, L"SampleGrabber2"));
    CComPtr<IPin> pSampleGrabber2InPin(FilterTools::GetPin(pSampleGrabber2, "Input"));
    COM_CALL(pGraph->ConnectDirect(pAudioOutPin, pSampleGrabber2InPin, NULL)); // DOES NOT FAIL!!

    // [Video Source]---
    //                 |-->[*Bridge Sink*]
    // [Audio Source]---
    CComPtr<IPin> pSampleGrabberOutPin(FilterTools::GetPin(pSampleGrabber, "Output"));
    CComPtr<IPin> pSampleGrabber2OutPin(FilterTools::GetPin(pSampleGrabber2, "Output"));
    CreateGraphBridge(
        IntPtr(pGraph),
        IntPtr(pSampleGrabberOutPin),
        IntPtr(pSampleGrabber2OutPin)
    );

    // Root graph to parent object
    _ppCaptureGraph.reset(new CComPtr<IGraphBuilder>(pGraph));
}
COM_CALL is my HRESULT-checking macro; it will raise a managed exception if the result is other than S_OK. So the connection between pins succeeded, but here is how the graph looks disjointed when it is running:
So, I have three questions:
1) What could VFW_E_NO_ALLOCATOR mean in this instance? (the source filter can be successfully connected to other components such as LAV Video decoder or ffdshow video decoder).
2) Is there a known workaround to circumvent the VFW_E_NO_ALLOCATOR problem?
3) Is it possible that a filter gets disconnected at runtime as it seems to be happening in my case?
I found a reference by Geraint Davies giving a reason as to why the GMFBridge sink filter may be rejecting the connection.
It looks as though the parser is insisting on using its own allocator -- this is common with parsers where the output samples are merely pointers into the input samples. However, the bridge cannot implement suspend mode without using its own allocator, so a copy is required.
With this information, I decided to create an ultra simple CTransformFilter filter that simply accepts the allocator offered by the bridge and copies to the output sample whatever comes in from the input sample. I am not 100% sure that what I did was right, but it is working now. I could successfully plug-in the Euresys card as part of my capture infrastructure.
For reference, if anyone experiences something similar, here is the code of the filter I created:
class SampleCopyGeneratorFilter : public CTransformFilter {
protected:
    HRESULT CheckInputType(const CMediaType* mtIn) override { return S_OK; }
    HRESULT GetMediaType(int iPosition, CMediaType* pMediaType) override;
    HRESULT CheckTransform(const CMediaType* mtIn, const CMediaType* mtOut) override { return S_OK; }
    HRESULT DecideBufferSize(IMemAllocator* pAlloc, ALLOCATOR_PROPERTIES* pProp) override;
    HRESULT Transform(IMediaSample* pSource, IMediaSample* pDest) override;

public:
    SampleCopyGeneratorFilter();
};
//--------------------------------------------------------------------------------------------------------------------
SampleCopyGeneratorFilter::SampleCopyGeneratorFilter()
    : CTransformFilter(NAME("SampleCopyGeneratorFilter"), NULL, GUID_NULL)
{
}

HRESULT SampleCopyGeneratorFilter::GetMediaType(int iPosition, CMediaType* pMediaType) {
    HRESULT hRes;
    ASSERT(m_pInput->IsConnected());
    if (iPosition < 0)
        return E_INVALIDARG;

    CComPtr<IPin> connectedTo;
    COM_CALL(m_pInput->ConnectedTo(&connectedTo));

    CComPtr<IEnumMediaTypes> pMTEnumerator;
    COM_CALL(connectedTo->EnumMediaTypes(&pMTEnumerator));

    AM_MEDIA_TYPE* pIteratedMediaType;
    for (int i = 0; i <= iPosition; i++) {
        if (pMTEnumerator->Next(1, &pIteratedMediaType, NULL) != S_OK)
            return VFW_S_NO_MORE_ITEMS;
        if (i == iPosition)
            *pMediaType = *pIteratedMediaType;
        DeleteMediaType(pIteratedMediaType);
    }
    return S_OK;
}

HRESULT SampleCopyGeneratorFilter::DecideBufferSize(IMemAllocator* pAlloc, ALLOCATOR_PROPERTIES* pProp) {
    HRESULT hRes;
    AM_MEDIA_TYPE mt;
    COM_CALL(m_pInput->ConnectionMediaType(&mt));
    try {
        BITMAPINFOHEADER* pBMI = HEADER(mt.pbFormat);
        pProp->cbBuffer = DIBSIZE(*pBMI); // format is compressed, uncompressed size should be enough
        if (!pProp->cbAlign)
            pProp->cbAlign = 1;
        pProp->cbPrefix = 0;
        pProp->cBuffers = 4;

        ALLOCATOR_PROPERTIES actualProperties;
        COM_CALL(pAlloc->SetProperties(pProp, &actualProperties));
        if (pProp->cbBuffer > actualProperties.cbBuffer)
            return E_FAIL;
        return S_OK;
    } finally {
        FreeMediaType(mt);
    }
}

HRESULT SampleCopyGeneratorFilter::Transform(IMediaSample* pSource, IMediaSample* pDest) {
    HRESULT hRes;
    BYTE* pBufferIn;
    BYTE* pBufferOut;
    long destSize = pDest->GetSize();
    long dataLen = pSource->GetActualDataLength();
    if (dataLen > destSize)
        return VFW_E_BUFFER_OVERFLOW;

    COM_CALL(pSource->GetPointer(&pBufferIn));
    COM_CALL(pDest->GetPointer(&pBufferOut));
    memcpy(pBufferOut, pBufferIn, dataLen);
    pDest->SetActualDataLength(dataLen);
    pDest->SetSyncPoint(pSource->IsSyncPoint() == S_OK);
    return S_OK;
}
Here is how I inserted the filter into the capture graph:
CComPtr<IPin> pAACEncoderOutPin(FilterTools::GetPin(pAACEncoder, "XForm Out"));
CComPtr<IPin> pVideoSourceCompressedOutPin(FilterTools::GetPin(pVideoSource, "Encoded"));

CComPtr<IBaseFilter> pSampleCopier;
pSampleCopier = new SampleCopyGeneratorFilter();
COM_CALL(pGraph->AddFilter(pSampleCopier, L"SampleCopier"));

CComPtr<IPin> pSampleCopierInPin(FilterTools::GetPin(pSampleCopier, "XForm In"));
COM_CALL(pGraph->ConnectDirect(pVideoSourceCompressedOutPin, pSampleCopierInPin, NULL));

CComPtr<IPin> pSampleCopierOutPin(FilterTools::GetPin(pSampleCopier, "XForm Out"));
CreateGraphBridge(
    IntPtr(pGraph),
    IntPtr(pSampleCopierOutPin),
    IntPtr(pAACEncoderOutPin)
);
Now, I still have no idea why inserting the sample grabber instead did not work and resulted in detached graphs. I corroborated this quirk by examining the graphs with Graphedit Plus too. If anyone can offer me an explanation, I would be very grateful indeed.

How to change from one MusicSequence to another without a time delay

I'm playing a MIDI sequence via MusicPlayer, which I loaded from a MIDI file, and I want to change the sequence to another one during playback.
When I try this:
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
it stops the playback. So I start it back again and set the time with
MusicPlayerSetTime(_player, currentTime);
so it plays again where the previous sequence stopped, but there is a little delay.
I've tried to add the time interval to currentTime, which I got by obtaining the time before stopping and after starting again. But there is still a delay.
I was wondering if there is an alternative to stopping -> changing sequence -> starting again.
You definitely need to manage the AUSamplers if you are adding and removing tracks or switching sequences. It is probably cleaner to dispose of the AUSampler and create a new one for each new track, but it is also possible to 'recycle' AUSamplers; that means you will need to keep track of them.
Managing AUSamplers means that when you are no longer using an instance of one (for example if you delete or replace a MusicTrack), you need to disconnect it from the AUMixer instance, remove it from the AUGraph instance, and then update the AUGraph.
There are lots of ways to handle all this. For convenience in keeping track of AUSampler instances' bus numbers, loaded sound fonts and some other stuff, I use a subclass of NSObject named SamplerAudioUnit to contain all the needed properties and methods. Same for MusicTracks - I have a Track class - but this may not be needed in your project.
The gist though is that AUSamplers need to be managed for performance and memory. If an instance is no longer being used it should be removed and the AUMixer bus input freed up.
BTW - I checked the docs and there is apparently no technical limit to the number of mixer busses - but the number does need to be specified.
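For example, the input bus count on a mixer is specified like this (a sketch; mixerUnit is assumed to be the mixer AudioUnit instance obtained from the graph):
UInt32 busCount = 16; // whatever the app needs; set before initializing the graph
OSStatus result = AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
    kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));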
// this is not cut and paste code - just an example of managing the AUSampler instance
- (OSStatus)deleteTrack:(Track*)trackObj
{
    OSStatus result = noErr;

    // turn off MP if playing
    BOOL MPstate = [self isPlaying];
    if (MPstate) {
        MusicPlayerStop(player);
    }

    // disconnect node from mixer + update list of mixer buses
    SamplerAudioUnit *samplerObj = trackObj.sampler;
    UInt32 busNumber = samplerObj.busNumber;

    result = AUGraphDisconnectNodeInput(graph, mixerNode, busNumber);
    if (result) { [self printErrorMessage:@"AUGraphDisconnectNodeInput" withStatus:result]; }

    [self clearMixerBusState:busNumber]; // routine that keeps track of available busses

    result = MusicSequenceDisposeTrack(sequence, trackObj.track);
    if (result) { [self printErrorMessage:@"MusicSequenceDisposeTrack" withStatus:result]; }

    // remove AUSampler node
    result = AUGraphRemoveNode(graph, samplerObj.samplerNode);
    if (result) { [self printErrorMessage:@"AUGraphRemoveNode" withStatus:result]; }

    result = AUGraphUpdate(graph, NULL);
    if (result) { [self printErrorMessage:@"AUGraphUpdate" withStatus:result]; }

    samplerObj = nil;
    trackObj = nil;

    if (MPstate) {
        MusicPlayerStart(player);
    }

    // CAShow(graph);
    // CAShow(sequence);
    return result;
}
Because
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
will still cause the player to stop, it is still possible to hear a little break.
So instead of updating the MusicSequence, I went ahead and changed the content of the tracks, which won't cause any breaks:
MusicTrack currentTrack;
MusicTrack currentTrack2;
MusicSequenceGetIndTrack(musicSequence, 0, &currentTrack);
MusicSequenceGetIndTrack(musicSequence, 1, &currentTrack2);
MusicTrackClear(currentTrack, 0, _trackLen);
MusicTrackClear(currentTrack2, 0, _trackLen);

MusicSequence tmpSequence;
switch (number) {
    case 0:
        tmpSequence = musicSequence1;
        break;
    case 1:
        tmpSequence = musicSequence2;
        break;
    case 2:
        tmpSequence = musicSequence3;
        break;
    case 3:
        tmpSequence = musicSequence4;
        break;
    default:
        tmpSequence = musicSequence1;
        break;
}

MusicTrack tmpTrack;
MusicTrack tmpTrack2;
MusicSequenceGetIndTrack(tmpSequence, 0, &tmpTrack);
MusicSequenceGetIndTrack(tmpSequence, 1, &tmpTrack2);

MusicTimeStamp trackLen = 0;
UInt32 trackLenLenLen = sizeof(trackLen);
MusicTrackGetProperty(tmpTrack, kSequenceTrackProperty_TrackLength, &trackLen, &trackLenLenLen);
_trackLen = trackLen;

MusicTrackCopyInsert(tmpTrack, 0, _trackLen, currentTrack, 0);
MusicTrackCopyInsert(tmpTrack2, 0, _trackLen, currentTrack2, 0);
No disconnection of nodes, no updating the graph, no stopping the player.

How does kAudioUnitSubType_NBandEQ work? Or equalizing using DSP formulas with Novocaine?

I'm trying to make a 10-band equalizer, and the kAudioUnitSubType_NBandEQ audio unit seems like the way to go, but Apple's documentation doesn't cover how to set/configure it.
I've already connected the nodes but it errors out when I try to connect the EQNode with the iONode (output): https://gist.github.com/2295463
How do I turn the effect into a working 10-band equalizer?
Update:
A working DSP formula with Novocaine is also a solution - any ideas? Those DSP formulas are quite complicated.
Update2:
I prefer a working DSP formula with Novocaine since that'd be much cleaner/smaller than programming Audio Nodes.
Update3:
"The Multitype EQ unit(of subtype kAudioUnitSubType_NBandEQ) provides an equalizer that can be configured as any one of the types described in “Mutitype EQ Unit Filter Types” (page 68)."
Source: http://developer.apple.com/library/ios/DOCUMENTATION/AudioUnit/Reference/AudioUnit_Framework/AudioUnit_Framework.pdf
But still no example.
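For what it's worth, configuring the unit appears to come down to setting the band count via a property and then addressing the per-band parameters as base constant + band index. A minimal, untested sketch (eqUnit is assumed to be an instantiated NBandEQ unit; the constants are from AudioUnitProperties.h and AudioUnitParameters.h):
// the number of bands must be set before the unit is initialized
UInt32 numBands = 10;
AudioUnitSetProperty(eqUnit, kAUNBandEQProperty_NumberOfBands,
    kAudioUnitScope_Global, 0, &numBands, sizeof(numBands));

Float32 freqs[10] = { 31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000 };
for (UInt32 i = 0; i < numBands; i++) {
    AudioUnitSetParameter(eqUnit, kAUNBandEQParam_FilterType + i,
        kAudioUnitScope_Global, 0, kAUNBandEQFilterType_Parametric, 0);
    AudioUnitSetParameter(eqUnit, kAUNBandEQParam_Frequency + i,
        kAudioUnitScope_Global, 0, freqs[i], 0);
    AudioUnitSetParameter(eqUnit, kAUNBandEQParam_Gain + i,
        kAudioUnitScope_Global, 0, 0.0f, 0); // gain in dB
    AudioUnitSetParameter(eqUnit, kAUNBandEQParam_BypassBand + i,
        kAudioUnitScope_Global, 0, 0, 0); // 0 = band active
}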
IMPORTANT Update (17/05): I recommend everyone use the DSP class I released on GitHub: https://github.com/bartolsthoorn/NVDSP. It'll probably save you quite some work. It will make developing an n-band equalizer or any kind of audio filter a breeze.
I'm the creator of Novocaine, and I've used it to make a 200-some-odd band EQ using vDSP.
I'm considering switching over to the NBandEQ audio unit, but I have a working solution with vDSP_deq22.
vDSP_deq22 filters data one sample at a time with a 2nd order IIR filter. You can find 2nd order Butterworth coefficients on musicdsp.org, or more generally by Googling. Matlab will also calculate them for you, if you have access to a copy. I used musicdsp.org. You'd create 10 vDSP_deq22 filters, run your audio through each one, multiply that band by a specified gain, and then add up the output of all the filters in the filter bank into your output audio.
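To make that concrete, here is a sketch of one band done with vDSP_deq22 (my summary, not Novocaine's actual code; you supply the Butterworth coefficients, e.g. from the musicdsp.org formulas):
#include <Accelerate/Accelerate.h>

// coeffs = { b0, b1, b2, a1, a2 }, normalized so that a0 == 1.
// in and out each hold n + 2 floats: elements 0 and 1 carry the filter
// state (the previous two samples) over from the last buffer.
static void filter_band(const float *in, float *out, const float *coeffs, vDSP_Length n)
{
    // out[k] = b0*in[k] + b1*in[k-1] + b2*in[k-2] - a1*out[k-1] - a2*out[k-2]
    vDSP_deq22(in, 1, coeffs, out, 1, n);
}

// accumulate one filtered band into the mix, scaled by its gain:
// acc[k] += gain * band[k]  (pass band = out + 2 to skip the state samples)
static void mix_band(const float *band, float gain, float *acc, vDSP_Length n)
{
    vDSP_vsma(band, 1, &gain, acc, 1, acc, 1, n);
}
Run the input buffer through filter_band once per band, mix_band the ten results, and that sum is your equalized output.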
A 10-band equalizer can be configured as below:
converterNode -> eqNode -> converterNode -> ioNode
Please refer to the sample code below. Here I have used the iPodEQ unit; replace the iPodEQ unit's format specification with the 10-band equalizer's.
AUNode outputNode;
AUNode iPodTimeNode;
AUNode converterNode;
AUNode converterNode2;
AudioUnit converterAU;
AudioUnit converterAU2;

printf("create client format ASBD\n");
// client format audio going into the converter
mClientFormat.SetCanonical(1, false);
mClientFormat.mSampleRate = kGraphSampleRate;
mClientFormat.Print();

printf("create output format ASBD\n");
CAStreamBasicDescription localOutput;
localOutput.SetAUCanonical(2, false);
localOutput.mSampleRate = kGraphSampleRate;

// output format
mOutputFormat.SetCanonical(1, false);
mOutputFormat.mSampleRate = kGraphSampleRate;
mOutputFormat.Print();

OSStatus result = noErr;
printf("-----------\n");
printf("new AUGraph\n");

// create a new AUGraph
result = NewAUGraph(&mGraph);
if (result) {
    printf("NewAUGraph result %ld %08X %4.4s\n", result, (unsigned int)result, (char*)&result);
    return;
}

// create three CAComponentDescriptions for the AUs we want in the graph
// output unit
CAComponentDescription output_desc(kAudioUnitType_Output,
                                   kAudioUnitSubType_GenericOutput,
                                   kAudioUnitManufacturer_Apple);
// iPodTime unit
CAComponentDescription iPodTime_desc(kAudioUnitType_FormatConverter,
                                     kAudioUnitSubType_AUiPodTimeOther,
                                     kAudioUnitManufacturer_Apple);
// AU Converter
CAComponentDescription converter_desc(kAudioUnitType_FormatConverter,
                                      kAudioUnitSubType_AUConverter,
                                      kAudioUnitManufacturer_Apple);

printf("add nodes\n");
// create a node in the graph that is an AudioUnit, using the supplied
// AudioComponentDescription to find and open that unit
result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
result = AUGraphAddNode(mGraph, &iPodTime_desc, &iPodTimeNode);
result = AUGraphAddNode(mGraph, &converter_desc, &converterNode);
result = AUGraphAddNode(mGraph, &converter_desc, &converterNode2);

// connect a node's output to a node's input
// converter2 -> iPodTime -> converter -> output
result = AUGraphConnectNodeInput(mGraph, converterNode2, 0, iPodTimeNode, 0);
result = AUGraphConnectNodeInput(mGraph, iPodTimeNode, 0, converterNode, 0);
result = AUGraphConnectNodeInput(mGraph, converterNode, 0, outputNode, 0);

// open the graph -- AudioUnits are open but not initialized
// (no resource allocation occurs here)
result = AUGraphOpen(mGraph);

// grab the audio unit instances from the nodes
result = AUGraphNodeInfo(mGraph, converterNode, NULL, &converterAU);
result = AUGraphNodeInfo(mGraph, converterNode2, NULL, &converterAU2);
result = AUGraphNodeInfo(mGraph, iPodTimeNode, NULL, &mIPodTime);
result = AUGraphNodeInfo(mGraph, outputNode, NULL, &mOutputUnit);

// get the EQ unit's format
CAStreamBasicDescription eqDesc;
UInt32 size = sizeof(eqDesc); // must hold the buffer size before the call
AudioUnitGetProperty(mIPodTime, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &eqDesc, &size);
eqDesc.Print();

// set up the render callback struct
AURenderCallbackStruct rcbs;
rcbs.inputProc = &renderInput;
rcbs.inputProcRefCon = &mUserData;

printf("set AUGraphSetNodeInputCallback\n");
// set a callback for the specified node's specified input bus (bus 0)
result = AUGraphSetNodeInputCallback(mGraph, converterNode2, 0, &rcbs);

// set the formats on the first converter
result = AudioUnitSetProperty(converterAU2, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input, 0, &mClientFormat, sizeof(mClientFormat));
result = AudioUnitSetProperty(converterAU2, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output, 0, &eqDesc, sizeof(eqDesc));

printf("set converter input bus %d client kAudioUnitProperty_StreamFormat\n", 0);
// set the input stream format, this is the format of the audio
// for the converter input bus (bus 0)
result = AudioUnitSetProperty(converterAU, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input, 0, &localOutput, sizeof(localOutput));

// in an AUGraph, each node's output stream format (including sample rate)
// needs to be set explicitly; this stream format is propagated to its
// destination's input stream format
printf("set converter output kAudioUnitProperty_StreamFormat\n");
// set the output stream format of the converter
result = AudioUnitSetProperty(converterAU, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output, 0, &mOutputFormat, sizeof(mOutputFormat));
result = AudioUnitSetProperty(mOutputUnit, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output, 0, &mOutputFormat, sizeof(mOutputFormat));

// set the output stream format of the iPodTime unit
result = AudioUnitSetProperty(mIPodTime, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output, 0, &localOutput, sizeof(localOutput));

printf("AUGraphInitialize\n");
// add a render notification, this is a callback that the graph will call every time the graph renders
// the callback will be called once before the graph's render operation, and once after the render operation is complete
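For reference, such a render-notify callback has this general shape (a sketch, assuming the mGraph and mUserData names above):
static OSStatus renderNotify(void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PreRender) {
        // about to render this buffer
    } else if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // this buffer has just been rendered
    }
    return noErr;
}

// registered once, after the graph is built:
result = AUGraphAddRenderNotify(mGraph, renderNotify, &mUserData);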