AudioUnitRender fails with GenericOutput (-10877 / Invalid Element) - Objective-C

I am basically trying to obtain the samples produced by an AUGraph, using a GenericOutput node and a call to AudioUnitRender. As a starting point for my program I used Apple's MixerHost example and changed the output node as follows:
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_GenericOutput;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
Later when I want to obtain my samples, I call
AudioUnitRenderActionFlags ioActionFlags = kAudioOfflineUnitRenderAction_Render;
AudioTimeStamp inTimeStamp = {0};
inTimeStamp.mHostTime = mach_absolute_time();
inTimeStamp.mFlags = kAudioTimeStampSampleHostTimeValid;
result = AudioUnitRender (
ioUnit,
&ioActionFlags,
&inTimeStamp,
1,
1024,
ioData
);
which yields a
"-10877 / Invalid Element"
error. My assumption is that the error comes from not setting the inTimeStamp.mSampleTime field correctly. To be honest, I have not found a way to determine the sample time other than AudioQueueDeviceGetCurrentTime, which I cannot use since I do not use an AudioQueue. However, changing the ioActionFlag to kAudioTimeStampHostTimeValid does not change the error behaviour.

The error pertaining to the element (a.k.a. 'bus') refers to the 4th argument (1) in your AudioUnitRender call. The Generic Output unit has only one element/bus, 0, which has input, output and global scopes. If you pass 0 instead of 1 for the element number, that error should disappear.
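For reference, here is a rough sketch of the render call with the element fixed. It assumes ioUnit is the Generic Output unit and ioData is an AudioBufferList you have already allocated for 1024 frames in the unit's output format; driving mSampleTime forward on each pull is common offline-render practice rather than something taken from the question.
AudioUnitRenderActionFlags ioActionFlags = kAudioOfflineUnitRenderAction_Render;
AudioTimeStamp inTimeStamp = {0};
inTimeStamp.mSampleTime = 0;                 // advance this by 1024 on each successive render call
inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
OSStatus result = AudioUnitRender(ioUnit,
                                  &ioActionFlags,
                                  &inTimeStamp,
                                  0,           // element/bus 0 -- the Generic Output unit's only bus
                                  1024,        // number of frames to render
                                  ioData);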

Related

pViewports is always UNUSED

I have enabled all validation layers and am error- and warning-free, so I should be able to render a colored triangle. However, currently all I am seeing is a cleared screen (black) or an empty screen (grey), depending on the machine I am running on.
Upon further inspection, it seems that in my call to vkCreateGraphicsPipelines the pViewports and pScissors are always set to UNUSED even though I passed in values. I do not use dynamic state, and both counts are one.
Am I missing a flag, or is my binding flawed?
Code snippet:
Thread 0, Frame 0:
vkCreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines) returns VkResult VK_SUCCESS (0):
device: VkDevice = 00000000056FD350
pipelineCache: VkPipelineCache = 0000000000000000
createInfoCount: uint32_t = 1
pCreateInfos: const VkGraphicsPipelineCreateInfo* = 000000000566BB38
pCreateInfos[0]: const VkGraphicsPipelineCreateInfo = 000000000566BB38:
sType: VkStructureType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO (28)
pNext: const void* = NULL
...
pViewportState: const VkPipelineViewportStateCreateInfo* = 000000000725A920:
sType: VkStructureType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO (22)
pNext: const void* = NULL
flags: VkPipelineViewportStateCreateFlags = 0
viewportCount: uint32_t = 1
pViewports: const VkViewport* = UNUSED
scissorCount: uint32_t = 1
pScissors: const VkRect2D* = UNUSED
Debugging prints:
Pipeline_Info.pViewportState.pScissors before assign = 0
Pipeline_Info.pViewportState.pViewports before assign = 0
Pipeline_Info.pViewportState.pScissors after assign = 2350856
Pipeline_Info.pViewportState.pViewports after assign = 2348296
vkCreateGraphicsPipelines call result = 0
Full API Dump (see line 2107):
https://pastebin.com/MmXUBnk0
Full code (for those who are curious - see line 791):
https://github.com/AdaDoom3/AdaDoom3/blob/master/Engine/neo-engine-renderer.adb
Just wanted to confirm: the UNUSED is definitely a VK_LAYER_LUNARG_api_dump layer bug.
There is essentially an if(false) ... else print UNUSED in its code.
UPDATE: Issue fix PR.
So that is a dead end. But you found a bug in the Vulkan ecosystem, which is a good thing too...
As for your rendering problem, in several places you have a 1x1 render target (e.g. in vkCmdBeginRenderPass), so there would not be much to see. I think Windows usually does this (initially creating a 0x0 window), and you then have to react to a resize event.
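If it helps, a minimal sketch of the usual pattern is below: query the surface capabilities when (re)building the swapchain and derive the drawable size from currentExtent rather than from a cached window size. The handles and the function name are placeholders, not something from the linked repository.
#include <stdbool.h>
#include <vulkan/vulkan.h>

/* Sketch only: gpu and surface are assumed to be valid handles. */
static bool query_drawable_extent(VkPhysicalDevice gpu, VkSurfaceKHR surface,
                                  VkExtent2D *out_extent)
{
    VkSurfaceCapabilitiesKHR caps;
    if (vkGetPhysicalDeviceSurfaceCapabilitiesKHR(gpu, surface, &caps) != VK_SUCCESS)
        return false;

    /* On Windows, currentExtent is normally the client-area size; the special
     * value 0xFFFFFFFF means the application picks the size itself. */
    *out_extent = caps.currentExtent;
    if (out_extent->width == 0 || out_extent->height == 0)
        return false;   /* minimized or not yet sized: skip rendering this frame */

    return true;
}
The swapchain imageExtent, the framebuffers, the viewport and the scissor would then all be derived from that extent, and the path re-run whenever the OS reports a resize or vkAcquireNextImageKHR / vkQueuePresentKHR returns VK_ERROR_OUT_OF_DATE_KHR.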

Vulkan error vkCreateDevice : VK_ERROR_EXTENSION_NOT_PRESENT

I am starting with Vulkan, and I am following Niko Kauppi's tutorial on YouTube.
I have an error when creating a device with vkCreateDevice: it returns VK_ERROR_EXTENSION_NOT_PRESENT.
Here is part of my code:
The call to vkCreateDevice
_gpu_count = 0;
vkEnumeratePhysicalDevices(instance, &_gpu_count, nullptr);
std::vector<VkPhysicalDevice> gpu_list(_gpu_count);
vkEnumeratePhysicalDevices(instance, &_gpu_count, gpu_list.data());
_gpu = gpu_list[0];
vkGetPhysicalDeviceProperties(_gpu, &_gpu_properties);
VkDeviceCreateInfo device_create_info = _CreateDeviceInfo();
vulkanCheckError(vkCreateDevice(_gpu, &device_create_info, nullptr, &_device));
_gpu_count = 1, and _gpu_properties seems to recognize my NVIDIA GPU well (which is not up to date).
device_create_info
VkDeviceCreateInfo _createDeviceInfo;
_createDeviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
_createDeviceInfo.queueCreateInfoCount = 1;
VkDeviceQueueCreateInfo _queueInfo = _CreateDeviceQueueInfo();
_createDeviceInfo.pQueueCreateInfos = &_queueInfo;
I don't understand the meaning of the error: "A requested extension is not supported", according to the Khronos docs.
Thanks for your help.
VK_ERROR_EXTENSION_NOT_PRESENT is returned when one of the extensions in the [enabledExtensionCount, ppEnabledExtensionNames] vector you provided is not supported by the driver (as queried by vkEnumerateDeviceExtensionProperties()).
Extensions can also have dependencies, so VK_ERROR_EXTENSION_NOT_PRESENT is also returned when an extension in the list depends on another extension that is missing from it.
If you want no device extensions, make sure enabledExtensionCount of VkDeviceCreateInfo is 0 (and not, e.g., some uninitialized value).
I assume your second snippet (device_create_info) is the whole body of _CreateDeviceInfo(), which would confirm the "uninitialized value" suspicion.
Usually, though, you would want a swapchain extension there so you can render to the screen directly.
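For illustration, a minimal zero-filled version of what _CreateDeviceInfo() could return if all you want is the swapchain extension; the helper name and the single-extension choice are mine, not from the tutorial, and queue_info is assumed to be a filled-in VkDeviceQueueCreateInfo owned by the caller.
#include <vulkan/vulkan.h>

static VkDeviceCreateInfo make_device_create_info(const VkDeviceQueueCreateInfo *queue_info)
{
    static const char *device_extensions[] = {
        VK_KHR_SWAPCHAIN_EXTENSION_NAME            /* "VK_KHR_swapchain" */
    };

    VkDeviceCreateInfo info = {0};                 /* zero-init: no garbage counts or pointers */
    info.sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    info.queueCreateInfoCount    = 1;
    info.pQueueCreateInfos       = queue_info;     /* caller-owned, so nothing dangles */
    info.enabledExtensionCount   = 1;
    info.ppEnabledExtensionNames = device_extensions;
    /* enabledLayerCount / ppEnabledLayerNames stay 0 / NULL from the zero-init. */
    return info;
}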
First of all, make sure your VkDeviceCreateInfo is zero-filled, otherwise it may carry garbage into your vkCreateDevice() call.
Add the following line just after declaring your VkDeviceCreateInfo:
memset ( &_createDeviceInfo, 0, sizeof(VkDeviceCreateInfo) );
Some extensions are absolutely necessary, such as the swapchain one.
To retrieve the available extensions, do this:
// Available extensions and layers names
const char* const* _ppExtensionNames = NULL;
// get extension names
uint32_t _extensionCount = 0;
vkEnumerateDeviceExtensionProperties( _gpu, NULL, &_extensionCount, NULL);
std::vector<const char *> extNames;
std::vector<VkExtensionProperties> extProps(_extensionCount);
vkEnumerateDeviceExtensionProperties(_gpu, NULL, &_extensionCount, extProps.data());
for (uint32_t i = 0; i < _extensionCount; i++) {
extNames.push_back(extProps[i].extensionName);
}
_ppExtensionNames = extNames.data();
Once you have all the extension names in _ppExtensionNames, pass them to your VkDeviceCreateInfo struct:
VkDeviceCreateInfo device_create_info ...
[...]
device_create_info.enabledExtensionCount = _extensionCount;
device_create_info.ppEnabledExtensionNames = _ppExtensionNames;
[...]
vulkanCheckError(vkCreateDevice(_gpu, &device_create_info, nullptr, &_device));
I hope it helps.
Please double-check the above code, as I'm writing it from memory.

Objective-C variable value not being preserved

I'm doing some audio programming for a client and I've come across an issue which I just don't understand.
I have a render callback which is called repeatedly by CoreAudio. Inside this callback I have the following:
// Get the audio sample data
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
Float32 data;
// Loop over the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
// Convert from SInt16 to Float32 just to prove it's possible
data = (Float32) outA[frame] / (Float32) 32768;
// Convert back to SInt16 to show that everything works as expected
outA[frame] = (SInt16) round(data * 32768);
}
This works as expected, which shows there aren't any unexpected rounding errors.
The next thing I want to do is add a small delay. I add a global variable to the class:
i.e. below the @implementation line
Float32 last = 0;
Then I use this variable to get a one frame delay:
// Get the audio sample data
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
Float32 data;
Float32 next;
// Loop over the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
// Convert from SInt16 to Float32 just to prove it's possible
data = (Float32) outA[frame] / (Float32) 32768;
next = last;
last = data;
// Convert back to SInt16 to show that everything works as expected
outA[frame] = (SInt16) round(next * 32768);
}
This time round there's a strange audio distortion on the signal.
I just can't see what I'm doing wrong! Any advice would be greatly appreciated.
It seems that what you've done is introduce an unintentional phaser effect on your audio.
This is because you're only delaying one channel of your audio, so the result is that the left channel is delayed one frame behind the right channel. This would result in some odd frequency cancellations / amplifications that would fit your description of "a strange audio distortion".
Try applying the effect to both channels:
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
AudioSampleType *outB = (AudioSampleType *)ioData->mBuffers[1].mData;
// apply the same effect to outB as you did to outA
This assumes that you are working with stereo audio (i.e. ioData->mNumberBuffers == 2).
As a matter of style, it's (IMO) a bad idea to use a global like your last variable in a render callback. Use the inRefCon to pass in proper context (either as a single variable or as a struct if necessary). This likely isn't related to the problem you're having, though.
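Putting both suggestions together, here is a hedged sketch of what the callback could look like with per-channel state carried in via inRefCon. It assumes two non-interleaved SInt16 buffers (stereo), and the struct and field names are only illustrative.
#include <AudioUnit/AudioUnit.h>
#include <math.h>

typedef struct {
    Float32 lastL;
    Float32 lastR;
} DelayState;

static OSStatus renderCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    DelayState      *state = (DelayState *)inRefCon;
    AudioSampleType *outL  = (AudioSampleType *)ioData->mBuffers[0].mData;
    AudioSampleType *outR  = (AudioSampleType *)ioData->mBuffers[1].mData;

    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        Float32 dataL = (Float32)outL[frame] / 32768.0f;
        Float32 dataR = (Float32)outR[frame] / 32768.0f;

        // One-frame delay applied identically to both channels.
        outL[frame] = (SInt16)lroundf(state->lastL * 32768.0f);
        outR[frame] = (SInt16)lroundf(state->lastR * 32768.0f);

        state->lastL = dataL;
        state->lastR = dataR;
    }
    return noErr;
}
A DelayState instance would then be handed to the unit via the inputProcRefCon field of the AURenderCallbackStruct when the render callback is registered.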

Using a Filter Audio Unit Effect in iOS5

I'm trying to use a remote IO connection to route the audio input through the built-in filter effect (iOS 5 only) and then back out to the hardware. I can make it route straight from the input to the output, but I can't get the filter to work. I'm not sure whether it's the filter audio unit or the routing that I've got wrong.
This bit is just my attempt at setting up the filter and changing the routing so that the data is processed by it.
Any help is appreciated.
// ******* BEGIN FILTER ********
NSLog(#"Begin filter");
// Creates Audio Component Description - Output Filter
AudioComponentDescription filterCompDesc;
filterCompDesc .componentType = kAudioUnitType_Effect;
filterCompDesc.componentSubType = kAudioUnitSubType_LowPassFilter;
filterCompDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
filterCompDesc.componentFlags = 1;
filterCompDesc.componentFlagsMask = 1;
// Create Filter Unit
AudioUnit lpFilterUnit;
AudioComponent filterComponent = AudioComponentFindNext(NULL, &filterCompDesc);
setupErr = AudioComponentInstanceNew(filterComponent, &lpFilterUnit);
NSAssert(setupErr == noErr, #"No instance of filter");
AudioUnitElement bus2 = 2;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitSubType_LowPassFilter, kAudioUnitScope_Output, bus2, &oneFlag, sizeof(oneFlag));
AudioUnitElement bus3 = 3;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitSubType_LowPassFilter, kAudioUnitScope_Input, bus3, &oneFlag, sizeof(oneFlag));
// ******** END FILTER ******** //
AudioUnitConnection hardInToLP;
hardInToLP.sourceAudioUnit = remoteIOunit;
hardInToLP.sourceOutputNumber = 1;
hardInToLP.destInputNumber = 3;
setupErr = AudioUnitSetProperty (
remoteIOunit, // connection destination
kAudioUnitProperty_MakeConnection, // property key
kAudioUnitScope_Input, // destination scope
bus3, // destination element
&hardInToLP, // connection definition
sizeof (hardInToLP)
);
AudioUnitConnection LPToHardOut;
LPToHardOut.sourceAudioUnit = lpFilterUnit;
LPToHardOut.sourceOutputNumber = 1;
LPToHardOut.destInputNumber = 3;
setupErr = AudioUnitSetProperty (
remoteIOunit, // connection destination
kAudioUnitProperty_MakeConnection, // property key
kAudioUnitScope_Input, // destination scope
bus3, // destination element
&hardInToLP, // connection definition
sizeof (hardInToLP)
);
/*
// Sets up the Audio Units Connection - new instance called connection
AudioUnitConnection connection;
// Connect Audio Input's out to Audio Out's in
connection.sourceAudioUnit = remoteIOunit;
connection.sourceOutputNumber = bus1;
connection.destInputNumber = bus0;
setupErr = AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, bus0, &connection, sizeof(connection));
*/
NSAssert(setupErr == noErr, #"No RIO connection");
A couple of things going on here:
You're going to help yourself a lot if you do an assert (or some sort of check-error-and-log-it) after every call that can return an OSStatus. That way you'll figure out how far you're getting. You probably also want to log the actual OSStatus value when it's != noErr, and then look it up (start in the "Audio Unit Component Services Reference" in the Xcode documentation viewer).
After you create the filter AudioUnit, I don't get what you're doing with the AudioUnitSetProperty() calls. The second parameter should be the name of a property (something that starts with kAudioUnitProperty...), not a subtype constant. That's almost certainly returning an error right there.
remoteIO only has two buses, and they have special meanings: bus 1 is input from the mic, bus 0 is output to the hardware. Trying to connect to remote IO input scope bus 3 is probably going to be another error.
I suggest you roll back to when you had audio pass-through working. That means you had just remoteIO and a connection from output scope / bus 1 to input scope / bus 0.
Then create the filter unit. Change your connections so you connect:
remoteIO output scope bus 1 to filter input scope bus 0
filter output scope bus 0 to remoteIO input scope bus 0
The other thing that's going to be a problem is that all these iOS 5 filters seem to want floating-point LPCM formats, which is not the canonical format your other units will default to. You may have to get the stream format from the filter unit (input and output scope are probably the same) and then set that as the format that remoteIO output scope / bus 1 produces and remoteIO input scope / bus 0 accepts; a rough sketch of both the connections and this format handshake follows below. Another option would be to introduce AUConverter units before and after the filter unit.
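To make that concrete, here is the sketch of the two connections plus the stream-format handshake described above. remoteIOunit and lpFilterUnit are the units from the question, error handling is trimmed, and input is assumed to be already enabled on remote IO bus 1.
AudioStreamBasicDescription filterFormat;
UInt32 propSize = sizeof(filterFormat);

// 1. Ask the filter which format it wants (reportedly float LPCM for the iOS 5 effects).
AudioUnitGetProperty(lpFilterUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &filterFormat, &propSize);

// 2. Make remote IO produce and accept that format on the buses being connected.
AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1, &filterFormat, sizeof(filterFormat));
AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &filterFormat, sizeof(filterFormat));

// 3. remote IO output bus 1 (mic) -> filter input bus 0.
AudioUnitConnection micToFilter = { remoteIOunit, 1, 0 };
AudioUnitSetProperty(lpFilterUnit, kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input, 0, &micToFilter, sizeof(micToFilter));

// 4. Filter output bus 0 -> remote IO input bus 0 (hardware output).
AudioUnitConnection filterToSpeaker = { lpFilterUnit, 0, 0 };
AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input, 0, &filterToSpeaker, sizeof(filterToSpeaker));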
The first answer given here just saved me a lot more frustration. Nowhere does the Apple documentation tell you that the stream formats for the Effect units require floating point. I couldn't figure out why it kept failing to play my audio properly until I read this post. I followed the advice above and retrieved the stream format from the low-pass filter unit, then used that to set up the two converter units I created (i.e. set the output format of the pre-filter converter and the input format of the post-filter converter). Once I did that and connected all the nodes together, it started working as expected.
I'm trying to use a low-pass filter, and when trying to do as suggested (i.e. set the format) I keep getting the error "the operation could not be completed". What in this code is faulty?
After retrieving the lowpassUnit I also check for errors, but there are none.
result = AudioUnitSetProperty(lowpassUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &stereoStreamFormat, sizeof (stereoStreamFormat));
if (noErr != result)
{
NSLog(#"%#", [NSError errorWithDomain:NSOSStatusErrorDomain code:result userInfo:nil]);
return;
}
PS: If anyone knows of proper audio unit documentation, please share, as the official documentation is really lacking.

openCV cvContourArea

I'm trying to use cvFindContours, which definitely seems like the way to go. I'm having a problem with getting the largest one. There is a function called cvContourArea, which is supposed to get the area of a contour in a sequence. I'm having trouble with it.
int conNum = cvFindContours(outerbox, storage, &contours, sizeof(CvContour),CV_RETR_LIST,CV_CHAIN_APPROX_NONE,cvPoint(0, 0));
CvSeq* current_contour = contours;
double largestArea = 0;
CvSeq* largest_contour = NULL;
while (current_contour != NULL){
double area = fabs(cvContourArea(&storage,CV_WHOLE_SEQ, false));
if(area > largestArea){
largestArea = area;
largest_contour = current_contour;
}
current_contour = current_contour->h_next;
}
I tried replacing storage (in the cvContourArea call) with contours, but the same error keeps coming up no matter what:
OpenCV Error: Bad argument (Input array is not a valid matrix) in cvPointSeqFromMat, file /Volumes/ramdisk/opencv/OpenCV-2.2.0/modules/imgproc/src/utils.cpp, line 53
I googled and could hardly find any example of cvContourArea that takes 3 arguments, as if it has changed recently. I want to loop through the found contours, find the biggest one, and after that draw it using the cvDrawContours method. Thanks!
Try to change &storage to current_contour in the following statement.
Change
double area = fabs(cvContourArea(&storage,CV_WHOLE_SEQ, false));
to
double area = fabs(cvContourArea(current_contour,CV_WHOLE_SEQ, 0));
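For completeness, a hedged sketch of the whole loop with that fix, plus drawing the largest contour afterwards. It continues from the variables in your snippet (outerbox, storage, contours); the drawing colour and thickness are arbitrary, and note that cvFindContours has already modified outerbox by the time you draw on it.
CvSeq* contours = NULL;
cvFindContours(outerbox, storage, &contours, sizeof(CvContour),
               CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cvPoint(0, 0));

double largestArea = 0;
CvSeq* largest_contour = NULL;
for (CvSeq* c = contours; c != NULL; c = c->h_next) {
    double area = fabs(cvContourArea(c, CV_WHOLE_SEQ, 0));  // pass the contour, not the storage
    if (area > largestArea) {
        largestArea = area;
        largest_contour = c;
    }
}

if (largest_contour != NULL) {
    // max_level = 0 draws only this contour, ignoring its h_next/v_next neighbours.
    cvDrawContours(outerbox, largest_contour,
                   cvScalarAll(255), cvScalarAll(255), 0, 2, 8, cvPoint(0, 0));
}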