CoreMediaIO, an alleged OS Bug / Memory Leak - objective-c

Environment
OS X 10.10
Xcode 6.4
C++/Obj-C OS X application
Use-case
Capture video using CoreMediaIO; the capture source is an iPod 5
The capturing machine runs OS X Yosemite
The capture feed consists of video and audio samples
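(For reference, making an iOS device visible to CoreMediaIO at all requires opting in to "screen capture" DAL devices; a minimal sketch of that prerequisite, using the raw CMIO API rather than the Utils:: helpers shown below:)
#include <CoreMediaIO/CMIOHardware.h>

// Allow iOS devices to appear in the CMIO device list; without this
// the iPod won't be discoverable.
CMIOObjectPropertyAddress prop = {
    kCMIOHardwarePropertyAllowScreenCaptureDevices,
    kCMIOObjectPropertyScopeGlobal,
    kCMIOObjectPropertyElementMaster
};
UInt32 allow = 1;
CMIOObjectSetPropertyData(kCMIOObjectSystemObject, &prop, 0, NULL,
                          sizeof(allow), &allow);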
Problem description
While video capture itself works fine, there is an accumulating memory leak whenever video samples are received; when no video samples are received (only audio), there is no leak (memory consumption stops growing).
I am mixing Cocoa threads and POSIX threads, and I have made sure [NSThread isMultiThreaded] is set to YES (by creating an empty NSThread).
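(For reference, a minimal sketch of that trick, assuming an empty thread is detached once at startup:)
// Detaching any thread, even a do-nothing one, flips Cocoa into
// multithreaded mode for the rest of the process's lifetime.
[NSThread detachNewThreadSelector:@selector(class)
                         toTarget:[NSObject class]
                       withObject:nil];
NSAssert([NSThread isMultiThreaded], @"Cocoa should now report multithreaded");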
"Obj-C Auto Ref Counting" is set to YES in the project properties
The following is a short snippet of the code causing the leak:
OSStatus status = 0;
@autoreleasepool {
    m_udid = udid;
    if (0 != (status = Utils::CoreMediaIO::FindDeviceByUDID(m_udid, m_devId)))
        return HRESULT_FROM_WIN32(ERROR_NOT_FOUND);
    if (0 != (status = Utils::CoreMediaIO::GetStreamByIndex(m_devId, kCMIODevicePropertyScopeInput, 0, m_strmID)))
        return HRESULT_FROM_WIN32(ERROR_NOT_FOUND);
    status = Utils::CoreMediaIO::SetPtopertyData(m_devId, kCMIODevicePropertyExcludeNonDALAccess, 1U);
    // Exclusive access for the calling process
    status = Utils::CoreMediaIO::SetPtopertyData<int>(m_devId, kCMIODevicePropertyDeviceMaster, getpid());
    // Results in an infinitely accumulating memory leak
    status = CMIOStreamCopyBufferQueue(m_strmID, [](CMIOStreamID streamID, void* token, void* refCon) {
        @autoreleasepool {
            // The callback must convert to a plain function pointer, so the
            // object is recovered from refCon rather than captured
            // ('CaptureContext' stands in for the enclosing class, whose
            // real name isn't shown here).
            auto pThis = static_cast<CaptureContext*>(refCon);
            CMSampleBufferRef sampleBuffer;
            while (0 != (sampleBuffer = (CMSampleBufferRef)CMSimpleQueueDequeue(pThis->m_queueRef))) {
                CFRelease(sampleBuffer);
                sampleBuffer = 0;
            }
        }
    }, this, &m_queueRef);
    if (noErr != status)
        return E_FAIL;
    if (noErr != (status = CMIODeviceStartStream(m_devId, m_strmID)))
        return E_FAIL;
}
Having the sample de-queuing done on the main thread (using 'dispatch_async(dispatch_get_main_queue(), ^{') didn't have any effect...
Is there anything wrong with the above code snippet? Might this be an OS bug?
Reference link: https://forums.developer.apple.com/message/46752#46752
AN UPDATE
The QuickTime Player supports using an iOS device as a capture source (mirroring its A/V to the Mac machine). Having a preview session run for a while reproduces the above-mentioned problem with the OS-provided QuickTime Player, which strongly indicates an OS bug. A screenshot showed the QT Player taking 140 MB of RAM after running for ~2 hours (it starts at around 20 MB); by the end of the day it had grown to ~760 MB...
APPLE, please have this fixed; I have standing customer commitments...

Related

bluetooth background mode IOS when the screen is locked

I would like to implement background Bluetooth scanning on iOS. When the application goes into background mode it calls TestCentralManagerDelegate, which implements the DiscoveredPeripheral function. That function is triggered whenever a new Bluetooth peripheral device is detected. When a new device is detected, the application reads the manufacturer data from the advertisementData dictionary (an argument of DiscoveredPeripheral), obtained by calling ManufactureData = advertisementData["kCBAdvDataManufacturerData"].ToString(). Discovery of the manufacturer data was tested on two different phones, an iPhone 5s and an iPhone 6, both running iOS 12.1. When the application went into background mode, I locked the screen.
On the iPhone 5s, I observed that ManufactureData was found every time the DiscoveredPeripheral function was triggered. This was not the case on the iPhone 6, where I got ManufactureData = null every time. It is worth mentioning that the manufacturer data are received on both phones if the screen is not locked.
I do not understand why the iPhone 6 does not find ManufactureData while the iPhone 5s does. I would accept that phones with different operating systems could respond differently, but here both run the same OS version. I would appreciate any help in better understanding the aforementioned problem.
Here is the code; I am using Xamarin.iOS.
public override void DiscoveredPeripheral(CBCentralManager central, CBPeripheral peripheral, NSDictionary advertisementData, NSNumber RSSI)
{
    try
    {
        central.StopScan();
        if (peripheral == null || advertisementData == null)
        {
            central.ScanForPeripherals(cbuuids);
            return;
        }
        string ManufactureData;
        if (advertisementData.ContainsKey(new NSString("kCBAdvDataManufacturerData")))
        {
            ManufactureData = advertisementData["kCBAdvDataManufacturerData"].ToString();
        }
        else
        {
            ManufactureData = null;
            CrossLocalNotifications.Current.Show("no advertising data", "no advertising data", 10);
            central.ScanForPeripherals(cbuuids);
            return;
        }
        central.ScanForPeripherals(cbuuids);
    }
    catch
    {
        central.ScanForPeripherals(cbuuids);
    }
}
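(For comparison, a minimal sketch of the same lookup in native Objective-C CoreBluetooth; CBAdvertisementDataManufacturerDataKey is the framework constant behind the "kCBAdvDataManufacturerData" string used above:)
// Assumes <CoreBluetooth/CoreBluetooth.h> is imported and this object
// is the CBCentralManager's delegate.
- (void)centralManager:(CBCentralManager *)central
 didDiscoverPeripheral:(CBPeripheral *)peripheral
     advertisementData:(NSDictionary<NSString *, id> *)advertisementData
                  RSSI:(NSNumber *)RSSI
{
    NSData *manufacturerData = advertisementData[CBAdvertisementDataManufacturerDataKey];
    if (manufacturerData != nil) {
        NSLog(@"Manufacturer data: %@", manufacturerData);
    }
}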

Record audio from non-default audio device in objective C

We know there is no public API for recording the live output audio of OS X (system sound); you have to install a kernel extension such as Soundflower and then record the output channel through it.
I'm aware that some open-source apps like Audacity use the PortAudio library to capture audio from devices other than the system's default audio device, but I couldn't compile PortAudio because of too many errors when building it in Xcode.
I want to know: is there a straightforward Core Audio API capable of choosing the recording device? The AudioQueue API has no way to select the recording device.
How do I record the output sound being played through Soundflower in Objective-C, using some specific Mac-provided API?
Thanks in advance for your responses.
This is the relevant code used in SoundflowerBed to find and select specific audio devices:
/* From interface */
AudioDeviceID mSoundflower2Device;
AudioDeviceID mSoundflower16Device;
AudioDeviceID mSoundflower64Device;

/* Implementation */
// Find Soundflower devices, store them, and remove them from our output list
AudioDeviceList::DeviceList &thelist = mOutputDeviceList->GetList();
int index = 0;
for (AudioDeviceList::DeviceList::iterator i = thelist.begin(); i != thelist.end(); ++i, ++index) {
    if (0 == strcmp("Soundflower (2ch)", (*i).mName)) {
        mSoundflower2Device = (*i).mID;
        AudioDeviceList::DeviceList::iterator toerase = i;
        i--;
        thelist.erase(toerase);
    }
    else if (0 == strcmp("Soundflower (16ch)", (*i).mName)) {
        mSoundflower16Device = (*i).mID;
        AudioDeviceList::DeviceList::iterator toerase = i;
        i--;
        thelist.erase(toerase);
    }
    else if (0 == strcmp("Soundflower (64ch)", (*i).mName)) {
        // Note: the original snippet assigned this to mSoundflower16Device,
        // which looks like a copy-paste slip.
        mSoundflower64Device = (*i).mID;
        AudioDeviceList::DeviceList::iterator toerase = i;
        i--;
        thelist.erase(toerase);
    }
}
You'll need to download the source from here: https://github.com/mattingalls/Soundflower and include the AudioDevice and AudioDeviceList files in your project.
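If you'd rather not pull in those helper classes, a rough sketch of the raw HAL calls for enumerating devices by name (the Soundflower device names are whatever the driver registers, so treat the comparison strings as assumptions):
#import <Foundation/Foundation.h>
#include <CoreAudio/CoreAudio.h>

// List every audio device so one (e.g. "Soundflower (2ch)") can be
// picked out by name.
AudioObjectPropertyAddress addr = {
    kAudioHardwarePropertyDevices,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
UInt32 size = 0;
AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size);
UInt32 deviceCount = size / sizeof(AudioDeviceID);
AudioDeviceID *devices = malloc(size);
AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, devices);
for (UInt32 i = 0; i < deviceCount; i++) {
    AudioObjectPropertyAddress nameAddr = {
        kAudioDevicePropertyDeviceNameCFString,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    CFStringRef name = NULL;
    UInt32 nameSize = sizeof(name);
    if (noErr == AudioObjectGetPropertyData(devices[i], &nameAddr, 0, NULL, &nameSize, &name) && name) {
        NSLog(@"Device %u: %@", (unsigned)devices[i], (__bridge NSString *)name);
        CFRelease(name);
    }
}
free(devices);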

Change OS X system volume programmatically

How can I change the volume programmatically from Objective-C?
I found this question, Controlling OS X volume in Snow Leopard, which suggests doing:
Float32 volume = 0.5;
UInt32 size = sizeof(Float32);
AudioObjectPropertyAddress address = {
    kAudioDevicePropertyVolumeScalar,
    kAudioDevicePropertyScopeOutput,
    1 // Use values 1 and 2 here, 0 (master) does not seem to work
};
OSStatus err;
err = AudioObjectSetPropertyData(kAudioObjectSystemObject, &address, 0, NULL, size, &volume);
NSLog(@"status is %i", err);
This does nothing for me, and prints out status is 2003332927 (the four-character code 'who?', i.e. kAudioHardwareUnknownPropertyError).
I also tried using values 2 and 0 in the address structure, with the same result for both.
How can I fix this and make it actually decrease the volume to 50%?
You need to get the default audio device first:
#import <CoreAudio/CoreAudio.h>

AudioObjectPropertyAddress getDefaultOutputDevicePropertyAddress = {
    kAudioHardwarePropertyDefaultOutputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};

AudioDeviceID defaultOutputDeviceID;
UInt32 volumedataSize = sizeof(defaultOutputDeviceID);
OSStatus result = AudioObjectGetPropertyData(kAudioObjectSystemObject,
                                             &getDefaultOutputDevicePropertyAddress,
                                             0, NULL,
                                             &volumedataSize, &defaultOutputDeviceID);
if (kAudioHardwareNoError != result)
{
    // ... handle error ...
}
You can then set your volume on channel 1 (left) and channel 2 (right). Note that channel 0 (master) does not seem to be supported (the set command returns 'who?')
AudioObjectPropertyAddress volumePropertyAddress = {
    kAudioDevicePropertyVolumeScalar,
    kAudioDevicePropertyScopeOutput,
    1 /* LEFT_CHANNEL */
};

Float32 volume = 0.5f; // the desired volume, e.g. 50% (the original snippet left this uninitialized)
volumedataSize = sizeof(volume);
result = AudioObjectSetPropertyData(defaultOutputDeviceID,
                                    &volumePropertyAddress,
                                    0, NULL,
                                    sizeof(volume), &volume);
if (result != kAudioHardwareNoError) {
    // ... handle error ...
}
Hope this answers your question!
I ran the HALLab utility that comes with the developer tools (i.e. Audio Tools for Xcode). That allows you to open an info window for individual devices and that window has a tab showing notifications. When I change my system volume, I do indeed see that the kAudioDevicePropertyVolumeScalar property changes for each channel of the output device as Thomas O'Dell's answer suggests. However, I also see the property kAudioHardwareServiceDeviceProperty_VirtualMasterVolume change on the master channel. That seems much more promising since you don't have to manually set it for all channels and maintain the balance across them.
You would use the function AudioHardwareServiceSetPropertyData() from Audio Hardware Services to set that on the default output device. To be safe, you might first check that it's settable using AudioHardwareServiceIsPropertySettable().
The documentation for that property says:
kAudioHardwareServiceDeviceProperty_VirtualMasterVolume
A Float32 value that represents the value of the volume control.
The range for this property’s value is 0.0 (silence) through 1.0 (full level). The effect of this property depends on the hardware device associated with the HAL audio object. If the device has a master volume control, this property controls it. If the device has individual channel volume controls, this property applies to those identified by the device's preferred multichannel layout, or the preferred stereo pair if the device is stereo only. This control maintains relative balance between the channels it affects.
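A sketch of setting that property (assuming defaultOutputDeviceID was obtained as in the answer above; the AudioHardwareService API was later deprecated, but it is the one this documentation quote describes):
#import <AudioToolbox/AudioServices.h> // AudioHardwareService*

AudioObjectPropertyAddress virtualMasterAddress = {
    kAudioHardwareServiceDeviceProperty_VirtualMasterVolume,
    kAudioDevicePropertyScopeOutput,
    kAudioObjectPropertyElementMaster
};
Boolean settable = false;
OSStatus status = AudioHardwareServiceIsPropertySettable(defaultOutputDeviceID,
                                                         &virtualMasterAddress,
                                                         &settable);
if (status == kAudioHardwareNoError && settable) {
    Float32 volume = 0.5f; // 50%, on the documented 0.0 - 1.0 scale
    status = AudioHardwareServiceSetPropertyData(defaultOutputDeviceID,
                                                 &virtualMasterAddress,
                                                 0, NULL,
                                                 sizeof(volume), &volume);
}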
You could shell out to osascript to change the master volume. This avoids setting the audio channel by channel (which can momentarily skew it to one side):
Muted:
execlp("osascript", "osascript", "-e", "set volume output muted true", NULL);
Change volume (scale 0-10):
execlp("osascript", "osascript", "-e", "set volume 5", NULL);

FTDI Communication with USB device - Objective C

I'm trying to communicate with the Enttec USB DMX Pro. Mainly receiving DMX.
They released a Visual C++ version here, but I'm a little stumped on how to convert it to Objective-C. Enttec writes, "Talk to the PRO using FTDI library for Mac, and refer to D2XX programming guide to open and talk to the device." Are there any example apps for Objective-C out there? Is there an easy way to communicate with the Enttec DMX USB Pro?
I've done a significant amount of work with the FTDI chips on the Mac, so I can provide a little insight here. I've used the single-channel and dual-channel variants of their USB-serial converters, and they all behave the same way.
FTDI has both their Virtual COM Port drivers, which create a serial COM port on your system representing the serial connection attached to their chip, and their D2XX direct communication libraries. You're going to want to work with the latter, which can be downloaded from their site for various platforms.
The D2XX library for the Mac comes as a standalone .dylib (the latest being libftd2xx.1.2.2.dylib) or as a static library they started shipping recently. The package also includes the header files you need (ftd2xx.h and WinTypes.h).
In your Xcode project, add the .dylib as a framework to be linked in, and add the ftd2xx.h, WinTypes.h, and ftd2xx.cfg files to your project. In your Copy Bundled Frameworks build phase, make sure that libftd2xx.1.2.2.dylib and ftd2xx.cfg are present in that phase. You may also need to adjust the relative path that this library expects, in order for it to function within your app bundle, so you may need to run the following command against it at the command line:
install_name_tool -id @executable_path/../Frameworks/libftd2xx.1.2.2.dylib libftd2xx.1.2.2.dylib
Once your project is all properly configured, you'll want to import the FTDI headers:
#import "ftd2xx.h"
and start to connect to your serial devices. The example you link to in your question has a downloadable C++ sample that shows how they communicate to their device. You can bring across almost all of the C code used there and place it within your Objective-C application. They just look to be using the standard FTDI D2XX commands, which are described in detail within the downloadable D2XX Programmer's Guide.
This is some code that I've lifted from one of my applications, used to connect to one of these devices:
FT_STATUS ftdiPortStatus; // FT_STATUS comes from ftd2xx.h; usbRelayPointer is an FT_HANDLE *
DWORD numDevs = 0;

// Grab the number of attached devices
ftdiPortStatus = FT_ListDevices(&numDevs, NULL, FT_LIST_NUMBER_ONLY);
if (ftdiPortStatus != FT_OK)
{
    NSLog(@"Electronics error: Unable to list devices");
    return;
}

// Find the device number of the electronics
for (int currentDevice = 0; currentDevice < numDevs; currentDevice++)
{
    char Buffer[64];
    ftdiPortStatus = FT_ListDevices((PVOID)currentDevice, Buffer, FT_LIST_BY_INDEX | FT_OPEN_BY_DESCRIPTION);
    NSString *portDescription = [NSString stringWithCString:Buffer encoding:NSASCIIStringEncoding];
    if (([portDescription isEqualToString:@"FT232R USB UART"]) && (usbRelayPointer != NULL))
    {
        // Open the communication with the USB device
        ftdiPortStatus = FT_OpenEx("FT232R USB UART", FT_OPEN_BY_DESCRIPTION, usbRelayPointer);
        if (ftdiPortStatus != FT_OK)
        {
            NSLog(@"Electronics error: Can't open USB relay device: %d", (int)ftdiPortStatus);
            return;
        }
        // Turn off bit bang mode
        ftdiPortStatus = FT_SetBitMode(*usbRelayPointer, 0x00, 0);
        if (ftdiPortStatus != FT_OK)
        {
            NSLog(@"Electronics error: Can't set bit bang mode");
            return;
        }
        // Reset the device
        ftdiPortStatus = FT_ResetDevice(*usbRelayPointer);
        // Purge transmit and receive buffers
        ftdiPortStatus = FT_Purge(*usbRelayPointer, FT_PURGE_RX | FT_PURGE_TX);
        // Set the baud rate
        ftdiPortStatus = FT_SetBaudRate(*usbRelayPointer, 9600);
        // 1 s timeouts on read / write
        ftdiPortStatus = FT_SetTimeouts(*usbRelayPointer, 1000, 1000);
        // Set to communicate at 8N1
        ftdiPortStatus = FT_SetDataCharacteristics(*usbRelayPointer, FT_BITS_8, FT_STOP_BITS_1, FT_PARITY_NONE);
        // Disable hardware / software flow control
        ftdiPortStatus = FT_SetFlowControl(*usbRelayPointer, FT_FLOW_NONE, 0, 0);
        // Set the latency of the receive buffer way down (2 ms) to facilitate speedy transmission
        ftdiPortStatus = FT_SetLatencyTimer(*usbRelayPointer, 2);
        if (ftdiPortStatus != FT_OK)
        {
            NSLog(@"Electronics error: Can't set latency timer");
            return;
        }
    }
}
Disconnection is fairly simple:
ftdiPortStatus = FT_Close(*electronicsPointer);
*electronicsPointer = 0;
if (ftdiPortStatus != FT_OK)
{
    return;
}
Writing to the serial device is then pretty easy:
__block DWORD bytesWrittenOrRead;
unsigned char *dataBuffer = (unsigned char *)[command bytes];
//[command getBytes:dataBuffer];
runOnMainQueueWithoutDeadlocking(^{
    ftdiPortStatus = FT_Write(electronicsCommPort, dataBuffer, (DWORD)[command length], &bytesWrittenOrRead);
});
if ((bytesWrittenOrRead < [command length]) || (ftdiPortStatus != FT_OK))
{
    NSLog(@"Bytes written: %d, should be: %d, error: %d", bytesWrittenOrRead, (unsigned int)[command length], ftdiPortStatus);
    return NO;
}
(command is an NSData instance, and runOnMainQueueWithoutDeadlocking() is merely a convenience function I use to guarantee execution of a block on the main queue).
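(One plausible implementation of that helper, written here as an assumption since the original isn't shown:)
#import <Foundation/Foundation.h>

// Run a block synchronously on the main queue, but call it directly when
// already on the main thread so dispatch_sync can't deadlock.
static void runOnMainQueueWithoutDeadlocking(void (^block)(void))
{
    if ([NSThread isMainThread]) {
        block();
    } else {
        dispatch_sync(dispatch_get_main_queue(), block);
    }
}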
You can read raw bytes from the serial interface using something like the following:
NSData *response = nil;
DWORD numberOfCharactersToRead = size;
__block DWORD bytesWrittenOrRead;
__block unsigned char *serialCommunicationBuffer = malloc(numberOfCharactersToRead);
runOnMainQueueWithoutDeadlocking(^{
    ftdiPortStatus = FT_Read(electronicsCommPort, serialCommunicationBuffer, (DWORD)numberOfCharactersToRead, &bytesWrittenOrRead);
});
if ((bytesWrittenOrRead < numberOfCharactersToRead) || (ftdiPortStatus != FT_OK))
{
    free(serialCommunicationBuffer);
    return nil;
}
response = [[NSData alloc] initWithBytes:serialCommunicationBuffer length:numberOfCharactersToRead];
free(serialCommunicationBuffer);
At the end of the above, response will be an NSData instance containing the bytes you've read from the port.
Additionally, I'd suggest that you should always access the FTDI device from the main thread. Even though they say they support multithreaded access, I've found that any kind of non-main-thread access (even guaranteed exclusive accesses from a single thread) cause intermittent crashes on the Mac.
Beyond the cases I've described above, you can consult the D2XX programming guide for the other functions FTDI provides in their C library. Again, you should just need to move over the appropriate code from the samples that have been provided to you by your device manufacturer.
I was running into a similar issue (trying to write to the Enttec Open DMX using Objective-C) without any success. After following @Brad's great answer, I realized that you also need to toggle the BREAK state each time you send a DMX packet.
Here's an example of my loop in some testing code that sends packets with a 20 millisecond delay between frames.
while (1) {
    FT_SetBreakOn(usbRelayPointer);
    FT_SetBreakOff(usbRelayPointer);
    ftdiPortStatus = FT_Write(usbRelayPointer, startCode, 1, &bytesWrittenOrRead);
    ftdiPortStatus = FT_Write(usbRelayPointer, dataBuffer, (DWORD)[command length], &bytesWrittenOrRead);
    usleep(20000);
}
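(For context, the startCode and dataBuffer assumed above could be set up as follows; the values are illustrative, but a DMX512 frame does begin with a zero start code followed by up to 512 channel bytes:)
unsigned char startCode[1] = { 0x00 };   // DMX512 null start code
unsigned char levels[512] = { 0 };       // one byte per DMX channel
levels[0] = 255;                         // e.g. drive channel 1 to full
NSData *command = [NSData dataWithBytes:levels length:sizeof(levels)];
unsigned char *dataBuffer = (unsigned char *)[command bytes];
DWORD bytesWrittenOrRead = 0;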
Hope this helps someone else out there!

How do you set the input level (gain) on the built-in input (OSX Core Audio / Audio Unit)?

I've got an OSX app that records audio data using an Audio Unit. The Audio Unit's input can be set to any available source with inputs, including the built-in input. The problem is, the audio that I get from the built-in input is often clipped, whereas in a program such as Audacity (or even Quicktime) I can turn down the input level and I don't get clipping.
Multiplying the sample frames by a fraction, of course, doesn't work, because I get a lower volume, but the samples themselves are still clipped at time of input.
How do I set the input level or gain for that built-in input to avoid the clipping problem?
This works for me to set the input volume on my MacBook Pro (2011 model). It is a bit funky: I had to try setting the master channel volume, then each independent stereo channel volume, until I found one that worked. Look through the comments in my code; I suspect the best way to tell whether your code is working is to find a get/set-property combination that works, then do a get/set (something else)/get sequence to verify that your setter took effect.
Oh, and I'll point out that I wouldn't rely on the values in address staying the same across getProperty calls as I'm doing here. It seems to work, but it's definitely bad practice to rely on struct values being unchanged after you pass one by reference to a function. This is of course sample code, so please forgive my laziness. ;)
//
//  main.c
//  testInputVolumeSetter
//
#include <CoreFoundation/CoreFoundation.h>
#include <CoreAudio/CoreAudio.h>

OSStatus setDefaultInputDeviceVolume( Float32 toVolume );

int main(int argc, const char * argv[]) {
    OSStatus err;

    // Load the Sound system preference, select a default
    // input device, set its volume to max. Now set
    // breakpoints at each of these lines. As you step over
    // them you'll see the input volume change in the Sound
    // preference panel.
    //
    // On my MacBook Pro setting the channel[ 1 ] volume
    // on the default microphone input device seems to do
    // the trick. channel[ 0 ] reports that it works but
    // seems to have no effect and the master channel is
    // unsettable.
    //
    // I do not know how to tell which one will work so
    // probably the best thing to do is write your code
    // to call getProperty after you call setProperty to
    // determine which channel(s) work.
    err = setDefaultInputDeviceVolume( 0.0 );
    err = setDefaultInputDeviceVolume( 0.5 );
    err = setDefaultInputDeviceVolume( 1.0 );
}

// 0.0 == no volume, 1.0 == max volume
OSStatus setDefaultInputDeviceVolume( Float32 toVolume ) {
    AudioObjectPropertyAddress address;
    AudioDeviceID deviceID;
    OSStatus err;
    UInt32 size;
    UInt32 channels[ 2 ];
    Float32 volume;

    // get the default input device id
    address.mSelector = kAudioHardwarePropertyDefaultInputDevice;
    address.mScope = kAudioObjectPropertyScopeGlobal;
    address.mElement = kAudioObjectPropertyElementMaster;
    size = sizeof(deviceID);
    err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &address, 0, nil, &size, &deviceID );

    // get the input device stereo channels
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyPreferredChannelsForStereo;
        address.mScope = kAudioDevicePropertyScopeInput;
        address.mElement = kAudioObjectPropertyElementWildcard;
        size = sizeof(channels);
        err = AudioObjectGetPropertyData( deviceID, &address, 0, nil, &size, &channels );
    }

    // run some tests to see what channels might respond to volume changes
    if ( ! err ) {
        Boolean hasProperty;

        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;

        // On my MacBook Pro using the default microphone input:
        address.mElement = kAudioObjectPropertyElementMaster;
        // returns false, no VolumeScalar property for the master channel
        hasProperty = AudioObjectHasProperty( deviceID, &address );

        address.mElement = channels[ 0 ];
        // returns true, channel 0 has a VolumeScalar property
        hasProperty = AudioObjectHasProperty( deviceID, &address );

        address.mElement = channels[ 1 ];
        // returns true, channel 1 has a VolumeScalar property
        hasProperty = AudioObjectHasProperty( deviceID, &address );
    }

    // try to get the input volume
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;

        size = sizeof(volume);
        address.mElement = kAudioObjectPropertyElementMaster;
        // returns an error which we expect since it reported not having the property
        err = AudioObjectGetPropertyData( deviceID, &address, 0, nil, &size, &volume );

        size = sizeof(volume);
        address.mElement = channels[ 0 ];
        // returns noErr, but says the volume is always zero (weird)
        err = AudioObjectGetPropertyData( deviceID, &address, 0, nil, &size, &volume );

        size = sizeof(volume);
        address.mElement = channels[ 1 ];
        // returns noErr, but returns the correct volume!
        err = AudioObjectGetPropertyData( deviceID, &address, 0, nil, &size, &volume );
    }

    // try to set the input volume
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;
        size = sizeof(volume);

        if ( toVolume < 0.0 ) volume = 0.0;
        else if ( toVolume > 1.0 ) volume = 1.0;
        else volume = toVolume;

        address.mElement = kAudioObjectPropertyElementMaster;
        // returns an error which we expect since it reported not having the property
        err = AudioObjectSetPropertyData( deviceID, &address, 0, nil, size, &volume );

        address.mElement = channels[ 0 ];
        // returns noErr, but doesn't affect my input gain
        err = AudioObjectSetPropertyData( deviceID, &address, 0, nil, size, &volume );

        address.mElement = channels[ 1 ];
        // success! correctly sets the input device volume.
        err = AudioObjectSetPropertyData( deviceID, &address, 0, nil, size, &volume );
    }

    return err;
}
EDIT in response to your question, "How'd [I] figure this out?"
I've spent a lot of time using Apple's audio code over the last five or so years and I've developed some intuition/process when it comes to where and how to look for solutions. My business partner and I co-wrote the original iHeartRadio apps for the first-generation iPhone and a few other devices and one of my responsibilities on that project was the audio portion, specifically writing an AAC Shoutcast stream decoder/player for iOS. There weren't any docs or open-source examples at the time so it involved a lot of trial-and-error and I learned a ton.
At any rate, when I read your question and saw the bounty I figured this was just low-hanging fruit (i.e. you hadn't RTFM ;-). I wrote a few lines of code to set the volume property and when that didn't work I genuinely got interested.
Process-wise maybe you'll find this useful:
Once I knew it wasn't a straightforward answer I started thinking about how to solve the problem. I knew the Sound System Preference lets you set the input gain so I started by disassembling it with otool to see whether Apple was making use of old or new Audio Toolbox routines (new as it happens):
Try using:
otool -tV /System/Library/PreferencePanes/Sound.prefPane/Contents/MacOS/Sound | bbedit
then search for Audio to see what methods are called (if you don't have bbedit, which every Mac developer should have IMO, dump the output to a file and open it in some other text editor).
I'm most familiar with the older, deprecated Audio Toolbox routines (three years to obsolescence in this industry) so I looked at some Technotes from Apple. They have one that shows how to get the default input device and set its volume using the newest CoreAudio methods but as you undoubtedly saw their code doesn't work properly (at least on my MBP).
Once I got to that point I fell back on the tried-and-true: Start googling for keywords that were likely to be involved (e.g. AudioObjectSetPropertyData, kAudioDevicePropertyVolumeScalar, etc.) looking for example usage.
One interesting thing I've found about CoreAudio and using the Apple Toolbox in general is that there is a lot of open-source code out there where people try various things (tons of pastebins and GoogleCode projects etc.). If you're willing to dig through a bunch of this code you'll typically either find the answer outright or get some very good ideas.
In my search the most relevant things I found were the Apple technote showing how to get the default input device and set the master input gain using the new Toolbox routines (even though it didn't work on my hardware), and I found some code that showed setting the gain by channel on an output device. Since input devices can be multichannel I figured this was the next logical thing to try.
Your question is really good because, at least right now, there is no correct documentation from Apple that shows how to do what you asked. It's goofy too, because both channels report that they set the volume but clearly only one of them does (the input mic is a mono source, so this isn't surprising, but I consider having a no-op channel with no documentation about it a bit of a bug on Apple's part).
This happens pretty consistently when you start dealing with Apple's cutting-edge technologies. You can do amazing things with their toolbox and it blows every other OS I've worked on out of the water but it doesn't take too long to get ahead of their documentation, particularly if you're trying to do anything moderately sophisticated.
If you ever decide to write a kernel driver for example you'll find the documentation on IOKit to be woefully inadequate. Ultimately you've got to get online and dig through source code, either other people's projects or the OS X source or both, and pretty soon you'll conclude as I have that the source is really the best place for answers (even though StackOverflow is pretty awesome).
Thanks for the points and good luck with your project :)