Cocoa - detect event when camera started recording - objective-c

In my OS X application I'm using the code below to show a preview from the camera.
[[self session] beginConfiguration];
NSError *error = nil;
AVCaptureDeviceInput *newVideoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (newVideoDeviceInput != nil) {
    [[self session] removeInput:[self videoDeviceInput]];
    if ([[self session] canAddInput:newVideoDeviceInput]) {
        [[self session] addInput:newVideoDeviceInput];
        [self setVideoDeviceInput:newVideoDeviceInput];
    } else {
        DLog(@"WTF?");
    }
}
[[self session] commitConfiguration];
Yet, I need to detect the exact time when the preview from the camera becomes available.
In other words, I'm trying to detect the same moment as in FaceTime on OS X, where the animation starts once the camera provides the preview.
What is the best way to achieve this?

I know this question is really old, but I stumbled upon it too while looking into the same problem, and I have found answers, so here goes.
For starters, AVFoundation is too high-level; you'll need to drop down to a lower level, CoreMediaIO. There's not a lot of documentation on this, but basically you need to perform a couple of queries.
To do this, we'll use a combination of calls. First, CMIOObjectGetPropertyDataSize lets us get the size of the data we'll query for next, which we can then use when we call CMIOObjectGetPropertyData. To set up the get property data size call, we need to start at the top, using this property address:
var opa = CMIOObjectPropertyAddress(
    mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyDevices),
    mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
    mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMaster)
)
Next, we'll set up some variables to keep the data we'll need:
var (dataSize, dataUsed) = (UInt32(0), UInt32(0))
var result = CMIOObjectGetPropertyDataSize(CMIOObjectID(kCMIOObjectSystemObject), &opa, 0, nil, &dataSize)
var devices: UnsafeMutableRawPointer? = nil
From this point on, we'll need to wait until we get some data out, so let's busy loop:
repeat {
    if devices != nil {
        free(devices)
        devices = nil
    }
    devices = malloc(Int(dataSize))
    result = CMIOObjectGetPropertyData(CMIOObjectID(kCMIOObjectSystemObject), &opa, 0, nil, dataSize, &dataUsed, devices)
} while result == OSStatus(kCMIOHardwareBadPropertySizeError)
Once we get past this point in our execution, devices will point to potentially many devices. We need to loop through them, somewhat like this:
if let devices = devices {
    for offset in stride(from: 0, to: dataSize, by: MemoryLayout<CMIOObjectID>.size) {
        let current = devices.advanced(by: Int(offset)).assumingMemoryBound(to: CMIOObjectID.self)
        // current.pointee is the object ID you will want to keep track of somehow
    }
}
Finally, clean up devices:
free(devices)
Now at this point, you'll want to use that object ID you saved above to make another query. We need a new property address:
var opa = CMIOObjectPropertyAddress(
    mSelector: CMIOObjectPropertySelector(kCMIODevicePropertyDeviceIsRunningSomewhere),
    mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeWildcard),
    mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementWildcard)
)
This tells CoreMediaIO that we want to know if the device is currently running somewhere (read: in any app), wildcarding the rest of the fields. Next we get to the meat of the query, camera below corresponds to the ID you saved before:
var (dataSize, dataUsed) = (UInt32(0), UInt32(0))
var result = CMIOObjectGetPropertyDataSize(camera, &opa, 0, nil, &dataSize)
if result == OSStatus(kCMIOHardwareNoError) {
    if let data = malloc(Int(dataSize)) {
        result = CMIOObjectGetPropertyData(camera, &opa, 0, nil, dataSize, &dataUsed, data)
        let on = data.assumingMemoryBound(to: UInt8.self)
        // on.pointee != 0 means it's in use somewhere; 0 means not in use anywhere
        free(data)
    }
}
With the above code samples you should have enough to test whether or not the camera is in use. You only need to enumerate the devices once (the first part of this answer); the in-use check, however, you'll have to perform whenever you want this information. As an extra exercise, consider playing with CMIOObjectAddPropertyListenerBlock to be notified of changes to the in-use property address we used above.
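Since the OP's code is Objective-C, here is the same in-use query as a minimal plain C/Objective-C helper sketch; the function name is mine, and it assumes the property is the UInt32 flag described above, so verify the details against the CoreMediaIO headers:

#import <CoreMediaIO/CMIOHardware.h>

// Hypothetical helper: returns YES if the given camera is running in any app.
static BOOL CameraIsInUseSomewhere(CMIOObjectID camera) {
    CMIOObjectPropertyAddress opa = {
        kCMIODevicePropertyDeviceIsRunningSomewhere,
        kCMIOObjectPropertyScopeWildcard,
        kCMIOObjectPropertyElementWildcard
    };
    UInt32 on = 0;
    UInt32 dataUsed = 0;
    OSStatus result = CMIOObjectGetPropertyData(camera, &opa, 0, NULL,
                                                sizeof(on), &dataUsed, &on);
    return (result == kCMIOHardwareNoError) && (on != 0);
}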
While this answer is nearly 3 years too late for the OP, I hope it helps someone in the future. The main examples here are given in Swift 3.0.

The previous answer from the user jer is definitely the correct answer, but I just wanted to add one additional important piece of information.
If a listener block is registered with CMIOObjectAddPropertyListenerBlock, the current run loop must be run, otherwise no event will be received and the listener block will never fire.
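To make that concrete, here is a hedged Objective-C sketch of registering such a listener. cameraID is a device ID obtained as in the answer above; the block-based API mirrors its CoreAudio counterpart, so check the exact signature against CMIOHardwareObject.h before relying on it:

CMIOObjectPropertyAddress opa = {
    kCMIODevicePropertyDeviceIsRunningSomewhere,
    kCMIOObjectPropertyScopeWildcard,
    kCMIOObjectPropertyElementWildcard
};

// Dispatching to an explicit queue here; as noted above, if events are
// delivered via the run loop instead, that run loop must be running.
OSStatus status = CMIOObjectAddPropertyListenerBlock(
    cameraID, &opa, dispatch_get_main_queue(),
    ^(UInt32 numberAddresses, const CMIOObjectPropertyAddress addresses[]) {
        // Re-query kCMIODevicePropertyDeviceIsRunningSomewhere here to read
        // the new in-use state, e.g. with the helper sketched earlier.
        NSLog(@"Camera in-use state changed");
    });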

Notification when display gets connected or disconnected

I'm working on an OS X application that displays custom windows on all available spaces of all the connected displays.
I can get an array of the available display objects by calling [NSScreen screens].
What I'm currently missing is a way of telling when the user connects a display to, or disconnects one from, their system.
I have searched the Cocoa documentation for notifications that deal with a scenario like that without much luck, and I refuse to believe that there isn't some sort of system notification that gets posted when changing the number of displays connected to the system.
Any suggestions on how to solve this problem?
There are several ways to achieve that:
You could implement applicationDidChangeScreenParameters: in your app delegate (the method is part of the NSApplicationDelegateProtocol).
Another way is to listen for the NSApplicationDidChangeScreenParametersNotification sent by the default notification center [NSNotificationCenter defaultCenter].
Whenever your delegate method is called or you receive the notification, you can iterate over [NSScreen screens] and see if a display got connected or removed (you have to maintain a display list, built at program launch, that you can check against).
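For illustration, a minimal sketch of that bookkeeping in the app delegate; the _knownScreenIDs mutable set is an assumed ivar, and NSScreenNumber is the key Apple uses in the screen's device description dictionary:

- (void)applicationDidChangeScreenParameters:(NSNotification *)notification
{
    NSMutableSet *currentIDs = [NSMutableSet set];
    for (NSScreen *screen in [NSScreen screens]) {
        [currentIDs addObject:[screen deviceDescription][@"NSScreenNumber"]];
    }

    // Diff against the list maintained since launch.
    NSMutableSet *added = [currentIDs mutableCopy];
    [added minusSet:_knownScreenIDs];
    NSMutableSet *removed = [_knownScreenIDs mutableCopy];
    [removed minusSet:currentIDs];

    NSLog(@"Displays added: %@ removed: %@", added, removed);
    _knownScreenIDs = currentIDs;
}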
A non-Cocoa approach would be via Core Graphics Display services:
You have to implement a reconfiguration function and register it with CGDisplayRegisterReconfigurationCallback(CGDisplayReconfigurationCallBack cb, void* obj);
In your reconfiguration function you can query the state of the affected display. E.g.:
void DisplayReconfigurationCallBack(CGDirectDisplayID display, CGDisplayChangeSummaryFlags flags, void* userInfo)
{
    if(display == someDisplayYouAreInterestedIn)
    {
        if(flags & kCGDisplaySetModeFlag)
        {
            ...
        }
        if(flags & kCGDisplayRemoveFlag)
        {
            ...
        }
        if(flags & kCGDisplayDisabledFlag)
        {
            ...
        }
    }
    if(flags & kCGDisplaySetModeFlag || flags & kCGDisplayDisabledFlag || flags & kCGDisplayRemoveFlag)
    {
        ...
    }
}
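Registering and removing the callback is then one call each; a short sketch (the NULL userInfo is just a placeholder for your own context pointer):

// Typically done at startup; the callback fires on every display change.
CGDisplayRegisterReconfigurationCallback(DisplayReconfigurationCallBack, NULL);

// And when you no longer need it:
CGDisplayRemoveReconfigurationCallback(DisplayReconfigurationCallBack, NULL);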
In Swift 3.0:
let nc = NotificationCenter.default
nc.addObserver(self,
               selector: #selector(screenDidChange),
               name: NSNotification.Name.NSApplicationDidChangeScreenParameters,
               object: nil)
Notification center callback:
final func screenDidChange(notification: NSNotification) {
    let userInfo = notification.userInfo
    print(userInfo)
}

How would you write fetching a collection the "Reactive Cocoa" way?

The client I'm building is using Reactive Cocoa with Octokit, and so far it has been going very well. However, now I'm at a point where I want to fetch a collection of repositories and am having trouble wrapping my head around doing this the "RAC way".
// fire this when an authenticated client is set
[[RACAbleWithStart([GHDataStore sharedStore], client)
    filter:^BOOL (OCTClient *client) {
        return client != nil && client.authenticated;
    }]
    subscribeNext:^(OCTClient *client) {
        [[[client fetchUserRepositories] deliverOn:RACScheduler.mainThreadScheduler]
            subscribeNext:^(OCTRepository *fetchedRepo) {
                NSLog(@"Received new repo: %@", fetchedRepo.name);
            }
            error:^(NSError *error) {
                NSLog(@"Error fetching repos: %@", error.localizedDescription);
            }];
    } completed:^{
        NSLog(@"Completed fetching repos");
    }];
I originally assumed that -subscribeNext: would pass an NSArray, but now understand that it sends the message for every "next" object returned, which in this case is an OCTRepository.
Now I could do something like this:
NSMutableArray *repos = [NSMutableArray array];
// most of that code above
subscribeNext:^(OCTRepository *fetchedRepo) {
    [repos addObject:fetchedRepo];
}
// the rest of the code above
Sure, this works, but it doesn't seem to follow the functional principles that RAC enables. I'm really trying to stick to conventions here. Any light on capabilities of RAC/Octokit are greatly appreciated!
It largely depends on what you want to do with the repositories afterward. It seems like you want to do something once you have all the repositories, so I'll set up an example that does that.
// Watch for the client to change
RAC(self.repositories) = [[[[[RACAbleWithStart([GHDataStore sharedStore], client)
    // Ignore clients that aren't authenticated
    filter:^BOOL (OCTClient *client) {
        return client != nil && client.authenticated;
    }]
    // For each client, execute the block. Returns a signal that sends a signal
    // to fetch the user repositories whenever a new client comes in. A signal
    // of signals is often used to do some work in response to some other work.
    // Oftentimes you'd want to use `-flattenMap:`, but we're using `-map:` with
    // `-switchToLatest` so the resultant signal will only send repositories for
    // the most recent client.
    map:^(OCTClient *client) {
        // -collect will send a single value--an NSArray with all of the values
        // that were sent on the original signal.
        return [[client fetchUserRepositories] collect];
    }]
    // Switch to the latest signal that was returned from the map block.
    switchToLatest]
    // Execute a block when an error occurs, but don't alter the values sent on
    // the original signal.
    doError:^(NSError *error) {
        NSLog(@"Error fetching repos: %@", error.localizedDescription);
    }]
    deliverOn:RACScheduler.mainThreadScheduler];
Now self.repositories will change (and fire a KVO notification) whenever the repositories are updated from the client.
A couple things to note about this:
It's best to avoid subscribeNext: whenever possible. Using it steps outside of the functional paradigm (as do doNext: and doError:, but they're also helpful tools at times). In general, you want to think about how you can transform the signal into something that does what you want.
If you want to chain one or more pieces of work together, you often want to use flattenMap:. More generally, you want to start thinking about signals of signals--signals that send other signals that represent the other work.
You often want to wait as long as possible to move work back to the main thread.
When thinking through a problem, it's sometimes valuable to start by writing out each individual signal to think about a) what you have, b) what you want, and c) how to get from one to the other.
EDIT: Updated to address @JustinSpahrSummers' comment below.
There is a -collect operator that should do exactly what you're looking for.
// Collect all receiver's `next`s into a NSArray. nil values will be converted
// to NSNull.
//
// This corresponds to the `ToArray` method in Rx.
//
// Returns a signal which sends a single NSArray when the receiver completes
// successfully.
- (RACSignal *)collect;
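Applied to the code in the question, -collect turns the per-repository subscription into a single array delivery; a small sketch reusing the question's own method names:

[[[[client fetchUserRepositories]
    collect]
    deliverOn:RACScheduler.mainThreadScheduler]
    subscribeNext:^(NSArray *repos) {
        // A single NSArray of OCTRepository objects, sent once the
        // underlying signal completes successfully.
        NSLog(@"Received %lu repos", (unsigned long)repos.count);
    } error:^(NSError *error) {
        NSLog(@"Error fetching repos: %@", error.localizedDescription);
    }];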

Pre-processing a loop in Objective-C

I am currently writing a program to help me control complex lights installations. The idea is I tell the program to start a preset, then the app has three options (depending on the preset type):
1) the lights go to one position (so only one group of data is sent when the preset starts)
2) the lights follow a mathematical equation (e.g. a sine wave with a timer to make smooth circles)
3) the lights respond to a flow of data (e.g. a MIDI controller)
So I decided to go with an object I call the AppBrain, which receives data from the controllers and the templates, but is also able to send processed data to the lights.
Now, I come from non-native programming, and I kinda have trust issues about working with a lot of processing, events and timing, as well as trouble understanding 100% of the Cocoa logic.
This is where the actual question starts, sorry.
What I want to do is, when I load the preset, parse it to prepare the timer/data-receive event so it doesn't have to go through every option for 100 lights 100 times per second.
To explain more deeply, here's how I would do it in Javascript (crappy pseudo code, of course)
var lightsFunctions = {};
function prepareTemplate(theTemplate){
    //Let's assume here the template is just an array, and I won't show all the processing
    switch(theTemplate.typeOfTemplate){
        case "simpledata":
            sendAllDataToLights(); // Simple here
            break;
        case "periodic":
            for(light in theTemplate.lights){
                switch(light.typeOfEquation){
                    case "sin":
                        lightsFunctions[light.id] = doTheSinus; // doTheSinus being an existing function
                        break;
                    case "cos":
                        ...
                }
            }
            function onFrame(){
                for(light in lightsFunctions){
                    lightsFunctions[light]();
                }
            }
            var theTimer = setInterval(onFrame, theTemplate.delay);
            break;
        case "controller":
            //do the same pre-processing without the timer, to know which function to execute for which light
            break;
    }
}
So, my idea is to store the processing functions I need in an NSArray, so I don't need to test the type on each frame and lose time/CPU.
I don't know if I'm clear, or if my idea is possible/the good way to go. I'm mostly looking for algorithm ideas, and if you have some code that might point me in the right direction... (I know of performSelector, but I don't know if it is the best for this situation.)
Thanks;
I_
First of all, don't spend time optimizing what you don't know is a performance problem. 100 iterations of this kind are nothing in the native world, even on the weaker mobile CPUs.
Now, to your problem. I take it you are writing some kind of configuration / DSL to specify the light control sequences. One way of doing it is to store blocks in your NSArray. A block is the equivalent of a function object in JavaScript. So for example:
typedef void (^LightFunction)(void);

- (NSArray*) parseProgram ... {
    NSMutableArray* result = [NSMutableArray array];
    if(...) {
        LightFunction simpleData = ^{ sendDataToLights(); };
        [result addObject:simpleData];
    } else if(...) {
        Light* light = [self getSomeLight:...];
        LightFunction periodic = ^{
            // Note how you can access the local scope of the outside function.
            // Make sure you use automatic reference counting for this.
            [light doSomethingWithParam:someParam];
        };
        [result addObject:periodic];
    }
    return result;
}
...
NSArray* program = [self parseProgram:...];

// To run your program
for(LightFunction func in program) {
    func();
}
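To mirror the onFrame/timer part of the JavaScript pseudocode, here is one hedged sketch using NSTimer; the property names and the 100 Hz interval are illustrative assumptions, not from the original post:

// Assumed properties on the AppBrain:
// @property (strong) NSArray *program;    // of LightFunction blocks
// @property (strong) NSTimer *frameTimer;

- (void)startPeriodicPreset {
    self.frameTimer = [NSTimer scheduledTimerWithTimeInterval:0.01 // ~100 Hz
                                                       target:self
                                                     selector:@selector(onFrame:)
                                                     userInfo:nil
                                                      repeats:YES];
}

- (void)onFrame:(NSTimer *)timer {
    // No per-frame type tests: the dispatch decision was made once at parse time.
    for (LightFunction func in self.program) {
        func();
    }
}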

OpenAL sounds don't play after an incoming call, until app restart

I'm playing game sounds using OpenAL, and bg music using standard AV. Recently I've found that after an
incoming call all OpenAL sounds stop working while the bg music is still playing. If I force-stop the app and start it again,
the sounds appear again. Does somebody happen to know what's happening to OpenAL during/after an incoming call?
OK, it seems I've found a solution.
I'm using an Obj-C sound manager, so I just added the beginInterruption and endInterruption delegate methods of AVAudioSession (and AVAudioPlayer) to my class.
beginInterruption looks like:
alcMakeContextCurrent(NULL);
and endInterruption looks something like:
NSError *audioSessionError = nil;
[audioSession setCategory:soundCategory error:&audioSessionError];
if (audioSessionError)
{
    Log(@"ERROR - SoundManager: Unable to set the audio session category");
    return;
}

// Set the audio session state to active and report any errors
audioSessionError = nil;
[audioSession setActive:YES error:&audioSessionError];
if (audioSessionError)
{
    Log(@"ERROR - SoundManager: Unable to set the audio session state to YES: %@", audioSessionError);
    return;
}

// music players handling
bool plays = false;
if (musicPlayer[currentPlayer] != nil)
    plays = [musicPlayer[currentPlayer] isPlaying];
if (musicPlayer[currentPlayer] != nil && !plays)
    [musicPlayer[currentPlayer] play];

alcMakeContextCurrent(context);
Yes, this works if you're using only OpenAL sounds. But to play long tracks you should use AVAudioPlayer.
And here's the Apple magic again! If you play music along with OpenAL sounds, something odd happens.
Cancel the incoming call, and AVAudioSessionDelegate::endInterruption and AVAudioPlayerDelegate::audioPlayerEndInterruption will never be called. Only beginInterruption, not the end.
Even AppDelegate::applicationWillEnterForeground will not be called, and the app just doesn't know that we've returned.
But the good news is that you can call your endInterruption in the AppDelegate::applicationDidBecomeActive method, and the OpenAL context will be restored. And this works!
- (void)applicationDidBecomeActive:(UIApplication *)application
{
    if (MySoundMngr != nil)
    {
        [MySoundMngr endInterruption];
    }
    // Restart any tasks that were paused and so on....
}
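For what it's worth, the beginInterruption/endInterruption delegate callbacks used above were later deprecated in favor of AVAudioSessionInterruptionNotification; a hedged sketch of the equivalent wiring (MySoundMngr and its methods are from the answer above):

// Subscribe once, e.g. at app launch.
[[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionInterruptionNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    NSUInteger type = [note.userInfo[AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (type == AVAudioSessionInterruptionTypeBegan) {
        [MySoundMngr beginInterruption]; // alcMakeContextCurrent(NULL), stop sources
    } else {
        [MySoundMngr endInterruption];   // reactivate the session, restore the context
    }
}];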
I had a hard time figuring this out, so I wanted to add my answer here. This is all specifically in Xamarin, but I suspect it applies generally and is similar to @Tertium's answer.
You can prevent iOS from interrupting your audio in some situations (e.g., getting a phone call but declining it), using AVAudioSession.SharedInstance().SetPrefersNoInterruptionsFromSystemAlerts(true, out NSError err);
You will still be interrupted in some situations (e.g., you accept a phone call). To catch these you must call AVAudioSession.Notifications.ObserveInterruption(myAudioInterruptionHandler) when you launch your app.
Inside this handler, you can determine if you're shutting down or coming back like so:
void myAudioInterruptionHandler(object sender, AVAudioSessionInterruptionEventArgs args) {
    args.Notification.UserInfo.TryGetValue(
        new NSString("AVAudioSessionInterruptionTypeKey"),
        out NSObject typeKey
    );
    bool isBeginningInterruption = (typeKey.ToString() == "1");
    // ...
}
Interruption Begins
When the interruption begins, stop whatever audio is playing (you'll need to handle this on your own based on your app, but probably by calling AL.SourceStop on everything).
Then, critically,
ContextHandle audioContextHandle = Alc.GetCurrentContext();
Alc.MakeContextCurrent(ContextHandle.Zero);
If you don't do this right away, iOS will fry your ALC context and you are doomed. Note that if you have a handler for AudioRouteChanged this is too late, you must do it in the AudioInterruption handler.
Interruption Ends
When you're coming back from the interruption, first reboot your iOS audio session:
AVAudioSession.SharedInstance().SetActive(true);
You may also need to reset your preferred input (I think this step is optional if you always use the default input) AVAudioSession.SharedInstance().SetPreferredInput(Input, out NSError err)
Then restore your context
Alc.MakeContextCurrent(audioContextHandle);

How do you set the input level (gain) on the built-in input (OSX Core Audio / Audio Unit)?

I've got an OSX app that records audio data using an Audio Unit. The Audio Unit's input can be set to any available source with inputs, including the built-in input. The problem is, the audio that I get from the built-in input is often clipped, whereas in a program such as Audacity (or even Quicktime) I can turn down the input level and I don't get clipping.
Multiplying the sample frames by a fraction, of course, doesn't work, because I get a lower volume, but the samples themselves are still clipped at time of input.
How do I set the input level or gain for that built-in input to avoid the clipping problem?
This works for me to set the input volume on my MacBook Pro (2011 model). It is a bit funky: I had to try setting the master channel volume, then each independent stereo channel volume, until I found one that worked. Look through the comments in my code; I suspect the best way to tell if your code is working is to find a get/set-property combination that works, then do something like get/set (something else)/get to verify that your setter is working.
Oh, and I'll point out, of course, that I wouldn't rely on the values in address staying the same across getProperty calls as I'm doing here. It seems to work, but it's definitely bad practice to rely on struct values being the same when you pass one by reference to a function. This is of course sample code, so please forgive my laziness. ;)
//
// main.c
// testInputVolumeSetter
//
#include <CoreFoundation/CoreFoundation.h>
#include <CoreAudio/CoreAudio.h>
OSStatus setDefaultInputDeviceVolume( Float32 toVolume );
int main(int argc, const char * argv[]) {
    OSStatus err;

    // Load the Sound system preference, select a default
    // input device, set its volume to max. Now set
    // breakpoints at each of these lines. As you step over
    // them you'll see the input volume change in the Sound
    // preference panel.
    //
    // On my MacBook Pro setting the channel[ 1 ] volume
    // on the default microphone input device seems to do
    // the trick. channel[ 0 ] reports that it works but
    // seems to have no effect and the master channel is
    // unsettable.
    //
    // I do not know how to tell which one will work so
    // probably the best thing to do is write your code
    // to call getProperty after you call setProperty to
    // determine which channel(s) work.
    err = setDefaultInputDeviceVolume( 0.0 );
    err = setDefaultInputDeviceVolume( 0.5 );
    err = setDefaultInputDeviceVolume( 1.0 );
}
// 0.0 == no volume, 1.0 == max volume
OSStatus setDefaultInputDeviceVolume( Float32 toVolume ) {
    AudioObjectPropertyAddress address;
    AudioDeviceID deviceID;
    OSStatus err;
    UInt32 size;
    UInt32 channels[ 2 ];
    Float32 volume;

    // get the default input device id
    address.mSelector = kAudioHardwarePropertyDefaultInputDevice;
    address.mScope = kAudioObjectPropertyScopeGlobal;
    address.mElement = kAudioObjectPropertyElementMaster;
    size = sizeof(deviceID);
    err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &address, 0, NULL, &size, &deviceID );

    // get the input device stereo channels
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyPreferredChannelsForStereo;
        address.mScope = kAudioDevicePropertyScopeInput;
        address.mElement = kAudioObjectPropertyElementWildcard;
        size = sizeof(channels);
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &channels );
    }

    // run some tests to see what channels might respond to volume changes
    if ( ! err ) {
        Boolean hasProperty;
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;

        // On my MacBook Pro using the default microphone input:
        address.mElement = kAudioObjectPropertyElementMaster;
        // returns false, no VolumeScalar property for the master channel
        hasProperty = AudioObjectHasProperty( deviceID, &address );

        address.mElement = channels[ 0 ];
        // returns true, channel 0 has a VolumeScalar property
        hasProperty = AudioObjectHasProperty( deviceID, &address );

        address.mElement = channels[ 1 ];
        // returns true, channel 1 has a VolumeScalar property
        hasProperty = AudioObjectHasProperty( deviceID, &address );
    }

    // try to get the input volume
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;

        size = sizeof(volume);
        address.mElement = kAudioObjectPropertyElementMaster;
        // returns an error which we expect since it reported not having the property
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &volume );

        size = sizeof(volume);
        address.mElement = channels[ 0 ];
        // returns noErr, but says the volume is always zero (weird)
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &volume );

        size = sizeof(volume);
        address.mElement = channels[ 1 ];
        // returns noErr, and returns the correct volume!
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &volume );
    }

    // try to set the input volume
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;
        size = sizeof(volume);

        if ( toVolume < 0.0 ) volume = 0.0;
        else if ( toVolume > 1.0 ) volume = 1.0;
        else volume = toVolume;

        address.mElement = kAudioObjectPropertyElementMaster;
        // returns an error which we expect since it reported not having the property
        err = AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &volume );

        address.mElement = channels[ 0 ];
        // returns noErr, but doesn't affect my input gain
        err = AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &volume );

        address.mElement = channels[ 1 ];
        // success! correctly sets the input device volume.
        err = AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &volume );
    }
    return err;
}
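As suggested at the top of this answer, the reliable way to find the channel that actually works is a set-then-get round trip; a small hedged sketch building on the same deviceID/address setup as the listing above (the helper name and test value are mine):

// Hypothetical verification helper: set a channel's volume, then read it back.
// Assumes `address` is already configured for kAudioDevicePropertyVolumeScalar
// with input scope and mElement set to the channel under test.
static Boolean channelRespondsToVolume( AudioDeviceID deviceID,
                                        AudioObjectPropertyAddress address ) {
    Float32 wanted = 0.25;
    UInt32 size = sizeof(wanted);
    if ( AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &wanted ) )
        return false;
    Float32 actual = -1.0;
    size = sizeof(actual);
    if ( AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &actual ) )
        return false;
    // If the value read back matches what we set, this channel really works.
    return ( actual - wanted < 0.01 ) && ( wanted - actual < 0.01 );
}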
EDIT in response to your question, "How'd [I] figure this out?"
I've spent a lot of time using Apple's audio code over the last five or so years and I've developed some intuition/process when it comes to where and how to look for solutions. My business partner and I co-wrote the original iHeartRadio apps for the first-generation iPhone and a few other devices and one of my responsibilities on that project was the audio portion, specifically writing an AAC Shoutcast stream decoder/player for iOS. There weren't any docs or open-source examples at the time so it involved a lot of trial-and-error and I learned a ton.
At any rate, when I read your question and saw the bounty I figured this was just low-hanging fruit (i.e. you hadn't RTFM ;-). I wrote a few lines of code to set the volume property and when that didn't work I genuinely got interested.
Process-wise maybe you'll find this useful:
Once I knew it wasn't a straightforward answer I started thinking about how to solve the problem. I knew the Sound System Preference lets you set the input gain so I started by disassembling it with otool to see whether Apple was making use of old or new Audio Toolbox routines (new as it happens):
Try using:
otool -tV /System/Library/PreferencePanes/Sound.prefPane/Contents/MacOS/Sound | bbedit
then search for Audio to see what methods are called (if you don't have bbedit, which every Mac developer should have IMO, dump it to a file and open it in some other text editor).
I'm most familiar with the older, deprecated Audio Toolbox routines (three years to obsolescence in this industry) so I looked at some Technotes from Apple. They have one that shows how to get the default input device and set its volume using the newest CoreAudio methods but as you undoubtedly saw their code doesn't work properly (at least on my MBP).
Once I got to that point I fell back on the tried-and-true: Start googling for keywords that were likely to be involved (e.g. AudioObjectSetPropertyData, kAudioDevicePropertyVolumeScalar, etc.) looking for example usage.
One interesting thing I've found about CoreAudio and using the Apple Toolbox in general is that there is a lot of open-source code out there where people try various things (tons of pastebins and GoogleCode projects etc.). If you're willing to dig through a bunch of this code you'll typically either find the answer outright or get some very good ideas.
In my search the most relevant things I found were the Apple technote showing how to get the default input device and set the master input gain using the new Toolbox routines (even though it didn't work on my hardware), and I found some code that showed setting the gain by channel on an output device. Since input devices can be multichannel I figured this was the next logical thing to try.
Your question is really good because at least right now there is no correct documentation from Apple that shows how to do what you asked. It's goofy too because both channels report that they set the volume but clearly only one of them does (the input mic is a mono source so this isn't surprising, but I consider having a no-op channel and no documentation about it a bit of a bug on Apple's part).
This happens pretty consistently when you start dealing with Apple's cutting-edge technologies. You can do amazing things with their toolbox and it blows every other OS I've worked on out of the water but it doesn't take too long to get ahead of their documentation, particularly if you're trying to do anything moderately sophisticated.
If you ever decide to write a kernel driver for example you'll find the documentation on IOKit to be woefully inadequate. Ultimately you've got to get online and dig through source code, either other people's projects or the OS X source or both, and pretty soon you'll conclude as I have that the source is really the best place for answers (even though StackOverflow is pretty awesome).
Thanks for the points and good luck with your project :)