Change OS X system volume programmatically - objective-c

How can I change the volume programmatically from Objective-C?
I found this question, Controlling OS X volume in Snow Leopard, which suggests doing:
Float32 volume = 0.5;
UInt32 size = sizeof(Float32);
AudioObjectPropertyAddress address = {
kAudioDevicePropertyVolumeScalar,
kAudioDevicePropertyScopeOutput,
1 // Use values 1 and 2 here, 0 (master) does not seem to work
};
OSStatus err;
err = AudioObjectSetPropertyData(kAudioObjectSystemObject, &address, 0, NULL, size, &volume);
NSLog(#"status is %i", err);
This does nothing for me, and prints out status is 2003332927.
I also tried using values 2 and 0 in the address structure, same result for both.
How can I fix this and make it actually decrease the volume to 50%?

You need to get the default audio device first:
#import <CoreAudio/CoreAudio.h>
AudioObjectPropertyAddress getDefaultOutputDevicePropertyAddress = {
kAudioHardwarePropertyDefaultOutputDevice,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster
};
AudioDeviceID defaultOutputDeviceID;
UInt32 volumedataSize = sizeof(defaultOutputDeviceID);
OSStatus result = AudioObjectGetPropertyData(kAudioObjectSystemObject,
&getDefaultOutputDevicePropertyAddress,
0, NULL,
&volumedataSize, &defaultOutputDeviceID);
if(kAudioHardwareNoError != result)
{
// ... handle error ...
}
You can then set your volume on channel 1 (left) and channel 2 (right). Note that channel 0 (master) does not seem to be supported: the set command returns 'who?' (kAudioHardwareBadObjectError), which is the same status code, 2003332927, that you are seeing.
AudioObjectPropertyAddress volumePropertyAddress = {
kAudioDevicePropertyVolumeScalar,
kAudioDevicePropertyScopeOutput,
1 /*LEFT_CHANNEL*/
};
Float32 volume = 0.5; // the scalar volume to set (0.0 to 1.0)
volumedataSize = sizeof(volume);
result = AudioObjectSetPropertyData(defaultOutputDeviceID,
&volumePropertyAddress,
0, NULL,
sizeof(volume), &volume);
if (result != kAudioHardwareNoError) {
// ... handle error ...
}
Hope this answers your question!

I ran the HALLab utility that comes with the developer tools (i.e. Audio Tools for Xcode). That allows you to open an info window for individual devices and that window has a tab showing notifications. When I change my system volume, I do indeed see that the kAudioDevicePropertyVolumeScalar property changes for each channel of the output device as Thomas O'Dell's answer suggests. However, I also see the property kAudioHardwareServiceDeviceProperty_VirtualMasterVolume change on the master channel. That seems much more promising since you don't have to manually set it for all channels and maintain the balance across them.
You would use the function AudioHardwareServiceSetPropertyData() from Audio Hardware Services to set that on the default output device. To be safe, you might first check that it's settable using AudioHardwareServiceIsPropertySettable().
The documentation for that property says:
kAudioHardwareServiceDeviceProperty_VirtualMasterVolume
A Float32 value that represents the value of the volume control.
The range for this property’s value is 0.0 (silence) through 1.0 (full level). The effect of this property depends on the hardware device associated with the HAL audio object. If the device has a master volume control, this property controls it. If the device has individual channel volume controls, this property applies to those identified by the device's preferred multichannel layout, or the preferred stereo pair if the device is stereo only. This control maintains relative balance between the channels it affects.
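Putting that together, here is a rough sketch of the Audio Hardware Services route. It assumes defaultOutputDeviceID was obtained as in the answer above; the AudioHardwareService* calls live in the AudioToolbox framework (and were later deprecated in favor of the plain AudioObject* calls), so treat this as a starting point rather than verified production code:
#import <CoreAudio/CoreAudio.h>
#import <AudioToolbox/AudioToolbox.h>

AudioObjectPropertyAddress virtualMasterAddress = {
    kAudioHardwareServiceDeviceProperty_VirtualMasterVolume,
    kAudioDevicePropertyScopeOutput,
    kAudioObjectPropertyElementMaster
};
Boolean settable = false;
OSStatus status = AudioHardwareServiceIsPropertySettable(defaultOutputDeviceID,
                                                         &virtualMasterAddress,
                                                         &settable);
if (status == kAudioHardwareNoError && settable) {
    Float32 volume = 0.5; // 0.0 = silence, 1.0 = full level
    status = AudioHardwareServiceSetPropertyData(defaultOutputDeviceID,
                                                 &virtualMasterAddress,
                                                 0, NULL,
                                                 sizeof(volume), &volume);
}
if (status != kAudioHardwareNoError) {
    // ... handle error ...
}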

You could also shell out to osascript (AppleScript) to change the master volume. This avoids having to set each channel separately (and momentarily unbalancing the audio):
Muted:
execlp("osascript", "osascript", "-e", "set volume output muted true", NULL);
Change volume (scale 0-10):
execlp("osascript", "osascript", "-e", "set volume 5", NULL);

Related

CoreMediaIO, an alleged OS Bug / Memory Leak

Environment
OS X 10.10
Xcode 6.4
C++/Obj-C OS X application
Use-case
Capture video using CoreMediaIO, capture source is iPod5
Capturing machine is running OS X Yosemite
Capture feed consists of Video and Audio samples
Problem description
Video capture works fine, but while video samples are being received there is an accumulating memory leak; when no video samples are received (audio only), there is no leak (memory consumption stops growing).
I am mixing Cocoa threads and POSIX threads, and I have made sure [NSThread isMultiThreaded] is set to YES (by creating an empty NSThread).
"Obj-C Auto Ref Counting" is set to YES in the project properties
The following is a short snippet of the code causing the leak:
OSStatus status = 0;
@autoreleasepool {
m_udid = udid;
if (0 != (status = Utils::CoreMediaIO::FindDeviceByUDID(m_udid, m_devId)))
return HRESULT_FROM_WIN32(ERROR_NOT_FOUND);
if (0 != (status = Utils::CoreMediaIO::GetStreamByIndex(m_devId, kCMIODevicePropertyScopeInput, 0, m_strmID)))
return HRESULT_FROM_WIN32(ERROR_NOT_FOUND);
status = Utils::CoreMediaIO::SetPtopertyData(m_devId, kCMIODevicePropertyExcludeNonDALAccess, 1U);
status = Utils::CoreMediaIO::SetPtopertyData<int>(m_devId, kCMIODevicePropertyDeviceMaster, getpid());// Exclusive access for the calling process
// Results in an infinitely accumulating memory leak
status = CMIOStreamCopyBufferQueue(m_strmID, [](CMIOStreamID streamID, void* token, void* refCon) {
@autoreleasepool {
CMSampleBufferRef sampleBuffer;
while(0 != (sampleBuffer = (CMSampleBufferRef)CMSimpleQueueDequeue(m_queueRef))) {
CFRelease(sampleBuffer);
sampleBuffer = 0;
}
}
}, this, &m_queueRef);
if(noErr != status)
return E_FAIL;
if(noErr != (status = CMIODeviceStartStream(m_devId, m_strmID)))
return E_FAIL;
}
Having sample de-queuing done on the main thread (using dispatch_async(dispatch_get_main_queue(), ^{ ... })) didn't have any effect...
Is there anything wrong with the above code snippet? Might this be an OS bug?
Reference link: https://forums.developer.apple.com/message/46752#46752
AN UPDATE
QuickTime Player supports using an iOS device as a capture source (mirroring its A/V to the Mac). Having a preview session running for a while reproduces the above-mentioned problem with the OS-provided QuickTime Player, which strongly indicates an OS bug. Below is a screenshot showing the QT Player taking 140 MB of RAM after running for ~2 hours (it starts around 20 MB); by the end of the day it had grown to ~760 MB...
Apple, please have this fixed; I have standing customer commitments...

Microsoft Kinect and background/environmental noise

I am currently programming with the Microsoft Kinect for Windows SDK 2 on Windows 8.1. Things are going well, and in a home dev environment obviously there is not much noise in the background compared to the 'real world'.
I would like to seek some advice from those with experience in 'real world' applications with the Kinect. How does the Kinect (especially v2) fare in a live environment with passers-by, onlookers, and unexpected objects in the background? I do expect that in the space between the Kinect sensor and the user there will usually not be interference; what I am most mindful of right now is background noise as such.
While I am aware that the Kinect does not track well under direct sunlight (either on the sensor or the user) - are there certain lighting conditions or other external factors I need to factor into the code?
The answer I am looking for is:
What kind of issues can arise in a live environment?
How did you code or work your way around it?
Outlaw Lemur has described in detail most of the issues you may encounter in real-world scenarios.
Using Kinect for Windows version 2, you do not need to adjust the motor, since there is no motor and the sensor has a larger field of view. This will make your life much easier.
I would like to add the following tips and advice:
1) Avoid direct light (sunlight or artificial lighting)
Kinect has an infrared sensor that can be confused by direct light. The sensor should not be hit directly by any light source. You can emulate such an environment at your home or office by playing with an ordinary laser pointer and torches.
2) If you are tracking only one person, select the closest tracked user
If your app only needs one player, that player needs to be a) fully tracked and b) closer to the sensor than the others. It's an easy way to make participants understand who is tracked without making your UI more complex.
public static Body Default(this IEnumerable<Body> bodies)
{
Body result = null;
double closestBodyDistance = double.MaxValue;
foreach (var body in bodies)
{
if (body.IsTracked)
{
var position = body.Joints[JointType.SpineBase].Position;
var distance = position.Length();
if (result == null || distance < closestBodyDistance)
{
result = body;
closestBodyDistance = distance;
}
}
}
return result;
}
3) Use the tracking IDs to distinguish different players
Each player has a TrackingID property. Use that property when players interfere or move at random positions. Do not use that property as an alternative to face recognition though.
ulong _trackingID1 = 0;
ulong _trackingID2 = 0;
void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
using (var frame = e.FrameReference.AcquireFrame())
{
if (frame != null)
{
frame.GetAndRefreshBodyData(_bodies);
var bodies = _bodies.Where(b => b.IsTracked).ToList();
if (bodies != null && bodies.Count >= 2 && _trackingID1 == 0 && _trackingID2 == 0)
{
_trackingID1 = bodies[0].TrackingId;
_trackingID2 = bodies[1].TrackingId;
// Alternatively, specify body1 and body2 according to their distance from the sensor.
}
Body first = bodies.Where(b => b.TrackingId == _trackingID1).FirstOrDefault();
Body second = bodies.Where(b => b.TrackingId == _trackingID2).FirstOrDefault();
if (first != null)
{
// Do something...
}
if (second != null)
{
// Do something...
}
}
}
}
4) Display warnings when a player is too far or too close to the sensor.
To achieve higher accuracy, players need to stand at a specific distance: not too far or too close to the sensor. Here's how to check this:
const double MIN_DISTANCE = 1.0; // in meters
const double MAX_DISTANCE = 4.0; // in meters
double distance = body.Joints[JointType.SpineBase].Position.Z; // in meters, too
if (distance > MAX_DISTANCE)
{
// Prompt the player to move closer.
}
else if (distance < MIN_DISTANCE)
{
// Prompt the player to move farther.
}
else
{
// Player is at the right distance.
}
5) Always know when a player entered or left the scene.
Vitruvius provides an easy way to understand when someone entered or left the scene.
Here is the source code and here is how to use it in your app:
UsersController userReporter = new UsersController();
userReporter.BodyEntered += UserReporter_BodyEntered;
userReporter.BodyLeft += UserReporter_BodyLeft;
userReporter.Start();
void UserReporter_BodyEntered(object sender, UsersControllerEventArgs e)
{
// A new user has entered the scene. Get the ID from e param.
}
void UserReporter_BodyLeft(object sender, UsersControllerEventArgs e)
{
// A user has left the scene. Get the ID from e param.
}
6) Have a visual clue of which player is tracked
If there are a lot of people surrounding the player, you may need to show on-screen who is tracked. You can highlight the depth frame bitmap or use Microsoft's Kinect Interactions.
This is an example of removing the background and keeping the player pixels only.
7) Avoid glossy floors
Some floors (bright, glossy) may mirror people and Kinect may confuse some of their joints (for example, Kinect may extend your legs to the reflected body). If you can't avoid glossy floors, use the FloorClipPlane property of your BodyFrame. However, the best solution would be to have a simple carpet where you expect people to stand. A carpet would also act as an indication of the proper distance, so you would provide a better user experience.
I created an application for home use like yours before, and then presented that same application in a public setting. The result was embarrassing for me, because there were many errors I would never have anticipated in a controlled environment. However, that did help me, because it led me to add some interesting adjustments to my code, which is centered around human detection only.
Have conditions for checking the validity of a "human".
When I showed my application in the middle of a presentation floor with many other objects and props, I found that even chairs could be mistaken for people for brief moments, which led to my application switching between the user and an inanimate object, causing it to lose track of the user and their progress. To counter this and other false-positive human detections, I added my own additional checks for a human. My most successful method was comparing the proportions of a human body, measured in head units (see the head-units picture). Below is the code for how I did this (SDK version 1.8, C#):
bool PersonDetected = false;
double[] humanRatios = { 1.0f, 4.0, 2.33, 3.0 };
/*Array indexes
* 0 - Head (shoulder to head)
* 1 - Leg length (foot to knee to hip)
* 2 - Width (shoulder to shoulder center to shoulder)
* 3 - Torso (hips to shoulder)
*/
....
double[] currentRatios = new double[4];
double headSize = Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.Head]);
currentRatios[0] = 1.0f;
currentRatios[1] = (Distance(skeletons[0].Joints[JointType.FootLeft], skeletons[0].Joints[JointType.KneeLeft]) + Distance(skeletons[0].Joints[JointType.KneeLeft], skeletons[0].Joints[JointType.HipLeft])) / headSize;
currentRatios[2] = (Distance(skeletons[0].Joints[JointType.ShoulderLeft], skeletons[0].Joints[JointType.ShoulderCenter]) + Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.ShoulderRight])) / headSize;
currentRatios[3] = Distance(skeletons[0].Joints[JointType.HipCenter], skeletons[0].Joints[JointType.ShoulderCenter]) / headSize;
int correctProportions = 0;
for (int i = 1; i < currentRatios.Length; i++)
{
double diff = currentRatios[i] - humanRatios[i];
if (Math.Abs(diff) <= MaximumDiff) // MaximumDiff is a tunable threshold; I used 0.2
correctProportions++;
}
if (correctProportions >= 2)
PersonDetected = true;
Another method I had success with was looking at the sum of the squared distances between consecutive joints. I found that non-human detections had more variable summed distances, whereas humans were more consistent. I learned the threshold using a one-dimensional support vector machine (I found users' summed distances were generally less than 9).
//in AllFramesReady or SkeletalFrameReady
Skeleton data;
...
float lastPosX = 0; // trying to detect false-positives
float lastPosY = 0;
float lastPosZ = 0;
float diff = 0;
foreach (Joint joint in data.Joints)
{
//add the distance squared
diff += (joint.Position.X - lastPosX) * (joint.Position.X - lastPosX);
diff += (joint.Position.Y - lastPosY) * (joint.Position.Y - lastPosY);
diff += (joint.Position.Z - lastPosZ) * (joint.Position.Z - lastPosZ);
lastPosX = joint.Position.X;
lastPosY = joint.Position.Y;
lastPosZ = joint.Position.Z;
}
if (diff < 9)//this is what my svm learned
PersonDetected = true;
Use player IDs and indexes to remember who is who
This ties in with the previous issue: if the Kinect switched the two users it was tracking to other people, my application would crash because of the sudden changes in data. To counter this, I kept track of both each player's skeletal index and their player ID. To learn more about how I did this, see Kinect user Detection.
Add adjustable parameters to adapt to varying situations
Where I was presenting, the same tilt angle and other basic Kinect parameters (like near-mode) did not work in the new environment. Let the user adjust some of these parameters so they can get the best setup for the job.
Expect people to do stupid things
The next time I presented, I had adjustable tilt, and you can guess whether or not someone burned out the Kinect's motor. Anything that can be broken on a Kinect, someone will break. Leaving a warning in your documentation is not sufficient. You should add cautionary checks on the Kinect's hardware to make sure people don't get upset when they break something inadvertently. Here is some code that checks whether the user has adjusted the motor more than 20 times in two minutes.
int motorAdjustments = 0;
DateTime firstAdjustment;
...
//in motor adjustment code
if (motorAdjustments == 0)
firstAdjustment = DateTime.Now;
++motorAdjustments;
if (motorAdjustments < 20)
{
//adjust the tilt
}
else
{
DateTime timeCheck = firstAdjustment;
if (DateTime.Now > timeCheck.AddMinutes(2))
{
//reset all variables
motorAdjustments = 1;
firstAdjustment = DateTime.Now;
//adjust the tilt
}
}
I would note that all of these were issues for me with the first version of the Kinect, and I don't know how many of them have been solved in the second version, as I sadly haven't gotten my hands on one yet. However, I would still implement some of these techniques (if only as back-up techniques), because there will be exceptions, especially in computer vision.

Cocoa - detect event when camera started recording

In my OS X application I'm using the code below to show a preview from the camera.
[[self session] beginConfiguration];
NSError *error = nil;
AVCaptureDeviceInput *newVideoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (captureDevice != nil) {
[[self session] removeInput: [self videoDeviceInput]];
if([[self session] canAddInput: newVideoDeviceInput]) {
[[self session] addInput:newVideoDeviceInput];
[self setVideoDeviceInput:newVideoDeviceInput];
} else {
DLog(#"WTF?");
}
}
[[self session] commitConfiguration];
Yet, I need to detect the exact moment when the preview from the camera becomes available.
In other words, I'm trying to detect the same moment as in FaceTime on OS X, where the animation starts once the camera provides the preview.
What is the best way to achieve this?
I know this question is really old, but I stumbled upon it too when I was looking into the same question, and I have found answers, so here goes.
For starters, AVFoundation is too high level; you'll need to drop down to a lower level, CoreMediaIO. There's not a lot of documentation on this, but basically you need to perform a couple of queries.
To do this, we'll use a combination of calls. First, CMIOObjectGetPropertyDataSize lets us get the size of the data we'll query for next, which we can then use when we call CMIOObjectGetPropertyData. To set up the get property data size call, we need to start at the top, using this property address:
var opa = CMIOObjectPropertyAddress(
mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyDevices),
mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMaster)
)
Next, we'll set up some variables to keep the data we'll need:
var (dataSize, dataUsed) = (UInt32(0), UInt32(0))
var result = CMIOObjectGetPropertyDataSize(CMIOObjectID(kCMIOObjectSystemObject), &opa, 0, nil, &dataSize)
var devices: UnsafeMutableRawPointer? = nil
From this point on, we'll need to wait until we get some data out, so let's busy loop:
repeat {
if devices != nil {
free(devices)
devices = nil
}
devices = malloc(Int(dataSize))
result = CMIOObjectGetPropertyData(CMIOObjectID(kCMIOObjectSystemObject), &opa, 0, nil, dataSize, &dataUsed, devices);
} while result == OSStatus(kCMIOHardwareBadPropertySizeError)
Once we get past this point in our execution, devices will point to potentially many devices. We need to loop through them, somewhat like this:
if let devices = devices {
for offset in stride(from: 0, to: dataSize, by: MemoryLayout<CMIOObjectID>.size) {
let current = devices.advanced(by: Int(offset)).assumingMemoryBound(to: CMIOObjectID.self)
// current.pointee is your object ID you will want to keep track of somehow
}
}
Finally, clean up devices
free(devices)
Now at this point, you'll want to use that object ID you saved above to make another query. We need a new property address:
var opa = CMIOObjectPropertyAddress(
mSelector: CMIOObjectPropertySelector(kCMIODevicePropertyDeviceIsRunningSomewhere),
mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeWildcard),
mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementWildcard)
)
This tells CoreMediaIO that we want to know if the device is currently running somewhere (read: in any app), wildcarding the rest of the fields. Next we get to the meat of the query, camera below corresponds to the ID you saved before:
var (dataSize, dataUsed) = (UInt32(0), UInt32(0))
var result = CMIOObjectGetPropertyDataSize(camera, &opa, 0, nil, &dataSize)
if result == OSStatus(kCMIOHardwareNoError) {
if let data = malloc(Int(dataSize)) {
result = CMIOObjectGetPropertyData(camera, &opa, 0, nil, dataSize, &dataUsed, data)
let on = data.assumingMemoryBound(to: UInt8.self)
// on.pointee != 0 means that it's in use somewhere, 0 means not in use anywhere
}
}
With the above code samples you should have enough to test whether or not the camera is in use. You only need to get the device once (the first part of the answer); the in-use check, however, you'll have to do whenever you want that information. As an extra exercise, consider playing with CMIOObjectAddPropertyListenerBlock to be notified of changes to the in-use property address we used above.
While this answer is nearly 3 years too late for the OP, I hope it helps someone in the future. Examples here are given with Swift 3.0.
The previous answer from the user jer is definitely the correct answer, but I just wanted to add one additional important piece of information.
If a listener block is registered with CMIOObjectAddPropertyListenerBlock, the current run loop must be running; otherwise no events will be received and the listener block will never fire.
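For reference, here is a rough C sketch of registering such a listener for the in-use property. The exact signature of CMIOObjectAddPropertyListenerBlock is assumed here to mirror its CoreAudio HAL counterpart (AudioObjectAddPropertyListenerBlock), so double-check it against CMIOHardwareObject.h; camera is the CMIOObjectID found earlier.
#include <CoreMediaIO/CMIOHardware.h>
#include <dispatch/dispatch.h>

CMIOObjectPropertyAddress opa = {
    kCMIODevicePropertyDeviceIsRunningSomewhere,
    kCMIOObjectPropertyScopeWildcard,
    kCMIOObjectPropertyElementWildcard
};

// Deliver notifications on a serial queue. The answer above notes that the
// current run loop must be running for events to arrive, so keep that in
// mind regardless of the queue used here.
dispatch_queue_t queue = dispatch_queue_create("camera.in-use.listener", DISPATCH_QUEUE_SERIAL);

OSStatus status = CMIOObjectAddPropertyListenerBlock(camera, &opa, queue,
    ^(UInt32 numberAddresses, const CMIOObjectPropertyAddress addresses[]) {
        // Re-query kCMIODevicePropertyDeviceIsRunningSomewhere here to learn
        // whether the camera just started or stopped being used somewhere.
    });
if (status != kCMIOHardwareNoError) {
    // ... handle error ...
}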

How do you set the input level (gain) on the built-in input (OSX Core Audio / Audio Unit)?

I've got an OSX app that records audio data using an Audio Unit. The Audio Unit's input can be set to any available source with inputs, including the built-in input. The problem is, the audio that I get from the built-in input is often clipped, whereas in a program such as Audacity (or even Quicktime) I can turn down the input level and I don't get clipping.
Multiplying the sample frames by a fraction, of course, doesn't work, because I get a lower volume, but the samples themselves are still clipped at time of input.
How do I set the input level or gain for that built-in input to avoid the clipping problem?
This works for me to set the input volume on my MacBook Pro (2011 model). It is a bit funky: I had to try setting the master channel volume, then each independent stereo channel volume, until I found one that worked. Look through the comments in my code; I suspect the best way to tell whether your code is working is to find a get/set property combination that works, then do get / set (something else) / get to verify that your setter is actually working.
Oh, and I'll point out of course that I wouldn't rely on the values in address staying the same across getProperty calls as I'm doing here. It seems to work but it's definitely bad practice to rely on struct values being the same when you pass one by reference to a function. This is of course sample code so please forgive my laziness. ;)
//
// main.c
// testInputVolumeSetter
//
#include <CoreFoundation/CoreFoundation.h>
#include <CoreAudio/CoreAudio.h>
OSStatus setDefaultInputDeviceVolume( Float32 toVolume );
int main(int argc, const char * argv[]) {
OSStatus err;
// Load the Sound system preference, select a default
// input device, set its volume to max. Now set
// breakpoints at each of these lines. As you step over
// them you'll see the input volume change in the Sound
// preference panel.
//
// On my MacBook Pro setting the channel[ 1 ] volume
// on the default microphone input device seems to do
// the trick. channel[ 0 ] reports that it works but
// seems to have no effect and the master channel is
// unsettable.
//
// I do not know how to tell which one will work so
// probably the best thing to do is write your code
// to call getProperty after you call setProperty to
// determine which channel(s) work.
err = setDefaultInputDeviceVolume( 0.0 );
err = setDefaultInputDeviceVolume( 0.5 );
err = setDefaultInputDeviceVolume( 1.0 );
}
// 0.0 == no volume, 1.0 == max volume
OSStatus setDefaultInputDeviceVolume( Float32 toVolume ) {
AudioObjectPropertyAddress address;
AudioDeviceID deviceID;
OSStatus err;
UInt32 size;
UInt32 channels[ 2 ];
Float32 volume;
// get the default input device id
address.mSelector = kAudioHardwarePropertyDefaultInputDevice;
address.mScope = kAudioObjectPropertyScopeGlobal;
address.mElement = kAudioObjectPropertyElementMaster;
size = sizeof(deviceID);
err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &address, 0, nil, &size, &deviceID );
// get the input device stereo channels
if ( ! err ) {
address.mSelector = kAudioDevicePropertyPreferredChannelsForStereo;
address.mScope = kAudioDevicePropertyScopeInput;
address.mElement = kAudioObjectPropertyElementWildcard;
size = sizeof(channels);
err = AudioObjectGetPropertyData( deviceID, &address, 0, nil, &size, &channels );
}
// run some tests to see what channels might respond to volume changes
if ( ! err ) {
Boolean hasProperty;
address.mSelector = kAudioDevicePropertyVolumeScalar;
address.mScope = kAudioDevicePropertyScopeInput;
// On my MacBook Pro using the default microphone input:
address.mElement = kAudioObjectPropertyElementMaster;
// returns false, no VolumeScalar property for the master channel
hasProperty = AudioObjectHasProperty( deviceID, &address );
address.mElement = channels[ 0 ];
// returns true, channel 0 has a VolumeScalar property
hasProperty = AudioObjectHasProperty( deviceID, &address );
address.mElement = channels[ 1 ];
// returns true, channel 1 has a VolumeScalar property
hasProperty = AudioObjectHasProperty( deviceID, &address );
}
// try to get the input volume
if ( ! err ) {
address.mSelector = kAudioDevicePropertyVolumeScalar;
address.mScope = kAudioDevicePropertyScopeInput;
size = sizeof(volume);
address.mElement = kAudioObjectPropertyElementMaster;
// returns an error which we expect since it reported not having the property
err = AudioObjectGetPropertyData( deviceID, &address, 0, nil, &size, &volume );
size = sizeof(volume);
address.mElement = channels[ 0 ];
// returns noErr, but says the volume is always zero (weird)
err = AudioObjectGetPropertyData( deviceID, &address, 0, nil, &size, &volume );
size = sizeof(volume);
address.mElement = channels[ 1 ];
// returns noErr, but returns the correct volume!
err = AudioObjectGetPropertyData( deviceID, &address, 0, nil, &size, &volume );
}
// try to set the input volume
if ( ! err ) {
address.mSelector = kAudioDevicePropertyVolumeScalar;
address.mScope = kAudioDevicePropertyScopeInput;
size = sizeof(volume);
if ( toVolume < 0.0 ) volume = 0.0;
else if ( toVolume > 1.0 ) volume = 1.0;
else volume = toVolume;
address.mElement = kAudioObjectPropertyElementMaster;
// returns an error which we expect since it reported not having the property
err = AudioObjectSetPropertyData( deviceID, &address, 0, nil, size, &volume );
address.mElement = channels[ 0 ];
// returns noErr, but doesn't affect my input gain
err = AudioObjectSetPropertyData( deviceID, &address, 0, nil, size, &volume );
address.mElement = channels[ 1 ];
// success! correctly sets the input device volume.
err = AudioObjectSetPropertyData( deviceID, &address, 0, nil, size, &volume );
}
return err;
}
EDIT in response to your question, "How'd [I] figure this out?"
I've spent a lot of time using Apple's audio code over the last five or so years and I've developed some intuition/process when it comes to where and how to look for solutions. My business partner and I co-wrote the original iHeartRadio apps for the first-generation iPhone and a few other devices and one of my responsibilities on that project was the audio portion, specifically writing an AAC Shoutcast stream decoder/player for iOS. There weren't any docs or open-source examples at the time so it involved a lot of trial-and-error and I learned a ton.
At any rate, when I read your question and saw the bounty I figured this was just low-hanging fruit (i.e. you hadn't RTFM ;-). I wrote a few lines of code to set the volume property and when that didn't work I genuinely got interested.
Process-wise maybe you'll find this useful:
Once I knew it wasn't a straightforward answer I started thinking about how to solve the problem. I knew the Sound System Preference lets you set the input gain so I started by disassembling it with otool to see whether Apple was making use of old or new Audio Toolbox routines (new as it happens):
Try using:
otool -tV /System/Library/PreferencePanes/Sound.prefPane/Contents/MacOS/Sound | bbedit
then search for Audio to see what methods are called (if you don't have bbedit, which every Mac developer should IMO, dump it to a file and open in some other text editor).
I'm most familiar with the older, deprecated Audio Toolbox routines (three years to obsolescence in this industry) so I looked at some Technotes from Apple. They have one that shows how to get the default input device and set its volume using the newest CoreAudio methods but as you undoubtedly saw their code doesn't work properly (at least on my MBP).
Once I got to that point I fell back on the tried-and-true: Start googling for keywords that were likely to be involved (e.g. AudioObjectSetPropertyData, kAudioDevicePropertyVolumeScalar, etc.) looking for example usage.
One interesting thing I've found about CoreAudio and using the Apple Toolbox in general is that there is a lot of open-source code out there where people try various things (tons of pastebins and GoogleCode projects etc.). If you're willing to dig through a bunch of this code you'll typically either find the answer outright or get some very good ideas.
In my search the most relevant things I found were the Apple technote showing how to get the default input device and set the master input gain using the new Toolbox routines (even though it didn't work on my hardware), and I found some code that showed setting the gain by channel on an output device. Since input devices can be multichannel I figured this was the next logical thing to try.
Your question is really good because, at least right now, there is no correct documentation from Apple that shows how to do what you asked. It's goofy too, because both channels report that they set the volume but clearly only one of them does (the input mic is a mono source so this isn't surprising, but I consider having a no-op channel with no documentation about it a bit of a bug on Apple's part).
This happens pretty consistently when you start dealing with Apple's cutting-edge technologies. You can do amazing things with their toolbox and it blows every other OS I've worked on out of the water but it doesn't take too long to get ahead of their documentation, particularly if you're trying to do anything moderately sophisticated.
If you ever decide to write a kernel driver for example you'll find the documentation on IOKit to be woefully inadequate. Ultimately you've got to get online and dig through source code, either other people's projects or the OS X source or both, and pretty soon you'll conclude as I have that the source is really the best place for answers (even though StackOverflow is pretty awesome).
Thanks for the points and good luck with your project :)

Objective-C - Passing Streamed Data to Audio Queue

I am currently developing an app on iOS that reads IMA-ADPCM audio data in over a TCP socket, converts it to PCM, and then plays the stream. At this stage, I have completed the class that pulls in (or should I say reacts to pushes of) the data from the stream and decodes it to PCM. I have also set up the Audio Queue class and have it playing a test tone. Where I need assistance is the best way to pass the data into the Audio Queue.
The audio data comes out of the ADPCM decoder as 8 kHz 16-bit LPCM at 640 bytes a chunk (it originates as 160 bytes of ADPCM data but decompresses to 640). It comes into the function as a uint8_t array and passes out as an NSData object. The stream is a 'push' stream, so every time audio is sent it will create/flush the object.
-(NSData*)convertADPCM:(uint8_t[]) adpcmdata {
The Audio Queue callback, of course, is a pull function that goes looking for data on each pass of the run loop; on each pass it runs:
-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {
I've been working on this for a few days and the PCM conversion was quite taxing, and I am having a little trouble assembling in my head the best way to bridge the data between the two. It's not as though I am creating the data (in which case I could simply incorporate data creation into the fillBuffer routine); rather, the data is being pushed.
I did set up a circular buffer of 0.5 seconds in a uint16_t[], but I think I have worn my brain out and couldn't work out a neat way to push to and pull from the buffer, so I ended up with snap, crackle, pop.
I have completed the project mostly on Android, but found AudioTrack a very different beast from Core Audio Queues.
At this stage I will also say I picked up a copy of Learning Core Audio by Adamson and Avila and found this an excellent resource for anyone looking to demystify core audio.
UPDATE:
Here is the buffer management code:
-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {
int frame = 0;
double frameCount = bufferSize / self.streamFormat.mBytesPerFrame;
// frameCount = bufferSize / mBytesPerFrame = 8000 / 2 = 4000
//
// incoming buffer uint16_t[] convAudio holds 64400 bytes (big I know - 100 x 644 bytes)
// playHead is set by the function to say where in the buffer the
// next starting point should be
if (playHead > 99) {
playHead = 0;
}
// Playstep factors playhead to get starting position
int playStep = playHead * 644;
// filling the buffer
for (frame = 0; frame < frameCount; ++frame)
// framecount = 4000
{
// pointer to buffer
SInt16 *data = (SInt16*)buffer->mAudioData;
// load data from uint16_t[] convAudio array into frame
(data)[frame] = convAudio[(frame + playStep)];
}
// set buffersize
buffer->mAudioDataByteSize = bufferSize;
// return no Error - Osstatus will return an error otherwise if there is one. (I think)
return noErr;
}
As I said, my brain was fuzzy when I wrote this, and there's probably something glaringly obvious I am missing.
Above code is called by the callback:
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
soundHandler *sHandler = (__bridge soundHandler*)inUserData;
CheckError([sHandler fillBuffer: inCompleteAQBuffer],
"can't refill buffer",
"buffer refilled");
CheckError(AudioQueueEnqueueBuffer(inAQ,
inCompleteAQBuffer,
0,
NULL),
"Couldn't enqueue buffer (refill)",
"buffer enqued (refill)");
}
On the convAudio array side of things, I have dumped it to the log and it is getting filled and refilled in a circular fashion, so I know at least that bit is working.
The hard part is managing rates, and deciding what to do if they don't match. At first, try using a huge circular buffer (many, many seconds) and mostly fill it before starting the Audio Queue to pull from it. Then monitor the buffer level to see how big a rate-matching problem you have.
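To make that concrete, here is a minimal single-producer/single-consumer ring buffer sketch in plain C. The names (RingBuffer, rb_write, rb_read) are hypothetical, it has no overrun protection (size it generously, as suggested above), and it glosses over memory-ordering details; the decoder thread would call rb_write() with each 640-byte decoded chunk, and fillBuffer would call rb_read() instead of indexing convAudio directly.
#include <stdint.h>
#include <string.h>

// Power-of-two capacity so the free-running indices below can wrap safely.
#define RB_CAPACITY (1u << 17) // ~16 seconds of 8 kHz mono SInt16 samples

typedef struct {
    int16_t samples[RB_CAPACITY];
    volatile uint32_t writePos; // total samples written (free-running)
    volatile uint32_t readPos;  // total samples read (free-running)
} RingBuffer;

// Producer side: called from the network/decoder thread for each decoded chunk.
// No overrun check here; keep the buffer large relative to the incoming rate.
static void rb_write(RingBuffer *rb, const int16_t *src, uint32_t count) {
    for (uint32_t i = 0; i < count; i++) {
        rb->samples[(rb->writePos + i) & (RB_CAPACITY - 1)] = src[i];
    }
    rb->writePos += count; // publish only after the samples are in place
}

// Consumer side: called from the Audio Queue callback; zero-pads on underrun.
// Returns the number of samples actually delivered; count - result = underrun.
static uint32_t rb_read(RingBuffer *rb, int16_t *dst, uint32_t count) {
    uint32_t available = rb->writePos - rb->readPos; // current buffer level
    uint32_t toCopy = (available < count) ? available : count;
    for (uint32_t i = 0; i < toCopy; i++) {
        dst[i] = rb->samples[(rb->readPos + i) & (RB_CAPACITY - 1)];
    }
    if (toCopy < count) {
        memset(dst + toCopy, 0, (count - toCopy) * sizeof(int16_t));
    }
    rb->readPos += toCopy;
    return toCopy;
}
In fillBuffer you would then call rb_read(&ringBuffer, (SInt16 *)buffer->mAudioData, frameCount) instead of copying from convAudio, and watch writePos - readPos over time to see how far ahead or behind the producer is running.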