How do I access the current volume level of a Mac from the Cocoa API?
For example: when I'm using Spotify.app on OS X 10.7 and a sound advertisement comes up, and I turn down my Mac's volume, the app will pause the ad until I turn it back up to an average level. I find this incredibly obnoxious and a violation of user privacy, but somehow Spotify has found a way to do this.
Is there any way I can do this with Cocoa? I'm making an app where it might come in useful to warn the user if the volume is low.
There are two options available. The first step is to determine what device you'd like and get its ID. Assuming the default output device, the code will look something like:
AudioObjectPropertyAddress propertyAddress = {
kAudioHardwarePropertyDefaultOutputDevice,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster
};
AudioDeviceID deviceID;
UInt32 dataSize = sizeof(deviceID);
OSStatus result = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, &deviceID);
if(kAudioHardwareNoError != result)
// Handle the error
Next, you can use the kAudioHardwareServiceDeviceProperty_VirtualMasterVolume property to get the device's virtual master volume:
AudioObjectPropertyAddress propertyAddress = {
kAudioHardwareServiceDeviceProperty_VirtualMasterVolume,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
if(!AudioHardwareServiceHasProperty(deviceID, &propertyAddress))
// An error occurred
Float32 volume;
UInt32 dataSize = sizeof(volume);
OSStatus result = AudioHardwareServiceGetPropertyData(deviceID, &propertyAddress, 0, NULL, &dataSize, &volume);
if(kAudioHardwareNoError != result)
// An error occurred
Alternatively, you can use kAudioDevicePropertyVolumeScalar to get the volume for a specific channel:
UInt32 channel = 1; // Channel 0 is master, if available
AudioObjectPropertyAddress propertyAddress = {
kAudioDevicePropertyVolumeScalar,
kAudioDevicePropertyScopeOutput,
channel
};
if(!AudioObjectHasProperty(deviceID, &propertyAddress))
// An error occurred
Float32 volume;
UInt32 dataSize = sizeof(volume);
OSStatus result = AudioObjectGetPropertyData(deviceID, &propertyAddress, 0, NULL, &dataSize, &volume);
if(kAudioHardwareNoError != result)
// An error occurred
The difference between the two is explained in Apple's docs:
kAudioHardwareServiceDeviceProperty_VirtualMasterVolume
A Float32 value that represents the value of the volume control. The
range for this property’s value is 0.0 (silence) through 1.0 (full
level). The effect of this property depends on the hardware device
associated with the HAL audio object. If the device has a master
volume control, this property controls it. If the device has
individual channel volume controls, this property applies to those
identified by the device's preferred multichannel layout, or the
preferred stereo pair if the device is stereo only. This control
maintains relative balance between the channels it affects.
So it can be tricky to define exactly what a device's volume is, especially for multichannel devices with non-standard channel maps.
From CocoaDev, these class methods look like they should work, though they're not particularly Cocoa-like:
#import <AudioToolbox/AudioServices.h>
+(AudioDeviceID)defaultOutputDeviceID
{
AudioDeviceID outputDeviceID = kAudioObjectUnknown;
// get the default output device
UInt32 propertySize = 0;
OSStatus status = noErr;
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mScope = kAudioObjectPropertyScopeGlobal;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mSelector = kAudioHardwarePropertyDefaultOutputDevice;
if (!AudioHardwareServiceHasProperty(kAudioObjectSystemObject, &propertyAOPA))
{
NSLog(@"Cannot find default output device!");
return outputDeviceID;
}
propertySize = sizeof(AudioDeviceID);
status = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject, &propertyAOPA, 0, NULL, &propertySize, &outputDeviceID);
if(status)
{
NSLog(@"Cannot find default output device!");
}
return outputDeviceID;
}
// getting system volume
+(float)volume
{
Float32 outputVolume;
UInt32 propertySize = 0;
OSStatus status = noErr;
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mSelector = kAudioHardwareServiceDeviceProperty_VirtualMasterVolume;
propertyAOPA.mScope = kAudioDevicePropertyScopeOutput;
AudioDeviceID outputDeviceID = [[self class] defaultOutputDeviceID];
if (outputDeviceID == kAudioObjectUnknown)
{
NSLog(@"Unknown device");
return 0.0;
}
if (!AudioHardwareServiceHasProperty(outputDeviceID, &propertyAOPA))
{
NSLog(@"No volume returned for device 0x%0x", outputDeviceID);
return 0.0;
}
propertySize = sizeof(Float32);
status = AudioHardwareServiceGetPropertyData(outputDeviceID, &propertyAOPA, 0, NULL, &propertySize, &outputVolume);
if (status)
{
NSLog(@"No volume returned for device 0x%0x", outputDeviceID);
return 0.0;
}
if (outputVolume < 0.0 || outputVolume > 1.0) return 0.0;
return outputVolume;
}
I am using FFmpeg to access an RTSP stream in my macOS app.
REACHED GOALS: I have created a tone generator which creates single-channel audio and returns a CMSampleBuffer. The tone generator is used to test my audio pipeline when the video's fps and audio sample rate are changed.
GOAL: The goal is to merge multi-channel audio buffers into a single CMSampleBuffer.
Audio data lifecycle:
AVCodecContext* audioContext = self.rtspStreamProvider.audioCodecContext;
if (!audioContext) { return; }
// Getting audio settings from FFmpeg's audio context (AVCodecContext).
int samplesPerChannel = audioContext->frame_size;
int frameNumber = audioContext->frame_number;
int sampleRate = audioContext->sample_rate;
int fps = [self.rtspStreamProvider fps];
int calculatedSampleRate = sampleRate / fps;
// NSLog(@"\nSamples per channel = %i, frames = %i.\nSample rate = %i, fps = %i.\ncalculatedSampleRate = %i.", samplesPerChannel, frameNumber, sampleRate, fps, calculatedSampleRate);
// Decoding the audio data from an encoded AVPacket into an AVFrame.
AVFrame* audioFrame = [self.rtspStreamProvider readDecodedAudioFrame];
if (!audioFrame) { return; }
// Extracting my audio buffers from FFmpeg's AVFrame.
uint8_t* leftChannelAudioBufRef = audioFrame->data[0];
uint8_t* rightChannelAudioBufRef = audioFrame->data[1];
// Creating the CMSampleBuffer with audio data.
CMSampleBufferRef leftSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:leftChannelAudioBufRef channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
// CMSampleBufferRef rightSampleBuffer = [CMSampleBufferFactory createAudioSampleBufferUsingData:packet->data[1] channelCount:1 framesCount:samplesPerChannel sampleRate:sampleRate];
if (!leftSampleBuffer) { return; }
if (!self.audioQueue) { return; }
if (!self.audioDelegates) { return; }
// All audio consumers will receive audio samples via delegation.
dispatch_sync(self.audioQueue, ^{
NSHashTable *audioDelegates = self.audioDelegates;
for (id<AudioDataProviderDelegate> audioDelegate in audioDelegates)
{
[audioDelegate provider:self didOutputAudioSampleBuffer:leftSampleBuffer];
// [audioDelegate provider:self didOutputAudioSampleBuffer:rightSampleBuffer];
}
});
CMSampleBuffer containing audio data creation:
import Foundation
import CoreMedia
@objc class CMSampleBufferFactory: NSObject
{
@objc static func createAudioSampleBufferUsing(data: UnsafeMutablePointer<UInt8>,
channelCount: UInt32,
framesCount: CMItemCount,
sampleRate: Double) -> CMSampleBuffer? {
/* Prepare for sample Buffer creation */
var sampleBuffer: CMSampleBuffer! = nil
var osStatus: OSStatus = -1
var audioFormatDescription: CMFormatDescription! = nil
var absd: AudioStreamBasicDescription! = nil
let sampleDuration = CMTimeMake(value: 1, timescale: Int32(sampleRate))
let presentationTimeStamp = CMTimeMake(value: 0, timescale: Int32(sampleRate))
// NOTE: Change bytesPerFrame if you change the block buffer value types. Currently we are using Float32.
let bytesPerFrame: UInt32 = UInt32(MemoryLayout<Float32>.size) * channelCount
let memoryBlockByteLength = framesCount * Int(bytesPerFrame)
// var acl = AudioChannelLayout()
// acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo
/* Sample Buffer Block buffer creation */
var blockBuffer: CMBlockBuffer?
osStatus = CMBlockBufferCreateWithMemoryBlock(
allocator: kCFAllocatorDefault,
memoryBlock: nil,
blockLength: memoryBlockByteLength,
blockAllocator: nil,
customBlockSource: nil,
offsetToData: 0,
dataLength: memoryBlockByteLength,
flags: 0,
blockBufferOut: &blockBuffer
)
assert(osStatus == kCMBlockBufferNoErr)
guard let eBlock = blockBuffer else { return nil }
osStatus = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: memoryBlockByteLength)
assert(osStatus == kCMBlockBufferNoErr)
TVBlockBufferHelper.fillAudioBlockBuffer(blockBuffer,
audioData: data,
frames: Int32(framesCount))
/* Audio description creations */
absd = AudioStreamBasicDescription(
mSampleRate: sampleRate,
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsFloat,
mBytesPerPacket: bytesPerFrame,
mFramesPerPacket: 1,
mBytesPerFrame: bytesPerFrame,
mChannelsPerFrame: channelCount,
mBitsPerChannel: 32,
mReserved: 0
)
guard absd != nil else {
print("\nCreating AudioStreamBasicDescription Failed.")
return nil
}
osStatus = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
asbd: &absd,
layoutSize: 0,
layout: nil,
// layoutSize: MemoryLayout<AudioChannelLayout>.size,
// layout: &acl,
magicCookieSize: 0,
magicCookie: nil,
extensions: nil,
formatDescriptionOut: &audioFormatDescription)
guard osStatus == noErr else {
print("\nCreating CMFormatDescription Failed.")
return nil
}
/* Create sample Buffer */
var timingInfo = CMSampleTimingInfo(duration: sampleDuration, presentationTimeStamp: presentationTimeStamp, decodeTimeStamp: .invalid)
osStatus = CMSampleBufferCreate(allocator: kCFAllocatorDefault,
dataBuffer: eBlock,
dataReady: true,
makeDataReadyCallback: nil,
refcon: nil,
formatDescription: audioFormatDescription,
sampleCount: framesCount,
sampleTimingEntryCount: 1,
sampleTimingArray: &timingInfo,
sampleSizeEntryCount: 0, // Must be 0, 1, or numSamples.
sampleSizeArray: nil, // Not needed here: sample sizes are implied by the packed PCM format description.
sampleBufferOut: &sampleBuffer)
return sampleBuffer
}
}
CMSampleBuffer gets filled with raw audio data from FFmpeg's data:
@import Foundation;
@import CoreMedia;
@interface TVBlockBufferHelper : NSObject
+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
audioData:(uint8_t *)data
frames:(int)framesCount;
@end
#import "TVBlockBufferHelper.h"
@implementation TVBlockBufferHelper
+(void)fillAudioBlockBuffer:(CMBlockBufferRef)blockBuffer
audioData:(uint8_t *)data
frames:(int)framesCount
{
// Possibly dev error.
if (framesCount == 0) {
NSAssert(false, @"\nfillAudioBlockBuffer/audioData/frames will not be able to fill a blockBuffer which has no frames.");
return;
}
char *rawBuffer = NULL;
size_t size = 0;
OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &size, NULL, &rawBuffer);
if(status != noErr)
{
return;
}
memcpy(rawBuffer, data, framesCount); // NOTE: framesCount is used as a byte count here, not a frame count.
}
@end
The Learning Core Audio book by Chris Adamson and Kevin Avila points me toward a multi-channel mixer.
The multi channel mixer should have 2-n inputs and 1 output. I assume the output could be a buffer or something that could be put into a CMSampleBuffer for further consumption.
This direction should lead me to AudioUnits, AUGraph and the AudioToolbox. I don't understand all of these classes and how they work together. I have found some code snippets on SO which could help me, but most of them use AudioToolbox classes and don't use CMSampleBuffers as much as I need.
Is there another way to merge audio buffers into a new one?
Is creating a multi-channel mixer using AudioToolbox the right direction?
Currently, I have code that successfully returns the user's system volume setting, the one they can change with the volume keys.
However, what I want is a value of the audio the speakers are playing. So if the user is watching Netflix and a character starts screaming, the value would return higher than if the character was whispering.
Code I have now:
+ (AudioDeviceID)defaultOutputDeviceID {
OSStatus status = noErr;
AudioDeviceID outputDeviceID = kAudioObjectUnknown;
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mScope = kAudioObjectPropertyScopeGlobal;
propertyAOPA.mSelector = kAudioHardwarePropertyDefaultSystemOutputDevice;
UInt32 propertySize = sizeof(outputDeviceID);
if (!AudioHardwareServiceHasProperty(kAudioObjectSystemObject, &propertyAOPA)) {
NSLog(@"Cannot find default output device!");
return outputDeviceID;
}
status = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject, &propertyAOPA, 0, NULL, &propertySize, &outputDeviceID);
if(status) {
NSLog(@"Cannot find default output device!");
}
return outputDeviceID;
}
+ (float)volume {
OSStatus status = noErr;
AudioDeviceID outputDeviceID = [[self class] defaultOutputDeviceID];
if (outputDeviceID == kAudioObjectUnknown) {
NSLog(@"Unknown device");
return 0.0;
}
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mScope = kAudioDevicePropertyScopeOutput;
propertyAOPA.mSelector = kAudioHardwareServiceDeviceProperty_VirtualMasterVolume;
Float32 outputVolume;
UInt32 propertySize = sizeof(outputVolume);
if (!AudioHardwareServiceHasProperty(outputDeviceID, &propertyAOPA)) {
NSLog(@"No volume returned for device 0x%0x", outputDeviceID);
return 0.0;
}
status = AudioHardwareServiceGetPropertyData(outputDeviceID, &propertyAOPA, 0, NULL, &propertySize, &outputVolume);
if (status) {
NSLog(@"No volume returned for device 0x%0x", outputDeviceID);
return 0.0;
}
if (outputVolume < 0.0 || outputVolume > 1.0)
return 0.0;
return outputVolume;
}
Get the current volume level, set it to whatever you want (e.g. the maximum), and then revert it back to the user's original volume level afterwards. For more details see the link below:
https://stackoverflow.com/a/27743599/1351327
Hope it will help. (Be careful: Apple may reject your app for this.)
I need my app to be notified when the OS X sound volume has changed. This is for a Desktop app, not for iOS. How can I register for this notification?
This can be a tiny bit tricky because some audio devices support a master channel, but most don't, so the volume will be a per-channel property. Depending on what you need to do, you could observe only one channel and assume that all the other channels the device supports have the same volume. Regardless of how many channels you want to watch, you observe the volume by registering a property listener for the AudioObject in question:
// Some devices (but not many) support a master channel
AudioObjectPropertyAddress propertyAddress = {
kAudioDevicePropertyVolumeScalar,
kAudioDevicePropertyScopeOutput,
kAudioObjectPropertyElementMaster
};
if(AudioObjectHasProperty(deviceID, &propertyAddress)) {
OSStatus result = AudioObjectAddPropertyListener(deviceID, &propertyAddress, myAudioObjectPropertyListenerProc, self);
// Error handling omitted
}
else {
// Typically the L and R channels are 1 and 2 respectively, but could be different
propertyAddress.mElement = 1;
OSStatus result = AudioObjectAddPropertyListener(deviceID, &propertyAddress, myAudioObjectPropertyListenerProc, self);
// Error handling omitted
propertyAddress.mElement = 2;
result = AudioObjectAddPropertyListener(deviceID, &propertyAddress, myAudioObjectPropertyListenerProc, self);
// Error handling omitted
}
Your listener proc should be something like:
static OSStatus
myAudioObjectPropertyListenerProc(AudioObjectID inObjectID,
UInt32 inNumberAddresses,
const AudioObjectPropertyAddress inAddresses[],
void *inClientData)
{
for(UInt32 addressIndex = 0; addressIndex < inNumberAddresses; ++addressIndex) {
AudioObjectPropertyAddress currentAddress = inAddresses[addressIndex];
switch(currentAddress.mSelector) {
case kAudioDevicePropertyVolumeScalar:
{
Float32 volume = 0;
UInt32 dataSize = sizeof(volume);
OSStatus result = AudioObjectGetPropertyData(inObjectID, &currentAddress, 0, NULL, &dataSize, &volume);
if(kAudioHardwareNoError != result) {
// Handle the error
continue;
}
// Process the volume change
break;
}
}
}
return noErr;
}
I know that AirPort can be turned off via the CoreWLAN framework.
So I think there are probably similar functions or frameworks for the Bluetooth and sound devices.
How can I turn off those devices?
I assume that by "cannot have power so that it cannot speak" you mean you simply want to mute the speaker. I found some neat sample code here, using CoreAudio to mute the system's default speaker: http://cocoadev.com/index.pl?SoundVolume
I took the liberty of converting it to pure C and trying it out.
#import <CoreAudio/CoreAudio.h>
#import <stdio.h>
// getting system volume
float getVolume() {
float b_vol;
OSStatus err;
AudioDeviceID device;
UInt32 size;
UInt32 channels[2];
float volume[2];
// get device
size = sizeof device;
err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice, &size, &device);
if(err!=noErr) {
printf("audio-volume error get device\n");
return 0.0;
}
// try set master volume (channel 0)
size = sizeof b_vol;
err = AudioDeviceGetProperty(device, 0, 0, kAudioDevicePropertyVolumeScalar, &size, &b_vol); //kAudioDevicePropertyVolumeScalarToDecibels
if(noErr==err) return b_vol;
// otherwise, try separate channels
// get channel numbers
size = sizeof(channels);
err = AudioDeviceGetProperty(device, 0, 0,kAudioDevicePropertyPreferredChannelsForStereo, &size,&channels);
if(err!=noErr) printf("error getting channel-numbers\n");
size = sizeof(float);
err = AudioDeviceGetProperty(device, channels[0], 0, kAudioDevicePropertyVolumeScalar, &size, &volume[0]);
if(noErr!=err) printf("error getting volume of channel %d\n",channels[0]);
err = AudioDeviceGetProperty(device, channels[1], 0, kAudioDevicePropertyVolumeScalar, &size, &volume[1]);
if(noErr!=err) printf("error getting volume of channel %d\n",channels[1]);
b_vol = (volume[0]+volume[1])/2.00;
return b_vol;
}
// setting system volume
void setVolume(float involume) {
OSStatus err;
AudioDeviceID device;
UInt32 size;
Boolean canset = false;
UInt32 channels[2];
//float volume[2];
// get default device
size = sizeof device;
err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice, &size, &device);
if(err!=noErr) {
printf("audio-volume error get device\n");
return;
}
// try set master-channel (0) volume
size = sizeof canset;
err = AudioDeviceGetPropertyInfo(device, 0, false, kAudioDevicePropertyVolumeScalar, &size, &canset);
if(err==noErr && canset==true) {
size = sizeof involume;
err = AudioDeviceSetProperty(device, NULL, 0, false, kAudioDevicePropertyVolumeScalar, size, &involume);
return;
}
// else, try separate channels
// get channels
size = sizeof(channels);
err = AudioDeviceGetProperty(device, 0, false, kAudioDevicePropertyPreferredChannelsForStereo, &size,&channels);
if(err!=noErr) {
printf("error getting channel-numbers\n");
return;
}
// set volume
size = sizeof(float);
err = AudioDeviceSetProperty(device, 0, channels[0], false, kAudioDevicePropertyVolumeScalar, size, &involume);
if(noErr!=err) printf("error setting volume of channel %d\n",channels[0]);
err = AudioDeviceSetProperty(device, 0, channels[1], false, kAudioDevicePropertyVolumeScalar, size, &involume);
if(noErr!=err) printf("error setting volume of channel %d\n",channels[1]);
}
int main() {
printf("The system's volume is currently %f\n", getVolume());
printf("Setting volume to 0.\n");
setVolume(0.0f);
return 0;
}
I ran it and got this:
[04:29:03] [william@enterprise ~/Documents/Programming/c]$ gcc -framework CoreAudio -o mute.o coreaudio.c
.. snipped compiler output..
[04:29:26] [william@enterprise ~/Documents/Programming/c]$ ./mute.o
The system's volume is currently 0.436749
Setting volume to 0.
Hopefully this sends you in the right direction.
Is it possible for Xcode to have an audio level indicator?
I want to do something like this:
if (audioLevel == 100) {
}
or something similar...
Any ideas?? Example code please?
I'm VERY new to Objective-C so the more explaining the better! :D
Unfortunately, there isn't a very straightforward API to do this. You need to use the low level AudioToolbox.framework.
Luckily, others have already solved this problem for you. Here's some code I simplified slightly to be straight C functions, from CocoaDev. You need to link to the AudioToolbox to compile this code (see here for documentation on how to do so).
#import <AudioToolbox/AudioServices.h>
AudioDeviceID getDefaultOutputDeviceID()
{
AudioDeviceID outputDeviceID = kAudioObjectUnknown;
// get the default output device
OSStatus status = noErr;
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mScope = kAudioObjectPropertyScopeGlobal;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mSelector = kAudioHardwarePropertyDefaultOutputDevice;
if (!AudioHardwareServiceHasProperty(kAudioObjectSystemObject, &propertyAOPA))
{
printf("Cannot find default output device!");
return outputDeviceID;
}
status = AudioHardwareServiceGetPropertyData(kAudioObjectSystemObject, &propertyAOPA, 0, NULL, (UInt32[]){sizeof(AudioDeviceID)}, &outputDeviceID);
if (status != 0)
{
printf("Cannot find default output device!");
}
return outputDeviceID;
}
float getVolume ()
{
Float32 outputVolume;
OSStatus status = noErr;
AudioObjectPropertyAddress propertyAOPA;
propertyAOPA.mElement = kAudioObjectPropertyElementMaster;
propertyAOPA.mSelector = kAudioHardwareServiceDeviceProperty_VirtualMasterVolume;
propertyAOPA.mScope = kAudioDevicePropertyScopeOutput;
AudioDeviceID outputDeviceID = getDefaultOutputDeviceID();
if (outputDeviceID == kAudioObjectUnknown)
{
printf("Unknown device");
return 0.0;
}
if (!AudioHardwareServiceHasProperty(outputDeviceID, &propertyAOPA))
{
printf("No volume returned for device 0x%0x", outputDeviceID);
return 0.0;
}
status = AudioHardwareServiceGetPropertyData(outputDeviceID, &propertyAOPA, 0, NULL, (UInt32[]){sizeof(Float32)}, &outputVolume);
if (status)
{
printf("No volume returned for device 0x%0x", outputDeviceID);
return 0.0;
}
if (outputVolume < 0.0 || outputVolume > 1.0) return 0.0;
return outputVolume;
}
int main (int argc, char const *argv[])
{
printf("%f", getVolume());
return 0;
}
Note that there's also a setVolume function there, too.