SoundBunny does exactly what I need: the ability to mute/unmute audio and change its volume, on demand, at the individual-application level. Unfortunately, I can't control it via AppleScript, as it isn't scriptable.
What other options do I have?
I found a way to mute a particular audio queue, given a reference to it, by using:
AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 0);
The problem is that I can't find a way to retrieve the audio queues currently in use by other applications (I don't know if that's even possible). I've only found documentation on how to create and manipulate one.
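For queues created by your own process, a minimal sketch of mute/unmute helpers built on AudioQueueSetParameter might look like the following (kAudioQueueParam_Volume takes values from 0.0 to 1.0; this doesn't help with other applications' queues, which is exactly the limitation above):
#include <AudioToolbox/AudioToolbox.h>

// Silence a queue we own by driving its volume parameter to 0.
static void MuteQueue(AudioQueueRef queue) {
    AudioQueueSetParameter(queue, kAudioQueueParam_Volume, 0.0f);
}

// Restore audibility; 1.0f is full volume, intermediate values scale it.
static void UnmuteQueue(AudioQueueRef queue, Float32 volume) {
    AudioQueueSetParameter(queue, kAudioQueueParam_Volume, volume);
}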
After searching for a very long time for a way to play notification sounds only through the headphones (when plugged in), on a stream separate from STREAM_MUSIC, so that they could interrupt and be completely audible over any background music, I found that Android had finally come out with the AudioAttributes API. Using the following code, I'm able to achieve exactly what I want for notifications, at least on API 21 or higher (STREAM_MUSIC is the best option I've found for lower versions):
AudioAttributes audioAttributes = new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
.setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
.build();
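(For reference, a hedged sketch of how these attributes might be attached to actual playback; MediaPlayer and the R.raw.notification_sound resource are assumptions, since the playback code isn't shown here:)
// Assumption: playback via MediaPlayer with a bundled sound resource.
// The API 21+ create() overload lets the AudioAttributes be supplied up front.
MediaPlayer player = MediaPlayer.create(
        context, R.raw.notification_sound, audioAttributes,
        audioManager.generateAudioSessionId());
player.start();   // remember to release() the player when playback completes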
Unfortunately, there doesn't appear to be any way to adjust the volume of the sonification in my app's settings. I currently use the AudioManager in the following way, but it only allows volume adjustments to streams, and none of STREAM_ALARM, STREAM_NOTIFICATION, STREAM_RING, or STREAM_MUSIC applies to whatever routing strategy is used for the sonification:
audioManager.setStreamVolume(AudioManager.STREAM_NOTIFICATION, originalVolume, 0);
Does anyone have a suggestion on how to set the volume corresponding to the AudioAttributes output? Keep in mind that the audio is actually played in the BroadcastReceiver used for the notification, while the volume setting would be specified in a separate settings Activity.
Well, it appears that I missed a critical table in the API documentation:
https://source.android.com/devices/audio/attributes.html
It seems that STREAM_SYSTEM is the equivalent of what I was attempting to do with AudioAttributes. In short, the code above is sufficient for API 21 and later, and using STREAM_SYSTEM does everything necessary for the AudioManager and for APIs prior to 21.
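In case it helps anyone else, a rough sketch of what that conclusion looks like in code (desiredVolume, context and mediaPlayer are placeholders for whatever the settings Activity and the BroadcastReceiver actually use):
// Volume control: STREAM_SYSTEM is the stream that USAGE_ASSISTANCE_SONIFICATION /
// CONTENT_TYPE_SONIFICATION map to, per the attributes table linked above.
AudioManager audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
int maxVolume = audioManager.getStreamMaxVolume(AudioManager.STREAM_SYSTEM);
audioManager.setStreamVolume(AudioManager.STREAM_SYSTEM,
        Math.min(desiredVolume, maxVolume), 0);

// Pre-API-21 playback: fall back to playing directly on STREAM_SYSTEM.
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.LOLLIPOP) {
    mediaPlayer.setAudioStreamType(AudioManager.STREAM_SYSTEM);
}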
I have a client/server audio synthesizer where the server (Java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately, the audio element buffers (prefetches) very aggressively, so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior would be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection; it does provide low latency, but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc.), and it currently seems to support only a limited set of codecs, mostly geared towards voice. Also, since the server side is in Java, I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input then I could create a MediaStreamAudioSourceNode and send that output to context.destination. This is not very different than what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage that I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this question? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming input audio data, and Node.js on the server side, using WebSockets as the middleware to send the browser streaming PCM buffers.
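For what it's worth, a minimal sketch of the browser side of that approach, assuming the server pushes raw 32-bit float mono PCM at 44100 Hz as binary WebSocket messages (the URL, channel count and sample rate are made-up placeholders):
// Assumptions: mono, 32-bit float PCM at 44100 Hz, one chunk per binary message.
const context = new AudioContext();
const sampleRate = 44100;
let playhead = 0;  // AudioContext time at which the next chunk should start

const socket = new WebSocket('ws://localhost:8080/audio');
socket.binaryType = 'arraybuffer';

socket.onmessage = (event) => {
  const samples = new Float32Array(event.data);
  const buffer = context.createBuffer(1, samples.length, sampleRate);
  buffer.getChannelData(0).set(samples);

  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);

  // Schedule chunks back to back, with a small safety margin if we fall behind.
  playhead = Math.max(playhead, context.currentTime + 0.05);
  source.start(playhead);
  playhead += buffer.duration;
};
With this kind of scheduling the latency is bounded by the chunk size plus the safety margin, rather than by whatever the audio element decides to prefetch.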
I successfully use WasapiLoopbackCapture() for recording the audio played on the system, but I'm looking for a way to record what the user would actually hear through the speakers.
I'll explain: if an application plays music, WASAPI loopback will capture the music samples even if the main Windows volume control is set to 0, i.e. even if no sound is actually heard through the audio card's output jack (speakers/headphones/etc.).
I'd like to intercept the audio actually "reaching" the output jack (after ALL mixers on the audio path have "done their job").
Is this possible using NAudio (or other infrastructure)?
A code sample, or a link to one, would come in handy.
Thanks much.
No, this is not directly possible. The loopback capture provided by WASAPI is the stream of data being sent to the audio hardware. It is the hardware that controls the actual output sound, and this is where the volume level is applied to change the output signal strength. Apart from some hardware- and driver-specific options - or some interesting hardware solutions like loopback cables or external ADC - there is no direct method to get the true output data.
One option is to get the volume level from the mixer and apply it as a scaling factor on any data you receive from the loopback stream. This is not a perfect solution, but possibly the best you can do without specific hardware support.
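A rough sketch of that scaling approach with NAudio might look like this (it assumes the shared-mode loopback stream is 32-bit IEEE float, which is what WASAPI normally delivers, and it only accounts for the endpoint's master volume and mute state, not per-application levels or any driver DSP):
using System;
using NAudio.CoreAudioApi;
using NAudio.Wave;

var enumerator = new MMDeviceEnumerator();
var renderDevice = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);

var capture = new WasapiLoopbackCapture();
capture.DataAvailable += (sender, e) =>
{
    // Read the endpoint's master volume (0.0 - 1.0) and mute state.
    float volume = renderDevice.AudioEndpointVolume.Mute
        ? 0f
        : renderDevice.AudioEndpointVolume.MasterVolumeLevelScalar;

    // Shared-mode loopback delivers 32-bit float samples; scale each one in place.
    for (int i = 0; i < e.BytesRecorded; i += 4)
    {
        float sample = BitConverter.ToSingle(e.Buffer, i) * volume;
        BitConverter.GetBytes(sample).CopyTo(e.Buffer, i);
    }

    // e.Buffer now approximates the signal that reaches the output jack.
};
capture.StartRecording();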
A client of mine wants to record audio continually, and when he clicks submit he wants to submit only the last 10 seconds. So he wants continual recording, keeping only the last x seconds.
I would think this requires something like a circular buffer, but (as somewhat of a newbie to iOS) it looks like AVAudioRecorder can only write to a file.
Are there any options for me to implement this?
Can anyone give a pointer?
I would use Audio Queue Services. This will allow you to isolate certain parts of the buffer. Here is the guide to it: http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQRecord/RecordingAudio.html#//apple_ref/doc/uid/TP40005343-CH4-SW1
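To make that concrete, here is a very rough sketch of just the ring-buffer part, assuming 16-bit mono LPCM at 44.1 kHz coming from an Audio Queue input callback (queue creation, format setup and error handling are omitted; the linked guide covers those):
#include <AudioToolbox/AudioToolbox.h>

// Keep the most recent 10 seconds of 16-bit mono PCM at 44.1 kHz (assumed format).
#define SECONDS_TO_KEEP 10
#define SAMPLE_RATE     44100
#define RING_BYTES      (SECONDS_TO_KEEP * SAMPLE_RATE * sizeof(SInt16))

typedef struct {
    UInt8   data[RING_BYTES];
    size_t  writePos;   // next byte to overwrite
    Boolean filled;     // true once the buffer has wrapped at least once
} RingBuffer;

// Audio Queue input callback: copy each incoming buffer into the ring,
// overwriting the oldest audio, then re-enqueue the buffer so recording continues.
static void HandleInputBuffer(void *inUserData, AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    RingBuffer *ring = (RingBuffer *)inUserData;
    const UInt8 *src = (const UInt8 *)inBuffer->mAudioData;

    for (UInt32 i = 0; i < inBuffer->mAudioDataByteSize; i++) {
        ring->data[ring->writePos] = src[i];
        ring->writePos = (ring->writePos + 1) % RING_BYTES;
        if (ring->writePos == 0) ring->filled = true;
    }

    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
On submit, the last 10 seconds are the RING_BYTES bytes ending at writePos (or everything from the start, if the buffer hasn't wrapped yet), which can then be written out to a file or uploaded.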
I would like to control the playback rate of a song while it is playing. Basically I want to make it play a little faster or slower, when I tell it to do so.
Also, is it possible to play back two different tracks at the same time? Imagine a recording with the instruments in one track and the vocals in a different track. It should then be possible to change the playback rate of one of these tracks in "real time".
Is this possible on Symbian/S60?
It's possible, but you would have to:
Convert the audio data into PCM, if it is not already in this format
Process this PCM stream in the application, in order to change its playback rate
Render the audio via CMdaAudioOutputStream or CMMFDevSound (or QAudioOutput, if you are using Qt)
In other words, the platform itself does not provide any APIs for changing the audio playback rate - your application would need to process the audio stream directly.
As for playing multiple tracks together, depending on the device, the audio subsystem may let you play two or more streams simultaneously using either of the above APIs. The problem you may have however is that they are unlikely to be synchronised. Your app would probably therefore have to mix all of the individual tracks into one stream before rendering.
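To illustrate the processing and mixing steps in a platform-neutral way, here is a naive sketch (std::vector stands in for whatever descriptor or buffer type the app actually uses; a real implementation would want a proper resampler rather than linear interpolation):
#include <algorithm>
#include <cstdint>
#include <vector>

// Naive playback-rate change for 16-bit mono PCM via linear interpolation.
// rate > 1.0 plays faster (and higher-pitched), rate < 1.0 slower (and lower-pitched).
std::vector<int16_t> ChangeRate(const std::vector<int16_t>& in, double rate)
{
    std::vector<int16_t> out;
    for (double pos = 0.0; pos + 1.0 < in.size(); pos += rate) {
        const size_t i = static_cast<size_t>(pos);
        const double frac = pos - i;
        out.push_back(static_cast<int16_t>(in[i] * (1.0 - frac) + in[i + 1] * frac));
    }
    return out;
}

// Mix two tracks into one stream before handing it to CMdaAudioOutputStream /
// CMMFDevSound; clamp to the 16-bit range to avoid wrap-around.
std::vector<int16_t> MixTracks(const std::vector<int16_t>& a, const std::vector<int16_t>& b)
{
    std::vector<int16_t> out(std::min(a.size(), b.size()));
    for (size_t i = 0; i < out.size(); ++i) {
        int32_t sum = static_cast<int32_t>(a[i]) + b[i];
        out[i] = static_cast<int16_t>(std::max(-32768, std::min(32767, sum)));
    }
    return out;
}
Note that a simple rate change like this shifts pitch as well as tempo; keeping the pitch constant while changing speed requires a time-stretching algorithm, which is considerably more work.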