Set AudioAttributes Volume - notifications

After searching for a very long time for a way to play notification sounds only through the headphones (when plugged in), on a stream separate from STREAM_MUSIC, and in a way that could interrupt and be completely audible over any background music, I found that Android had finally come out with the AudioAttributes API. Using the following code, I'm able to achieve exactly what I want for notifications, at least on API 21 or higher (STREAM_MUSIC is the best option I've found for lower versions):
AudioAttributes audioAttributes = new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
.setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
.build();
Unfortunately, there doesn't appear to be any way to adjust the volume of the sonification in my app's settings. I currently use the AudioManager in the following way, but it only allows volume adjustments to streams, and none of STREAM_ALARM, STREAM_NOTIFICATION, STREAM_RING, or STREAM_MUSIC applies to whatever routing strategy is used for the sonification:
audioManager.setStreamVolume(AudioManager.STREAM_NOTIFICATION, originalVolume, 0);
Does anyone have a suggestion for how to set the volume corresponding to the AudioAttributes output? Keep in mind that the audio is actually played in the BroadcastReceiver that handles the notification, while the volume setting would be specified in a separate settings Activity.

Well, it appears that I missed a critical table in the API documentation:
https://source.android.com/devices/audio/attributes.html
According to that table, STREAM_SYSTEM is the equivalent of what I was attempting to do with AudioAttributes. Basically, the code above is sufficient for API 21 and later, and STREAM_SYSTEM does everything necessary for the AudioManager and for API levels prior to 21.
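Putting the two together, a minimal sketch of the approach might look like this (the resource id R.raw.alert and the helper name playAlert are placeholders, not from my actual code):

// Play a sonification via AudioAttributes on API 21+, falling back to
// STREAM_SYSTEM on older versions. Uses android.media.MediaPlayer;
// R.raw.alert is a placeholder resource.
private void playAlert(Context context) {
    MediaPlayer player = new MediaPlayer();
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
        player.setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
                .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                .build());
    } else {
        // Pre-21 fallback: STREAM_SYSTEM maps to the same routing strategy.
        player.setAudioStreamType(AudioManager.STREAM_SYSTEM);
    }
    try {
        AssetFileDescriptor afd = context.getResources().openRawResourceFd(R.raw.alert);
        player.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
        afd.close();
        player.prepare();
        player.start();
    } catch (IOException e) {
        player.release();
    }
}

The matching volume control in the settings Activity is then just:

audioManager.setStreamVolume(AudioManager.STREAM_SYSTEM, desiredVolume, 0);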

Related

Simple time-based chest push notification setup

Hello, I am trying to create a simple push-notification system for this common use case:
1. The user gets a chest and can either watch an ad to skip the wait time or wait one hour for the chest to open. The app sends an upstream request which sets up a downstream push notification, to be delivered in one hour, letting the user know the chest is ready.
2a. The user then waits an hour, gets a push notification (outside of the app) to open their chest and they do!
or
2b. They wait 20 minutes, then decide to watch the ad. The app sends an upstream request that cancels the pending push notification, which would otherwise have been delivered 40 minutes later.
Okay, so that is the problem, and I am having a hard time understanding how to do this. I have looked over the documentation for each of the services below, but they all seem designed purely for downstream push notifications. It seems odd that there is no built-in support for such a common use case.
So far I have found 3 services that will integrate into my cross-platform Unity setup and are free or very cheap:
Amazon Simple Notification Service (SNS)
Google Firebase Cloud Messaging (FCM)
OneSignal
Amazon seems to group clients into "Topics", so I guess I would be setting up a one-device topic. I can subscribe and unsubscribe from topics, but there doesn't seem to be any support for delivering to a topic with a 60-minute delay. The steps would be:
1. Create a topic: https://docs.aws.amazon.com/sns/latest/dg/sns-tutorial-create-topic.html (it would just include the current device)
2. Subscribe to it
3. Send a message to it: https://docs.aws.amazon.com/sns/latest/dg/sns-tutorial-publish-message-with-attributes.html
So basically I can add attributes to my message, but it seems I would need to implement server-side code that reads a delay attribute and somehow queues the message for later delivery (see the sketch below). Maybe I am missing something?
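For what it's worth, publishing with such a delay attribute through the AWS SDK for Java might look like the sketch below. Note that "delay-seconds" is a convention I just made up; SNS itself would ignore it, so my own backend consumer would have to read it and queue the message:

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.MessageAttributeValue;
import com.amazonaws.services.sns.model.PublishRequest;

// Sketch only: SNS does not delay delivery itself. The "delay-seconds"
// attribute is a made-up convention that custom backend code would have
// to read and honor by queueing the message for later delivery.
AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
PublishRequest request = new PublishRequest(topicArn, "Your chest is ready!")
        .addMessageAttributesEntry("delay-seconds", new MessageAttributeValue()
                .withDataType("Number")
                .withStringValue("3600"));
sns.publish(request);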
For Firebase I see pretty much the same thing as Amazon. There are topics (https://firebase.google.com/docs/cloud-messaging/android/topic-messaging) and a means to send upstream messages (https://firebase.google.com/docs/cloud-messaging/android/send-with-console), but I don't see any way to give a message a time delay (https://firebase.google.com/docs/cloud-messaging/unity/topic-messaging). I see "conditions" towards the bottom of that article, but I don't know whether they are meant for this use case.
OneSignal has the easiest API to scroll through. I'll refer to strings that you can Ctrl-F for, in the format ("Create Notif"), because everything is on this one page: https://documentation.onesignal.com/reference
So basically I can ("Send to Specific Devices"), which I guess would target the sending device; then I can ("Schedule notification for future delivery.") using the send_after parameter; and finally, if need be, I can ("Cancel notification"). That appears to be everything I need, so I'm currently looking at this option and trying to figure out how to actually get it working; a rough sketch of the calls is below.
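As a rough, untested sketch of that round trip from plain Java (YOUR_APP_ID, YOUR_REST_API_KEY, and the player id are placeholders):

import java.net.HttpURLConnection;
import java.net.URL;

// "Create notification" with send_after, targeting a single device. The
// id field in the JSON response is what you'd keep in order to "Cancel
// notification" later via DELETE /api/v1/notifications/{id}.
String playerId = "PLAYER_ID"; // the device's OneSignal id
String sendAfter = "2018-09-24 14:00:00 GMT-0000"; // one hour out; see docs for format
URL url = new URL("https://onesignal.com/api/v1/notifications");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Content-Type", "application/json");
conn.setRequestProperty("Authorization", "Basic YOUR_REST_API_KEY");
conn.setDoOutput(true);
String body = "{"
        + "\"app_id\": \"YOUR_APP_ID\","
        + "\"include_player_ids\": [\"" + playerId + "\"],"
        + "\"contents\": {\"en\": \"Your chest is ready!\"},"
        + "\"send_after\": \"" + sendAfter + "\""
        + "}";
conn.getOutputStream().write(body.getBytes("UTF-8"));
int status = conn.getResponseCode(); // 200 means the notification is queued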
So there is my progress after the last few hours of researching each of these options. I am hoping you can help me see what I am misunderstanding above, since this seems to me a very common use case. Perhaps I am just not googling the question correctly. Any help appreciated.
Whenever there's a likelihood that you'll need to cancel a significant percentage of the notifications you send, you should use local notifications. That way you can easily schedule and cancel them locally, without making any network requests. This approach also works for offline devices, which is great for games (played on planes, etc.).
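For example, on Android the schedule/cancel pair could look roughly like this (ChestReceiver is a hypothetical BroadcastReceiver that builds and posts the actual notification):

// Schedule a local notification one hour out, or cancel it if the user
// watches the ad instead. Uses android.app.AlarmManager; context is
// your Context, ChestReceiver a hypothetical BroadcastReceiver.
AlarmManager alarms = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
PendingIntent pending = PendingIntent.getBroadcast(context, 0,
        new Intent(context, ChestReceiver.class), PendingIntent.FLAG_UPDATE_CURRENT);

// Chest created: fire one hour from now.
alarms.set(AlarmManager.ELAPSED_REALTIME_WAKEUP,
        SystemClock.elapsedRealtime() + 60 * 60 * 1000, pending);

// Ad watched: the same request code + intent identifies the alarm to cancel.
alarms.cancel(pending);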

Getting HLS livestream in sync across devices

We are currently using ExoPlayer for one of our applications, which is very similar to the HQ Trivia app, and we use HLS as the streaming protocol.
Due to the nature of the game, we are trying to keep all viewers of this stream at the same latency, basically to keep them in sync.
We noticed that with the current backend configuration the latency is somewhere between 6 and 10 seconds. Based on this, we assumed it would be safe to "force" the player to play at a larger delay (15 seconds, further off the live edge), thereby achieving the same (constant) delay across all devices.
We're using the EXT-X-PROGRAM-DATE-TIME tag to get the server time of the currently playing content, and we also have a master clock with the current time (NTP). We're constantly comparing the two clocks to check the current latency. We pause the player until it reaches the desired delay, then resume playback.
The problem with this solution is that the latency can get worse (accumulate delay) over time, and when the delay steps over a specified threshold we have no choice but to restart the playback and redo the steps described above. Before restarting the player, we also try slightly increasing the playback speed until it reaches the specified delay.
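For reference, our drift-correction step looks roughly like the sketch below (simplified; nowNtpMs() is our NTP-backed clock, targetDelayMs and toleranceMs are our own constants, and the classes are from com.google.android.exoplayer2):

// Absolute program time of the current playback position, derived from
// EXT-X-PROGRAM-DATE-TIME via the window's start time.
Timeline.Window window = player.getCurrentTimeline()
        .getWindow(player.getCurrentWindowIndex(), new Timeline.Window());
long latencyMs = nowNtpMs() - (window.windowStartTimeMs + player.getCurrentPosition());

if (latencyMs > targetDelayMs + toleranceMs) {
    // Too far behind the target: speed up slightly to catch up.
    player.setPlaybackParameters(new PlaybackParameters(1.05f, 1f));
} else if (latencyMs < targetDelayMs) {
    // Ahead of the target: pause until real time catches up, then resume.
    player.setPlayWhenReady(false);
} else {
    player.setPlaybackParameters(PlaybackParameters.DEFAULT);
}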
The ExoPlayer instance is set up with a DefaultLoadControl, DefaultRenderersFactory, and DefaultTrackSelector, and the media source uses a DefaultDataSourceFactory.
The server-side configuration is as follows:
cupertinoChunkDurationTarget: 2000 (default: 10000)
cupertinoMaxChunkCount: 31 (default: 10)
cupertinoPlaylistChunkCount: 15 (default: 3)
My first question is whether this is even achievable with a protocol like HLS. Why does the player drift away, accumulating more and more delay?
Is there a better setup for the exoPlayer instance considering our specific use case?
Is there a better way to achieve a constant playback delay across all the playing devices? How important are the parameters on the server side in trying to achieve such a behaviour?
I would really appreciate any kind of help because I have reached a dead-end. :)
Thanks!
The only solution for this is provided by:
https://netinsight.net/product/sye/
Their solution includes frame-accurate sync with no drift, plus stateful ABR. This probably can't be done with HTTP-based protocols, hence their solution is built on UDP transport.

Can I programmatically control the sound volume at an application level?

SoundBunny does exactly what I need: the ability to mute/unmute and change the volume of audio, on demand, for an individual application. Unfortunately, I can't control it via AppleScript, as it's not scriptable.
What other options do I have?
I found a way to mute a particular audio queue, given a reference to it, by using:
AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 0);
The problem is that I can't find a way to retrieve all the audio queues currently in use by other applications (I don't know if that's even possible). I have only found documentation on how to create and manipulate one.

Play audio stream using WebAudio API

I have a client/server audio synthesizer where the server (java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately the audio element buffers (prefetches) very aggressively so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior would be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection, it does provide low latency but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc) and currently it seems to only support a limited set of codecs, mostly geared towards voice. Also since the server side is in java I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input, then I could create a MediaStreamAudioSourceNode and send its output to context.destination. This is not very different from what AudioContext.decodeAudioData already does, except that that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage them I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this question? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming input audio data, and Node.js on the server side, with WebSockets as the middleware to send the browser streaming PCM buffers.
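If you'd rather keep your existing Java server than switch to Node.js, the same pattern should be possible there. A hedged sketch using a JSR-356 WebSocket endpoint (the path, chunk size, and synthesizeInto() are placeholders for your existing synthesis code):

import java.nio.ByteBuffer;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Streams raw PCM chunks to the browser over a WebSocket; the browser
// side copies each chunk into an AudioBuffer and schedules it with an
// AudioBufferSourceNode. In production the send loop belongs on its own
// thread rather than blocking inside @OnOpen.
@ServerEndpoint("/pcm-stream")
public class PcmStreamEndpoint {

    private static final int CHUNK_BYTES = 4096;

    @OnOpen
    public void onOpen(Session session) throws Exception {
        byte[] chunk = new byte[CHUNK_BYTES];
        while (session.isOpen()) {
            synthesizeInto(chunk); // placeholder for the existing generator
            session.getBasicRemote().sendBinary(ByteBuffer.wrap(chunk));
        }
    }

    private void synthesizeInto(byte[] buffer) {
        // fill the buffer from the synthesizer's current output
    }
}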

NAudio: Recording Audio-Card's Actual Output

I successfully use WasapiLoopbackCapture() to record the audio played on the system, but I'm looking for a way to record what the user would actually hear through the speakers.
I'll explain: if a certain application plays music, WASAPI loopback will capture the music samples even if the Windows main volume control is set to 0, i.e. even when no sound is actually heard through the audio card's output jack (speakers/headphones/etc.).
I'd like to intercept the audio actually "reaching" the output-jack (after ALL mixers on the audio-path have "done their job").
Is this possible using NAudio (or other infrastructure)?
A code sample or a link to one would come in handy.
Thanks much.
No, this is not directly possible. The loopback capture provided by WASAPI is the stream of data being sent to the audio hardware. It is the hardware that controls the actual output sound, and this is where the volume level is applied to change the output signal strength. Apart from some hardware- and driver-specific options - or some interesting hardware solutions like loopback cables or external ADC - there is no direct method to get the true output data.
One option is to get the volume level from the mixer and apply it as a scaling factor on any data you receive from the loopback stream. This is not a perfect solution, but possibly the best you can do without specific hardware support.
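The scaling itself is just per-sample multiplication. In Java-ish terms (NAudio is of course C#, and this assumes the capture format is 16-bit little-endian PCM rather than the float format WASAPI often delivers):

// Scale 16-bit little-endian PCM samples by a volume factor in [0, 1],
// clamping the result to avoid overflow.
static void applyVolume(byte[] buffer, int bytesRecorded, float volume) {
    for (int i = 0; i + 1 < bytesRecorded; i += 2) {
        short sample = (short) ((buffer[i] & 0xFF) | (buffer[i + 1] << 8));
        int scaled = Math.round(sample * volume);
        scaled = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
        buffer[i] = (byte) (scaled & 0xFF);
        buffer[i + 1] = (byte) ((scaled >> 8) & 0xFF);
    }
}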