Errors when using Dolby Audio API - android-mediaplayer

I get the following error when using the Dolby Audio API. I'm purposely using a loop to play an *.mp3 file in quick succession, and the app crashes with:
01-03 20:42:04.109: E/AndroidRuntime(2913): FATAL EXCEPTION: DsClientHandlerThread
01-03 20:42:04.109: E/AndroidRuntime(2913): java.lang.RuntimeException: java.lang.RuntimeException: Internal DSClient.setDsOn(true) Failed!
01-03 20:42:04.109: E/AndroidRuntime(2913): at com.dolby.dap.DsClientManager.setDolbySurroundEnabled(DsClientManager.java:525)
The error appears whether I load the *.mp3 via the SoundPool or the MediaPlayer class.
What's interesting is that *.ogg and *.wav files are fine; the problem looks isolated to the *.mp3 format.
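Roughly, the playback loop looks like this (a minimal sketch; R.raw.beep stands in for my actual mp3 resource):

    // Inside an Activity: restart the clip every 100 ms to reproduce the crash.
    final MediaPlayer player = MediaPlayer.create(this, R.raw.beep); // R.raw.beep is a placeholder
    final Handler handler = new Handler();
    handler.post(new Runnable() {
        @Override
        public void run() {
            player.seekTo(0);
            player.start();                 // rapid start/seek cycles trigger the error
            handler.postDelayed(this, 100); // repeat every 100 ms
        }
    });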

This is a known issue in Dolby Audio API v1.1.1.0.
Please check the API's release notes:
6. Revision History
Version 1.1.1.0
Known issues:
• On Kindle Fire HD/HDX devices, in an application leveraging the Dolby Audio
Plug-in, multiple calls to the Android™ MediaPlayer start/pause/stop APIs in
quick succession may result in the Dolby Audio Plug-in state getting out of
sync with the system-wide Dolby audio processing state. Subsequent calls to
the Dolby Audio Plug-in will rectify this state sync issue.
• Using the MediaPlayer interface for audio playback may exhibit this issue,
with the exception of Ogg Vorbis streams. For gaming audio use cases,
playback using SoundPool or writing raw (PCM) audio directly to an AudioTrack
does not exhibit this issue. You can work around this issue by checking the
current Dolby audio processing state using isEnabled() to ensure the Dolby
Audio Plug-in has the desired state after the audio playback has started.
The issue might be rectified in a future release.
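Based on that note, a minimal sketch of the workaround (dolby stands in for the plug-in handle your app obtained at startup; isEnabled()/setEnabled() are the state accessors the release note refers to):

    // After starting playback, verify the Dolby processing state and
    // restore it if the plug-in has drifted out of sync with the system.
    // "dolby" is a placeholder for the plug-in handle obtained at startup.
    mediaPlayer.start();
    boolean wantDolbyOn = true; // whatever state your app expects
    if (dolby.isEnabled() != wantDolbyOn) {
        dolby.setEnabled(wantDolbyOn); // re-sync with the system-wide state
    }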

Related

I cannot play an mp4 file recorded by Ant Media Server on iOS. How can I fix this?

I am using Ant Media Server as a WebRTC media server. I publish my webcam with WebRTC from the browser, and I have also enabled mp4 recording.
I can play the recorded files on Linux and Windows, but I can't play some of them on iOS. How can I solve this?
I know this can be fixed by re-encoding the file, but I'd rather understand why it happens.
Since Ant Media Server records WebRTC streams, and since WebRTC changes its internal resolution dynamically, the resolution of the recorded mp4 file also changes on the fly. Other platforms handle this well, but iOS does not yet.
So re-encoding the file to a fixed resolution would likely fix it.
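For example, a plain re-encode with ffmpeg that forces a single output resolution should produce a file iOS can play (the 1280x720 size and the file names are just example choices):

    ffmpeg -i recorded.mp4 -vf scale=1280:720 -c:v libx264 -c:a aac recorded-fixed.mp4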

Error 2001 AUDIO_INPUT_LEVEL_TOO_LOW in Agora video Call

Everything was working fine, but after updating Agora to 3.1.2, once a remote user joined the video call, the call disconnected after a few seconds and I got this error in the log:
"type":"exception","code":2001,"msg":"AUDIO_INPUT_LEVEL_TOO_LOW"
Version info.
"ngx-agora": "2.0.1",
"agora-rtc-sdk": "3.1.2",
Angular 10.0.8
This is a known issue; the team is working on a fix, and it is tracked as an open bug on the Agora IO Community repo.
In the words of the developer:
How to reproduce
If you create and publish your microphone audio track without any user interaction, the remote user may not hear you. In this case, the console will print some logs like SEND_AUDIO_BITRATE_TO_LOW and AUDIO_INPUT_LEVEL_TOO_LOW.
And once you interact with the webpage, the remote user will hear you.
Root cause
Agora Web SDK NG uses the AudioContext API to do some audio pre-processing by default. However, the AudioContext is restricted by the browser's autoplay policy: if the user has not interacted with your webpage, the AudioContext will not run, so in this case the SDK's pre-processing module produces no audio data.
How to avoid
We will fix this issue in v4.0.2, and it will be released next month.
For now, we recommend ensuring that the user has interacted with the webpage before the audio track is published. For example, require the user to click an accept or confirm button to start the call.

Fatal error in ../../webrtc/modules/utility/source/jvm_android.cc

We are facing an issue with the Twilio Programmable Video SDK and AppRTC version 57 on Android; we have integrated both into an existing Android application. The logcat crash log is below.
Logcat crash log:
E/rtc: #
# Fatal error in ../../webrtc/modules/utility/source/jvm_android.cc, line 233
# last system error: 88
# Check failed: !g_jvm
#
#
08-01 16:54:30.975 9534-9534/? A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 9534
Twilio Programmable Video SDK
When we start a Twilio multi-party video call, it crashes the first time; when we perform the same Twilio multi-party video call a second time, it connects, but then an AppRTC P2P video call crashes.
AppRTC
When we start an AppRTC P2P video call, it crashes the first time; when we perform the same AppRTC P2P video call a second time, it connects, but then a Twilio multi-party call crashes.
We need both AppRTC and the Twilio Programmable Video SDK in our existing project.
Steps to reproduce
Perform an AppRTC P2P or Twilio video call.
When the call connects, the app crashes.
Perform the other type of call (Twilio or AppRTC P2P).
When that call connects, the app crashes.
Thanks!
Twilio developer evangelist here.
I believe you've been in contact with Twilio support regarding this issue; I just thought I'd update this publicly too.
Currently, the Twilio Video Android SDK is not compatible side by side with AppRTC. Both ship their own build of WebRTC's native library, and the "Check failed: !g_jvm" assertion suggests that the second library to load tries to initialize WebRTC's global JVM reference after the first one already has. There is likely to be work in the future to make this possible, but for now it won't work.

Creating a WebRTC receiver

I am new to WebRTC and am trying to figure out how to create a program, outside a browser, that receives a WebRTC audio stream and outputs it to the speakers.
Are there any WebRTC libraries for Java or C#?
The receiver will be running on a Linux machine.
--
I've been thinking about using getUserMedia() to access the microphone. But then:
In what format will such a stream be transmitted?
Let's say I use WebRTC2SIP and build a Java endpoint using JSIP, or I just use a socket and send the stream over HTTP.
What audio format will I get on the receiver side? So far I have read that WebRTC compresses the stream somehow.
I guess there are two ways for you:
• Build the whole WebRTC voice engine for Android/iOS/Mac etc., and just use the API provided by the VoE (voice engine).
• Build the standalone NS/VAD/AECM/AGC modules and use them in your project. For example, to use a standalone NS (noise suppression) module on Android, you use AudioRecord (Java layer, Android side) to record sound from the mic, run the noise-suppression processing on that data (JNI layer, WebRTC side), and finally play back the processed data using AudioTrack (Java layer, Android side); see the sketch after this list.
EDIT:
For the 2nd approach, the format is raw PCM data.
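A minimal sketch of that second approach (the nativeNoiseSuppress() JNI binding is a hypothetical wrapper around WebRTC's standalone NS module, and the 16 kHz mono format is an assumption):

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioRecord;
    import android.media.AudioTrack;
    import android.media.MediaRecorder;

    public class NsLoopback {
        // Hypothetical JNI binding around WebRTC's standalone NS module.
        private static native void nativeNoiseSuppress(short[] frame, int length);

        private volatile boolean running = true;

        public void run() {
            final int sampleRate = 16000; // WebRTC NS consumes 10 ms frames: 160 samples at 16 kHz
            int minBuf = AudioRecord.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    sampleRate, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, minBuf);
            AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC,
                    sampleRate, AudioFormat.CHANNEL_OUT_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, minBuf, AudioTrack.MODE_STREAM);

            recorder.startRecording();
            player.play();

            short[] frame = new short[160]; // one 10 ms frame of 16-bit mono PCM
            while (running) {
                int n = recorder.read(frame, 0, frame.length); // raw PCM from the mic (Java layer)
                nativeNoiseSuppress(frame, n);                 // NS processing (JNI layer, WebRTC)
                player.write(frame, 0, n);                     // processed PCM out (Java layer)
            }
            recorder.stop(); recorder.release();
            player.stop(); player.release();
        }
    }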
Check out the working Audio demo and code at demo.easyrtc.com
The code is all open source and can be checked out at https://github.com/priologic/easyrtc
You can look for any known issues around EasyRTC in our forum at https://groups.google.com/forum/#!forum/easyrtc
Also check out our main site at easyrtc.com

mp3 files not supported by phonon

I am using Qt SDK 4.6 to develop a simple music player on Windows XP.
I have checked the MIME types supported by Phonon, and according to that list my Phonon backend supports mp3 files.
Yet when I try to play an .mp3 file with my music player, the MediaObject moves into the error state, and the error I get is:
Fatal Error: No combination of filters could be found to render the stream
Secondly, I want to know how I can provide support for other audio formats that Phonon does not currently support, such as .ogg files.
Please help.
Phonon is just an abstraction layer over a media player backend. Check the Phonon overview documentation and look for the Backends section.
You need to install mp3 codecs for your target backend. On Windows, that means making sure mp3 playback works in DirectShow/DirectX. Your error implies that you don't have any DirectShow filter that can decode mp3.