We are facing an issue with the Twilio Programmable Video SDK and AppRTC version 57 for Android. We have integrated both into an existing Android application; the Gradle dependencies and logcat crash logs are referenced below.
Logcat crash logs:
E/rtc: #
# Fatal error in ../../webrtc/modules/utility/source/jvm_android.cc, line 233
# last system error: 88
# Check failed: !g_jvm
#
#
08-01 16:54:30.975 9534-9534/? A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 9534
Twilio Programmable Video SDK
When we start a Twilio Programmable multi-party video call, the app crashes the first time. When we perform the same Twilio multi-party video call a second time, it connects, but then the AppRTC P2P video call crashes.
AppRTC
When we start an AppRTC P2P video call, the app crashes the first time. When we perform the same AppRTC P2P video call a second time, it connects, but then the Twilio multi-party call crashes.
We need both AppRTC and the Twilio Programmable Video SDK in our existing project.
Steps to reproduce
Perform an AppRTC P2P/Twilio video call.
When the video call is connected, the app crashes.
Perform a Twilio/AppRTC P2P video call.
When the video call is connected, the app crashes.
Thanks!
Twilio developer evangelist here.
I believe you've been in contact with Twilio support regarding this issue. I just thought I'd update this publicly too.
Currently, the Twilio Video Android SDK is not side-by-side compatible with AppRTC. Both libraries bundle their own copy of the WebRTC native libraries, so whichever one initializes WebRTC second appears to trip the "Check failed: !g_jvm" assertion shown in your logcat. There is likely to be work in the future to make this possible, but for now it won't work.
Related
I've been trying to debug this issue. I'm building a peer-to-peer audio streaming app.
Each time a mobile device using Safari receives a peer's incoming MediaStream, it plays the stream for a fraction of a second and then stops.
I've checked the actual MediaStream and the Audio object I'm using to play the incoming stream, but nothing in their status suggests that the audio is paused, inactive, or stopped.
I know there are some issues with Safari when it comes to programmatically playing audio, but I thought those applied to the initial playback of the client's first stream, and that all subsequent streams would work.
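For reference, this is roughly how I attach and play the incoming stream (a simplified paraphrase, not my exact code):

// Simplified sketch of how the incoming stream is played back.
function playRemoteStream(stream: MediaStream) {
  const audio = new Audio();
  audio.srcObject = stream; // attach the incoming MediaStream
  audio.play()
    .then(() => console.log('playing; paused =', audio.paused, '; stream active =', stream.active))
    .catch((err) => console.error('playback blocked:', err)); // Safari's autoplay policy may reject here
}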
I used the implementation from this article.
Anyone know of the reason for this?
Everything was working fine, but after updating Agora to 3.1.2, once a remote user joined the video call, the call disconnected after a few seconds and I got this error in the log:
"type":"exception","code":2001,"msg":"AUDIO_INPUT_LEVEL_TOO_LOW"
Version info:
"ngx-agora": "2.0.1",
"agora-rtc-sdk": "3.1.2",
Angular 10.0.8
This is a known issue; the team is working on a fix, and it is tracked as an open bug on the Agora IO Community repo here.
In the words of the developer:
How to reproduce
If you create and publish your microphone audio track without any user interaction, the remote user may not hear you. In this case, the console will print some logs like SEND_AUDIO_BITRATE_TO_LOW and AUDIO_INPUT_LEVEL_TOO_LOW.
And once you interact with the webpage, the remote user will hear you.
Root cause
Agora Web SDK NG uses the AudioContext API to do some audio pre-processing by default. However, the AudioContext is restricted by the browser's autoplay policy: if the user has not interacted with your webpage, the AudioContext will not run, so in this case no audio data is produced by the SDK's pre-processing module.
How to avoid
We will fix this issue in v4.0.2, and it will be released next month.
For now, we recommend ensuring that the user has interacted with the webpage before the audio track is published. For example, require the user to click an accept or confirm button to start a call.
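In practice that means creating and publishing the microphone track from inside the user-gesture handler. A minimal sketch, assuming the v4 (NG) API that the quoted fix refers to; the element id and credentials are placeholders:

import AgoraRTC from "agora-rtc-sdk-ng";

const APP_ID = "<your-app-id>";    // placeholder
const CHANNEL = "<your-channel>";  // placeholder
const TOKEN: string | null = null; // placeholder; use a real token in production

const client = AgoraRTC.createClient({ mode: "rtc", codec: "vp8" });

// Create and publish the microphone track only after a click, so the
// browser's autoplay policy lets the underlying AudioContext run.
document.getElementById("accept")!.addEventListener("click", async () => {
  await client.join(APP_ID, CHANNEL, TOKEN);
  const micTrack = await AgoraRTC.createMicrophoneAudioTrack();
  await client.publish(micTrack);
});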
We are facing a strange issue in a WebRTC call: in a connected WebRTC audio-only call, when someone upgrades the call (adds video), the audio track drops on the originator's side.
Steps to reproduce the problem
1. Make an audio-only call between two peers, A and B.
2. Upgrade the call to video by calling getUserMedia again from peer A (see the sketch below).
3. The call is established.
4. A can hear audio and see video.
5. B can't hear audio.
What is the expected result?
In onaddstream(e), e.stream should contain both audio and video tracks.
What do you see instead?
Only the video track is present on B's (the recipient's) side.
What version of the product are you using? On what operating system?
Chrome 51 / Windows 7
Please find the WebRTC dump at the link below:
Webrtc dump
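For what it's worth, one way to perform the upgrade without touching the existing audio sender is to request only video the second time and add just that track before renegotiating. A hypothetical sketch (pc and signaling are placeholders, not the code from the dump):

// Hypothetical upgrade sketch: keep the original audio track untouched
// and add only a new video track, then renegotiate.
async function upgradeToVideo(pc: RTCPeerConnection, signaling: { send(msg: unknown): void }) {
  const videoStream = await navigator.mediaDevices.getUserMedia({ video: true });
  const videoTrack = videoStream.getVideoTracks()[0];
  pc.addTrack(videoTrack, videoStream);         // the existing audio sender stays in place
  const offer = await pc.createOffer();         // renegotiate with the new track
  await pc.setLocalDescription(offer);
  signaling.send({ sdp: pc.localDescription }); // app-specific signaling channel
}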
I'm developing a BLE application using Core Bluetooth in my iPhone app.
My iPhone communicates with a Linux machine running bleno.
The problem is that it cannot work with more than one characteristic or service, although a single characteristic works fine.
Here is my code for iOS and .js code for Node.js:
http://pastebin.com/k5pUrbLt
http://pastebin.com/biCWLmJ3
Thank you!
P.S. This is what I get in the console:
2014-07-24 13:18:02.819 lexy[142:60b] didDiscoverPeripheral
2014-07-24 13:18:04.503 lexy[142:60b] didDiscoverServices
2014-07-24 13:18:04.506 lexy[142:60b] D61191C0-FCE8-4F5A-912C-15EE39D927B4
That is, I successfully discover and connect to the peripheral, but I never find any characteristics. In this case I have one service with many characteristics.
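For context, a minimal bleno setup with one service and two characteristics would look roughly like this (a hypothetical sketch with placeholder UUIDs, not the pastebin code):

// Hypothetical bleno sketch: one primary service exposing two readable characteristics.
const bleno = require("bleno");

const makeCharacteristic = (uuid: string, payload: string) =>
  new bleno.Characteristic({
    uuid, // placeholder UUID
    properties: ["read"],
    onReadRequest: (offset: number, callback: (result: number, data?: Buffer) => void) => {
      callback(bleno.Characteristic.RESULT_SUCCESS, Buffer.from(payload));
    },
  });

bleno.on("stateChange", (state: string) => {
  if (state === "poweredOn") {
    bleno.startAdvertising("lexy", ["fff0"]); // placeholder service UUID
  } else {
    bleno.stopAdvertising();
  }
});

bleno.on("advertisingStart", (error?: Error) => {
  if (!error) {
    bleno.setServices([
      new bleno.PrimaryService({
        uuid: "fff0", // placeholder
        characteristics: [makeCharacteristic("fff1", "A"), makeCharacteristic("fff2", "B")],
      }),
    ]);
  }
});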
iOS can work with multiple characteristics and multiple services on a remote device.
Your ObjC code seems okay (1 service with 2 characteristics).
The console output suggests that you send a discoverCharacteristics request, but it hangs and never returns. You could add logging to the didDisconnectPeripheral delegate method to see if a disconnect is happening.
You could also look at the logs of the Bluetooth stack and compare them with how the logs look when you access a TI SensorTag (those tags are the "hello world" of BLE).
iOS 7.1 instructions for Bluetooth logs are located here; for iOS 8 you'll have to resort to Apple's official way of getting the logs (installing a configuration profile), but those logs contain less information than the ones from BluetoothCompanion.
Apparently, it works on Ubuntu 14.04, but doesn't work on OpenSuse 13.1.
I'm getting an error when using the Dolby Audio API. I'm purposely using a loop to play an *.mp3 file in quick succession, and the following error occurs:
01-03 20:42:04.109: E/AndroidRuntime(2913): FATAL EXCEPTION: DsClientHandlerThread
01-03 20:42:04.109: E/AndroidRuntime(2913): java.lang.RuntimeException: java.lang.RuntimeException: Internal DSClient.setDsOn(true) Failed!
01-03 20:42:04.109: E/AndroidRuntime(2913): at com.dolby.dap.DsClientManager.setDolbySurroundEnabled(DsClientManager.java:525)
If I load the *.mp3 via the SoundPool or MediaPlayer class, the error appears.
What's interesting is that *.ogg and *.wav files are fine; the problem looks isolated to the *.mp3 format.
This is a known issue in Dolby API v1.1.1.0.
Please check the API documentation and read the release notes:
6. Revision History
Version 1.1.1.0
Known issues:
• On Kindle Fire HD/HDX devices, in an application leveraging the Dolby Audio Plug-in, multiple calls to the Android™ MediaPlayer start/pause/stop APIs in quick succession may result in the Dolby Audio Plug-in state getting out of sync with the system-wide Dolby audio processing state. Subsequent calls to the Dolby Audio Plug-in will rectify this state sync issue.
• Using the MediaPlayer interface for audio playback may exhibit this issue, with the exception of Ogg Vorbis streams. For gaming audio use cases, playback using SoundPool or writing raw (PCM) audio directly to an AudioTrack does not exhibit this issue. You can work around this issue by checking the current Dolby audio processing state using isEnabled() to ensure the Dolby Audio Plug-in has the desired state after the audio playback has started.
The issue might be rectified in a future release.