WebRTC audio level goes crazy randomly

I have a remote WebRTC endpoint (based on libwebrtc) that produces audio from a PulseAudio sink monitor. I can successfully consume it in Chrome via an HTML video tag. The problem I am facing is that the audio randomly goes "crazy": the level suddenly becomes very high and it's impossible to listen to.
On the sending side, I am using this to disable audio processing:
cricket::AudioOptions options;
options.highpass_filter = false;
options.auto_gain_control = false;
options.noise_suppression = false;
options.echo_cancellation = false;
options.residual_echo_detector = false;
options.experimental_agc = false;
options.experimental_ns = false;
options.typing_detection = false;
rtc::scoped_refptr<webrtc::AudioSourceInterface> source = webrtcFactory->CreateAudioSource(options);
But on the receiving side (Chrome browser), there seems to be no way to disable it.
Running mediaDevices.getSupportedConstraints() in Chrome returns:
aspectRatio: true
autoGainControl: true
brightness: true
channelCount: true
colorTemperature: true
contrast: true
deviceId: true
echoCancellation: true
exposureCompensation: true
exposureMode: true
exposureTime: true
facingMode: true
focusDistance: true
focusMode: true
frameRate: true
groupId: true
height: true
iso: true
latency: true
noiseSuppression: true
pointsOfInterest: true
resizeMode: true
sampleRate: true
sampleSize: true
saturation: true
sharpness: true
torch: true
whiteBalanceMode: true
width: true
zoom: true
Then running track.getCapabilities() in Chrome returns:
autoGainControl: [false]
deviceId: "7254143d-7c85-4567-9d95-94f2c79060fe"
echoCancellation: [false]
noiseSuppression: [false]
sampleSize: {max: 16, min: 16}
And finally, track.getConstraints() in Chrome returns an empty object.
What I understand from the above is that Chrome supports disabling audio processing, but the track does not. At this point I am confused as to where (sending or receiving) WebRTC's audio processing makes the sound level go crazy.
I've read here that (when using a file as test input in Chrome) "all audio processing has to be disabled or the audio will be distorted", which is exactly what I want to do for my track as well; the difference is that since my track is remote, I am not obtaining it via the browser's getUserMedia().
I have been reading a lot about disabling Webrtc's audio processing and trying various things both on the producing and consuming side, but the same problem still appears randomly.
Do you have any idea on which side (sender or Chrome consumer) the problem might be?
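One way to narrow down which side introduces the spike is to measure the decoded level in the browser with the Web Audio API. This is a sketch, not from the question: `remoteStream` stands for whatever stream you attach to the video element, and the 250 ms poll interval and `onLevel` callback are assumptions.

```javascript
// Compute the RMS level of a block of PCM samples (values in [-1, 1]).
// A sudden jump in this value pinpoints the moment the level "goes crazy".
function rmsLevel(samples) {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}

// Browser wiring (sketch): tap the remote track with Web Audio.
function monitorRemoteLevel(remoteStream, onLevel) {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(remoteStream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);
  const buf = new Float32Array(analyser.fftSize);
  setInterval(() => {
    analyser.getFloatTimeDomainData(buf);
    onLevel(rmsLevel(buf));
  }, 250);
}
```

If the logged RMS jumps while the sender's own capture level stays flat, the receiver is the suspect; if it jumps in lockstep with the sender, the problem is upstream.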

The problem was on the sending side, and unrelated to libwebrtc.

Related

Uploading a large file to a Google Cloud bucket fails

I am uploading a large file from my local system or a remote URL to a Google bucket. However, every time I get the error below:
/usr/local/lib/ruby/3.0.0/openssl/buffering.rb:345:in `syswrite': execution expired (Google::Apis::TransmissionError)
/usr/local/lib/ruby/3.0.0/openssl/buffering.rb:345:in `syswrite': execution expired (HTTPClient::SendTimeoutError)
I am using a CarrierWave initializer to define the configuration of my Google service account and its details. Please suggest if there is any configuration I am missing, or what I should add to increase the timeout or retries.
My Carrierwave initializer:
begin
  CarrierWave.configure do |config|
    config.fog_provider = 'fog/google'
    config.fog_credentials = {
      provider: 'Google',
      google_project: '{project name}',
      # google_json_key_string: Rails.application.secrets.google_cloud_storage_credential_content
      google_json_key_location: '{My json key-file location}'
    }
    config.fog_attributes = {
      expires: 600000,
      open_timeout_sec: 600000,
      send_timeout_sec: 600000,
      read_timeout_sec: 600000,
      fog_authenticated_url_expiration: 600000
    }
    config.fog_authenticated_url_expiration = 600000
    config.fog_directory = 'test-bucket'
  end
rescue => error
  puts error.message
end
This might have to do with the duration of time between when the connection is initialized and when it actually gets used. Adding persistent: false to the fog_credentials should make it create a new connection for each request. This is a bit less performant, but it should at least work consistently, unlike what you appear to be running into presently.
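A sketch of the suggestion applied to the asker's own initializer; only the `persistent: false` line is new, and it tells fog to open a fresh HTTP connection per request instead of reusing one that may have gone stale:

```ruby
CarrierWave.configure do |config|
  config.fog_provider = 'fog/google'
  config.fog_credentials = {
    provider: 'Google',
    google_project: '{project name}',
    google_json_key_location: '{My json key-file location}',
    persistent: false  # new: do not reuse connections between requests
  }
  config.fog_directory = 'test-bucket'
end
```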

WebRTC many to many, how to identify user?

User started with DataChannel only.
AudioChannel is added through renegotiation later.
var mLocalAudio;
navigator.getUserMedia({ video: false, audio: true },
  function (myStream) {
    mLocalAudio = myStream;
    mConn.addStream(myStream);
  }, function (e) { });
On the remote peer, ontrack will be triggered and we add the stream to an <audio> element.
But since this is a many-to-many connection, there will be multiple peers switching their audio channels on and off from time to time.
My problem is: how can I identify which audio track belongs to which user?
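One common approach (a sketch, not from this thread) is to announce each stream's id over the existing DataChannel before adding the track, then look the owner up in ontrack. The `TrackRegistry` class and `attachAudioElement` helper below are hypothetical names:

```javascript
// Hypothetical registry: peers announce "user X owns stream id Y" over
// the DataChannel; ontrack then resolves the owner by stream id.
class TrackRegistry {
  constructor() { this.owners = new Map(); }
  announce(streamId, userId) { this.owners.set(streamId, userId); }
  ownerOf(streamId) { return this.owners.get(streamId) || null; }
}

const registry = new TrackRegistry();

// Sending side: before addStream/addTrack, tell the other peers:
// dataChannel.send(JSON.stringify({ type: 'owns', streamId: myStream.id, userId: me }));

// Receiving side: the announced id matches ev.streams[0].id in ontrack:
// mConn.ontrack = (ev) => {
//   const stream = ev.streams[0];
//   const user = registry.ownerOf(stream.id);
//   attachAudioElement(stream, user); // hypothetical helper
// };
```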

What webrtc media constraints should I use to remove all processing / effects on the audio?

I am currently using
var mediaOptions = {
  audio: {
    optional: {
      sourceId: this.get('audioInputId'),
      googAutoGainControl: false,
      googNoiseSuppression: false,
      googEchoCancellation: false,
      googHighpassFilter: false
    }
  }
}
Is there anything else I should be turning off? I'm recording the audio so it needs to be untouched by any processing.
I'm noticing that sometimes there is a ducking effect on one end when others are talking.
Also, are there any flags for Firefox? Does Firefox respect any of these?
On Firefox you should be fine with only
audio : {
  "mandatory": {
    "echoCancellation": "false"
  }
}
and in Chrome
audio : {
  "mandatory": {
    "googEchoCancellation": "false",
    "googNoiseSuppression": "false",
    "googHighpassFilter": "false",
    "googTypingNoiseDetection": "false"
  },
  "optional": []
}
Disabling these features is usually done when you want to stream music; if you stream voice, I think it's recommended to leave them on.
The ducking effect on voice streaming is not caused by any processing, but rather by a slow network (low bandwidth or high latency).
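Note that the goog-prefixed names above are the legacy Chrome spelling. Current browsers also accept the standard constraint names; a minimal sketch, with the caveat that exact support varies by browser version:

```javascript
// Standard (non-goog) constraint spelling; false disables each
// processing stage on the captured audio.
const rawAudioConstraints = {
  audio: {
    echoCancellation: false,
    noiseSuppression: false,
    autoGainControl: false
  },
  video: false
};

// Browser usage (sketch):
// navigator.mediaDevices.getUserMedia(rawAudioConstraints)
//   .then(stream => { /* record the unprocessed stream */ });
```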

How to use kurento-media-server for audio only stream?

I want audio-only communication between peers, so I changed parts of kurento-utils.js to get only an audio stream via getUserMedia, but it's not working.
I used the node-hello-world example and changed
WebRtcPeer.prototype.userMediaConstraints = {
  audio : true,
  video : {
    mandatory : {
      maxWidth : 640,
      maxFrameRate : 15,
      minFrameRate : 15
    }
  }
};
to
WebRtcPeer.prototype.userMediaConstraints = {
  audio : true,
  video : false
};
Is it possible to use the Kurento service for an audio-only stream?
This is indeed possible with Kurento. There are two ways of doing this, depending on the desired scope of the modification:
Per WebRTC endpoint: when you process the SDP offer sent by the client, you get an SDP answer from KMS that you have to send back. After invoking the processOffer method, you can tamper with the SDP to remove all video parts. That way, your client will send back only audio.
Globally: you can edit the /etc/kurento/sdp_pattern.txt file, removing all video-related parts; this will force SdpEndpoints (the parent class of WebRtcEndpoint) to use only audio.
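A minimal sketch of the per-endpoint option, assuming the answer SDP uses CRLF line endings; section handling is deliberately simplified (session-level attributes and BUNDLE groups are not adjusted):

```javascript
// Drop every "m=video" media section (the m= line and everything up to
// the next m= line) from an SDP answer before sending it to the client.
function stripVideo(sdp) {
  const out = [];
  let skipping = false;
  for (const line of sdp.split('\r\n')) {
    if (line.startsWith('m=')) skipping = line.startsWith('m=video');
    if (!skipping) out.push(line);
  }
  return out.join('\r\n');
}
```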
EDIT 1
The file sdp_pattern.txt is deprecated in KMS 6.1.0, so method 2 shouldn't be used.
EDIT 2
There was an issue with the kurento-utils library, and the client was not correctly setting OfferToReceiveAudio. It was fixed some time ago, and you shouldn't need to tamper with the SDPs now.
git origin: https://github.com/Kurento/kurento-tutorial-js.git
git branch: 6.6.0
My solution is only changing var offerVideo = true; to var offerVideo = false; in generateOffer function of kurento-utils.js file.
My approach is to modify the options that you pass to the WebRtcPeer.
var options = {
  onicecandidate: onIceCandidate,
  iceServers: iceServers,
  mediaConstraints: {
    audio: true,
    video: false
  }
}
Besides, in kurento-utils.js, the mediaConstraints is overridden by this line:
constraints.unshift(MEDIA_CONSTRAINTS);
So comment it out.

Customize Request Timeout Message in IBM Worklight v6.0

When the request from the mobile device to the Worklight server times out, I get the following error, which looks like it is pushed from the Worklight framework:
http://serveraddress:portno/console/apps/services/api/app_title...
Make sure the host address is available to the app (especially relevant for android and iphone apps
Now, it's not ideal to reveal the server address to the end user, and I'm not able to figure out where this can be customized. I need suggestions on how to modify this error message.
AFAIK, try changing the log level in your wlInitOptions to 'error' and check the logs to disable the timeout error messages:
var wlInitOptions = {
  connectOnStartup : true,
  timeout : 60000,
  logger : {
    enabled: true, level: 'error', stringify: true, pretty: false,
    tag: {level: false, pkg: true}, whitelist: [], blacklist: []
  },
  analytics : {
    enabled: false
  }
};