setSinkId change multiple audio outputs - webrtc

Here is the problem:
First I enumerate all the devices I have available within select elements:
navigator.mediaDevices.enumerateDevices()
When I change the output of one element, the sound plays on the device I chose:
HTMLMediaElement.setSinkId(deviceId)
Then, if I play another audio element and change its output device (setSinkId), the first element also switches to that last deviceId, so both sounds end up on the same device.
Do I need the latest adapter.js version to handle this properly?
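For reference, a minimal sketch of the setup described above (element1/element2 and deviceId1/deviceId2 are placeholder names, not from the original code):
// List the available audio outputs to populate the select elements
navigator.mediaDevices.enumerateDevices().then(function (devices) {
  var outputs = devices.filter(function (d) { return d.kind === 'audiooutput'; });
  console.log('Available audio outputs:', outputs);
});
// Each media element should keep its own sink
element1.setSinkId(deviceId1).then(function () { return element1.play(); });
element2.setSinkId(deviceId2).then(function () { return element2.play(); });
// Expected: element1 on deviceId1 and element2 on deviceId2.
// Observed problem: both end up on the last deviceId that was set.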
********* EDITED **********
Following the comment above, I tried Web Audio, but without success. With getUserMedia everything is fine:
navigator.getUserMedia({ audio: true, video: false },
  function (mediaStream) {
    // Create an audio context for the audio
    var ac = new (window.AudioContext || window.webkitAudioContext)();
    // Create a clone of the stream, otherwise the id of every stream is "default"
    //var streamClone = stream.clone();
    var ss = ac.createMediaStreamSource(mediaStream);
    // Create a destination
    var sd = ac.createMediaStreamDestination();
    ss.connect(sd);
    element.srcObject = sd.stream;
    // Play the sound
    element.play();
    element.setSinkId(deviceId).then(function () {
      console.log('Set deviceId (' + deviceId + ') in the selected audio element');
    });
  },
  function (error) {
    console.log(error);
  }
);
But using my remote stream, I cannot get any sound:
var ac = new (window.AudioContext || window.webkitAudioContext)();
// Create a clone of the stream, otherwise the id of every stream is "default"
var streamClone = stream.clone();
var ss = ac.createMediaStreamSource(stream);
// Create a destination
var sd = ac.createMediaStreamDestination();
ss.connect(sd);
// element is my HTMLMediaElement
element.srcObject = sd.stream;
// Play the sound
element.play();
element.setSinkId(deviceId).then(function () {
  console.log('Set deviceId (' + deviceId + ') in the selected audio element');
});

This is most likely caused by how Chrome renders audio. See here for a description, which also suggests using Web Audio to work around the problem.
adapter.js cannot fix this.

Related

How to add Video track and remove it using simple-peer

I am using simple-peer in my video chat web application. If both users are in an audio-only call, how can I add a video track, and how can I disable it again? If I use replaceTrack I again get this error:
error Error: [object RTCErrorEvent]
at makeError (index.js:17)
at RTCDataChannel._channel.onerror (index.js:490)
I show a profile picture when a user's video is not enabled. When video is enabled, I want to replace this picture with the video for everyone in the call.
If both users enabled audio only, the stream contains only an audio track, so we can add a placeholder black video track (a disabled/ended video track) and later swap in the real one, as sketched after the code below. That solves the issue.
For more info visit this:
https://blog.mozilla.org/webrtc/warm-up-with-replacetrack/
Code from the above link
let silence = () => {
  let ctx = new AudioContext(), oscillator = ctx.createOscillator();
  let dst = oscillator.connect(ctx.createMediaStreamDestination());
  oscillator.start();
  return Object.assign(dst.stream.getAudioTracks()[0], {enabled: false});
}
let black = ({width = 640, height = 480} = {}) => {
  let canvas = Object.assign(document.createElement("canvas"), {width, height});
  canvas.getContext('2d').fillRect(0, 0, width, height);
  let stream = canvas.captureStream();
  return Object.assign(stream.getVideoTracks()[0], {enabled: false});
}
let blackSilence = (...args) => new MediaStream([black(...args), silence()]);
video.srcObject = blackSilence();
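To complete the picture, here is a hedged sketch of the second half: swapping the placeholder for a real camera track once a user enables video. It assumes peer is the simple-peer instance, localStream is the stream containing the black/silent tracks, and that simple-peer's replaceTrack(oldTrack, newTrack, stream) is available.
async function enableVideo(peer, localStream) {
  // Ask for the real camera
  const camera = await navigator.mediaDevices.getUserMedia({ video: true });
  const newTrack = camera.getVideoTracks()[0];
  const oldTrack = localStream.getVideoTracks()[0]; // the black placeholder
  // Swap the track on the connection without renegotiating
  peer.replaceTrack(oldTrack, newTrack, localStream);
  // Keep the local stream consistent for the local preview
  localStream.removeTrack(oldTrack);
  localStream.addTrack(newTrack);
}
Disabling video again would be the reverse: replace the camera track with a fresh black() track and stop the camera track.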

MediaRecorder has a delay of multiple seconds

I'm trying to use a MediaRecorder to record a MediaStream and display it in a video element using a MediaSource. So the setup looks like:
Request a MediaStream from the browser
Add it to the MediaRecorder
Add the recorded blobs to the MediaSource Buffer
The result looks very good but there is one problem: There is a delay in the playback.
When displaying the MediaStream directly there is no delay, so I ruled out the first bullet point as the problem.
Nevertheless, it seems like either the MediaRecorder or the MediaSource is adding a delay of about 3 seconds to the stream.
this.screenRecording = await mediaDevices.getDisplayMedia({ video: { frameRate: 60, resizeMode: 'none' } });
const mediaRecorder = new MediaRecorder(this.screenRecording);
mediaRecorder.ondataavailable = async (event: any) => {
  if (this.screenReceiving.readyState === 'open') {
    if (this.screenReceivingBuffer == null) {
      this.screenReceivingBuffer = this.screenReceiving.addSourceBuffer('video/webm;codecs=vp8');
    }
    if (!this.screenReceivingBuffer.updating) {
      this.screenReceivingBuffer.appendBuffer(await new Response(event.data).arrayBuffer());
    }
  }
};
mediaRecorder.start(16);
The above code is just copied from the actual project, so please don't expect it to work as-is ;)
Does anyone have an idea why this delay exists?
Any ideas on how to tweak the browser to not add this delay?

WebRTC video/audio streams out of sync (MediaStream -> MediaRecorder -> MediaSource -> Video Element)

I am taking a MediaStream and merging two separate tracks (video and audio) using a canvas and the Web Audio API. The MediaStream itself does not seem to fall out of sync, but after reading it into a MediaRecorder and buffering it into a video element, the audio always seems to play much earlier than the video. Here's the code that seems to have the issue:
let stream = new MediaStream();
// Get the mixed sources drawn to the canvas
this.canvas.captureStream().getVideoTracks().forEach(track => {
  stream.addTrack(track);
});
// Add mixed audio tracks to the stream
// https://stackoverflow.com/questions/42138545/webrtc-mix-local-and-remote-audio-steams-and-record
this.audioMixer.dest.stream.getAudioTracks().forEach(track => {
  stream.addTrack(track);
});
let mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=opus,vp8' });
let mediaSource = new MediaSource();
let video = document.createElement('video');
video.src = URL.createObjectURL(mediaSource);
document.body.appendChild(video);
video.controls = true;
video.autoplay = true;
// Source open
mediaSource.onsourceopen = () => {
  let sourceBuffer = mediaSource.addSourceBuffer(mediaRecorder.mimeType);
  mediaRecorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      const reader = new FileReader();
      reader.readAsArrayBuffer(event.data);
      reader.onloadend = () => {
        sourceBuffer.appendBuffer(reader.result);
        console.log(mediaSource.sourceBuffers);
        console.log(event.data);
      };
    }
  };
  mediaRecorder.start(1000);
};
AudioMixer.js
export default class AudioMixer {
  constructor() {
    // Initialize an audio context
    this.audioContext = new AudioContext();
    // Destination outputs one track of mixed audio
    this.dest = this.audioContext.createMediaStreamDestination();
    // Array of current streams in mixer
    this.sources = [];
  }
  // Add an audio stream to the mixer
  addStream(id, stream) {
    // Get the audio tracks from the stream and add them to the mixer
    let sources = stream.getAudioTracks().map(track => this.audioContext.createMediaStreamSource(new MediaStream([track])));
    sources.forEach(source => {
      // Add it to the current sources being mixed
      this.sources.push(source);
      source.connect(this.dest);
      // Connect to analyser to update volume slider
      let analyser = this.audioContext.createAnalyser();
      source.connect(analyser);
      ...
    });
  }
  // Remove all current sources from the mixer
  flushAll() {
    this.sources.forEach(source => {
      source.disconnect(this.dest);
    });
    this.sources = [];
  }
  // Clean up the audio context for the mixer
  cleanup() {
    this.audioContext.close();
  }
}
I assume it has to do with how the data is pushed into the MediaSource buffer but I'm not sure. What am I doing that de-syncs the stream?
A late reply to an old post, but it might help someone ...
I had exactly the same problem: I have a video stream, which should be supplemented by an audio stream. In the audio stream short sounds (AudioBuffer) are played from time to time. The whole thing is recorded via MediaRecorder.
Everything works fine on Chrome. But on Chrome for Android, all sounds were played back in quick succession; the "when" parameter for "play()" was ignored on Android. (AudioContext.currentTime continued to increase over time, so that was not the problem.)
My solution is similar to Jacob's comment Sep 2 '18 at 7:41:
I created and connected a sine wave oscillator at an inaudible 48,000 Hz, which plays permanently in the audio stream during recording. Apparently this keeps the timeline progressing properly.
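A minimal sketch of that keep-alive, assuming an existing AudioContext (audioContext) and the MediaStreamAudioDestinationNode (dest) whose stream feeds the MediaRecorder:
// Inaudible sine oscillator that runs for the whole recording
const keepAlive = audioContext.createOscillator();
keepAlive.type = 'sine';
keepAlive.frequency.value = 48000; // far above the audible range
keepAlive.connect(dest);
keepAlive.start();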
An RTP endpoint that is emitting multiple related RTP streams that require synchronization at the other endpoint(s) MUST use the same RTCP CNAME for all streams that are to be synchronized. This requires a short-term persistent RTCP CNAME that is common across several RTP streams, and potentially across several related RTP sessions. A common example of such use occurs when lip-syncing audio and video streams in a multimedia session, where a single participant has to use the same RTCP CNAME for its audio RTP session and for its video RTP session. Another example might be to synchronize the layers of a layered audio codec, where the same RTCP CNAME has to be used for each layer.
https://datatracker.ietf.org/doc/html/rfc6222#page-2
There is a bug in Chrome that plays buffered media stream audio at 44,100 Hz even when it is encoded at 48,000 Hz (which leads to gaps and video desync). All other browsers seem to play it fine. You can either change the codec to one that supports 44.1 kHz encoding, or play a file from a web link as the source (that way Chrome plays it correctly).
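If you go the codec route, a small sketch for probing what the browser's MediaRecorder accepts (the codec strings below are only candidates; which one actually records at 44.1 kHz has to be verified per browser):
// Unsupported combinations simply report false
[
  'video/webm;codecs=vp8,opus',
  'video/webm;codecs=vp8,pcm',
  'video/mp4;codecs=avc1.42E01E,mp4a.40.2'
].forEach(function (type) {
  console.log(type, MediaRecorder.isTypeSupported(type));
});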

How to stream audio file with opentok?

In OpenTok, with OT.initPublisher, you can only pass a deviceId to audioSource. Does anyone know a method to stream an audio file?
For example, I have done this:
navigator.getUserMedia({ audio: true, video: false },
  function (stream) {
    var context = new AudioContext();
    var microphone = context.createMediaStreamSource(stream);
    var backgroundMusic = context.createMediaElementSource(document.getElementById("song"));
    var mixedOutput = context.createMediaStreamDestination();
    microphone.connect(mixedOutput);
    backgroundMusic.connect(mixedOutput);
  },
  handleError);
Like this, I can get a stream with both my voice and my music, but how do I feed this stream into a publisher? Is it possible, or is there another way to do this?
Update: There is now an official way to do this, using the videoSource and audioSource properties provided to OT.initPublisher; please see the documentation: https://tokbox.com/developer/sdks/js/reference/OT.html#initPublisher
This is an example of how to stream a canvas element as a video track: https://github.com/opentok/opentok-web-samples/tree/master/Publish-Canvas
You can apply the same technique to stream an audio track.
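A hedged sketch of that, assuming audioSource accepts a MediaStreamTrack as described in the linked documentation; 'publisher-element' and session are placeholders from your own app:
async function publishWithMusic(session) {
  const context = new AudioContext();
  const micStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
  const microphone = context.createMediaStreamSource(micStream);
  const backgroundMusic = context.createMediaElementSource(document.getElementById('song'));
  const mixedOutput = context.createMediaStreamDestination();
  microphone.connect(mixedOutput);
  backgroundMusic.connect(mixedOutput);
  // Publish the mixed audio track; videoSource keeps its default (the camera)
  const publisher = OT.initPublisher('publisher-element', {
    audioSource: mixedOutput.stream.getAudioTracks()[0]
  });
  session.publish(publisher);
}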
Old Answer:
It's not currently possible with the officially supported API but there is a way to do it.
Please see the TokBox blog post about Camera Filters: https://tokbox.com/blog/camera-filters-in-opentok-for-web/
In order to modify the stream before it reaches the OpenTok JS SDK we use the mockGetUserMedia function to intercept the stream:
https://github.com/aullman/opentok-camera-filters/blob/master/src/mock-get-user-media.js
You could invoke mockGetUserMedia with a function which does your audio mixing. Something like this:
mockGetUserMedia(function(originalStream) {
  var context = new AudioContext();
  var microphone = context.createMediaStreamSource(originalStream);
  var backgroundMusic = context.createMediaElementSource(document.getElementById("song"));
  var mixedOutput = context.createMediaStreamDestination();
  microphone.connect(mixedOutput);
  backgroundMusic.connect(mixedOutput);
  var stream = mixedOutput.stream;
  originalStream.getVideoTracks().map(function(track) {
    stream.addTrack(track);
  });
  return stream;
});
Note: I have not tested this function but it should lead you in the right direction. Remember that this technique is error prone and not officially supported by TokBox.
We are currently working on a new feature which will enable this use case but I cannot give a time estimate of when it will be available.
Thank you for the help, but we haven't been able to make it work since this morning.
So we made a separate file with this code, which is loaded before the OpenTok library in our HTML:
function mockGetUserMedia(mockOnStreamAvailable) {
  var oldGetUserMedia = void 0;
  if (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia) {
    oldGetUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
    navigator.webkitGetUserMedia = navigator.getUserMedia = navigator.mozGetUserMedia = function getUserMedia(constraints, onStreamAvailable, onStreamAvailableError, onAccessDialogOpened, onAccessDialogClosed, onAccessDenied) {
      return oldGetUserMedia.call(navigator, constraints, function (stream) {
        onStreamAvailable(mockOnStreamAvailable(stream));
      }, onStreamAvailableError, onAccessDialogOpened, onAccessDialogClosed, onAccessDenied);
    };
  } else {
    console.warn('Could not find getUserMedia function to mock out');
  }
}
mockGetUserMedia(function(stream) {
  var context = new AudioContext();
  var bgMusic = context.createMediaElementSource(document.getElementById("song"));
  var microphone = context.createMediaStreamSource(stream);
  var destination = context.createMediaStreamDestination();
  bgMusic.connect(destination);
  microphone.connect(destination);
  var mixedStream = destination.stream;
  stream.getVideoTracks().map(function(track) {
    mixedStream.addTrack(track);
  });
  return mixedStream;
});
In our Angular code, we init the session, create a publisher, and publish it, but we get this error:
Uncaught DOMException: Failed to execute 'createMediaElementSource' on 'BaseAudioContext': HTMLMediaElement already connected previously to a different MediaElementSourceNode.
This error, I think, is thrown because the function is executed twice: when the JS loads, and when we publish.
I am not sure how to use this mockGetUserMedia function; do you know what is wrong with our code?
EDIT
We made it work with an if condition (see the sketch below). Thank you so much, very much appreciated.
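For reference, the kind of guard that avoids calling createMediaElementSource twice on the same element looks roughly like this (same mockGetUserMedia setup as above; the variable names are placeholders):
var mixContext = null;
var bgMusicSource = null;
mockGetUserMedia(function (stream) {
  // Create the AudioContext and the MediaElementSource only once;
  // creating a second MediaElementSource for the same <audio> throws.
  if (!mixContext) {
    mixContext = new AudioContext();
    bgMusicSource = mixContext.createMediaElementSource(document.getElementById("song"));
  }
  var microphone = mixContext.createMediaStreamSource(stream);
  var destination = mixContext.createMediaStreamDestination();
  bgMusicSource.connect(destination);
  microphone.connect(destination);
  var mixedStream = destination.stream;
  stream.getVideoTracks().forEach(function (track) {
    mixedStream.addTrack(track);
  });
  return mixedStream;
});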

WebRTC mix local and remote audio streams and record

So far I've only found a way to record either the local or the remote stream using the MediaRecorder API, but is it possible to mix and record both streams and get a single blob?
Please note it's audio streams only, and I don't want to mix/record on the server side.
I have an RTCPeerConnection as pc.
var local_stream = pc.getLocalStreams()[0];
var remote_stream = pc.getRemoteStreams()[0];
var audioChunks = [];
var rec = new MediaRecorder(local_stream);
rec.ondataavailable = e => {
  audioChunks.push(e.data);
  if (rec.state == "inactive") {
    // Play audio using new blob
  }
};
rec.start();
I even tried adding multiple tracks via the MediaStream API, but it still gives only the first track's audio. Any help or insight would be appreciated!
The WebAudio API can do mixing for you. Consider this code if you want to record all the audio tracks in the array audioTracks:
const ac = new AudioContext();
// WebAudio MediaStream sources only use the first track.
const sources = audioTracks.map(t => ac.createMediaStreamSource(new MediaStream([t])));
// The destination will output one track of mixed audio.
const dest = ac.createMediaStreamDestination();
// Mixing
sources.forEach(s => s.connect(dest));
// Record 10s of mixed audio as an example
const recorder = new MediaRecorder(dest.stream);
recorder.start();
recorder.ondataavailable = e => console.log("Got data", e.data);
recorder.onstop = () => console.log("stopped");
setTimeout(() => recorder.stop(), 10000);
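For this question's setup, audioTracks could be built from the local and remote streams already obtained from pc:
// Sketch: gather both sides' audio tracks from the question's streams
const audioTracks = [
  ...local_stream.getAudioTracks(),
  ...remote_stream.getAudioTracks()
];
// audioTracks can now be fed to the mixing code above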