How to get multiple streams in a peerConnection - webrtc

Hello, I am building a surveillance system. I would like to send both the webcam video and a shared screen, but with addTrack the receiver only gets the media stream I added last. Is there any way to get both streams?
Thanks.
Here is the offer-side code:
let stream = video.srcObject;
let stream2 = shareVideo.srcObject;
stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));
stream2.getTracks().forEach(track => peerConnection.addTrack(track, stream2));
and here is the answer side:
peerConnections[id].ontrack = (event) => {
  console.log(event);
};
When I checked the log, each event has one track, and event.streams[0] contains a MediaStream, but event.streams[1] does not.
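For reference, ontrack fires once per incoming track, and event.streams only lists the stream(s) that particular track was associated with in addTrack, so an event carrying a single stream is expected; the second stream arrives in its own ontrack events. A sketch of collecting both on the answer side (camVideo and screenVideo are hypothetical video elements, not from the original code):
// Sketch: group incoming tracks by the MediaStream they were sent with.
const remoteStreams = new Map();
peerConnections[id].ontrack = (event) => {
  const [stream] = event.streams;        // the stream this track was added with on the offer side
  remoteStreams.set(stream.id, stream);  // one entry per distinct stream
  const targets = [camVideo, screenVideo];
  [...remoteStreams.values()].forEach((s, i) => {
    if (targets[i] && targets[i].srcObject !== s) targets[i].srcObject = s;  // attach each stream once
  });
};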

Related

WebRTC: Detecting muted track faster post warm-up

I'm warming up my transceiver like so:
pc.addTransceiver('video')
This creates a dummy track in the transceiver's receiver. Soon after, the unmute event fires on that track.
Then, ~3 seconds later, the mute event fires.
My goal is to detect that a track is a dummy track as fast as possible.
Ideas:
Send a message via the data channel telling the peer that the track is void. This is a pain, since I'll have to send another message when I later call replaceTrack.
Write a frame of the track to a canvas and check whether it's an image. This seems really barbaric, but it's faster than 3 seconds.
Anything better? It feels like this should be pretty simple.
This is a bug in Chrome (please ★ it so they'll fix it).
The spec says receiver tracks must start out muted and should stay that way until packets arrive. But Chrome fires the unmute event immediately, followed a few seconds later by a mute event due to inactivity (another bug):
const config = {sdpSemantics: "unified-plan"};
const pc1 = new RTCPeerConnection(config), pc2 = new RTCPeerConnection(config);
pc1.addTransceiver("video");
pc2.ontrack = ({track}) => {
  console.log(`track starts out ${track.muted ? "muted" : "unmuted"}`);
  track.onmute = () => console.log("muted");
  track.onunmute = () => console.log("unmuted");
};
pc1.onicecandidate = e => pc2.addIceCandidate(e.candidate);
pc2.onicecandidate = e => pc1.addIceCandidate(e.candidate);
pc1.onnegotiationneeded = async e => {
  await pc1.setLocalDescription(await pc1.createOffer());
  await pc2.setRemoteDescription(pc1.localDescription);
  await pc2.setLocalDescription(await pc2.createAnswer());
  await pc1.setRemoteDescription(pc2.localDescription);
};
In Chrome you'll see incorrect behavior:
track starts out muted
unmuted
muted
In Firefox you'll see correct behavior:
track starts out muted
Chrome workaround:
Until Chrome fixes this, I'd use this workaround:
const video = document.createElement("video");
video.srcObject = new MediaStream([track]);
video.onloadedmetadata = () => console.log("unmuted workaround!");
Until this fires, assume the track is muted.
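For example, the workaround can be wired into ontrack like this (a sketch; whenFramesArrive is a hypothetical helper name, not from the answer above):
// Sketch: report a receiver track only once the probe video element actually sees media.
function whenFramesArrive(track, callback) {
  const video = document.createElement("video");
  video.srcObject = new MediaStream([track]);
  video.onloadedmetadata = () => {
    video.srcObject = null;  // release the probe element
    callback(track);
  };
}

pc2.ontrack = ({track}) => {
  whenFramesArrive(track, t => console.log(`track ${t.id} is carrying real media`));
};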

WebRTC video/audio streams out of sync (MediaStream -> MediaRecorder -> MediaSource -> Video Element)

I am taking a MediaStream and merging two separate tracks (video and audio) using a canvas and the WebAudio API. The MediaStream itself does not seem to fall out of sync, but after reading it into a MediaRecorder and buffering it into a video element, the audio always seems to play much earlier than the video. Here's the code that seems to have the issue:
let stream = new MediaStream();

// Get the mixed sources drawn to the canvas
this.canvas.captureStream().getVideoTracks().forEach(track => {
  stream.addTrack(track);
});

// Add mixed audio tracks to the stream
// https://stackoverflow.com/questions/42138545/webrtc-mix-local-and-remote-audio-steams-and-record
this.audioMixer.dest.stream.getAudioTracks().forEach(track => {
  stream.addTrack(track);
});

let mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=opus,vp8' });
let mediaSource = new MediaSource();
let video = document.createElement('video');
video.src = URL.createObjectURL(mediaSource);
document.body.appendChild(video);
video.controls = true;
video.autoplay = true;

// Source open
mediaSource.onsourceopen = () => {
  let sourceBuffer = mediaSource.addSourceBuffer(mediaRecorder.mimeType);
  mediaRecorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      const reader = new FileReader();
      reader.readAsArrayBuffer(event.data);
      reader.onloadend = () => {
        sourceBuffer.appendBuffer(reader.result);
        console.log(mediaSource.sourceBuffers);
        console.log(event.data);
      };
    }
  };
  mediaRecorder.start(1000);
};
AudioMixer.js
export default class AudioMixer {
  constructor() {
    // Initialize an audio context
    this.audioContext = new AudioContext();
    // Destination outputs one track of mixed audio
    this.dest = this.audioContext.createMediaStreamDestination();
    // Array of current streams in mixer
    this.sources = [];
  }

  // Add an audio stream to the mixer
  addStream(id, stream) {
    // Get the audio tracks from the stream and add them to the mixer
    let sources = stream.getAudioTracks().map(track =>
      this.audioContext.createMediaStreamSource(new MediaStream([track])));
    sources.forEach(source => {
      // Add it to the current sources being mixed
      this.sources.push(source);
      source.connect(this.dest);
      // Connect to analyser to update volume slider
      let analyser = this.audioContext.createAnalyser();
      source.connect(analyser);
      ...
    });
  }

  // Remove all current sources from the mixer
  flushAll() {
    this.sources.forEach(source => {
      source.disconnect(this.dest);
    });
    this.sources = [];
  }

  // Clean up the audio context for the mixer
  cleanup() {
    this.audioContext.close();
  }
}
I assume it has to do with how the data is pushed into the MediaSource buffer, but I'm not sure. What am I doing that de-syncs the stream?
A late reply to an old post, but it might help someone ...
I had exactly the same problem: a video stream that should be supplemented by an audio stream, in which short sounds (AudioBuffer) are played from time to time. The whole thing is recorded via MediaRecorder.
Everything works fine on desktop Chrome, but on Chrome for Android all sounds were played back in quick succession: the "when" parameter for "play()" was ignored on Android. (audioContext.currentTime continued to increase over time, so that was not the issue.)
My solution is similar to Jacob's comment Sep 2 '18 at 7:41:
I created and connected a sine-wave oscillator at an inaudible 48,000 Hz, which played continuously in the audio stream during recording. Apparently this keeps the timeline progressing properly.
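A minimal sketch of that workaround, assuming an existing AudioContext audioContext and a MediaStreamAudioDestinationNode dest (as in the AudioMixer above); the 48,000 Hz value is taken from the description:
// Sketch: keep an inaudible tone flowing so the recorded audio timeline keeps advancing.
const osc = audioContext.createOscillator();
osc.frequency.value = 48000;  // well above the audible range, per the description above
osc.connect(dest);            // mixed into the same destination that feeds MediaRecorder
osc.start();
// Stop it when recording ends:
// osc.stop();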
An RTP endpoint that is emitting multiple related RTP streams that require synchronization at the other endpoint(s) MUST use the same RTCP CNAME for all streams that are to be synchronized. This requires a short-term persistent RTCP CNAME that is common across several RTP streams, and potentially across several related RTP sessions. A common example of such use occurs when lip-syncing audio and video streams in a multimedia session, where a single participant has to use the same RTCP CNAME for its audio RTP session and for its video RTP session. Another example might be to synchronize the layers of a layered audio codec, where the same RTCP CNAME has to be used for each layer.
https://datatracker.ietf.org/doc/html/rfc6222#page-2
There is a bug in Chrome that plays buffered media-stream audio at 44,100 Hz even when it's encoded at 48,000 Hz (which leads to gaps and video desync). All other browsers seem to play it fine. You can either change the codec to one that supports 44.1 kHz encoding, or play a file from a web link as the source (that way Chrome can play it correctly).

WebRTC mix local and remote audio streams and record

So far I've found a way to record either the local or the remote stream using the MediaRecorder API, but is it possible to mix and record both streams and get a blob?
Please note it's audio streams only, and I don't want to mix/record on the server side.
I have an RTCPeerConnection as pc.
var local_stream = pc.getLocalStreams()[0];
var remote_stream = pc.getRemoteStreams()[0];
var audioChunks = [];
var rec = new MediaRecorder(local_stream);
rec.ondataavailable = e => {
  audioChunks.push(e.data);
  if (rec.state == "inactive") {
    // Play audio using new blob
  }
};
rec.start();
I even tried adding multiple tracks via the MediaStream API, but it still gives only the first track's audio. Any help or insight would be appreciated!
The WebAudio API can do mixing for you. Consider this code if you want to record all the audio tracks in the array audioTracks:
const ac = new AudioContext();
// WebAudio MediaStream sources only use the first track.
const sources = audioTracks.map(t => ac.createMediaStreamSource(new MediaStream([t])));
// The destination will output one track of mixed audio.
const dest = ac.createMediaStreamDestination();
// Mixing
sources.forEach(s => s.connect(dest));
// Record 10s of mixed audio as an example
const recorder = new MediaRecorder(dest.stream);
recorder.start();
recorder.ondataavailable = e => console.log("Got data", e.data);
recorder.onstop = () => console.log("stopped");
setTimeout(() => recorder.stop(), 10000);
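To apply this to the question, the audioTracks array can be gathered from the peer connection itself; a sketch, assuming pc is the RTCPeerConnection from the question and using the standard getSenders()/getReceivers() accessors:
// Sketch: collect the local (outgoing) and remote (incoming) audio tracks from pc.
const audioTracks = [
  ...pc.getSenders().map(sender => sender.track),
  ...pc.getReceivers().map(receiver => receiver.track),
].filter(track => track && track.kind === "audio");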

How to get a running MediaStream

I've created a webcam stream with
navigator.getUserMedia({ "video": true }, function(stream) {
  videoTag.src = window.URL.createObjectURL(stream);
  videoTag.play();
}, function(error) {
  console.error(error);
});
Can I access the MediaStream object stream from the global scope?*
(something like navigator.getAllMediaStreams[0])
*edit: ...without adding logic to the getUserMedia function. My problem case is a QR-decoder library that gets the stream for me, and I don't want to change the third-party code.
There is no list of active media streams kept by the browser.
You can save the stream yourself, though, for example to window.stream.
Sure:
navigator.allMediaStreams = [];

navigator.mediaDevices.getUserMedia({ "video": true }).then(stream => {
  navigator.allMediaStreams.push(stream);
  console.log(navigator.allMediaStreams.length); // 1
})
.catch(e => console.error(e));

console.log(navigator.allMediaStreams.length); // 0
Just understand that the array will be empty until the success callback fires.
It's like any other JavaScript object. As long as you keep a reference to a stream (so it doesn't get garbage collected), and don't call stop() on its tracks, you'll have a live video stream at your disposal, and the active-camera light, if there is one, will be on during this time.
Also like any other JavaScript variable, it is still tied to the page, even if it hangs off navigator, so it won't survive page navigation.
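If the getUserMedia call happens inside third-party code you can't modify (the QR-decoder case from the edit above), one further option, not taken from the answers here and sketched only for the promise-based navigator.mediaDevices.getUserMedia, is to wrap it before the library runs so that every stream it obtains is also stored globally:
// Sketch: remember every stream handed out by getUserMedia, without touching the library code.
navigator.allMediaStreams = [];
const originalGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
navigator.mediaDevices.getUserMedia = constraints =>
  originalGetUserMedia(constraints).then(stream => {
    navigator.allMediaStreams.push(stream);
    return stream;
  });
A library that still uses the legacy callback form, navigator.getUserMedia, would need the same treatment on that function.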

Edit localStream

Is there a way to edit the local video stream 'localStream' before sending it to another peer via peerConnection()?
navigator.getUserMedia({video: true, audio: true}, function(localMediaStream) {
  var video = document.querySelector('video');
  // How do I, say, edit a few pixels in the localMediaStream before
  // using peerConnection() to send it to another peer?
}, onFailSoHard);
Here are a few ideas:
You can getUserMedia, render the stream in a video element, then use the MediaSource APIs to get buffers and manipulate them. Do whatever you want!
Then capture a stream from that "video" element.
It would be nice if the MediaSource API itself generated streams for us, like the WebAudio API does.
Well, you can attach streams like this (after applying some effects to the audio/video tracks):
peer.addStream(new webkitMediaStream(
  yourStream.audioTracks || yourStream.getAudioTracks(),
  yourStream.videoTracks || yourStream.getVideoTracks()
));
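As a more current sketch of the "render it, manipulate it, then send it" idea above: draw the camera video onto a canvas, edit the pixels there, and send the canvas's captured stream instead. The peerConnection variable and the invert-red edit below are illustrative assumptions, not part of the answer:
(async () => {
  // Sketch: edit local video frames on a canvas before sending them to the peer.
  const camera = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const video = document.createElement("video");
  video.muted = true;
  video.srcObject = camera;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = 640;
  canvas.height = 480;
  const ctx = canvas.getContext("2d");

  (function draw() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    for (let i = 0; i < frame.data.length; i += 4) {
      frame.data[i] = 255 - frame.data[i];  // placeholder edit: invert the red channel
    }
    ctx.putImageData(frame, 0, 0);
    requestAnimationFrame(draw);
  })();

  // Send the edited video track plus the untouched audio track.
  const edited = new MediaStream([
    ...canvas.captureStream().getVideoTracks(),
    ...camera.getAudioTracks(),
  ]);
  edited.getTracks().forEach(track => peerConnection.addTrack(track, edited));
})();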