When I receive an onaddstream event from the other side, how can I determine whether the MediaStream has only audio and no video?
In other words, how can I tell that a MediaStream object contains only audio and no video?
peer.onaddstream = function (event) {
    var stream = event.stream;
    if (stream.getAudioTracks().length) alert('Peer has audio stream.');
    if (stream.getVideoTracks().length) alert('Peer has video stream.');
};
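Side note: onaddstream is deprecated; with the current ontrack API the same check can be done from the event's associated streams. A minimal sketch of the equivalent, not tied to any particular library:
peer.ontrack = function (event) {
    var stream = event.streams[0];
    if (stream && stream.getAudioTracks().length) console.log('Peer has an audio track.');
    if (stream && stream.getVideoTracks().length) console.log('Peer has a video track.');
};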
Related
I have tried two approaches to play WebRTC streams with video.js.
After getting the MediaStream from WebRTC, I add it like this:
player.src({src:webRTCAdaptor.remoteVideo.srcObject});
I get a (CODE:4 MEDIA_ERR_SRC_NOT_SUPPORTED) error.
If I do the following instead, I get no error, but the video does not play either.
var vid=player.tech().el();
vid.srcObject=webRTCAdaptor.remoteVideo.srcObject;
Calling player.play() doesn't change anything.
Does anybody have any insight into this?
You can override the player's play function to achieve this, like so:
if (player) {
    const videoDom = player.tech().el();
    videoDom && (videoDom.srcObject = stream);
    player.play = () => {
        videoDom.play();
    };
    player.play();
}
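For what it's worth, player.src() expects a URL or a {src, type} source object, which is why handing it a MediaStream yields MEDIA_ERR_SRC_NOT_SUPPORTED. A minimal sketch of the direct-attach approach, assuming player is an initialized video.js player and stream is the remote MediaStream; playback is only started once metadata has arrived:
const videoEl = player.tech().el();
videoEl.srcObject = stream;
videoEl.addEventListener('loadedmetadata', () => {
    // play() returns a promise; autoplay policies may still reject it
    videoEl.play().catch(err => console.warn('Playback was blocked:', err));
});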
Hello, I am building a surveillance system. I would like to send both a webcam video and a shared screen, but when I use addTrack, the remote side only gets the media stream I added last. Is there any way to receive both streams?
Thanks.
Here is the code on the offer side:
let stream = video.srcObject;
let stream2 = shareVideo.srcObject;
stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));
stream2.getTracks().forEach(track => peerConnection.addTrack(track, stream2));
And here is the answer side:
peerConnections[id].ontrack = (event) => {
    console.log(event);
};
When I check the log, each event has one track; event.streams[0] contains a MediaStream, but event.streams[1] does not.
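For reference, ontrack fires once per received track, and event.streams only lists the stream(s) that particular track was associated with in addTrack, so the two streams arrive across separate events rather than together. A minimal sketch of collecting them by stream id, assuming remoteVideo and remoteScreen are two video elements on the answer side (the ordering assumption is just that the webcam stream was added first):
const incomingStreams = {};
peerConnections[id].ontrack = (event) => {
    const stream = event.streams[0];
    if (!incomingStreams[stream.id]) {
        incomingStreams[stream.id] = stream;
        // Assumption: first stream is the webcam, second is the screen share
        const target = Object.keys(incomingStreams).length === 1 ? remoteVideo : remoteScreen;
        target.srcObject = stream;
    }
};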
I am taking a MediaStream and merging two separate tracks (video and audio) using a canvas and the Web Audio API. The MediaStream itself does not seem to fall out of sync, but after feeding it into a MediaRecorder and buffering it into a video element, the audio always plays much earlier than the video. Here's the code that seems to have the issue:
let stream = new MediaStream();
// Get the mixed sources drawn to the canvas
this.canvas.captureStream().getVideoTracks().forEach(track => {
    stream.addTrack(track);
});
// Add mixed audio tracks to the stream
// https://stackoverflow.com/questions/42138545/webrtc-mix-local-and-remote-audio-steams-and-record
this.audioMixer.dest.stream.getAudioTracks().forEach(track => {
    stream.addTrack(track);
});
let mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=opus,vp8' });
let mediaSource = new MediaSource();
let video = document.createElement('video');
video.src = URL.createObjectURL(mediaSource);
document.body.appendChild(video);
video.controls = true;
video.autoplay = true;
// Source open
mediaSource.onsourceopen = () => {
    let sourceBuffer = mediaSource.addSourceBuffer(mediaRecorder.mimeType);
    mediaRecorder.ondataavailable = (event) => {
        if (event.data.size > 0) {
            const reader = new FileReader();
            reader.readAsArrayBuffer(event.data);
            reader.onloadend = () => {
                sourceBuffer.appendBuffer(reader.result);
                console.log(mediaSource.sourceBuffers);
                console.log(event.data);
            };
        }
    };
    mediaRecorder.start(1000);
};
AudioMixer.js
export default class AudioMixer {
    constructor() {
        // Initialize an audio context
        this.audioContext = new AudioContext();
        // Destination outputs one track of mixed audio
        this.dest = this.audioContext.createMediaStreamDestination();
        // Array of current streams in mixer
        this.sources = [];
    }

    // Add an audio stream to the mixer
    addStream(id, stream) {
        // Get the audio tracks from the stream and add them to the mixer
        let sources = stream.getAudioTracks().map(track => this.audioContext.createMediaStreamSource(new MediaStream([track])));
        sources.forEach(source => {
            // Add it to the current sources being mixed
            this.sources.push(source);
            source.connect(this.dest);
            // Connect to analyser to update volume slider
            let analyser = this.audioContext.createAnalyser();
            source.connect(analyser);
            ...
        });
    }

    // Remove all current sources from the mixer
    flushAll() {
        this.sources.forEach(source => {
            source.disconnect(this.dest);
        });
        this.sources = [];
    }

    // Clean up the audio context for the mixer
    cleanup() {
        this.audioContext.close();
    }
}
I assume it has to do with how the data is pushed into the MediaSource buffer but I'm not sure. What am I doing that de-syncs the stream?
A late reply to an old post, but it might help someone ...
I had exactly the same problem: I have a video stream that is supplemented by an audio stream, in which short sounds (AudioBuffers) are played from time to time. The whole thing is recorded via MediaRecorder.
Everything works fine on Chrome. But on Chrome for Android, all the sounds were played back in quick succession: the "when" parameter of start() was ignored on Android (audioContext.currentTime continued to increase over time, so that was not the problem).
My solution is similar to Jacob's comment from Sep 2 '18 at 7:41:
I created and connected a sine-wave oscillator at an inaudible 48,000 Hz, which plays continuously in the audio stream during recording. Apparently this keeps the stream's timing progressing properly.
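A minimal sketch of that workaround, assuming audioContext and dest are the AudioContext and MediaStreamAudioDestinationNode from the mixer above; the 48,000 Hz figure comes from the description above, and a near-zero gain is added as an extra precaution:
const osc = audioContext.createOscillator();
osc.frequency.value = 48000; // keep-alive tone, far above the audible range
const gain = audioContext.createGain();
gain.gain.value = 0.001; // effectively silent, but keeps the audio graph ticking
osc.connect(gain);
gain.connect(dest);
osc.start();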
An RTP endpoint that is emitting multiple related RTP streams that require synchronization at the other endpoint(s) MUST use the same RTCP CNAME for all streams that are to be synchronized. This requires a short-term persistent RTCP CNAME that is common across several RTP streams, and potentially across several related RTP sessions. A common example of such use occurs when lip-syncing audio and video streams in a multimedia session, where a single participant has to use the same RTCP CNAME for its audio RTP session and for its video RTP session. Another example might be to synchronize the layers of a layered audio codec, where the same RTCP CNAME has to be used for each layer.
https://datatracker.ietf.org/doc/html/rfc6222#page-2
There is a bug in Chrome that plays buffered media-stream audio at 44100 Hz even when it was encoded at 48000 Hz (which leads to gaps and video desync). All other browsers seem to play it fine. You can either switch to a codec that supports 44.1 kHz encoding, or play the file from a web link as the source (that way Chrome plays it correctly).
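If you go the codec route, a minimal sketch of probing what the browser is willing to record with (the candidate MIME types are just examples, and stream is the MediaStream being recorded):
const candidates = [
    'video/webm;codecs=vp8,opus',
    'video/webm;codecs=vp9,opus',
    'video/mp4'
];
const mimeType = candidates.find(type => MediaRecorder.isTypeSupported(type));
console.log('Recording with', mimeType || 'the browser default');
const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : undefined);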
I am able to make a direct call between a Circuit WebClient and the example SDK app at https://output.jsbin.com/posoko.
When running the SDK example on a PC with a second (USB) camera, switching between the built-in camera and the USB camera works fine. But when I try the same on my Android device (Samsung Galaxy S6), the switching does not work.
My code uses navigator.mediaDevices.enumerateDevices() to get the cameras and then uses the Circuit SDK function setMediaDevices to switch to the other camera.
async function switchCam() {
    let availDevices = await navigator.mediaDevices.enumerateDevices();
    availDevices = availDevices.filter(si => si.kind === 'videoinput');
    let newDevice = availDevices[1]; // secondary camera
    await client.setMediaDevices({video: newDevice.deviceId});
}
Can somebody explain why this doesn’t work on an Android device?
We have seen Android devices that don't allow calling navigator.getUserMedia while a video track (and therefore a stream) is still active. I tried your example above with a Pixel 2 without any issues though.
If you remove the video track from the stream and stop the track before calling client.setMediaDevices, the switch should work.
async function switchCam() {
    const stream = await client.getLocalAudioVideoStream();
    const currTrack = stream.getVideoTracks()[0];
    console.log(`Remove and stop current track: ${currTrack.label}`);
    stream.removeTrack(currTrack);
    currTrack.stop();
    let availDevices = await navigator.mediaDevices.enumerateDevices();
    availDevices = availDevices.filter(si => si.kind === 'videoinput');
    let newDevice = availDevices[1]; // secondary camera
    await client.setMediaDevices({video: newDevice.deviceId});
}
There is a complete switch camera example on JSBin at https://output.jsbin.com/wuniwec/
Is there a way to edit the local video stream 'localStream' before sending it to another peer via the peer connection?
navigator.getUserMedia({video: true, audio: true}, function(localMediaStream) {
    var video = document.querySelector('video');
    // How do I, say, edit a few pixels in the localMediaStream before
    // using the peer connection to send it to another peer?
}, onFailSoHard);
Here are a few ideas, some of them forward-looking!
You can call getUserMedia and render the stream in a video element, then use the MediaSource API to get buffers and manipulate them however you want.
Then capture a stream from that video element.
It would be nice if the MediaSource API itself generated streams for us, the way the Web Audio API does.
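On current browsers the practical version of this idea usually goes through a canvas rather than MediaSource buffers: render the camera into a video element, draw each frame onto a canvas, edit the pixels, and capture a stream from the canvas. A minimal sketch, assuming localMediaStream comes from getUserMedia and peer is an RTCPeerConnection; the pixel edit itself is only a placeholder:
const video = document.createElement('video');
video.srcObject = localMediaStream;
video.play();

const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');

function drawFrame() {
    if (video.videoWidth) {
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        ctx.drawImage(video, 0, 0);
        const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
        for (let i = 0; i < frame.data.length; i += 4) {
            frame.data[i] = 255 - frame.data[i]; // placeholder edit: invert the red channel
        }
        ctx.putImageData(frame, 0, 0);
    }
    requestAnimationFrame(drawFrame);
}
drawFrame();

// Send the edited video plus the original audio to the peer
const editedStream = canvas.captureStream(30);
localMediaStream.getAudioTracks().forEach(track => editedStream.addTrack(track));
editedStream.getTracks().forEach(track => peer.addTrack(track, editedStream));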
Well, you can attach streams like this (after applying some effects to the audio/video tracks):
// The standard MediaStream constructor takes a single array of tracks
// (older Chrome builds exposed it as webkitMediaStream)
peer.addStream(new MediaStream(
    yourStream.getAudioTracks().concat(yourStream.getVideoTracks())
));
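As a side note, on current browsers a processed video track can also be swapped into an already-established connection without renegotiation via RTCRtpSender.replaceTrack. A minimal sketch, assuming peer is the RTCPeerConnection and editedTrack is the processed track (for example, the one from canvas.captureStream above):
const sender = peer.getSenders().find(s => s.track && s.track.kind === 'video');
if (sender) {
    sender.replaceTrack(editedTrack).catch(err => console.warn('replaceTrack failed:', err));
}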