According to the RTCPeerConnection.ontrack documentation, the "ontrack" event is supposed to fire once for each incoming stream. I have a PeerConnection with two video streams; after connecting, "ontrack" fires two times (up to here everything is OK). But both times it delivers the same stream, so I end up with two identical videos. I am sure the sender is sending two different streams: their dimensions and frame rates differ, and I can clearly see in chrome://webrtc-internals/ that the two video streams have different frame sizes/rates.
Here is the PeerConnection ontrack code:
this.peerConnection.ontrack = function(evt) {
    console.log("PeerConnection OnTrack event: ", evt.streams);
    that.emit('onRemoteStreamAdded', evt.streams);
};
I am not assuming evt.streams has only one object, which is why I did not write evt.streams[0].
Here is the Chrome console log:
As is obvious from the log, getRemoteStreams() returns only one object. How can ontrack fire two times when there is only one stream, and why does the second RTCRtpTransceiver not create a new stream?
I solved it after a few hours of struggling with different browsers and reading the documentation several times!
The problem starts with MediaStream.id. It is supposed to be unique, but the HTML5 <video> element only plays the first track inside each stream. The PeerConnection adds the new transceivers' tracks (as MediaStreamTrack objects) to the same MediaStream, so no matter how many times the ontrack handler fires, you get the exact same MediaStream object; each time, however, there is a new, unique MediaStreamTrack inside the RTCTrackEvent.
The solution is to create a new MediaStream object for each new MediaStreamTrack inside the ontrack handler.
this.peerConnection.ontrack = function(event) {
    that.emit('onRemoteStreamAdded', new MediaStream([event.track]));
};
Or, closer to the standard examples:
pc.ontrack = function(event) {
    document.getElementById("received_video").srcObject = new MediaStream([event.track]);
};
You get two tracks which are part of a single stream. You can see that in the event.track property; one of them should be audio, the other video.
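For example, a rough sketch that routes each track by its kind (the element ids are placeholders borrowed from the example above):
pc.ontrack = function (event) {
    // event.track.kind is either "audio" or "video"
    var id = event.track.kind === "video" ? "received_video" : "received_audio";
    document.getElementById(id).srcObject = new MediaStream([event.track]);
};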
See https://blog.mozilla.org/webrtc/the-evolution-of-webrtc/ for background information on how streams and tracks work.
I have created an app using PeerJS to initiate video calls. I am using the MediaRecorder API to record the incoming stream from the caller. However, I need the recording to include the audio of both the caller and the receiver, while the video should be only the caller's (the incoming stream).
I have tried https://github.com/muaz-khan/MultiStreamsMixer. However, the recording produces a file that VLC cannot read.
I have also tried adding the local audio track to the recording stream, but that doesn't merge the two audio tracks into one, and only the incoming stream's audio is recorded.
I was able to do this using the Web Audio API. I fetched the audio tracks from both streams and joined them into one using an AudioContext.
// Wrap each side's audio track in its own stream
var OutgoingAudioMediaStream = new MediaStream();
OutgoingAudioMediaStream.addTrack(OutgoingStream.getAudioTracks()[0]);

var IncomingAudioMediaStream = new MediaStream();
IncomingAudioMediaStream.addTrack(IncomingStream.getAudioTracks()[0]);

// Mix both audio sources into a single destination stream
const audioContext = new AudioContext();
var audioIn_01 = audioContext.createMediaStreamSource(OutgoingAudioMediaStream);
var audioIn_02 = audioContext.createMediaStreamSource(IncomingAudioMediaStream);
var dest = audioContext.createMediaStreamDestination();
audioIn_01.connect(dest);
audioIn_02.connect(dest);

// Add the caller's video track alongside the mixed audio
dest.stream.addTrack(IncomingStream.getVideoTracks()[0]);
var RecordingStream = dest.stream;
This worked perfectly.
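For completeness, here is a minimal sketch of feeding the combined stream into MediaRecorder; the mime type and the one-second timeslice are example values, not requirements:
// Assumes RecordingStream from the snippet above
var recorder = new MediaRecorder(RecordingStream, { mimeType: 'video/webm;codecs=vp8,opus' });
var chunks = [];
recorder.ondataavailable = function (e) {
    if (e.data.size > 0) chunks.push(e.data);
};
recorder.onstop = function () {
    var blob = new Blob(chunks, { type: 'video/webm' });
    // upload the blob or offer it as a download
};
recorder.start(1000); // emit a chunk roughly every second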
I am trying to test WebRTC and want to display my own stream as well as the peer's stream. I currently have a simple shim to obtain the camera's stream and pipe it into a video element, but the frame rate is extremely low. The strange thing is that I can try examples from the WebRTC site and they work flawlessly: the video is smooth and there are no problems. I look at the console and my code resembles theirs. What could be happening? I tried creating a fiddle and running the code within Brackets, but it still performs horribly.
video = document.getElementById('usr-cam');
navigator.mediaDevices.getUserMedia({
    video: {
        width: { exact: 320 },
        height: { exact: 240 }
    }
})
.then(function (stream) {
    if (navigator.mozGetUserMedia) {
        video.mozSrcObject = stream;
    } else {
        video.srcObject = stream;
    }
})
.catch(function (e) {
    alert(e);
});
That is pretty much everything I do. Take into account that I am using the new navigator.mediaDevices.getUserMedia() API instead of navigator.getUserMedia(), but I don't see how that would matter, since 1. I am using the adapter.js shim provided by the WebRTC group, which they themselves use, and 2. I don't think how you obtain the video stream would affect performance.
Alright, I feel very stupid for this one... I was deceived by the fact that the video element updates the displayed image without you having to do anything but pipe the output stream into it, which means the image does update, just at really long intervals, making it seem as if the video is lagging. What I forgot to do was actually call play() on the video, or add autoplay as an attribute... it works well now.
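In code, the fix looks roughly like this (a sketch based on the snippet from the question; 'usr-cam' is the same element id):
navigator.mediaDevices.getUserMedia({ video: { width: { exact: 320 }, height: { exact: 240 } } })
    .then(function (stream) {
        var video = document.getElementById('usr-cam');
        video.srcObject = stream;
        return video.play(); // or set the autoplay attribute on the element instead
    })
    .catch(function (e) {
        alert(e);
    });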
Working on a Sonos API implementation for a streaming service.
I've got the getMetadata flow set up as:
Open Music Source: Return a list of station groups as a mediaCollection.itemType = container
Click Group: Return a list of stations for the selected group as mediaCollection.itemType = program
Click Play: Returns a single mediaMetadata with itemType = stream
I see a number of calls to getExtendedMetadata from the Windows controller on my dev machine and the player I'm trying to send the stream to.
The Now Playing shows Track and Album information, but the player does not make the getMediaURI request.
When I look at the controller log, I see the following two errors:
<ApplicationData>#Module:asyncio #Message:Async get failed 1. Error 0x80000002</ApplicationData>
<ApplicationData>#Module:asyncio #Message:RAsyncGETIOOperation failed. Error (1000, 0x00000000)</ApplicationData>
Michael,
A program on Sonos is defined as a programmed radio station. This is a case where you can return a selection of tracks on each request and they are played sequentially (think Pandora, 8Tracks, Songza, or similar DMCA-style radio).
If you are returning a list of radio streams (even if it is only a list of one), the itemType for that mediaCollection should be collection, container, or other.
If you do this, return the stream as a playable item, and then click play on the stream, you should see the appropriate calls and playback should begin.
Currently, video mute functionality in WebRTC is achieved by setting the enabled property of the video track to false:
stream.getVideoTracks().forEach(function (track) {
    track.enabled = false;
});
But the above code does not only mute the outgoing video; the local self-view, which is rendered from that same local stream, also shows black frames.
Is there a way to ONLY mute the outgoing video frames, but still show a local self-view?
There's no easy way yet. Once MediaStreamTrack.clone() is supported by browsers, you could clone the video track to get a second instance of it with a separately controllable mute property, and send one track to your self-view and the other to the peerConnection. This would let you turn off video locally and remotely independently.
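Once clone() is available, the approach would look roughly like this (a sketch under those assumptions; selfView and pc are placeholders for your video element and RTCPeerConnection):
navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(function (stream) {
    var camTrack = stream.getVideoTracks()[0];
    var selfViewTrack = camTrack.clone(); // independent copy with its own enabled flag

    selfView.srcObject = new MediaStream([selfViewTrack]); // local preview
    pc.addTrack(camTrack, stream); // the track actually sent to the peer

    // later: mute only what the remote side sees; the self-view keeps rendering
    camTrack.enabled = false;
});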
Today, the only workarounds I know of would be to call getUserMedia twice on Chrome (should work on https at least, where permissions will be persisted so the user won't be prompted twice) which would get you two tracks you could video-mute independently, or on Firefox you could use RTCRtpSender.replaceTrack() with a second "fake" video stream from getUserMedia using the non-standard { video: true, fake: true } constraint like this.
We are playing videos from a server. We attach an 'ontimeupdate' handler, which fires periodically as the video plays. For slow connections, we can compare where the video currently IS to where it SHOULD be, and then do some other things we need to do if it is lagging. Everything works fine in Chrome, FF, and IE. In Safari, when the connection is slow, the event only fires twice. Why does it stop firing? Is there a way to add the event again inside the handler for the event? Thanks
The HTML5 audio/video element is still less than perfect. The biggest issue I've noticed is that it doesn't always behave the same way in every browser. I do not know why the timeupdate event stops firing in Safari, but one option is to track whether the video is playing and verify the position independently. For example:
var playing = false;

$(video).bind('play', function () {
    playing = true;
}).bind('pause', function () {
    playing = false;
}).bind('ended', function () {
    playing = false;
});

function yourCheck() {
    if (playing) {
        if (video.currentTime != timeItShouldBe) {
            // do something
        }
    }
    // re-check shortly; pass the function reference, not the result of calling it
    setTimeout(yourCheck, 100);
}
Something to that effect. It's not perfect, but neither is the current HTML5 audio/video element. Good luck.
The event will not fire if the currentTime does not change, so it may not be firing if the video has stopped playing to buffer. However, there are other events you can listen for:
1) "stalled" - browser is trying to load the video file, but it's not getting anything from the network.
2) "waiting" - playback has stopped because you ran out of buffered data, but it will probably pick up again once more data comes in from the network. This is probably the most useful one for you.
3) "playing" - playback has resumed. Not to be confused with "play" which just means it's "trying" to play. This event fires when the video is actually playing.
4) "progress" - browser got more data from the network. Sometimes just fires every so often, but it can also fire after it recovers from the "stalled" state.
See the spec for reference.
I've heard some people say that these events can be unreliable in some browsers, but they seem to be working just fine here: http://www.w3.org/2010/05/video/mediaevents.html
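As a rough sketch (the element id is a placeholder), wiring these up looks like:
var video = document.getElementById('myVideo'); // placeholder id
video.addEventListener('waiting', function () {
    // playback paused because the buffer ran dry
});
video.addEventListener('playing', function () {
    // playback actually resumed
});
video.addEventListener('stalled', function () {
    // the browser is not receiving data from the network
});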
If you want to be extra cautious, you can also poll periodically (with a timeout as tpdietz wrote) and check the state of the video. The readyState property will tell you whether you have enough data to show the current frame ( >= 2 ), enough to keep playing at least a little bit into the future ( >= 3 ) or enough to play all the way to the end (probably). You can also use the buffered property to see how much of the video has actually been buffered ahead of where you're playing, so you can roughly estimate the data rate (if you know how big the file is).
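A polling sketch along those lines (the interval and threshold are arbitrary example values):
setInterval(function () {
    // readyState: 2 = HAVE_CURRENT_DATA, 3 = HAVE_FUTURE_DATA, 4 = HAVE_ENOUGH_DATA
    if (video.readyState < 3) {
        // not enough data buffered to keep playing smoothly
    }
    if (video.buffered.length > 0) {
        var ahead = video.buffered.end(video.buffered.length - 1) - video.currentTime;
        // 'ahead' is roughly how many seconds are buffered past the playhead
    }
}, 250);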
MDN has a great reference on all these properties and events.