I'm warming up my transceiver like so:
pc.addTransceiver('video')
This creates a dummy track in the transceiver's receiver. Soon after, the unmute event fires on that track.
Then, ~3 seconds later, the mute event fires.
My goal is to detect that a track is a dummy track as fast as possible.
Ideas:
Send a message via the data channel telling the peer that the track is void. This is a pain, since I'll have to send another message when I later call replaceTrack (a rough sketch of this idea is below).
Write a frame of the track to a canvas and see if it's an actual image. This seems really barbaric, but it's faster than 3 seconds.
Anything better? It feels like this should be pretty simple.
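For reference, here's a rough sketch of the first idea. The signal object and the message shape are hypothetical placeholders for whatever data channel or signaling path the app already has, and transceiver.mid is only known after negotiation:

// Rough sketch of idea 1 (hypothetical names): flag the m-line as a dummy over signaling,
// and flag it again once a real track is attached.
const transceiver = pc.addTransceiver('video');

// After negotiation completes, transceiver.mid is set, so tell the peer it's a placeholder:
signal.send(JSON.stringify({type: 'dummy-track', mid: transceiver.mid, dummy: true}));

// Later, when a real camera track becomes available:
async function sendRealTrack(track) {
  await transceiver.sender.replaceTrack(track);
  signal.send(JSON.stringify({type: 'dummy-track', mid: transceiver.mid, dummy: false}));
}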
This is a bug in Chrome (please ★ it so they'll fix it).
The spec says receiver tracks must start out muted and should stay that way until packets arrive. But Chrome fires the unmute event immediately, followed a few seconds later by a mute event due to inactivity (another bug):
const config = {sdpSemantics: "unified-plan"};
const pc1 = new RTCPeerConnection(), pc2 = new RTCPeerConnection();
pc1.addTransceiver("video");
pc2.ontrack = ({track}) => {
console.log(`track starts out ${track.muted? "muted":"unmuted"}`);
track.onmute = () => console.log("muted");
track.onunmute = () => console.log("unmuted");
};
pc1.onicecandidate = e => pc2.addIceCandidate(e.candidate);
pc2.onicecandidate = e => pc1.addIceCandidate(e.candidate);
pc1.onnegotiationneeded = async e => {
await pc1.setLocalDescription(await pc1.createOffer());
await pc2.setRemoteDescription(pc1.localDescription);
await pc2.setLocalDescription(await pc2.createAnswer());
await pc1.setRemoteDescription(pc2.localDescription);
}
In Chrome you'll see incorrect behavior:
track starts out muted
unmuted
muted
In Firefox you'll see correct behavior:
track starts out muted
Chrome workaround:
Until Chrome fixes this, I'd use this workaround:
const video = document.createElement("video");
video.srcObject = new MediaStream([track]);
video.onloadedmetadata = () => console.log("unmuted workaround!");
Until this fires, assume the track is muted.
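Wrapped up as a helper (my own sketch, not part of the workaround above), it can be awaited from ontrack:

// Resolves once the receiver track actually starts producing media, using the
// loadedmetadata workaround above. Sketch only; names are placeholders.
function waitForRealTrack(track) {
  return new Promise(resolve => {
    const video = document.createElement("video");
    video.muted = true;
    video.srcObject = new MediaStream([track]);
    video.onloadedmetadata = () => {
      video.srcObject = null; // release the probe element
      resolve(track);
    };
  });
}

pc2.ontrack = async ({track}) => {
  await waitForRealTrack(track);
  console.log("track is producing media (workaround)");
};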
Related
In my case, after getting everything working and running, I want to migrate from addTrack to addTransceiver.
I have 2 peer connections:
yourConn = new RTCPeerConnection(servers);
yourConn2 = new RTCPeerConnection(servers);
Following the steps I've seen in many examples, I call addTransceiver like so:
yourConn.addTransceiver(streams.getAudioTracks()[0]);
How do I receive from the yourConn peer? And can I send from peer 1 to peer 2 and have peer 1 receive from peer 2 without negotiating again?
What should I do in the ontrack event on both sides? Should I use addTrack there or not?
Below is the yourConn2 (event) side. This is the offer-to-send case; what about offer-to-receive?
yourConn2.ontrack = async (e) => {
  e.transceiver.direction = 'sendrecv';
  await e.transceiver.sender.replaceTrack(remoteStream.getAudioTracks()[0]);
};
Or should I grab
RemoteAudioFromlocal = yourConn2.getTransceivers()[0];
and "upgrade" the direction to sendrecv like so?
RemoteAudioFromlocal.direction = "sendrecv"
await RemoteAudioFromlocal.receiver.replaceTrack(remotePeerStramIn);
I will answer my own question since I figured it out.
From Jan-Ivar Bruaroey's blog I discovered the answers to everything I asked.
With addTransceiver() on one side, I can get the transceiver within the ontrack event on the other side,
like so:
if (e.transceiver.receiver.track) {
  remoteVideo = document.getElementById("wbrtcremote");
  transceiversRemotePeer = new MediaStream([e.transceiver.receiver.track]);
  remoteVideo.srcObject = transceiversRemotePeer;
}
That's all I needed to know. The same applies on the other side, with one minor difference: you need to change the direction. The transceiver created with addTransceiver() on the sender side is sendrecv by default:
yourConn.addTransceiver(streams.getAudioTracks()[0]);
This gets mirrored by a transceiver on the receiver side for the same mid. Here it's exposed in the ontrack event:
yourConn2.ontrack = async e => {
  /* do something with e.track */
  e.transceiver.direction = 'sendrecv';
  await e.transceiver.sender.replaceTrack(receiverTrack);
};
In an "offer to receive" use case you could instead obtain the transceiver via getTransceivers(), or, as in the code above, via e.transceiver and its sender.
On the receiver side (yourConn2), the direction is "downgraded" from sendrecv to recvonly, because by default this transceiver is not configured to send anything back from the receiving peer (yourConn2) to the sending peer (yourConn).
After all, it was just created in response to setRemoteDescription(offer).
To fix this, you "upgrade" the direction to sendrecv and set a track to send.
e.transceiver.direction = 'sendrecv';
await e.transceiver.sender.replaceTrack(localStream.getAudioTracks()[0]);
If you do this prior to creating the local SDP answer on receiverPc, you should be able to achieve "sendrecv" without more SDP negotiations. The ontrack event is fired before the SRD promise is resolved, so any modification you do in that event should have completed before the SDP answer is created.
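For completeness, a minimal sketch of that ordering on the answering side (sendToPeer is a placeholder for your signaling):

yourConn2.ontrack = async e => {
  // Upgrade the direction synchronously, before the answer is created.
  e.transceiver.direction = 'sendrecv';
  await e.transceiver.sender.replaceTrack(localStream.getAudioTracks()[0]);
};

async function handleOffer(offer) {
  await yourConn2.setRemoteDescription(offer); // fires ontrack before resolving
  await yourConn2.setLocalDescription(await yourConn2.createAnswer());
  sendToPeer(yourConn2.localDescription); // hypothetical signaling call
}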
I am working on a WebRTC application for video chatting. On my local network everything works well, but when I test it over the internet, RTCPeerConnection.onconnectionstatechange fires with RTCPeerConnection.connectionState = 'disconnected' for no apparent reason after some 20-30 seconds of communication.
Another very confusing thing: I have peer2 and peer3 started in the same browser in different tabs, both connected to peer1, with peer1 streaming video to them. After 20-30 seconds connectionState = 'disconnected' can fire on peer2 while, at the same time, peer3 continues to receive the video stream from peer1.
I have googled a bit and found this solution (which doesn't work in my case):
this.myRTCMediaMediatorConnections[id][hash].onconnectionstatechange = async function(e) {
  const connection = This.myRTCMediaMediatorConnections[id][hash];
  log('onSignalingServerMediaMediatorOfferFunc.myRTCMediaMediatorConnections[' + id + '][' + hash + '].onconnectionstatechange(' + connection.connectionState + ')', 10, true);
  switch (connection.connectionState) {
    case "failed":
    case "closed":
      This.disconnectMeFromMediatorConnection(targetId, logicGroupName, streamerId, streamerHash, id, hash);
      break;
    case "disconnected":
      if (await This.confirmPeerDisconnection(connection)) {
        This.disconnectMeFromConnection(targetId, logicGroupName, streamerId, streamerHash, id, hash);
      }
      break;
  }
  log('onSignalingServerMediaMediatorOfferFunc.myRTCMediaMediatorConnections[' + id + '][' + hash + '].onconnectionstatechange', 10, false);
}
this.confirmPeerDisconnection = async function(connectionObject) {
  log('confirmPeerDisconnection', 10, true);
  const b1 = await this.confirmPeerDisconnectionFunc(connectionObject);
  await new Promise(resolve => setTimeout(resolve, 2000));
  const b2 = await this.confirmPeerDisconnectionFunc(connectionObject);
  log('confirmPeerDisconnection=>' + (b2 - b1), 10, false);
  if (b2 - b1 > 0) return false;
  return true;
}
this.confirmPeerDisconnectionFunc = async function(connectionObject) {
  let b = 0;
  const stats = await connectionObject.getStats(null);
  stats.forEach(report => {
    if (report.type === 'transport' && 'bytesReceived' in report) b = parseInt(report.bytesReceived);
  });
  return b;
}
b2 - b1 always equals 0 or less. Can anyone advise why RTCPeerConnection.onconnectionstatechange fires like this and how I can get rid of this bug?
Any help appreciated!
I am using the react-native-track-player package to play music files in my React Native mobile application.
Due to an issue, I need to stop the track player once the queue of audio tracks reaches the end. For that, I use the PlaybackQueueEnded event to invoke the following code snippet. (I have used it in the useTrackPlayerEvents hook along with the PlaybackTrackChanged event, which, when fired, sets the title, artist, and artwork of the current audio file being played.)
useTrackPlayerEvents(
// To set the title, author, and background of the current audio file being played
[Event.PlaybackTrackChanged, Event.PlaybackQueueEnded],
async event => {
if (
event.type === Event.PlaybackTrackChanged &&
event.nextTrack !== null
) {
const track = await TrackPlayer.getTrack(event.nextTrack);
const title = track?.title;
const artist = track?.artist;
const artwork: SetStateAction<any> = track?.artwork;
setTrackTitle(title);
setTrackArtist(artist);
setTrackArtwork(artwork);
}
// To stop the player once it reaches the end of the queue
if (
event.type === Event.PlaybackQueueEnded &&
event.position === progress.duration
) {
TrackPlayer.stop();
}
},
);
But the above code doesn't work as I expected. It seems the PlaybackQueueEnded event is not fired when playing the last track of the queue. Can somebody please help me solve this issue?
Thank you.
PS: I get the current time and duration of the audio file being played from the useProgress hook, whose value I have assigned to the progress variable, so I read progress.position and progress.duration from it.
PlaybackQueueEnded will be fired when the song is finished, and you don't need to check if event.position === progress.duration.
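If that's the case, a trimmed-down handler (my sketch, keeping the original state setters) could simply be:

useTrackPlayerEvents(
  [Event.PlaybackTrackChanged, Event.PlaybackQueueEnded],
  async event => {
    if (event.type === Event.PlaybackTrackChanged && event.nextTrack !== null) {
      const track = await TrackPlayer.getTrack(event.nextTrack);
      setTrackTitle(track?.title);
      setTrackArtist(track?.artist);
      setTrackArtwork(track?.artwork);
    }
    // Stop as soon as the queue ends; no position/duration comparison needed.
    if (event.type === Event.PlaybackQueueEnded) {
      TrackPlayer.stop();
    }
  },
);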
Hello, I am going to create a surveillance system. I would like to get both a webcam video and a shared screen, but using addTrack I only get the media stream I declared later. Is there any way to get both streams?
Thanks.
Here is the code on the offer side:
let stream = video.srcObject;
let stream2 = shareVideo.srcObject;
stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));
stream2.getTracks().forEach(track => peerConnection.addTrack(track, stream2));
And here is the answer side:
peerConnections[id].ontrack = (event) => {
  console.log(event);
};
When I checked the log, the event has one track, and streams[0] has a MediaStream but streams[1] has no MediaStream.
I am able to make a direct call between a Circuit WebClient and the example SDK app at https://output.jsbin.com/posoko.
When running the SDK example on a PC with a second (USB) camera, switching between the built-in camera and the USB camera works fine. But when I try the same on my Android device (Samsung Galaxy S6), the switching does not work.
My code uses navigator.mediaDevices.enumerateDevices() to get the cameras and then uses the Circuit SDK function setMediaDevices to switch to the other camera.
async function switchCam() {
let availDevices = await navigator.mediaDevices.enumerateDevices();
availDevices = availDevices.filter(si => si.kind === 'videoinput');
let newDevice = availDevices[1]; // secondary camera
await client.setMediaDevices({video: newDevice.deviceId})
}
Can somebody explain why this doesn’t work on an Android device?
We have seen Android devices that don't allow calling navigator.getUserMedia while a video track (and therefore a stream) is still active. I tried your example above with a Pixel 2 without any issues though.
If you remove the video track from the stream and stop the track before calling client.setMediaDevices, the switch should work.
async function switchCam() {
const stream = await client.getLocalAudioVideoStream();
const currTrack = stream.getVideoTracks()[0];
console.log(`Remove and stop current track: ${currTrack.label}`);
stream.removeTrack(currTrack);
currTrack.stop();
let availDevices = await navigator.mediaDevices.enumerateDevices();
availDevices = availDevices.filter(si => si.kind === 'videoinput');
let newDevice = availDevices[1]; // secondary camera
await client.setMediaDevices({video: newDevice.deviceId})
}
There is a complete switch camera example on JSBin at https://output.jsbin.com/wuniwec/