My question is about webrtc negotiation.
There is a contradiction between what many online tutorials do and what is described in MDN.
In MDN, it says (link):
At the end of each generation of candidates, an end-of-candidates
notification is sent in the form of an RTCIceCandidate whose candidate
property is an empty string. This candidate should still be added to
the connection using addIceCandidate() method, as usual, in order to
deliver that notification to the remote peer.
When there are no more candidates at all to be expected during the
current negotiation exchange, an end-of-candidates notification is
sent by delivering a RTCIceCandidate whose candidate property is null.
This message does not need to be sent to the remote peer. It's a
legacy notification of a state which can be detected instead by
watching for the iceGatheringState to change to complete, by watching
for the icegatheringstatechange event.
However, in the tutorial here, they introduce the following code
function handleICECandidateEvent(event) {
  if (event.candidate) {
    sendToServer({
      type: "new-ice-candidate",
      target: targetUsername,
      candidate: event.candidate
    });
  }
}
If candidate is an empty string, it will evaluate as falsy and will not be sent via sendToServer.
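For comparison, a handler that skips only the legacy null notification but still forwards the empty-string end-of-candidates marker could look like this. This is just a sketch: sendToServer and targetUsername come from the tutorial snippet above, and pc stands for the RTCPeerConnection.

pc.onicecandidate = (event) => {
  if (event.candidate === null) {
    // Legacy "no more candidates at all" notification; nothing to send.
    return;
  }
  // Forwarded even when event.candidate.candidate === "" (end of a generation),
  // so the remote peer can deliver it to addIceCandidate().
  sendToServer({
    type: "new-ice-candidate",
    target: targetUsername,
    candidate: event.candidate
  });
};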
More interestingly, even in the same article here,
they have the following sample code:
rtcPeerConnection.onicecandidate = (event) => {
  if (event.candidate) {
    sendCandidateToRemotePeer(event.candidate)
  }
}
But right below this snippet, they say
When an ICE negotiation session runs out of candidates to propose for
a given RTCIceTransport, it has completed gathering for a generation
of candidates. That this has occurred is indicated by an icecandidate
event whose candidate string is empty ("").
You should deliver this to the remote peer just like any standard candidate, as described under Sharing a new candidate above. This
ensures that the remote peer is given the end-of-candidates
notification as well.
Actually, I have read many online tutorials, but I have never seen one that handles the empty-string candidate.
The old spec did not require sending an empty candidate, but the new spec requires sending it and calling addIceCandidate() with it.
Since Chrome still follows the old specification, an empty candidate will cause an error when passed to addIceCandidate(), so I do not send it.
Related
I am seeing these iceCandidates:
[{"candidate":"","sdpMid":"audio","sdpMLineIndex":0,"usernameFragment":"f18ab8e6"}]
[{"candidate":"","sdpMid":"0","sdpMLineIndex":0,"usernameFragment":"b06cfb12"}]
Namely, the candidate field is an empty string. It's not the empty candidate that means no more candidates are coming; this is a normal iceCandidate object except that its candidate field is empty. How is that possible?
This is valid and emitted by Firefox; Chrome does not implement it yet (but should silently ignore it when fed into addIceCandidate). It means the ICE engine has finished gathering all the candidates it needs for this sdpMid/sdpMLineIndex.
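If you want to be defensive on browsers that still reject it, a small wrapper around addIceCandidate() can swallow that specific failure. This is only a sketch; pc and candidateInit are placeholder names:

async function addRemoteCandidate(pc, candidateInit) {
  try {
    await pc.addIceCandidate(candidateInit);
  } catch (err) {
    // Some implementations throw on {candidate: ""}; ignoring that is safe,
    // because end-of-candidates can also be detected via icegatheringstatechange.
    if (candidateInit && candidateInit.candidate === "") {
      return;
    }
    throw err;
  }
}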
I'm building a video calling app using WebRTC which allows one peer to call another by selecting someone in the lobby. When peer A sends a call request, the other peer B can accept. At this point, WebRTC signaling starts:
Both peers get their local media using MediaDevices.getUserMedia()
Both peers create an RTCPeerConnection and attach event listeners
Both peers call RTCPeerConnection.addTrack() to add their local media
One peer A (the impolite user) creates an offer, calls RTCPeerConnection.setLocalDescription() to set that offer as the local description, and sends it to the WebSocket server, which forwards it to the other peer B.
The other peer B receives this offer and calls RTCPeerConnection.setRemoteDescription() to record it as the remote description
The other peer B then creates an answer and transmits it again to the first peer A.
(Steps based on https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Connectivity)
This flow is almost working well. In 1 out of 10 calls, I receive no video/audio from one of the peers (while both peers have working local video). In such a case, I have noticed that the answer SDP contains a=recvonly while this should be a=sendrecv under normal circumstances.
I have further determined that, by the time the other peer receives the offer and needs to reply with an answer, the local media on that side has sometimes not yet been added, because MediaDevices.getUserMedia() can take a while to complete. I have confirmed this order of operations by logging: the offer sometimes arrives before the local tracks have been added.
I'm assuming that I shouldn't send an answer before the local media has been added?
I'm thinking of two ways to fix this, but I am not sure which option is best, if any:
Create the RTCPeerConnection only after MediaDevices.getUserMedia() completes. In the meantime, when receiving an offer, there is no peer connection yet, so we save offers in a buffer to process them later once the RTCPeerConnection is created.
When receiving an offer, and there are no localMedia tracks yet, hold off on creating the answer until the localMedia tracks have been added.
I am having difficulty deciding which solution (or another) best matches the "Perfect Negotiation" pattern.
Thanks in advance!
Yes, it is good to add the stream before creating an offer if you do it 'statically', but the best way is to do it in the negotiationneeded event, because calling addTrack() triggers a negotiationneeded event. So you should add the stream and then call createOffer() inside the negotiationneeded handler. As for the answer, you can create it before the local media has been added with no issues, but remember that a well-established connection will let you add/remove tracks without problems (even after the SDP has been set). You didn't post any code, but remember that you also MUST exchange ICE candidates.
One last piece of advice: remember that all of the above IS asynchronous! So you should use promises, and await until the description is set, and only THEN create the offer/answer.
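A rough sketch of that flow (pc is your RTCPeerConnection, and signaler is a placeholder for whatever sends messages to your WebSocket server):

async function startLocalMedia(pc, signaler) {
  // Register the handler first; calling addTrack() below queues a
  // negotiationneeded event, and the offer is created in response to it.
  pc.onnegotiationneeded = async () => {
    try {
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      signaler.send({ type: "offer", sdp: pc.localDescription });
    } catch (err) {
      console.error("negotiation failed", err);
    }
  };

  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }
}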
Hopefully, this will help
I am using simple-peer to build a WebRTC application. To establish a connection, we first need to send the offer and receive the answer. After that, the onicecandidate event gets triggered and generates candidates, and we are required to send the candidate data to the remote peer. The remote peer will then call addIceCandidate and send back its own candidate data, which needs to be added on the local peer using addIceCandidate, and the connection gets established.
I want to understand how simple-peer handles the transfer of candidate data. The SDP data for the OFFER and ANSWER has to be transferred via a server in between; in one of the examples socket.io has been used. But how does the candidate data get transferred?
In simple-peer, the signal from peer.on('signal', data => {}) contains all the WebRTC signaling data. If you print out the value of the signal, you'll see it contains sdp, offer and answer data, all labeled to identify which is which.
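To make that concrete, here is a minimal sketch with socket.io (socket and remoteId are placeholders for your own signaling setup): with trickle enabled, SDP and ICE candidates all come out of the same 'signal' event and are fed back in with peer.signal().

const Peer = require('simple-peer');

const peer = new Peer({ initiator: true, trickle: true });

peer.on('signal', (data) => {
  // data is either SDP ({type: 'offer'|'answer', sdp: ...}) or a trickled
  // ICE candidate ({candidate: ...}); both go over the same channel.
  socket.emit('signal', { to: remoteId, data });
});

socket.on('signal', ({ data }) => {
  peer.signal(data); // feeds remote SDP or candidates into the peer
});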
In WebRTC, how are the ICE candidates selected and used? If a new candidate is added, will it be used, and if so, when?
Newly gathered candidates need to be advertised to the other end using signalling; they are then paired with the remote candidates on both ends, and ICE processing restarts for the new pairs. If any of those pairs succeeds and the priority value assigned by ICE is higher than that of the currently selected candidates, the offerer end will select the new pair as the favoured, selected candidate pair and start using it. Sometimes it may also be necessary to send an updated offer, based on the selected candidates, through signalling.
Whether to send an updated offer/answer depends on the design of the client or the ICE implementation. Basically, any change to the ICE attributes relative to the initial offer/answer requires an updated offer. As per the RFC, when ICE candidates have been selected, the offerer end shall include "a=remote-candidates" with the selected candidates and exchange the updated offer/answer. If the client doesn't explicitly require the updated offer as an indication of the remote candidates, it is fine not to send one; you can try not sending it and see what happens. It should also be harmless to always send an updated offer, as long as the other client can parse it and recognises the attributes.
With the default WebRTC implementation in Firefox or Chrome (Trickle ICE) you don't need to worry about updated candidates: the client receives an event whenever an ICE candidate becomes available, so you just retrieve it and send it to the other end. For the re-INVITE case, call createOffer once the ICE state reaches "completed" on the controlling (offerer) side.
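In browser terms, that boils down to something like the following sketch (signaler is a placeholder for your signaling channel; the re-offer part is optional and only relevant if your design needs an updated offer):

pc.onicecandidate = ({ candidate }) => {
  if (candidate) {
    signaler.send({ type: "candidate", candidate });
  }
};

pc.oniceconnectionstatechange = async () => {
  if (pc.iceConnectionState === "completed") {
    // Optional updated offer reflecting the selected candidate pair.
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signaler.send({ type: "offer", sdp: pc.localDescription });
  }
};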
Let's just accept for a moment that it is not a horrible idea to implement RPC over message queues (like RabbitMQ) -- sometimes it might be necessary when interfacing with legacy systems.
In case of RPC over RabbitMQ, clients send a message to the broker, broker routes the message to a worker, worker returns the result through the broker to the client. However, if a worker implements more than one remote method, then somehow the different calls need to be routed to different listeners.
What is the general practice in this case? All RPC over MQ examples show only one remote method. It would be nice and easy to just set the method name as the routing rule/queue name, but I don't know whether this is the right way to do it.
Let's just accept for a moment that it is not a horrible idea to implement RPC over message queues (like RabbitMQ)
it's not horrible at all! it's common, and recommended in many situations - not just legacy integration.
... ok, to your actual question now :)
from a very high level perspective, here is what you need to do.
Your request and response need to have two key pieces of information:
a correlation-id
a reply-to queue
These bits of information will allow you to correlate the original request and the response.
Before you send the request
have your requesting code create an exclusive queue for itself. This queue will be used to receive the replies.
create a new correlation id - typically a GUID or UUID to guarantee uniqueness.
When Sending The Request
Attach the correlation id that you generated, to the message properties. there is a correlationId property that you should use for this.
store the correlation id with the associated callback function (reply handler) for the request, somewhere inside of the code that is making the request. you will need this when the reply comes in.
attach the name of the exclusive queue that you created, to the replyTo property of the message, as well.
with all this done, you can send the message across rabbitmq
when replying
the reply code needs to use both the correlationId and the replyTo fields from the original message. so be sure to grab those
the reply should be sent directly to the replyTo queue. don't use standard publishing through an exchange. instead, send the reply message straight to the replyTo queue using the "send to queue" feature of whatever library you're using.
be sure to include the correlationId in the response, as well. this is the critical part to answer your question
when handling the reply
The code that made the original request will receive the message from the replyTo queue. it will then pull the correlationId out of the message properties.
use the correlation id to look up the callback method for the request... the code that handles the response. pass the message to this callback method, and you're pretty much done.
the implementation details
this works, from a high level perspective. when you get down into the code, the implementation details will vary depending on the language and driver / library you are using.
most of the good RabbitMQ libraries for any given language will have Request/Response built in to them. If yours doesn't, you might want to look for a different library. Unless you are writing a patterns based library on top of the AMQP protocol, you should look for a library that has common patterns implemented for you.
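for example, with amqplib in node.js the whole round trip can be sketched roughly like this (queue names and helper names are made up for illustration, they are not part of any library):

const amqp = require('amqplib');
const { randomUUID } = require('crypto');

// requester side: per-request exclusive reply queue + correlation id
async function rpcRequest(channel, requestQueue, payload) {
  const { queue: replyQueue } = await channel.assertQueue('', { exclusive: true });
  const correlationId = randomUUID();

  let resolveReply;
  const reply = new Promise((resolve) => { resolveReply = resolve; });

  // start listening on the reply queue before publishing the request
  await channel.consume(replyQueue, (msg) => {
    // match the reply to the request via the correlation id
    if (msg !== null && msg.properties.correlationId === correlationId) {
      resolveReply(msg.content.toString());
    }
  }, { noAck: true });

  channel.sendToQueue(requestQueue, Buffer.from(JSON.stringify(payload)), {
    correlationId,
    replyTo: replyQueue
  });

  return reply;
}

// worker side: reply straight to msg.properties.replyTo, echoing the correlation id
async function serveRpc(channel, requestQueue, handler) {
  await channel.assertQueue(requestQueue);
  channel.consume(requestQueue, async (msg) => {
    const result = await handler(JSON.parse(msg.content.toString()));
    channel.sendToQueue(msg.properties.replyTo, Buffer.from(JSON.stringify(result)), {
      correlationId: msg.properties.correlationId
    });
    channel.ack(msg);
  });
}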
If you need more information on the Request/Reply pattern, including all of the details that I've provided here (and more), check out these resources:
My own RabbitMQ Patterns email course / ebook
RabbitMQ Tutorials
Enterprise Integration Patterns - be sure to buy the book for the complete description / implementation pattern. it's worth having this book
If you're working in Node.js, I recommend using the wascally library, which includes the Request/Reply feature you need. For Ruby, check out bunny. For Java or .NET, look at some of the many service bus implementations around. In .NET, I recommend NServiceBus or MassTransit.
I've found that using a new reply-to queue per request can get really inefficient, especially when running RabbitMQ on a cluster.
As suggested in the comments, direct reply-to seems to be the way to go. I've documented here all the options I tried before settling on that one.
I wrote an npm package amq.rabbitmq.reply-to.js that:
Uses direct reply-to - a feature that allows RPC (request/reply) clients with a design similar to that demonstrated in tutorial 6 (https://www.rabbitmq.com/direct-reply-to.html) to avoid declaring a response queue per request.
Creates an event emitter where rpc responses will be published by correlationId
as suggested by https://github.com/squaremo/amqp.node/issues/259#issuecomment-230165144
Usage:
const rabbitmqreplyto = require('amq.rabbitmq.reply-to.js');

const serverCallbackTimesTen = (message, rpcServer) => {
  const n = parseInt(message);
  return Promise.resolve(`${n * 10}`);
};

let rpcServer;
let rpcClient;

Promise.resolve().then(() => {
  const serverOptions = new rabbitmqreplyto.RpcServerOptions(
    /* url */ undefined,
    /* serverId */ undefined,
    /* callback */ serverCallbackTimesTen);
  return rabbitmqreplyto.RpcServer.Create(serverOptions);
}).then((rpcServerP) => {
  rpcServer = rpcServerP;
  return rabbitmqreplyto.RpcClient.Create();
}).then((rpcClientP) => {
  rpcClient = rpcClientP;
  const promises = [];
  for (let i = 1; i <= 20; i++) {
    promises.push(rpcClient.sendRPCMessage(`${i}`));
  }
  return Promise.all(promises);
}).then((replies) => {
  console.log(replies);
  return Promise.all([rpcServer.Close(), rpcClient.Close()]);
});
//['10',
// '20',
// '30',
// '40',
// '50',
// '60',
// '70',
// '80',
// '90',
// '100',
// '110',
// '120',
// '130',
// '140',
// '150',
// '160',
// '170',
// '180',
// '190',
// '200']
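For comparison, the same flow with direct reply-to and plain amqplib (no wrapper package) looks roughly like this; the connection URL and queue name are placeholders:

const amqp = require('amqplib');
const { randomUUID } = require('crypto');

async function directReplyToRpc(requestQueue, payload) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  const correlationId = randomUUID();

  let resolveReply;
  const reply = new Promise((resolve) => { resolveReply = resolve; });

  // Consuming from the pseudo-queue in no-ack mode must start before publishing.
  await ch.consume('amq.rabbitmq.reply-to', (msg) => {
    if (msg !== null && msg.properties.correlationId === correlationId) {
      resolveReply(msg.content.toString());
    }
  }, { noAck: true });

  ch.sendToQueue(requestQueue, Buffer.from(JSON.stringify(payload)), {
    correlationId,
    replyTo: 'amq.rabbitmq.reply-to'
  });

  const result = await reply;
  await conn.close();
  return result;
}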