mDNS Support for WebRTC in Google Chrome M74 - webrtc

Google Chrome has scheduled, for the M74 release, that mDNS support for local ICE candidates will be included in the browser to increase privacy.
This feature is controlled by the feature flag -enable-webrtc-hide-local-ips-with-mdns.
I am trying to test the effect of mDNS support on WebRTC users in Google Chrome. I am testing with my custom WebRTC app and executed the test steps below before making a call:
1- The WebRTC clients are logged in on identical mDNS broadcast domains,
2- mDNS in Google Chrome Canary is enabled via the -enable-webrtc-hide-local-ips-with-mdns flag for both parties,
After the call was set up, I saw that the WebRTC agents had replaced their private IPs with anonymized local hostnames ending in ".local".
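For reference, here is roughly how the gathered candidates can be inspected from JavaScript (a minimal sketch; the STUN URL is only an example):

// Minimal sketch: gather ICE candidates and log them. With the mDNS
// flag enabled, host candidates show a "<uuid>.local" hostname
// instead of the private IP.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});
pc.onicecandidate = (event) => {
  if (event.candidate) {
    console.log(event.candidate.candidate);
  }
};
pc.createDataChannel('probe'); // enough to trigger ICE gathering
pc.createOffer().then((offer) => pc.setLocalDescription(offer));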
I also detected that call signaling completes and the media stream is established successfully, but the chosen ICE candidates are relay candidates (using the TURN server), not host candidates.
I think the parties cannot resolve each other's anonymized addresses and choose to establish communication over the existing TURN server (i.e. use relay candidates).
I am sure that the clients are on the same subnet, so why can they not resolve each other's anonymized IPs? I know that the nodes broadcast their anonymized hostnames via mDNS on port 5353, and I expected that they would resolve the addresses easily. Is there anything that I missed during testing?
Your assistance would be highly appreciated. Thanks a lot.

Related

Agora-SDK [ERROR]: [866CE]None Ice Candidate not allowed

I had Agora RTC integrated into my website a few months back, and it was running smoothly until recently, when the remote video stopped being shown. Upon checking the console, there is an error that says
Agora-SDK [ERROR]: [866CE]None Ice Candidate not allowed.
Agora has provided the reason and a solution for this problem in their documentation:
Reason:
When establishing a WebRTC connection, the SDK fails to find any ICE (Interactive Connectivity Establishment) candidate.
NOTE: A candidate contains the IP address and port information for connecting to a remote device.
Solution:
The type of candidates used for connection depends on whether you have enabled cloud proxy or not. Choose one of the following solutions accordingly.
If you have enabled cloud proxy, the SDK gets relay candidates from a TURN server. Check whether you have whitelisted the IP addresses and ports that Agora provides for cloud proxy, and ensure that the local client can connect to the TURN server.
If you have not enabled cloud proxy, the SDK gets host candidates from the local device. In this case, the error is usually caused by the security settings of the local device.
Check whether the browser has any plugins that disable WebRTC.
Ensure that you have enabled UDP in the system firewall, and added the specified domains and ports to the whitelist.
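One way to verify the TURN leg independently of the SDK is to force relay-only candidate gathering in the browser. A minimal sketch, where the TURN URL and credentials are placeholders for the values Agora provides:

// Sketch: test TURN reachability by allowing only relay candidates.
// The TURN URL and credentials below are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turn:turn.example.com:3478',
    username: 'user',
    credential: 'pass'
  }],
  iceTransportPolicy: 'relay' // suppress host and srflx candidates
});
pc.onicecandidate = (event) => {
  if (event.candidate) {
    // Any candidate printed here is a relay candidate, so the TURN
    // server is reachable and the credentials were accepted.
    console.log('relay candidate:', event.candidate.candidate);
  }
};
pc.createDataChannel('probe');
pc.createOffer().then((offer) => pc.setLocalDescription(offer));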

Stuck in WebRTC ICE checking state

I am trying to get a browser client to connect to my C++ Linux application using WebRTC, so my environment is not the typical WebRTC triangle where two browsers set up a WebRTC call through a server. Instead, the browser client side is typical, but my application acts as both the server and the remote client: it does the signalling and also streams the SRTP media using GStreamer.
I am successful up to a point: I have successfully exchanged the ICE candidates, and the offer/answer SDP exchange is also successful. The browser's ICE connection state goes to "checking", and at that point I am stuck.
Question: Is the server or remote browser involved in the ICE checking operations? That is, does the browser do the ICE checking with the STUN server or with the actual candidate address from the remote end? The latter would imply that my C++ application has to be involved in that checking process.
Thanks,
-Andres
Your server needs to at least respond to the STUN binding requests that are sent as part of ICE.
If your server always has a public IP, using ice-lite (see RFC 5245) will make your life a lot easier.
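To illustrate what responding to a STUN binding request involves on the wire, here is a minimal Node.js sketch that answers a plain binding request with an XOR-MAPPED-ADDRESS attribute. Note that this is not a complete ICE responder: real connectivity checks also carry USERNAME and MESSAGE-INTEGRITY (an HMAC-SHA1 keyed with the ice-pwd) plus FINGERPRINT, which an endpoint must validate and include in its response.

// Minimal sketch of a STUN binding responder (RFC 5389 framing).
// Real ICE checks additionally require MESSAGE-INTEGRITY and
// FINGERPRINT; this only shows the basic mechanics.
const dgram = require('dgram');
const MAGIC_COOKIE = 0x2112a442;
const socket = dgram.createSocket('udp4');

socket.on('message', (msg, rinfo) => {
  // 20-byte STUN header: type, length, magic cookie, transaction ID.
  if (msg.length < 20 || msg.readUInt16BE(0) !== 0x0001) return; // not a binding request
  const transactionId = msg.slice(8, 20);

  // XOR-MAPPED-ADDRESS attribute (type 0x0020) for IPv4.
  const attr = Buffer.alloc(12);
  attr.writeUInt16BE(0x0020, 0);                             // attribute type
  attr.writeUInt16BE(8, 2);                                  // value length
  attr.writeUInt8(0x01, 5);                                  // family: IPv4
  attr.writeUInt16BE(rinfo.port ^ (MAGIC_COOKIE >>> 16), 6); // x-port
  const ip = rinfo.address.split('.').map(Number);
  const addr = ((ip[0] << 24) | (ip[1] << 16) | (ip[2] << 8) | ip[3]) >>> 0;
  attr.writeUInt32BE((addr ^ MAGIC_COOKIE) >>> 0, 8);        // x-address

  // Binding success response (type 0x0101), same transaction ID.
  const header = Buffer.alloc(20);
  header.writeUInt16BE(0x0101, 0);
  header.writeUInt16BE(attr.length, 2);
  header.writeUInt32BE(MAGIC_COOKIE, 4);
  transactionId.copy(header, 8);

  socket.send(Buffer.concat([header, attr]), rinfo.port, rinfo.address);
});

socket.bind(3478); // ICE checks actually arrive on your media port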

Can I simplify WebRTC signalling for computers on the same private network?

WebRTC signalling is driving me crazy. My use-case is quite simple: a bidirectional audio intercom between a kiosk and a control room webapp. Both computers are on the same network. Neither has internet access, and all machines have known static IPs.
Everything I read wants me to use STUN/TURN/ICE servers. The list of acronyms is endless, contributing to my migraine, but if this were a standard application, I'd just open a port, tell the other client about it (I can do this via the webapp if I need to) and have the other side connect.
Can I do this with WebRTC? Without running a dozen signalling servers?
For the sake of examples, how would you connect a browser running on 192.168.0.101 to one running on 192.168.0.102?
STUN/TURN is different from signaling.
STUN/TURN in WebRTC are used to gather ICE candidates. Signaling is used to transmit the session description (offer and answer) between these two PCs.
You can use a free STUN server (like stun.l.google.com or stun.services.mozilla.org). There are also free TURN servers, but not many (these are resource-expensive). One is numb.viagenie.ca.
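For example, pointing a peer connection at one of these takes a single configuration object (a minimal sketch):

// Sketch: configure the peer connection with a free STUN server.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' }
    // A TURN entry would additionally need username and credential.
  ]
});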
Now, there is no standard signaling server, because signaling is custom and can be done in many ways. Here's an article that I wrote. I ended up using STOMP on the client side and Spring on the server side.
I guess you can tamper with the SDP and inject the ICE candidates statically, but you'll still need to exchange the SDP (which is dynamically generated each session) between these two PCs somehow. Even so, taking into account that the configuration will not change, I guess you can exchange it once (by means of copy-paste :) ), store it somewhere and use it every time.
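If you go the copy-paste route, one detail makes it practical: wait until ICE gathering completes before grabbing the SDP, so that every candidate is embedded in it and no separate candidate exchange is needed. A minimal sketch:

// Sketch: non-trickle ICE — wait for candidate gathering to finish
// so the local description already contains every candidate.
async function offerWithAllCandidates(pc) {
  await pc.setLocalDescription(await pc.createOffer());
  await new Promise((resolve) => {
    if (pc.iceGatheringState === 'complete') return resolve();
    pc.addEventListener('icegatheringstatechange', () => {
      if (pc.iceGatheringState === 'complete') resolve();
    });
  });
  return pc.localDescription.sdp; // copy-paste this to the other side
}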
If your end-points have static IPs then you can ignore STUN, TURN and ICE, which are just power-tools to drill holes in firewalls. Most people aren't that lucky.
Due to how WebRTC is structured, end-points do need a way to exchange call setup information (SDP) like media ports and key information ahead of time. How you get that information from A to B and back to A is entirely up to you ("signaling server" is just a fancy word for this), but most people use something like a web socket server, the tic-tac-toe of client-initiated communication.
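For illustration, such a relay can be tiny. A minimal sketch using Node.js and the ws package, which simply forwards every message to the other connected clients (the port is arbitrary):

// Minimal sketch of a signaling relay: every message from one client
// is forwarded verbatim to all other connected clients.
// Requires: npm install ws
const { WebSocketServer, WebSocket } = require('ws');
const wss = new WebSocketServer({ port: 1337 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString()); // relay SDP/ICE blobs as-is
      }
    }
  });
});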
I think the simplest way to make this work on a private network without an internet connection is to install a basic web socket server on one of the machines.
As an example I recommend the very simple https://github.com/emannion/webrtc-web-socket which worked on my private network without an internet connection.
Follow the instructions to install the web socket server on e.g. 192.168.0.101, then have both end-points connect to 192.168.0.101:1337 with Chrome or Firefox. Share the camera on both ends in the basic demo web UI, hit Connect, and you should be good to go.
If you need to do this entirely without any server, then this answer to a related question at least highlights the information you'd need to send across (in a cut'n'paste demo).

Connect to specific user from STUN server in WebRTC

I'm trying to achieve a peer-to-peer video conference using Google's STUN server.
I can connect to peers via the STUN server, but only randomly, because STUN gives multiple, random addresses and I connect with whatever I get.
But is there any way to connect to a specific peer via the STUN server, for a login-based or room-based system?
I want to achieve something like https://apprtc.appspot.com/
You need to design your signalling method (this is up to the application developer), which is independent of STUN.
WebRTC does not specify the mechanism for signalling. Signalling is the method whereby users discover each other and establish that a call (media streams between two peers) is going to take place.
The 'discovery' process could involve a registration-based system (e.g. using a SIP proxy) or be room-based, where two users have access to a 'room' (by knowing the credentials or some other means of authentication). Once two peers have found each other, their browsers then need to share and negotiate network topology and media capabilities to ensure that the streams can reach the intended destination and can be encoded/decoded properly.
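Once two peers have discovered each other, the negotiation itself looks the same regardless of the signaling transport. A minimal sketch, where sendSignal() and onSignal() are placeholders for whatever channel (WebSocket, long polling, ...) your application uses:

// Sketch: offer/answer negotiation over an arbitrary signaling
// channel. sendSignal()/onSignal() are placeholder helpers.
const pc = new RTCPeerConnection();

async function call() {                       // one side calls call()
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal({ type: 'offer', sdp: offer.sdp });
}

onSignal(async (msg) => {
  if (msg.type === 'offer') {                 // callee side
    await pc.setRemoteDescription(msg);
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    sendSignal({ type: 'answer', sdp: answer.sdp });
  } else if (msg.type === 'answer') {         // back on the caller
    await pc.setRemoteDescription(msg);
  }
});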

WebRTC HowTo PeerConnection via LAN with 2 Browsers

For a few days now I've been trying to build a basic WebRTC video chat. I've got some demos running locally, even via LAN. But now I want to build one on my own, starting from the real basics, without all the overhead that some demos come with.
But I still don't get a complete peer connection.
E.g. this example seems to be broken, because I can't call "createSignalingChannel();": w3.org/TR/webrtc/#simple-example
Some other examples (https://webrtc-experiment.appspot.com/) want me to link their scripts, but I won't do this, because I want to understand the magic of the peer connection and how to get a handshake between 2 browsers.
I have also explored examples using Google App Engine, but that's not what I want.
I want to do it in really simple JS and HTML, with just the minimum of what is necessary.
Here is my code:
https://github.com/mexx91/basicVideoRTC EDIT: Should work now
So what will I have to add to get a handshake and peer connection, so that I can send e.g. the mediaStream to each other?
Thanks a lot!
createSignalingChannel() is only pseudo-code to illustrate the existence of a separate channel. You need a separate message channel for the initial connection handling.
You can achieve that with hosted services like Pusher, Brightcontext or PubNub, or you can host your own backend with open-source projects like socket.io or SignalR.
Then you just need to send the offers, answers and iceCandidates through your separate channel.
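For the iceCandidates part specifically, trickle ICE over the same channel looks roughly like this (again with placeholder sendSignal()/onSignal() helpers):

// Sketch: trickle ICE over the same signaling channel.
// pc is the RTCPeerConnection from your app.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    sendSignal({ type: 'candidate', candidate: event.candidate.toJSON() });
  }
};

onSignal(async (msg) => {
  if (msg.type === 'candidate') {
    // Apply remote candidates once the remote description is set.
    await pc.addIceCandidate(msg.candidate);
  }
});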
List of Realtime Services: http://www.leggetter.co.uk/real-time-web-technologies-guide
Imagine a video conferencing web app, which users A and B originally access from some web server. Suppose the web app supports presence, so the web server knows who is currently online. Imagine the UI allows A to try to place a video call to B. Via, say, XMLHttpRequest(), A's browser informs the server that this is wanted, and B's JavaScript pops up something saying that A wants to call B. No WebRTC has happened at all yet. But at this stage, A can indirectly communicate with B by sending messages via the server, e.g. using XMLHttpRequest. In WebRTC parlance, this is the "signalling channel".

So A and B can both interact with their ICE agents to discover candidate addresses and SDP descriptions, and send these to each other, via the server, over this signalling channel. E.g. the web app on A calls a WebRTC API to get its ICE candidates and packages these up as it sees fit to send to B. B's browser receives this message from the server (e.g. over a WebSocket or long poll), unpacks it, and formats it as needed to hand to the ICE agent on B, using the RTCPeerConnection object. Similarly, the SDP offer/answer can be sent between the two apps and passed through to the ICE agents in the browsers, to get agreed media formats etc.

At that stage, media connections can be set up by the browsers. Media streams are added to the RTCPeerConnection initially (they aren't communicating yet, but they have attributes that can be queried to describe the codec etc.), and when the API is asked to create an SDP description, it does that using these attributes, but adjusts the IP address and port based on how the ICE agent on each local browser has figured out which addresses can reach that local browser / port (NAT traversal).