Does exchanging SDP insecurely jeopardize the security of a peer connection? [duplicate] - webrtc

I have a problem. I've developed a web app for one-to-one video calls in the browser using WebRTC, with a signalling server on Node.js (listening e.g. on port 8181).
Now I would like to implement a MITM attack. I was thinking that Peer_1 should create two RTC peer connections: one to the second peer (Peer_2) and one to the MITM. The same for the second peer.
I was also thinking that the signalling server needs to listen on another port for each RTC peer connection received from the two peers (e.g. 8282 for Peer_1 and 8383 for Peer_2).
Am I right? I think so because the signalling server is implemented for one-to-one communication.
In this way, the signalling server on port 8181 handles the end-to-end signalling for Peer_1 and Peer_2, on 8282 there is the signalling path for Peer_1 and the MITM, and on 8383 for the MITM and Peer_2.
Am I right or not? Thanks for the support.

Man in the middle refers to interception during transmission. WebRTC itself is secured against that using DTLS and key exchange, so the weak point is usually the signaling server chosen by the application instead.
What you describe, however, sounds more like a man on both ends. You have to trust the service (the server) to guarantee whom you're being connected to. If that server is compromised, or either client is compromised (say, by injection), then there's no guarantee whom you're talking to, since a client can easily forward a transmission to another party.
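One practical check against a tampered signalling path is to compare the DTLS certificate fingerprint that travels inside the SDP over a separate, trusted channel; a man in the middle who rewrites the SDP must also substitute this fingerprint. A minimal TypeScript sketch (the helper name is mine, purely illustrative):

// Extract the a=fingerprint value from a session description so it can be
// verified out-of-band (e.g. read aloud or compared over another channel).
function extractDtlsFingerprint(description: RTCSessionDescriptionInit): string | null {
  const match = description.sdp?.match(/^a=fingerprint:\S+\s+(\S+)/m);
  return match ? match[1] : null;
}

// Usage (illustrative):
// const offer = await peerConnection.createOffer();
// console.log("compare with the other party:", extractDtlsFingerprint(offer));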

Related

Understanding SFU's, TURN servers in WebRTC

If I am building a WebRTC app and using a Selective Forwarding Unit media server, does this mean that I will have no need for STUN / TURN servers?
From what I understand, STUN servers are used for clients to discover their public IP / port, and TURN servers are used to relay data between clients when they are unable to connect directly to each other via STUN.
My question is, if I deploy my SFU media server with a public address, does this eliminate the need for STUN and TURN servers? Since data will always be relayed through the SFU and the clients / peers will never actually talk to each other directly?
However, I noticed that the installation guide for Kurento (a popular media server with SFU functionality) contains a section about configuring STUN or TURN servers. Why would STUN or TURN servers be necessary?
You should still use a TURN server when running an SFU. To understand why, diving into ICE a little bit will help. All SFUs work a little differently, but this is true for most.
For each PeerConnection the SFU will listen on a random UDP (and sometimes TCP) port.
This IP/port combination is given to each peer, who then attempts to contact the SFU.
The SFU then checks whether the incoming packets contain a valid hash (derived from the ICE ufrag/pwd). This ensures no attacker can connect to this port.
A TURN server works like this:
It provides a single allocation port that peers can connect to. You can use UDP, DTLS, TCP or TLS, and you need a valid username/password.
Once authenticated, you send packets via this connection and the TURN server relays them for you.
The TURN server will then listen on a random port so that others can send traffic back to the peer.
So a TURN server has a few nice properties that an SFU doesn't:
You only have to listen on a single public port. If you are communicating with a service that is not on the public internet, you can just have your clients connect only to the allocation.
You can also make your service available via UDP, DTLS, TCP and TLS. Most ICE implementations only support UDP.
These two factors are really important in government/hospital situations. Those networks often only allow TLS traffic over port 443, so a TURN server is your only solution (you run your allocation on TLS over 443).
So you need to design your system around your needs, but IMO you should always run a well-configured TURN server in real-world environments.
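As a rough illustration of that last point, here is how a client might be configured to reach a TURN allocation over TLS on port 443 and to force all traffic through the relay; the URL and credentials below are placeholders, not a real service:

const pc = new RTCPeerConnection({
  iceServers: [
    {
      // "turns:" means TURN over TLS; running it on 443 passes firewalls that only allow TLS
      urls: "turns:turn.example.com:443?transport=tcp",
      username: "user",
      credential: "secret",
    },
  ],
  // Only gather relay candidates, so media always flows through the TURN allocation
  iceTransportPolicy: "relay",
});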

Difference between STUN/TURN(coTURN) servers and Signaling servers (written with socket.io/websocket) in WebRTC?

I am building this video teaching site and did some research and got a good understanding, except for this one thing. So when a user wants to connect to another user, P2P, I need a signaling server to get their public IPs and get them connected. Now STUN is doing that job, and TURN will relay the media if the peers cannot connect. Now if I write a signaling server with WebSocket to communicate the SDP messages and have ICE working, do I need coTURN installed? What will be the job of each of them exactly?
Where exactly I am confused is about the work of my simply written WebSocket signaling server (from what I saw in different tutorials) versus the work of the coTURN server I'll install, and how to connect them with the media server I'll install.
A second question: is there a way to use P2P when there are only two/three participants and get the media servers involved if there are more than that, so that I don't use up the participants' bandwidth too much?
The signaling server is required to exchange messages between peers (SDP packets) until they have established a P2P connection.
A STUN server is there to help a peer discover information about its public IP and to open up firewall ports. The main problem this is solving is that a lot of devices are behind NAT routers within small private networks; NAT basically allows outgoing requests and their response, but blocks any other "unsolicited" incoming requests. You therefore have a Catch-22 scenario when both peers are behind a NAT router and could make an outgoing request, but have nowhere to send it to since the opposite peer doesn't expose anything to make a request to. STUN servers act as a temporary middleman to make requests to, which opens a port on the NAT device to allow the response to come back, which means there's now a known open port the other peer can use. It's a form of hole-punching.
A TURN server is a relay in a publicly accessible location, in case a P2P connection is impossible. There are still cases where hole-punching is unsuccessful, e.g. due to more restrictive firewalls. In those cases the two peers simply cannot talk 1-on-1 directly, and all their traffic is relayed through a TURN server. That's a 3rd party server that both peers can connect to unrestrictedly and that simply forwards data from one peer to the other. One popular implementation of a TURN server is coturn.
Yes, basically all those functions could be fulfilled by a single server, but they’re deliberately separated. The WebRTC specification has absolutely nothing to say about signaling servers, since the signaling mechanism is very unique to each application and could take many different forms. TURN is very bandwidth intensive and must usually be delegated to a larger server farm if you’re hoping to scale at all, so is impractical to mix in with any of the other two functions. So you end up with three separate components.
Regarding multi-peer connections: yes, you can set up a P2P group chat just fine. However, each peer will need to be connected to every other peer, so the number of connections and bandwidth per peer increases with each new peer. That’s probably going to work okay for 3 or 4 peers, but beyond that you may start to run into bandwidth and CPU limits of individual peers, especially if you’re doing decent quality video streaming.
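To make the scaling concern concrete, here is a small TypeScript sketch (with illustrative names) of a full-mesh layout where each participant holds one RTCPeerConnection per remote peer:

// One RTCPeerConnection per remote peer; with n participants each peer keeps
// n - 1 connections, and the room as a whole carries n * (n - 1) / 2 links.
const connections = new Map<string, RTCPeerConnection>();

function connectionForPeer(peerId: string, config: RTCConfiguration): RTCPeerConnection {
  let pc = connections.get(peerId);
  if (!pc) {
    pc = new RTCPeerConnection(config);
    connections.set(peerId, pc);
  }
  return pc;
}

const totalLinks = (n: number) => (n * (n - 1)) / 2; // 4 peers -> 6 links, 6 peers -> 15 links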

Does WebRTC allow actual peer-to-peer communication?

Is the signaling server used only the first time to establish a connection between 2 peers or is it also used to send and receive data-streams between the peers?
According to the w3c proposal:
An RTCPeerConnection allows two users to communicate directly, browser to browser. Communications are coordinated via a signaling channel which is provided by unspecified means, but generally by a script in the page via the server, e.g. using XMLHttpRequest.
So the server is only used for signaling, not for data transmission. But signaling is not limited to establishing the first connection. The signaling channel is also used for transmitting error messages, metadata such as codecs and codec settings, network data, and keys for secure transmission.
This depends on the network configuration.
If at least one of the peers is not behind a NAT firewall, the peer that is directly on the internet acts as the server, and the signalling server is no longer used after the connection is established.
If both peers are behind a NAT appliance, under certain circumstances it might be possible to negotiate a client-server connection between the peers, and the data is again sent directly between the two peers.
If both peers are behind a NAT firewall that is locked down, all the traffic between the peers has to be relayed through a server (a TURN server).
Notice also that in the first two cases, a STUN server is used to establish the connection. If the full data stream has to be relayed through a server, a TURN server is used.
Look at the good explanation in the article and video on html5rocks. They claim only about 14% of all connections need TURN, which seems a really low number to me (it would correspond to only about 37% of all clients being behind a locked-down NAT router, since 0.37² ≈ 0.14).

Is STUN server absolutely necessary for webrtc when I have a socket.io based signaling server?

My understanding about STUN server for webrtc is that when the clients are behind the NAT (in most cases, if not all), the STUN server will help the webrtc clients to identify their addresses and ports. And I also read some article saying that a signaling server is needed for webrtc clients. The signaling server could be a web server, socket.io, or even emailing a url. My first question would be: is the STUN server the signaling server?
Actually now I built a very simple socket.io based service which broadcasts client's session descriptions to all other clients. So I believe the socket.io based server should have enough knowledge about the clients' addresses and ports information. If this is the case, why do we bother to have another STUN server?
The STUN server is NOT the signalling server.
The purpose of the signalling server is to pass information between the peers at the start of the session (how could they send an offer without knowing whom to send it to?). This information includes the SDPs that are created for the offers and the answers, and also any ICE candidates that are created by either party.
The reason to have a STUN server is so that the two peers can send media to each other. The media streams will not hit your signalling server but instead go straight to the other party (the definition of a peer-to-peer connection); the exception would be when a TURN server is used.
Media cannot magically go through a NAT or a firewall, because the two parties do not have direct access to each other (like they would if they were on the same LAN).
In short, a STUN server is needed the large majority of the time, when the two parties are not on the same network (to get valid connection candidates for peer-to-peer media streaming), and a signalling server is ALWAYS needed (whether they are on different networks or not) so that the negotiation and connection build-up can take place; a minimal example of that exchange is sketched just below. Good explanation of the connection and streaming process
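A minimal sketch of that always-required signalling exchange, in TypeScript; sendToPeer and onSignal are placeholders for whatever transport you use (socket.io, WebSocket, even copy-paste):

declare function sendToPeer(msg: any): void;            // your transport, whatever it is
declare function onSignal(handler: (msg: any) => void): void;

const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

// Forward every local ICE candidate over the signalling channel.
pc.onicecandidate = ({ candidate }) => {
  if (candidate) sendToPeer({ candidate });
};

// Caller side: create and send the offer.
async function call(): Promise<void> {
  await pc.setLocalDescription(await pc.createOffer());
  sendToPeer({ description: pc.localDescription });
}

// Both sides: apply whatever arrives, answering if it was an offer.
onSignal(async (msg) => {
  if (msg.description) {
    await pc.setRemoteDescription(msg.description);
    if (msg.description.type === "offer") {
      await pc.setLocalDescription(await pc.createAnswer());
      sendToPeer({ description: pc.localDescription });
    }
  } else if (msg.candidate) {
    await pc.addIceCandidate(msg.candidate);
  }
});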
STUN is used to implement the ICE protocol, which tries to find a working network path between the two clients. ICE will also use TURN relay servers (if configured in the RTCPeerConnection) for cases where the two clients (due to NAT/Firewall restrictions) can't make a direct peer-to-peer connection.
STUN servers are used to identify the external address used by the computer on the internet (the outside-the-NAT address) and to attempt to set up a port mapping usable by the peer (if the NAT isn't "symmetric") -- contacting the STUN server will tell you the external IP and port to try to use in ICE. These are the ICE candidates included in the SDP or in the trickle-ICE messages.
For almost-guaranteed connectivity, a service should provide TURN servers (preferably supporting both UDP and TCP TURN, though UDP is far preferred). Note that unlike STUN, TURN can use appreciable bandwidth and so can cost money to host. Luckily, most connections succeed without needing to use a TURN server (i.e. they run peer-to-peer).
NAT (Network Address Translation) is used to translate a "private IP", which is valid only in the LAN, into a "public IP", which is valid in the WAN.
The problem is that the public IP is only visible from the outside, so we need a STUN or TURN server to report that public IP back to you.
This process enables a WebRTC peer to get a publicly accessible address for itself and then pass it on to another peer via a signaling mechanism.
A STUN server is used to get an external network address.
TURN servers are used to relay traffic if direct (peer to peer) connection fails.
For more, you can also refer to this link: https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/#what-is-signaling
In your case, you need STUN. Most clients will be behind NAT, so you need STUN to get the clients' public IPs. But if both your clients were not behind NAT, then you wouldn't need STUN. More generally, no, a STUN server is not strictly required. I know this because I successfully connected two WebRTC peers without a STUN server. I used the example code from aiortc, a Python WebRTC/ORTC library, with both clients running locally on my laptop. My signalling channel was manual copy-pasting: I literally copied the session description (SD) from one peer to the other, then copied the SD from the second peer back to the first.
From the ICE RFC (RFC 8445), which WebRTC uses:
An ICE agent SHOULD gather server-reflexive and relayed candidates.
However, use of STUN and TURN servers may be unnecessary in certain
networks and use of TURN servers may be expensive, so some
deployments may elect not to use them.
It's not clear that STUN is a requirement for ICE, but the above says it may be unnecessary.
However, signalling has nothing to do with it. This question actually stems from not understanding what STUN does, and how STUN interplays with signalling. I would argue the other 3 answers here do not actually answer these 2 concerns.
Pre-requisite: Understand the basic concepts of NAT. STUN is a tool to go around NAT, so you have to understand it.
Signalling: Briefly, in WebRTC you need to implement your own signalling strategy. You can manually type the local session description created by one peer into the other peer, use WebSockets, socket.io, or any other method (I saw a joke that smoke signals could be used, but how are you going to pass the following session description (aka SDP message) through a smoke signal...). Again, I copy-pasted something very similar to this:
v=0
o=alice 2890844526 2890844526 IN IP4 host.anywhere.com
s=
c=IN IP4 host.anywhere.com
t=0 0
m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 51372 RTP/AVP 31
a=rtpmap:31 H261/90000
m=video 53000 RTP/AVP 32
a=rtpmap:32 MPV/90000
When both peers are not behind NAT, you don't need a STUN server, as the IP addresses located in the session description (the c= field above, known as connection data) generated by each peer would be enough for each peer to send datagrams or packets to each other. In the example above, they've provided the domain name instead of IP address, host.anywhere.com, but this can be resolved to an A record. (Study DNS for more information).
Why don't you need a STUN server in this case? From RFC8445:
There are different types of candidates; some are derived from physical or logical network interfaces, and others are discoverable via STUN and TURN.
If you're not using NAT, the client already knows the IP address which peers can directly address, so the additional ICE candidates that STUN would generate would not be helpful (it would just give you the same IP address you already know about).
But when a client is behind a NAT, the IP it thinks it has won't help a peer contact it. It's like me telling you my IP address is 192.168.1.235; it really is, but it's my private IP. The NAT might be on the router, and your client may have no way of asking it for the public IP. So STUN is a tool for dealing with this. Specifically,
It provides a means for an endpoint to determine the IP address and port allocated by a NAT that corresponds to its private IP address and port.
STUN basically lets the client find out what its public IP address is. If you were hosting a Call of Duty server from your laptop and had port-forwarded a port to your machine in the router settings, you still had to look up your public IP address on a website like https://whatismyipaddress.com/. STUN lets a client do this for itself, without you opening a browser.
Finally, how does STUN interplay with signalling?
The ICE candidates are generated locally, with the help of STUN (to get the clients' public IP addresses when they're behind NAT) and possibly TURN. Session descriptions are sent to the peer over the signalling channel. If you don't use STUN, you might find that all the ICE candidates that ICE tries fail, and a connection (other than the signalling channel) never gets established.
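A small way to see this interplay for yourself (TypeScript sketch, using Google's public STUN server mentioned elsewhere on this page): log the candidate types that ICE gathers. Without a STUN server you normally only get "host" candidates (private addresses); with one configured you should also see "srflx" (server-reflexive, i.e. public) candidates.

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // remove this line to compare
});

pc.onicecandidate = ({ candidate }) => {
  if (!candidate) return; // null marks the end of gathering
  console.log(candidate.type, candidate.candidate); // "host", "srflx", "relay", ...
};

// There must be something to negotiate before candidates are gathered.
pc.addTransceiver("audio");
pc.createOffer().then((offer) => pc.setLocalDescription(offer));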

How does WebRTC work?

I'm interested in peer-to-peer connections in the browser. Since this seems to be possible with WebRTC, I'm wondering how exactly it works.
I've read some explanations and seen diagrams about it, and now it's clear to me that the connection establishment works via the server. The server seems to exchange some data between the clients that are willing to connect to each other, so that they can start a direct connection that is independent of the server.
But that's exactly what I don't understand. Until now, I thought the only way to create a connection was to listen on a port on computer A and connect to that port from computer B. But this does not seem to be the case in WebRTC. I think neither of the clients starts listening on a port. Somehow, they can create a connection without listening on ports and accepting connections. Neither client A nor client B starts acting as a server.
But how? What data is exchanged over the WebRTC server, that the clients can use to connect to each other?
Thanks for your explanations for this :)
Edit
I found this article. It's not related to WebRTC, but I think it answers part of my question. I'm not sure, though. It would still be cool if someone could explain it to me and give me some additional links.
WebRTC gives an SDP offer to the client JS app to send (however the JS app wants) to the other device, which uses it to generate an SDP answer.
The trick is that the SDP includes ICE candidates (effectively "try to talk to me at this IP address and this port"). ICE works to punch open ports in the firewalls; though if both sides are symmetric NATs it won't be possible generally, and an alternative candidate (on a TURN server) can be used.
Once they're talking directly (or via TURN, which is effectively a packet-mirror), they can open a DTLS connection and use it to key the SRTP-DTLS media streams, and to send DataChannels over DTLS.
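For instance, once that DTLS connection is up, a DataChannel rides on it directly (TypeScript sketch):

const pc = new RTCPeerConnection();
const channel = pc.createDataChannel("chat"); // carried over SCTP inside the DTLS connection

channel.onopen = () => channel.send("hello over DTLS");
channel.onmessage = (event) => console.log("received:", event.data);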
Edit:
Acronyms here: http://blog.1click.io/10-jargons-abbreviations-for-webrtc-fans/ for the rest, there is Google. Most of these are defined by the IETF (http://ietf.org/)
Edit 2:
Firefox and Chrome (and the spec) have moved to using "trickle" ICE candidates, so the ICE candidates are generally added after-the-fact to the PeerConnection and exchanged independently of the initial SDP (though you can wait until the initial candidates are ready before sending an offer and bundle them together).
See https://webrtcglossary.com/trickle-ice/ and https://datatracker.ietf.org/doc/draft-ietf-ice-trickle/
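For completeness, a TypeScript sketch of the non-trickle alternative mentioned above: wait until ICE gathering has completed so the offer already contains every initial candidate, then send it in one message.

async function createOfferWithBundledCandidates(pc: RTCPeerConnection): Promise<RTCSessionDescription> {
  await pc.setLocalDescription(await pc.createOffer());
  if (pc.iceGatheringState !== "complete") {
    await new Promise<void>((resolve) => {
      pc.onicegatheringstatechange = () => {
        if (pc.iceGatheringState === "complete") resolve();
      };
    });
  }
  return pc.localDescription!; // now includes the a=candidate lines
}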
How WebRTC Works
This document provides a quick and abstract introduction to WebRTC. In order to get more information about WebRTC please look at the Further Reading section at the end of this document.
WebRTC
WebRTC (Web Real-Time Communication) is a set of technologies developed for peer-to-peer, duplex, real-time communication between browsers. As its name suggests, it is built for the Web, and it is a W3C standard. One of the important features of WebRTC is that it works even behind NAT.
WebRTC uses several technologies to provide real-time peer to peer communication between browsers. These technologies are
SDP (Session Description Protocol)
ICE (Interactive Connectivity Establishment)
RTP (Real-time Transport Protocol)
One more thing, a Signalling Server, is needed for running WebRTC. However, there is no defined standard for implementing a signalling server; each implementation creates its own style. There is some more information about the Signalling Server later in this section.
Let's give some quick info about technologies above.
SDP (Session Description Protocol)
SDP is a simple protocol used to describe which codecs are supported by the browsers. For instance, assume that there are two peers (Client A and Client B) which will be connected through WebRTC. Client A and Client B create SDP strings that define which codecs they support. For example, Client A may support H264, VP8 and VP9 for video and Opus and PCM for audio, while Client B may support only H264 for video and only Opus for audio. In this case, the codecs that will be used between Client A and Client B are H264 and Opus. If there are no common codecs between the peers, peer-to-peer communication cannot be established.
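If you want to see what a given browser would put into that SDP, the static capability query is a quick way to check (TypeScript sketch):

// List the codecs this browser can send; these are what end up offered in the SDP.
const videoCaps = RTCRtpSender.getCapabilities("video");
const audioCaps = RTCRtpSender.getCapabilities("audio");

console.log("video codecs:", videoCaps?.codecs.map((c) => c.mimeType)); // e.g. video/VP8, video/H264
console.log("audio codecs:", audioCaps?.codecs.map((c) => c.mimeType)); // e.g. audio/opus, audio/PCMU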
You may have a question about how these SDP strings are exchanged between the peers. This is where the Signalling Server comes in.
ICE (Interactive Connectivity Establishment)
ICE is the magic that establishes a connection between peers even if they are behind NAT. Let's assume again that Client A and Client B will get connected and take a look at how ICE is used for that.
Client A finds out its local address and its public Internet address by using a STUN server and sends these addresses to Client B through the Signalling Server. Each address received from the STUN server is called an ICE candidate.
In the image above, there are two servers. One of them is a STUN server and the other is a TURN server.
The STUN server is used to let Client A learn all of its addresses. Let me give an example: our computers generally have one local address in the 192.168.0.0 network, and there is a second address we see when we connect to www.whatismyip.com; that IP address is actually the public IP address of our Internet gateway (modem, router, etc.). So, to define the STUN server: STUN servers let peers learn their public and local IP addresses. By the way, Google provides a free STUN server (stun.l.google.com:19302).
There is one more server, the TURN server, in the image. A TURN server is used when a peer-to-peer connection cannot be established between the peers; it just relays the data between them.
Client B does the same: it gets its local and public IP addresses from the STUN server and sends these addresses to Client A through the Signalling Server.
Client A receives Client B's addresses and tries each IP address by sending special pings in order to create a connection with Client B. If Client A receives a response from an address, it puts that address in a list along with its response time and other performance details. In the end, Client A chooses the best address according to its performance.
Client B does the same in order to connect to Client A.
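A hedged TypeScript sketch of how the application can observe those connectivity checks (stats field names can vary slightly between browsers):

function watchIce(pc: RTCPeerConnection): void {
  pc.oniceconnectionstatechange = async () => {
    console.log("ICE state:", pc.iceConnectionState); // checking -> connected/failed
    if (pc.iceConnectionState === "connected") {
      const stats = await pc.getStats();
      stats.forEach((report) => {
        if (report.type === "candidate-pair" && report.nominated && report.state === "succeeded") {
          console.log("selected pair:", report.localCandidateId, "->", report.remoteCandidateId);
        }
      });
    }
  };
}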
RTP (Real-time Transport Protocol)
RTP is a mature protocol for transmitting real-time data. It is based on UDP. Audio and video are transmitted with RTP in WebRTC. There is a sister protocol of RTP called RTCP (RTP Control Protocol), which provides QoS feedback for RTP communication. RTP is also used in RTSP (Real-time Streaming Protocol).
Signalling Server
The last part is the Signalling Server, which is not defined by WebRTC. As mentioned above, the Signalling Server is used to exchange SDP strings and ICE candidates between Client A and Client B. It also decides which peers get connected to each other. WebSocket technology is generally used in signalling servers for communication; a minimal relay is sketched below.
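A minimal sketch of such a signalling relay in Node.js/TypeScript, assuming the "ws" package; it just forwards every message to the other connected clients, while a real server would add rooms, authentication and pairing logic:

import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8181 });

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // Relay SDP offers/answers and ICE candidates verbatim to the other peers.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});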
Compatibility
In the last year, all browsers including Safari and Edge have released new versions supporting WebRTC; Chrome, Firefox and Opera had already supported it for a while. The video codec common to all browsers is H264; for audio, Opus is common. PCM can also be used as an audio codec, but AAC is not used (even though it is supported in all browsers) because of licensing issues. IP cameras generally support H264 for video and PCM or AAC for audio.
Further Reading and References
WebRTC Samples
ICE Wikipedia
SDP
RTP RFC
Getting Started with WebRTC
WebRTC.org
STUN Server
RTSP Wikipedia
By the way, I am a developer at Ant Media Server, which supports scalable one-to-many WebRTC and peer-to-peer WebRTC connections.
Establishing a P2P WebRTC connection has three steps (a 10,000-foot overview):
Step 1: Signaling: both peers connect to a signaling server (using websockets over 80/443, comet, SIP, etc.) and exchange information (about their media capabilities, public IP:port pairs as they become available, etc.).
Step 2: Discovery: devices connected to LAN or mobile networks are not aware of the public IP (and port) where they can be reached, so they use STUN/TURN servers located on the public Internet to discover their IP:port pair (ICE candidates). In the process they punch a hole through the NAT/router, which is used in step 3.
Step 3: P2P connection: once the ICE candidates are exchanged through the initial signaling channel, each peer is aware of the other's IP:port (and holes have been punched in the NATs/routers), so a peer-to-peer UDP connection can be established.
The diagram above explains the process with two devices connected to local networks. It's part of an article I wrote that deals with troubleshooting connection issues, but it does a good job of explaining how WebRTC works.
A very good explanation can be found in this book "High Performance Browser Networking (O'Reilly)" http://chimera.labs.oreilly.com/books/1230000000545/ch03.html#STUN_TURN_ICE
which provides the fundamentals on how WebRTC uses ICE technology.
In particular assuming the IP address of the STUN server is known, the WebRTC application first sends a binding request to the STUN server. The STUN server replies with a response that contains the public IP address and port of the client as seen from the public network.
Now the application knows its public IP and port tuple, which it can send to the other peer through SDP. (Note that the SDP is sent over an external signalling channel, e.g. a websocket established through a web service.)
With this mechanism in place, whenever two peers want to talk to each other over UDP, they can then use the established public IP and port tuples to exchange data.
Unfortunately, in some cases UDP may be blocked by a firewall. To address this issue, whenever STUN fails, we can use the Traversal Using Relays around NAT (TURN) protocol as a fallback, which can run over UDP and switch to TCP if all else fails.
A WebRTC connection starts with a WebRTC offer.
The caller creates the WebRTC offer and posts it to the signaling server, which passes the offer to the callee. The users are actually passing their SDP (Session Description Protocol) information to each other.
Then we need to exchange the internet connection details. This allows clients to discover their public IP address and the type of NAT they are behind; that information is used to establish the media connection. This is handled by a STUN server, and the process is also known as gathering ICE candidates. This data is also exchanged via the signalling server.
The final step is to exchange the audio and video streams, relayed through a TURN (Traversal Using Relays around NAT) server when a direct connection cannot be established. This ensures the connection works even when users are behind firewalls. A TURN server relays a lot of traffic, so its cost is high. When you test your app in development with different browsers on the same network, you are connecting directly to each other and not using a TURN server.