I am trying to develop a live video chat app using Flash and the RTMFP protocol, but I have doubts about how RTMFP guarantees a connection between peers, especially when the peers are located on different networks.
RTMFP relies on a central rendezvous server to "introduce" two clients that are on different networks and potentially behind firewalls. To get through firewalls it uses NAT traversal techniques, which essentially amount to the following:
1. The two clients (Joe and Mary) connect to a central rendezvous server:
   - Adobe's public (non-commercial) RTMFP rendezvous server, rtmfp://p2p.rtmfp.net/
   - or a server you host yourself using the GPL open-sourced Cumulus or ArcusNode
2. Joe shares his peer ID with Mary, or they can use a shared NetGroup ID.
3. Mary communicates with the central rendezvous server, which then uses a variety of NAT and firewall traversal techniques to establish a peer-to-peer UDP connection.
There is no guarantee that any two clients' networks/firewalls are compatible with RTMFP P2P connections (you can diagnose this with the RTMFP connection tester). That's why Adobe provides fallback solutions through its LiveCycle Collaboration Service or Adobe Flash Media Server: if a direct P2P connection can't be established, traffic falls back to a central relay service (basically, all network traffic passes through a server that both clients can reach publicly).
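RTMFP's wire protocol is proprietary, so the following is only a minimal sketch of the introduction pattern itself, written in TypeScript with the ws WebSocket library; both the transport and the message shapes here are assumptions for illustration, not part of RTMFP:

```typescript
// Minimal sketch of the rendezvous ("introduction") pattern, not RTMFP's
// actual proprietary wire protocol. The ws library and message shapes are
// purely illustrative.
import { WebSocketServer, WebSocket } from "ws";

const peers = new Map<string, WebSocket>(); // peerId -> connection

const wss = new WebSocketServer({ port: 8080 });
wss.on("connection", (ws) => {
  ws.on("message", (raw) => {
    const msg = JSON.parse(raw.toString());
    if (msg.type === "register") {
      // Step 1: Joe and Mary each announce their peer ID.
      peers.set(msg.peerId, ws);
    } else if (msg.type === "introduce") {
      // Steps 2-3: Mary names Joe's peer ID; the server relays each
      // side's identity so both can attempt NAT hole punching.
      const target = peers.get(msg.targetPeerId);
      target?.send(JSON.stringify({ type: "peer", from: msg.peerId }));
    }
  });
});
```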
Adobe's RTMFP is their proprietary bundling of a peer-to-peer network rendezvous service, along with some other higher-level P2P network mesh features.
If you're interested in a more open, standard P2P protocol, you should look into WebRTC. WebRTC is essentially the same concept of allowing clients to connect peer-to-peer over UDP, but geared toward adoption as a web browser standard, and it can also be implemented on a variety of native devices (e.g. Android, iPhone). Under the hood it uses standard NAT and firewall traversal technology: STUN, ICE, TURN, RTP-over-TCP, and support for proxies. I believe WebRTC is a standardization of some of the work done in the libjingle P2P library from Google Talk.
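As a concrete illustration of that traversal stack in the browser, here is a minimal sketch of an RTCPeerConnection configured with STUN and TURN servers; the URLs and credentials are placeholders, not real endpoints:

```typescript
// Sketch: a browser RTCPeerConnection configured with STUN (public address
// discovery) and TURN (relayed fallback). URLs and credentials below are
// placeholders, not real servers.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.example.org:3478" },
    { urls: "turn:turn.example.org:3478", username: "demo", credential: "secret" },
  ],
});

// Candidates gathered via STUN/TURN must reach the remote peer through
// your own signaling channel (the WebRTC analogue of the rendezvous server).
pc.onicecandidate = (event) => {
  if (event.candidate) {
    console.log("send to remote peer via signaling:", event.candidate.candidate);
  }
};
```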
Related
I want to build/experiment with a P2P network that runs in browsers (WebSocket and WebRTC for inter-browser communication) as well as on servers, which handle signaling over WebSockets.
I read online that the maximum number of WebRTC connections is theoretically 256 on Chromium, but only around 6 in practice. Most resources discuss WebRTC in the context of video calls, so what is the practical limit when using RTCDataChannel, which consumes less bandwidth (at least in my use case)?
We are building a data transmission pipeline from IoT devices to our backend system, which is hosted in the cloud. I found this W3C working draft that mentions the use case.
Does WebRTC have any advantage over a traditional API data push from an IoT device?
Most of the use cases covered in explanations on the internet are peer-to-peer communication, for which WebRTC is a perfect fit.
I see a lot of developers using WebRTC for their IoT projects! Recently they have been our biggest group of contributors to Pion.
The nice things about using DataChannels (vs. other APIs):

- Support in the browser (no need for a backend to bridge protocols)
- Different delivery options (out-of-order or lossy for better performance; see the sketch after this list)
- Mandatory security: WebRTC always runs over DTLS, whereas with many other protocols security is optional
- Available in lots of languages (C, C++, Python, Java, Go...), and these aren't just FFI wrappers but first-class, pleasant-to-use implementations
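As a sketch of the delivery options mentioned above (the configuration is an assumed example using the standard browser API): an unordered, lossy channel behaves much like UDP while still running over DTLS:

```typescript
// Sketch: a DataChannel tuned for throughput over reliability.
// ordered:false allows out-of-order delivery; maxRetransmits:0 makes it
// lossy (no retransmission), giving UDP-like behaviour over DTLS/SCTP.
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel("telemetry", {
  ordered: false,
  maxRetransmits: 0,
});

channel.onopen = () => {
  // e.g. periodic IoT sensor readings, where a dropped sample is
  // cheaper than a late one.
  channel.send(JSON.stringify({ sensor: "temp", value: 21.5 }));
};
```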
Can anyone help me understand the technical difference between WebRTC communication and VoIP communication?
The question doesn't exactly make sense because it makes the assumption that VoIP is a technical stack, but it's not - it's a concept. The concept of sending Voice (V) over (o) Internet Protocols (IP). This means that different technology stacks can be used for accessing/capturing the media, establishing connections, negotiating streams, and transmitting streams.
WebRTC is one such stack (set of APIs, methods, and standards) for VoIP.
VoIP (Voice over Internet Protocol) is a concept that came with the popularity of the internet. It involves using the internet to route voice telephony data, basically using existing IP infrastructure to transport audio streams without needing dedicated circuit-switched telephony. Over time, popular VoIP applications such as Skype and Vonage appeared, along with many in enterprise telephony.
VoIP has two parts: signalling, which is basically the controller part, and the actual media.
The actual media usually, but not necessarily, follows RTP (the Real-time Transport Protocol). RTP can carry both voice and video. The problem with RTP has been that browsers don't support it natively and it is not secure, so you usually needed some sort of plugin to make VoIP work inside a browser.
With WebRTC, popular browsers such as Firefox, Chrome, and Opera now support a secure variation of RTP (SRTP) that can be invoked natively. Using WebRTC and browser JavaScript you can send voice, video, and screen (video only) data to any other browser, which is cool.
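A minimal sketch of that capture step in browser TypeScript/JavaScript; getDisplayMedia is the standard screen-capture API, and the function name captureAll is just for illustration:

```typescript
// Sketch: capturing voice, video, and the screen from browser JavaScript.
// getDisplayMedia captures the screen (video only, matching the caveat above).
async function captureAll(): Promise<void> {
  const av = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });

  // Attach the tracks to an RTCPeerConnection, which carries them over SRTP.
  const pc = new RTCPeerConnection();
  av.getTracks().forEach((t) => pc.addTrack(t, av));
  screen.getTracks().forEach((t) => pc.addTrack(t, screen));
}
```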
VoIP: Voice over Internet Protocol runs over DSL/cable modem connections, voice over Wi-Fi/3G (VoWiFi/3G), voice over LTE (VoLTE), and the Rich Communication Suite (RCS). VoIP is cloud-based: calls are sent as digital data and no dedicated cables are needed to carry the call, so any kind of internet connection can be used to make calls, from a plethora of devices.
WebRTC: Web Real-Time Communication, by contrast, uses only browsers to communicate.
WebRTC requires the use of two main JavaScript APIs: getUserMedia for media capture and RTCPeerConnection for the peer connection.
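A minimal sketch of those two APIs working together; sendToRemotePeer is a hypothetical stand-in for whatever signaling channel the application uses:

```typescript
// Sketch: the two core APIs. getUserMedia captures media;
// RTCPeerConnection negotiates and transports it. sendToRemotePeer is a
// hypothetical stand-in for your signaling channel.
declare function sendToRemotePeer(payload: string): void;

async function startCall(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const pc = new RTCPeerConnection();
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Create an SDP offer and hand it to the remote side via signaling;
  // the answer comes back the same way and is applied with
  // pc.setRemoteDescription(...).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToRemotePeer(JSON.stringify(offer));
}
```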
WebRTC is an extension of VoIP to the browser world. It can reuse the existing VoIP infrastructure with incremental upgrades. This is good news for VoIP, as adoption of WebRTC only serves to increase overall VoIP proliferation.
Also, WebRTC is ideal for low-cost browser-based contact center applications, while VoIP can serve embedded operator-driven VoLTE applications. Consequently, between them, WebRTC and VoIP can support a wide range of consumer and enterprise applications.
I'm testing audio/video P2P connections between mobile devices.
Studying WebRTC, I've noticed that NAT traversal (using a STUN server) and UDP hole punching are the keys to making P2P possible.
On the other hand, I've noticed that HLS (HTTP Live Streaming) on iOS devices is highly optimized for A/V live streaming and is widely available even on Android 4.x (unstable on 3.x).
So here are my questions if I use HLS for mobile P2P:
a) HLS is a protocol on top of TCP (HTTP), not UDP, so isn't there a performance drawback?
See: TCP vs UDP on video stream
b) How about NAT traversal? Will it be easier, since HLS is HTTP (port 80)?
I have read the Wikipedia article (http://en.wikipedia.org/wiki/HTTP_Live_Streaming):
Since its requests use only standard HTTP transactions, HTTP Live Streaming is capable of traversing any firewall or proxy server that lets through standard HTTP traffic, unlike UDP-based protocols such as RTP. This also allows content to be delivered over widely available CDNs.
c) How about Android device compatibility? Are there many problems in getting HTTP Live Streaming distribution working?
Thanks.
The reason firewalls are not an issue for HLS is that it's a client-server protocol where all requests are made via HTTP on port 80. If you are implementing a P2P application, you won't be able to bind it to a port below 1024 unless you have root privileges.
This means that exchanging data via HLS (port 80) won't work for P2P, unless you have a translation server in the middle, which defeats the purpose of P2P.
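A small Node-style TypeScript sketch of that restriction (assuming a typical Unix-like system): binding to port 80 as an unprivileged user fails with EACCES:

```typescript
// Sketch: binding to a privileged port (< 1024) fails for an unprivileged
// process on typical Unix-like systems, which is the constraint above.
import net from "node:net";

const server = net.createServer((socket) => socket.end());
server.on("error", (err: NodeJS.ErrnoException) => {
  console.error("listen failed:", err.code); // typically "EACCES" without root
});
server.listen(80, () => console.log("listening on port 80; ran with privileges"));
```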
Comparing HTTP Live Streaming to P2P video streaming over UDP/RTP is almost like comparing apples and oranges. More like oranges and tangerines... read on.
HTTP Live Streaming was designed as a client-server protocol, without P2P or NAT traversal considerations. The idea is that the streaming server sits on HTTP/TCP and is accessible from the public internet, as if it were just an ordinary web server. The key feature of HLS is its ability to dynamically switch the bitrate based on how well the client receives the stream. If the client's connection to the server hiccups while streaming a 1080p video, it can transparently switch to a lower-bitrate video (and likely switch back to a higher bitrate if network conditions improve). A good example: Netflix.
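As an illustration of that adaptive switching, here is a small sketch using the hls.js browser library (an assumption; the answer doesn't prescribe a player). It loads a stream and logs whenever the player changes quality level:

```typescript
// Sketch: playing an HLS stream in the browser with hls.js and observing
// adaptive bitrate switches. The manifest URL is a placeholder.
import Hls from "hls.js";

const video = document.querySelector("video")!;
if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource("https://example.com/stream/master.m3u8"); // placeholder
  hls.attachMedia(video);

  // hls.js measures throughput and switches levels transparently,
  // like the 1080p-to-lower-bitrate example above.
  hls.on(Hls.Events.LEVEL_SWITCHED, (_event, data) => {
    console.log("now playing quality level", data.level);
  });
}
```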
WebRTC and ICE were designed to stream real-time video bidirectionally between devices that might both be behind NATs. Traversing a NAT is much easier over UDP than over TCP, and UDP also lends itself better to real-time transmission (lower latency) than TCP. Most video chat clients (à la Skype) have dynamic bandwidth adjustments built into their codecs and protocols to achieve something similar to what HLS does.
I suppose you could combine TCP NAT traversal and HLS together. Doing HLS over UDP implies that you build a TCP-like reliability layer on top of your UDP stream.
Hope this helps
http://www.garymcgath.com/streamingprotocols.html
HTTP Live Streaming
The new trend in streaming is the use of HTTP with protocols that support adaptive bitrates. This is theoretically a bad fit, as HTTP with TCP/IP is designed for reliable delivery rather than keeping up a steady flow, but with the prevalence of high-speed connections these days it doesn't matter so much. Apple's entry is HTTP Live Streaming, aka HLS or Cupertino streaming. It was developed by Apple for iOS and isn't widely supported outside of Apple's products. Long Tail Video provides a testing page to determine whether a browser supports HLS. Its specification is available as an Internet Draft. The draft contains proprietary material, and publishing derivative works is prohibited.
The only playlist format allowed is M3U Extended (.m3u or .m3u8), but the format of the streams is restricted only by the implementation.
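For reference, a minimal master playlist in that M3U Extended format might look like this (the bandwidths, resolutions, and URIs are made up for illustration):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
high/index.m3u8
```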
I was able to achieve P2P on top of HLS using WebRTC on an Android phone running Mozilla Firefox, plus two desktop browsers (Chrome and Firefox) in the same swarm.
Here's a screenshot from a presentation I gave at my university: https://www.dropbox.com/s/zyfgs4o8al9ovd0/Screenshot%202014-07-17%2019.58.15.png
This screenshot was taken by accessing http://bem.tv/demo.html.
If you want to know more, this is my master's project, and I'm publishing my progress at http://bem.tv and http://github.com/bemtv.
I want to communicate between two different Android phones via USB. I've looked at the Google SDK guide, but I don't know how to do it. Can somebody give me some suggestions? Thank you very much!
The USB standard works as a host-client mechanism. This means in particular that you usually have a host controller (for example, inside your PC) to which a client (USB drive, MP3 player, mobile phone) can connect.
The host is responsible for negotiating and establishing a connection. If you want to connect two clients to each other, one of them must support USB On-The-Go to serve as a host with limited capabilities.
From Wikipedia:
The design architecture of USB is asymmetrical in its topology, consisting of a host, a multitude of downstream USB ports, and multiple peripheral devices connected in a tiered-star topology. Additional USB hubs may be included in the tiers, allowing branching into a tree structure with up to five tier levels. A USB host may implement multiple host controllers and each host controller may provide one or more USB ports. Up to 127 devices, including hub devices if present, may be connected to a single host controller.[20][21] USB devices are linked in series through hubs. One hub—built into the host controller—is the root hub.