Currently, I'm using open source code from OBS Multiplatform for my video streaming application. But OBS uses RTMP for streaming, which only works over TCP. I want to use UDP, since this is a live streaming scenario where latency is undesirable.
What is the best way to move to UDP? I read that protocols like RTP support UDP. Is that the best option to use? Or what about writing simple UDP socket code to send and receive the video data? Please advise.
The reason for moving to UDP is to ensure minimum latency between client and server. I'm looking for the quickest way to move to UDP-based streaming from the current RTMP setup, which runs over TCP. I'm not very experienced in network programming. Will a simple socket program suffice to stream video data continuously, or should I implement a protocol like RTP? Or is there another alternative? Ultimately, the streaming should be smooth with minimum latency.
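To make the "simple UDP socket" option concrete, here is a minimal sketch using Node's dgram module (in TypeScript for brevity; the host, port, and chunk size are placeholders, and the split-into-datagrams logic is only illustrative). It shows what bare UDP gives you and, just as importantly, what it doesn't: no sequencing, no timestamps, no loss recovery, which is exactly the bookkeeping RTP standardizes on top of UDP.

    import dgram from "node:dgram";

    // Sender: push raw video chunks over UDP. Bare UDP provides no ordering,
    // timing, or retransmission; anything lost or reordered is your problem.
    const sender = dgram.createSocket("udp4");
    const RECEIVER_HOST = "192.0.2.10"; // placeholder address
    const RECEIVER_PORT = 5004;         // placeholder port
    const MAX_PAYLOAD = 1200;           // keep datagrams under the path MTU

    function sendFrame(frame: Buffer): void {
      // Split a frame into MTU-sized datagrams; the receiver has to
      // reassemble them itself, and a lost datagram is simply gone.
      for (let offset = 0; offset < frame.length; offset += MAX_PAYLOAD) {
        sender.send(frame.subarray(offset, offset + MAX_PAYLOAD),
                    RECEIVER_PORT, RECEIVER_HOST);
      }
    }

    // Receiver (separate process): datagrams may arrive out of order,
    // duplicated, or not at all.
    const receiver = dgram.createSocket("udp4");
    receiver.on("message", (payload, remote) => {
      console.log(`got ${payload.length} bytes from ${remote.address}`);
    });
    receiver.bind(RECEIVER_PORT);

In practice, the sequence numbers, timestamps, and payload framing this sketch leaves out are what RTP adds, which is why the usual advice is to use RTP (or a library that implements it) rather than raw sockets for continuous video.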
Related
I'm trying to develop a client/server real-time audio processing application. The client sends streaming audio to the server, and the server sends the modified streaming audio back to the client. A small delay is important to me, so TCP is not suitable as a transport protocol.
Right now I'm using gRPC, but the delay is large. I tried to find a project similar to gRPC but based on UDP, and didn't find one. I don't have the resources to build a good UDP framework myself. Then I learned about RTSP (RTP), but couldn't tell whether it can be used for bidirectional streaming. Please point me in the right direction, and recommend open source projects if they exist. Thanks.
I was looking all over the internet for a way to capture incoming packets from a specific network interface, and came across pcap and tcpdump. I believe the most commonly used networking library out there is Boost.Asio, so I wanted to use it to capture traffic, but apparently there are no examples of using raw sockets or other classes to listen for incoming packets on a specific interface. I would appreciate any help or examples on this.
We eventually found that the best option for sniffing incoming packets was libtins.
libtins is a high-level, multiplatform C++ network packet sniffing and crafting library.
Its main purpose is to provide the C++ developer an easy, efficient,
platform and endianness-independent way to create tools which need to
send, receive and manipulate network packets.
It uses a BSD 2-Clause license and is hosted on GitHub.
I'm trying to send an audio stream from my browser to a server (over UDP; I'm also trying WebSockets).
I'm recording the audio stream with WebRTC, but I have problems transmitting the data from a Node.js client to my server.
Any ideas? Is it possible to send an audio stream to the server using WebRTC (OpenWebRTC)?
To get audio from the browser to the server, you have a few different possibilities.
Web Sockets
Simply send the audio data over a binary web socket to your server. You can use the Web Audio API with a ScriptProcessorNode to capture raw PCM and send it losslessly. Or, you can use the MediaRecorder to record the MediaStream and encode it with a codec like Opus, which you can then stream over the Web Socket.
There is a sample for doing this with video over on Facebook's GitHub repo. Streaming audio only is conceptually the same thing, so you should be able to adapt the example.
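As a rough sketch of the MediaRecorder variant (the WebSocket URL, timeslice, and codec string are assumptions to check against the browsers you target):

    // Browser-side: capture microphone audio, encode it with MediaRecorder,
    // and relay each encoded chunk over a binary WebSocket.
    async function streamMicrophone(): Promise<void> {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const socket = new WebSocket("wss://example.com/audio"); // placeholder URL

      // Opus in WebM is widely supported, but verify before relying on it.
      const mimeType = "audio/webm;codecs=opus";
      if (!MediaRecorder.isTypeSupported(mimeType)) {
        throw new Error(`${mimeType} is not supported in this browser`);
      }
      const recorder = new MediaRecorder(stream, { mimeType });

      recorder.ondataavailable = (event: BlobEvent) => {
        if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
          socket.send(event.data); // WebSocket.send accepts a Blob directly
        }
      };

      // Emit a chunk every 250 ms; smaller timeslices cut latency but add overhead.
      socket.onopen = () => recorder.start(250);
    }

On the server, each WebSocket message is one encoded chunk; concatenated in order, the chunks form a single WebM/Opus stream (the first chunk carries the container header) that you can decode or remux.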
HTTP (future)
In the near future, you'll be able to use a WritableStream as the request body with the Fetch API, allowing you to make a normal HTTP PUT with a stream source from a browser. This is essentially the same as what you would do with a Web Socket, just without the Web Socket layer.
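This has since started to land: Chromium-based browsers let you pass a ReadableStream as the fetch request body (with the duplex: "half" option), so one way to get a writable interface, matching the idea above, is to write into a TransformStream and use its readable side as the body. A sketch, assuming a placeholder URL and a browser with streaming-upload support:

    // Stream audio chunks as the body of one long-lived HTTP PUT.
    // Application code writes into the TransformStream's writable side;
    // its readable side becomes the request body.
    const { readable, writable } = new TransformStream<Uint8Array, Uint8Array>();

    const upload = fetch("https://example.com/audio", { // placeholder URL
      method: "PUT",
      body: readable,
      // Required for streaming request bodies; currently Chromium-only.
      duplex: "half",
    } as RequestInit);

    upload.then((response) => console.log("upload finished:", response.status));

    const writer = writable.getWriter();

    // Push each captured/encoded chunk into the stream as it is produced.
    async function pushChunk(chunk: Uint8Array): Promise<void> {
      await writer.write(chunk);
    }

Remember to call writer.close() when the capture ends so the PUT can complete.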
WebRTC (data channel)
With a WebRTC connection and the server as a "peer", you can open a data channel and send that exact same PCM or encoded audio that you would have sent over Web Sockets or HTTP.
There's a ton of complexity added to this with no real benefit. Don't use this method.
WebRTC (media streams)
WebRTC calls support direct handling of MediaStreams. You can attach a stream and let the WebRTC stack take care of negotiating a codec, adapting for bandwidth changes, dropping data that doesn't arrive, maintaining synchronization, and negotiating connectivity around restrictive firewall environments. While this makes things easier on the surface, that's a lot of complexity as well. There aren't any packages for Node.js that expose the MediaStreams to you, so you're stuck dealing with other software... none of it as easy to integrate as it could be.
Most folks going this route will execute gstreamer as an RTP server to handle the media component. I'm not convinced this is the best way, but it's the best way I know of at the moment.
I've got WebRTC peer-to-peer working, but when I want to broadcast a single camera to multiple clients, peer-to-peer obviously isn't suitable.
I've found solutions like
http://lynckia.com
and
http://www.medooze.com/products/mcu/webrtc-support.aspx
but I can't get the first one set up (and it seems to have cross-browser issues), and the second feels like hitting a nail with a nuclear missile.
All I need is a relay; I don't need to decode or re-encode streams.
I just need
The Broadcaster to connect to the server (peer to peer)
The clients to connect to the server (peer to peer)
The server to relay the stream from the broadcaster to the clients.
Is there any software out there that offers this that I've missed? Or is there a working, scalable alternative?
Thanks
Jitsi Video Bridge works pretty much exactly how you describe.
On your server you can run Janus, to which your broadcaster can provide a stream via RTP.
Have a look at an example configuration file.
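For reference, a streaming mountpoint that receives Opus audio and VP8 video over RTP looks roughly like this (parameter names follow the streaming plugin's sample configuration; the section name, ports, and payload types here are placeholders, so check them against the sample config shipped with your Janus install):

    [broadcast-sample]
    type = rtp
    id = 1
    description = Opus/VP8 stream from the broadcaster
    audio = yes
    audioport = 5002
    audiopt = 111
    audiortpmap = opus/48000/2
    video = yes
    videoport = 5004
    videopt = 100
    videortpmap = VP8/90000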
After writing a configuration file that defines how the server receives the stream from the broadcaster, you should be able to launch janus in the background from the command line:
$ janus --daemon --config=config_file.conf
Also, see streaming test demo.
Note: I have not tested this thoroughly.
Have a look at this GitHub repo, inspired by Muaz Khan's WebRTC p2p scalable broadcast. It can work great on a LAN. Over the internet, I'm not sure how well it works as of now, though we are improving it as we go.
If you just want to broadcast from a peer to a set of peers, and they don't care about latency, the best solution is to convert WebRTC to live streaming, with no transcoding, just remuxing:
Peer(Publisher) ---WebRTC--> Server --RTMP/HLS/DASH--> Peers/Players
If this works for you, SRS is able to convert WebRTC to live streaming.
Live streaming lets you use a CDN or TCP to deliver the streams, but the latency is about 3~5 s, so this solution is only viable when the Peers/Players never need to communicate back to the Peer (Publisher).
If you want all those peers to talk to each other, it's very complex and needs a WebRTC SFU cluster, because there will be a huge number of streams. For example, allowing 100 peers to talk to each other results in 100x100 = 10k streams.
It's too complicated, so I don't think there is a good open-source solution right now (as of 2022.02).
I'm testing an audio/video P2P connection between mobile devices.
Studying WebRTC, I've noticed that NAT traversal (using a STUN server) and UDP hole punching are the key to making P2P possible.
On the other hand, I've noticed that HLS (HTTP Live Streaming) on iOS devices is highly optimized for A/V live streaming, and is widely available even on Android 4.x (unstable on 3.x).
So, here is my question if I use HLS for mobile P2P:
a) HLS is a protocol on top of TCP (HTTP), not UDP, so isn't there a performance drawback?
See: TCP vs UDP on video stream
b) How about NAT traversal? Will it be easier since HLS is HTTP (port 80)?
I have read the Wikipedia article at http://en.wikipedia.org/wiki/HTTP_Live_Streaming:
Since its requests use only standard HTTP transactions, HTTP Live
Streaming is capable of traversing any firewall or proxy server that
lets through standard HTTP traffic, unlike UDP-based protocols such as
RTP. This also allows content to be delivered over widely available
CDNs.
c) How about Android device compatibility? Are there many problems with using HTTP Live Streaming for distribution?
Thanks.
The reason firewalls are not an issue for HLS is that it's a client-server protocol where all requests are made via HTTP on port 80. If you are implementing a P2P application, you won't be able to bind it to a port below 1024 unless you have root privileges.
This means that exchanging data via HLS (port 80) won't work for P2P. Unless you have a translation server in the middle, which defeats the purpose of P2P.
Comparing HTTP Live Streaming to P2P video streaming over UDP/RTP is almost like comparing apples and oranges. More like oranges and tangerines... read on.
HTTP Live Streaming was designed as a client-server protocol, with no P2P or NAT traversal considerations. The idea is that the streaming server is already reachable over HTTP/TCP from the public internet, as if it were just an ordinary web server. The key feature of HLS is its ability to dynamically switch the bitrate based on how well the client receives the stream. If the client's connection to the server hiccups while streaming a 1080p video, it can transparently switch to a lower-bitrate video (and likely switch back to the higher bitrate if network conditions improve). Good example: Netflix.
WebRTC and ICE were designed to stream real-time video bidirectionally between devices that might both be behind NATs. As such, traversing a NAT over UDP is much easier than over TCP, and UDP lends itself better to real time (less latency) than TCP. Most video-chat clients (such as Skype) have dynamic bandwidth adjustment built into their codecs and protocols to achieve something similar to what HLS does.
I suppose you could combine TCP NAT traversal and HLS. Doing HLS over UDP would imply building a TCP-like reliability layer on top of your UDP stream.
Hope this helps
http://www.garymcgath.com/streamingprotocols.html
HTTP Live Streaming
The new trend in streaming is the use of HTTP with protocols that
support adaptive bitrates. This is theoretically a bad fit, as HTTP
with TCP/IP is designed for reliable delivery rather than keeping up a
steady flow, but with the prevalence of high-speed connections these
days it doesn't matter so much. Apple's entry is HTTP Live Streaming,
aka HLS or Cupertino streaming. It was developed by Apple for iOS and
isn't widely supported outside of Apple's products. Long Tail Video
provides a testing page to determine whether a browser supports HLS.
Its specification is available as an Internet Draft. The draft
contains proprietary material, and publishing derivative works is
prohibited.
The only playlist format allowed is M3U Extended (.m3u or .m3u8), but the format of the streams is restricted only by the implementation.
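For reference, a minimal media playlist in that M3U Extended format looks like this (segment names and durations are made up for illustration):

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:6
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:6.0,
    segment0.ts
    #EXTINF:6.0,
    segment1.ts
    #EXTINF:6.0,
    segment2.ts

A live playlist simply omits the #EXT-X-ENDLIST tag, which is what tells the client to keep re-fetching the playlist for new segments.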
I was able to achieve P2P on top of HLS using WebRTC on an Android device running Mozilla Firefox, plus two other desktop browsers (Chrome and Firefox) in the same swarm.
Here's a screenshot of a presentation I made at the university: https://www.dropbox.com/s/zyfgs4o8al9ovd0/Screenshot%202014-07-17%2019.58.15.png
This screenshot was made by accessing http://bem.tv/demo.html.
If you want to know more, this is my master's project, and I'm publishing my progress at http://bem.tv and http://github.com/bemtv.