I am trying to find the best way to broadcast a camera and send the stream to 200 connections.
If I use WebRTC, I am limited by CPU power. I've tried to use a server as a gateway, but the maximum number of connections I can reach is 60, and 120 with 2 servers.
I can't use WebSockets to send the stream because the TCP protocol creates latency.
Last solution: use the RTMP protocol, but there is 5-10 s of latency.
My question: is there a solution to stream a camera to many clients (200-300) in real time?
Just using WebRTC would not work, as I assume the device with the camera would need huge upstream bandwidth. The best way is to use an SFU (Selective Forwarding Unit): the camera sends the video once to the server, which then broadcasts it to every peer. It is normally able to handle 200 connections if only video is used.
I've implemented such a server using mediasoup. It also allows you to balance the load over several CPUs and multiple servers.
Here is a simple project where this library is used.
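To give a rough idea of what the server side looks like, here is an untested sketch of the mediasoup v3 flow (the IP address is a placeholder, and all signalling between clients and server is assumed to be handled by your own channel): one worker per CPU core, a router, a transport for the camera's producer, and one transport plus consumer per viewer.

```typescript
import * as mediasoup from "mediasoup";

async function startSfu() {
  // One mediasoup worker uses roughly one CPU core; create several workers
  // and spread routers/consumers over them to scale past a single core.
  const worker = await mediasoup.createWorker({ logLevel: "warn" });

  const router = await worker.createRouter({
    mediaCodecs: [{ kind: "video", mimeType: "video/VP8", clockRate: 90000 }],
  });

  // Transport for the broadcasting camera (negotiated with the client over signalling).
  const sendTransport = await router.createWebRtcTransport({
    listenIps: [{ ip: "0.0.0.0", announcedIp: "203.0.113.10" }], // placeholder public IP
    enableUdp: true,
    enableTcp: true,
    preferUdp: true,
  });

  // When the camera client publishes, the server ends up calling:
  //   const producer = await sendTransport.produce({ kind: "video", rtpParameters });
  //
  // Then, for each of the ~200 viewers, create a receive transport and consume
  // that single producer (so the camera only ever uploads one stream):
  //   if (router.canConsume({ producerId: producer.id, rtpCapabilities: viewerCaps })) {
  //     const recvTransport = await router.createWebRtcTransport({ /* same options */ });
  //     const consumer = await recvTransport.consume({
  //       producerId: producer.id,
  //       rtpCapabilities: viewerCaps,
  //       paused: true, // resume once the client-side consumer is ready
  //     });
  //   }
}

startSfu().catch(console.error);
```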
There are also other solutions like the Janus gateway or the Kurento media server, although I haven't used them.
SECOND SOLUTION
I found this GitHub repository, which allows peer-to-peer video forwarding even for large audiences. It basically forwards the stream to other peers, which in turn forward the stream they receive. I assume there will be a little more latency, since the video may be relayed through many peers.
I would like to create an audio (mic) and video (camera) chat room with 12 people using WebRTC. I understand signalling and the need for external services like ICE and STUN to help peers connect with each other.
But I don't want to use a full mesh architecture where everyone connects to everyone else because it is less efficient. I don't want to use expensive TURN relay services. I want the swarm to propagate the streams automatically so that if a direct connection isn't possible, the network routes the stream packets via peers automatically using encapsulation.
I don't want to use star architecture because I don't want a bottleneck peer.
I would like a peer to connect to maybe 2 or 3 other peers (max) and broadcast their media across the network without worrying about the relaying between peers. The routing would obviously need to be controlled by some service, but I can't see that it's possible to do the stream encapsulation with RTCPeerConnection, since WebRTC only allows a 1-to-1 RTCPeerConnection object, and a peer would need to distinguish where an incoming stream is coming from and whether it needs to be relayed to another peer.
Is there a technology that extends WebRTC to allow for this more bandwidth-efficient architecture?
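For what it's worth, the basic building block for relaying does exist in the browser: a peer can take a track received on one RTCPeerConnection and add it to another RTCPeerConnection going to a downstream peer. Here is a rough browser-side sketch (tree management and all signalling are assumed to be handled by your own service); note that the browser will typically decode and re-encode a relayed track, so each hop adds CPU load and latency.

```typescript
// Connection to the peer "above" us in the tree, and connections to the
// two or three peers "below" us that we relay the stream to.
const upstream = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});
const downstream: RTCPeerConnection[] = [new RTCPeerConnection(), new RTCPeerConnection()];

upstream.ontrack = (event: RTCTrackEvent) => {
  // Relay the received track to every downstream peer.
  for (const pc of downstream) {
    pc.addTrack(event.track, ...event.streams);
  }
  // Each downstream connection then needs (re)negotiation over your signalling
  // channel: createOffer / setLocalDescription / exchange SDP (omitted here).
};
```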
From what I understand, IMAP requires a connection per user. I'm writing an IMAP client (currently just for Gmail) that supports many users at a time (hundreds, thousands, maybe tens of thousands or more). Obviously, cutting down the number of open connections would be great. I'm wondering if it's possible to use thread pooling on my side to connect to Gmail via IMAP, or if that simply isn't supported by the IMAP protocol.
IMAP typically uses SSL over TCP/IP, and a TCP/IP connection needs to be maintained per IMAP client connection, meaning that there will be many simultaneous open connections.
These multiple simultaneous connections can easily be maintained in a non-threaded (single-thread) implementation without affecting the state of the TCP connections. You'll have to have some sort of a flow concept per IMAP TCP/IP connection, and store all of the flows in a container (a C++ STL map, for instance) using the TCP/IP five-tuple (or socket fd) as a key. For each data packet received, look up the flow and handle the packet accordingly. There is nothing about this approach that will affect the TCP or IMAP connections.
Given that this works in a single-thread environment, adding a thread pool will only increase the throughput of the application, since you can handle data packets for several flows simultaneously (assuming it's a multi-core CPU). You will just need to make sure that two threads don't handle data packets for the same flow at the same time, which could cause the packets to be handled out of order. One approach could be to have a group of flows per thread, maybe using IP pools or something similar.
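The answer above describes this in C++ terms; purely as an illustration of the flow-map idea, here is a sketch of the single-threaded version in TypeScript on Node, where the event loop plays the role of the single thread and a Map keyed by account (or socket) holds each connection's state. It is not a complete IMAP implementation; the line handling is a placeholder.

```typescript
import * as tls from "tls";

interface Flow {
  socket: tls.TLSSocket;
  buffer: string; // bytes received so far that don't yet form a complete line
}

const flows = new Map<string, Flow>(); // key: account id (or socket fd / five-tuple)

function openConnection(account: string): void {
  const socket = tls.connect(993, "imap.gmail.com", { servername: "imap.gmail.com" });
  const flow: Flow = { socket, buffer: "" };
  flows.set(account, flow);

  // Every socket is serviced by the same event loop; each 'data' event only
  // touches its own flow's state, so no locking is needed.
  socket.on("data", (chunk: Buffer) => {
    flow.buffer += chunk.toString("utf8");
    let idx: number;
    while ((idx = flow.buffer.indexOf("\r\n")) !== -1) {
      const line = flow.buffer.slice(0, idx);
      flow.buffer = flow.buffer.slice(idx + 2);
      handleImapLine(account, line); // per-flow IMAP state machine goes here
    }
  });

  socket.on("close", () => flows.delete(account));
}

function handleImapLine(account: string, line: string): void {
  // Placeholder: parse tagged/untagged responses and drive the protocol.
  console.log(`[${account}] ${line}`);
}
```

A pooled version of the same idea would shard accounts across worker threads or processes, keeping each flow pinned to exactly one worker so its packets are never handled out of order.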
I'm almost completely done with an iOS client for my REST service. The only thing I'm missing is the ability for the client to listen on the network for a UDP broadcast that carries the host display name and base URL for uploads. There could be multiple servers on the network broadcasting and waiting for uploads.
Asynchronous is preferred. The servers will be displayed to the user as the device discovers them and I want the user to be able to select a server at any point in time.
The broadcaster is sending to 255.255.255.255 and does not expect any data back.
I am a beginner in Objective-C, so something simple and easy to use is best.
I recommend looking at CocoaAsyncSocket. It can handle UDP sockets well. I haven't tried listening to a broadcast with it, but it's probably your best bet.
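I can't show the CocoaAsyncSocket code from experience, but the socket behaviour you need is small: bind to the broadcast port, then handle each incoming datagram and update the server list. Here is that behaviour sketched in TypeScript on Node (the port and packet format are placeholders for whatever your servers broadcast); the CocoaAsyncSocket delegate callback would receive the same data and sender address.

```typescript
import * as dgram from "dgram";

const DISCOVERY_PORT = 8787; // placeholder: whatever port your servers broadcast on

// reuseAddr lets other processes on the machine listen on the same broadcast port.
const socket = dgram.createSocket({ type: "udp4", reuseAddr: true });

socket.on("message", (msg: Buffer, rinfo: dgram.RemoteInfo) => {
  // Each datagram carries the server's display name and upload base URL;
  // parse it and add or refresh that server in the list shown to the user.
  console.log(`Announcement from ${rinfo.address}:${rinfo.port}: ${msg.toString("utf8")}`);
});

socket.bind(DISCOVERY_PORT, () => {
  console.log(`Listening for server announcements on UDP port ${DISCOVERY_PORT}`);
});
```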
This morning, there were big problems at work because an SNMP trap didn't "go through", because SNMP is run over UDP. I remember from my networking class in college that UDP doesn't guarantee delivery like TCP/IP does. And Wikipedia says that SNMP can be run over TCP/IP, but UDP is more common.
I get that some of the advantages of UDP over TCP/IP are speed, broadcasting, and multicasting. But it seems to me that guaranteed delivery is more important for network monitoring than broadcasting ability. Particularly when there are serious high-security needs. One of my coworkers told me that UDP packets are the first to be dropped when traffic gets heavy. That is yet another reason to prefer TCP/IP over UDP for network monitoring (IMO).
So why does SNMP use UDP? I can't figure it out and can't find a good reason on Google either.
UDP is actually expected to work better than TCP in lossy (or congested) networks. TCP is far better at transferring large quantities of data, but when the network fails, it's more likely that UDP will get through. (In fact, I recently did a study testing this, and it found that SNMP over UDP succeeded far better than SNMP over TCP in lossy networks when the UDP timeout was set properly.) Generally, TCP starts behaving poorly at about 5% packet loss and becomes completely useless at around 33%, while UDP will still succeed (eventually).
So the right thing to do, as always, is pick the right tool for the right job. If you're doing routine monitoring of lots of data, you might consider TCP. But be prepared to fall back to UDP for fixing problems. Most stacks these days can actually use both TCP and UDP.
As for sending TRAPs, yes, TRAPs are unreliable because they're not acknowledged. However, SNMP INFORMs are an acknowledged version of an SNMP TRAP. Thus, if you want to know that the notification receiver got the message, use INFORMs. Note that TCP does not solve this problem, as it only provides a transport-level acknowledgement that the message was received; there is no assurance that the notification-receiving application actually got it. SNMP INFORMs do application-level acknowledgement and are much more trustworthy than assuming a TCP ACK means the application got it.
If systems sent SNMP traps via TCP, they could block waiting for the packets to be ACKed if there was a problem getting the traffic to the receiver. If a lot of traps were generated, this could use up the available sockets on the system, and the system would lock up. With UDP that is not an issue, because it is stateless. A similar problem took out Bitbucket in January, although it involved the syslog protocol rather than SNMP: they were inadvertently using syslog over TCP due to a configuration error, the syslog server went down, and all of the servers locked up waiting for the syslog server to ACK their packets. If SNMP traps were sent over TCP, a similar problem could occur.
http://blog.bitbucket.org/2012/01/12/follow-up-on-our-downtime-last-week/
Check out O'Reilly's writings on SNMP: https://library.oreilly.com/book/9780596008406/essential-snmp/18.xhtml
One advantage of using UDP for SNMP traps is that you can send the traps to a broadcast address and then field them with multiple management stations on that subnet.
The use of traps with SNMP is considered unreliable. You really should not be relying on traps.
SNMP was designed to be used as a request/response protocol. The protocol details are simple (hence the name, "simple network management protocol"). And UDP is a very simple transport. Try implementing TCP on your basic agent - it's considerably more complex than a simple agent coded using UDP.
SNMP get/getnext operations have a retry mechanism: if a response is not received within the timeout, the same request is sent again, up to a maximum number of tries.
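Stripped of the SNMP encoding itself, that retry mechanism is just this transport-level loop (a sketch with hypothetical host, port, and payload): send the datagram, wait up to the timeout for a reply, and resend the identical request until a reply arrives or the retries are exhausted.

```typescript
import * as dgram from "dgram";

// Send one UDP request and retry on timeout, the way an SNMP get/getnext client does.
function requestWithRetry(
  host: string,
  port: number,
  payload: Buffer,
  timeoutMs = 1000,
  retries = 3
): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    const socket = dgram.createSocket("udp4");
    let attempt = 0;
    let timer: NodeJS.Timeout;

    const send = () => {
      attempt++;
      socket.send(payload, port, host);
      timer = setTimeout(() => {
        if (attempt <= retries) {
          send(); // same request, sent again
        } else {
          socket.close();
          reject(new Error("no response after retries"));
        }
      }, timeoutMs);
    };

    socket.on("message", (msg) => {
      clearTimeout(timer);
      socket.close();
      resolve(msg);
    });

    send();
  });
}

// Usage (hypothetical values): requestWithRetry("10.0.0.5", 161, encodedSnmpGet).then(...)
```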
Usually, when you're doing SNMP, you're on a company network; you're not doing this over the long haul. UDP can be more efficient. Let's look at (a gross oversimplification of) the conversation via TCP, then via UDP...
TCP version:
client sends SYN to server
server sends SYN/ACK to client
client sends ACK to server - socket is now established
client sends DATA to server
server sends ACK to client
server sends RESPONSE to client
client sends ACK to server
client sends FIN to server
server sends FIN/ACK to client
client sends ACK to server - socket is torn down
UDP version:
client sends request to server
server sends response to client
Generally, the UDP version succeeds, since it's on the same subnet or not far away (i.e. on the company network).
However, if there is a problem with either the initial request or the response, it's up to the app to decide. (A) Can we get by with a missed packet? If so, who cares; just move on. (B) Do we need to make sure the message is sent? Simple: just redo the whole thing... client sends request to server, server sends response to client. The application can include a number with each message so that, in case the recipient receives both copies, it knows it's really the same message being sent again.
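That "include a number" step is just a request ID carried in each datagram so the receiver can recognise a retransmission. A minimal receiver-side sketch (the framing and handler are made up for illustration):

```typescript
// Receiver-side dedup: if a retransmitted request arrives, recognise it by its ID
// and don't process it twice; just resend the cached response.
const seen = new Map<string, Buffer>(); // request id -> response already produced

function handleRequest(id: string, body: Buffer, reply: (response: Buffer) => void): void {
  const cached = seen.get(id);
  if (cached) {
    reply(cached); // duplicate of a request we already handled
    return;
  }
  const response = doWork(body); // do the real work exactly once
  seen.set(id, response);
  reply(response);
}

function doWork(body: Buffer): Buffer {
  return Buffer.from(`handled ${body.length} bytes`); // placeholder for the real handler
}
```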
This same technique is why DNS is done over UDP. It's much lighter weight and generally it works the first time because you are supposed to be near your DNS resolver.