I'm almost completely done with an iOS client for my REST service. The only thing I'm missing is the ability for the client to listen on the network for a UDP broadcast carrying the host's display name and a base URL for uploads. There could be multiple servers on the network broadcasting and waiting for uploads.
Asynchronous is preferred. The servers will be displayed to the user as the device discovers them and I want the user to be able to select a server at any point in time.
The broadcaster is sending to 255.255.255.255 and does not expect any data back.
I am a beginner in Objective-C, so something simple and easy to use is best.
I recommend looking at CocoaAsyncSocket; it handles UDP sockets well. I haven't tried listening for a broadcast with it, but it's probably your best bet.
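The receiving side of the pattern is simple: bind a UDP socket to the port the servers broadcast on and read datagrams in a loop. Here's a minimal sketch of that pattern in plain Java, just to show the shape of it (the port number and payload format are assumptions; adapt them to whatever your broadcaster actually sends). With CocoaAsyncSocket you'd do the same thing asynchronously: bind a GCDAsyncUdpSocket to the port, call beginReceiving, and handle each datagram in the delegate callback, appending each newly discovered server to your UI list as it arrives.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.nio.charset.StandardCharsets;

    public class ServerDiscovery {
        public static void main(String[] args) throws Exception {
            // Bind to the port the servers broadcast on (9999 is a placeholder).
            try (DatagramSocket socket = new DatagramSocket(9999)) {
                byte[] buf = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet); // blocks until a broadcast arrives
                    // Assumed payload format: "DisplayName|http://host/upload".
                    String announcement = new String(packet.getData(), 0,
                            packet.getLength(), StandardCharsets.UTF_8);
                    System.out.println("Discovered " + announcement
                            + " from " + packet.getAddress());
                }
            }
        }
    }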
Related
I am trying to find the best way to broadcast a camera feed and send the stream to 200 connections.
If I use WebRTC, I am limited by CPU power. I've tried using a server as a gateway, but the maximum number of connections I can handle is 60 (120 with 2 servers).
I can't use WebSockets to send the stream because TCP introduces latency.
Last option: the RTMP protocol, but it has 5-10 seconds of latency.
My question: is there a way to stream a camera to many clients (200-300) in real time?
Just using WebRTC peer-to-peer would not work, as the device with the camera would need huge upstream bandwidth. The best approach is an SFU (Selective Forwarding Unit): the device sends the video to the server once, and the server forwards it to every peer. An SFU can normally handle 200 connections if only video is used.
I've implemented such a server using mediasoup. It also allows you to balance the load over several CPUs and multiple servers.
Here is a simple project where this library is used.
There are also other solutions like Janus Gateway or Kurento, although I haven't used them myself.
SECOND SOLUTION
I found this GitHub repository, which allows peer-to-peer video forwarding even for large audiences: each peer forwards the stream it receives on to other peers. I assume there will be a little more latency, as the video can be relayed through many peers.
So I'm sort of lost and need some direction for research.
I want to build a real-time audio chat environment with as low latency as possible.
I've done a little research and it looks like I should use UDP rather than TCP, but I'm unsure how to go about it.
If I had a dedicated server running LAMP, would I run a separate application to listen for and serve UDP packets?
Any direction would be appreciated.
When it comes to audio or video, the best choice is almost always UDP. UDP is a connectionless protocol with no error control: when you send data over UDP you don't know whether it will arrive, and packets can be corrupted in transit. For audio or video that trade-off is usually acceptable, since a lost packet is just a brief glitch, whereas waiting for a retransmission would add latency.
You can create a dedicated server that listens on a UDP port, receives all incoming data, and forwards it to the corresponding clients.
You'll need to learn socket programming: pick your preferred language and learn how to use its socket API. That's the way to go.
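To make that concrete, here is a minimal sketch (in Java, just to show the shape of it) of a relay server that receives audio datagrams and forwards them to every other client it has heard from. The port and buffer size are assumptions, and a real voice server would add codec handling, jitter buffering, and session management:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.SocketAddress;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal UDP relay: every datagram received is forwarded to every other
    // client we have heard from. No error control, which is UDP's trade-off.
    public class AudioRelay {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket(5004)) { // placeholder port
                Set<SocketAddress> clients = ConcurrentHashMap.newKeySet();
                byte[] buf = new byte[2048];
                while (true) {
                    DatagramPacket in = new DatagramPacket(buf, buf.length);
                    socket.receive(in);
                    clients.add(in.getSocketAddress()); // learn clients from their packets
                    for (SocketAddress client : clients) {
                        if (!client.equals(in.getSocketAddress())) {
                            socket.send(new DatagramPacket(in.getData(),
                                    in.getLength(), client));
                        }
                    }
                }
            }
        }
    }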
If I were to implement a server that handles multiple clients connecting simultaneously, would it be better to use TCP?
Not taking efficiency into account (I know UDP is quicker, but unreliable).
In UDP you can't have a socket for each client connection, right? Because in UDP the socket is identified only by the destination port number (or so I understand).
In Java, I know it is easy to create a multi-threaded server to handle multiple clients at the same time in TCP. But can it be done in UDP? I imagine that it would be very complicated.
I'm just trying to get an understanding of UDP here (I don't want to actually implement anything).
It depends on what kind of server you are developing. If you need your clients to stay connected and ready to receive data from the server (for example, a push service), you should implement it using TCP. If you want to implement a simple request-response service, then UDP is the better choice.
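On the "one socket per client" point: with UDP there is just one socket, and every incoming datagram carries its sender's address and port, so the server demultiplexes clients itself and replies to whoever sent the request. A multi-threaded UDP server in Java is therefore quite doable. A rough sketch (the port and pool size are arbitrary):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // One UDP socket serves many clients: each packet carries its sender's
    // address, so the server just replies to whoever sent it.
    public class UdpEchoServer {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            try (DatagramSocket socket = new DatagramSocket(4000)) {
                while (true) {
                    byte[] buf = new byte[1024]; // fresh buffer per request
                    DatagramPacket request = new DatagramPacket(buf, buf.length);
                    socket.receive(request);
                    pool.execute(() -> {
                        try {
                            // Echo back to the address/port the request came from.
                            socket.send(new DatagramPacket(request.getData(),
                                    request.getLength(), request.getSocketAddress()));
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                }
            }
        }
    }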
This morning there were big problems at work: an SNMP trap didn't "go through" because SNMP runs over UDP. I remember from my college networking class that UDP doesn't guarantee delivery the way TCP does. And Wikipedia says that SNMP can be run over TCP, but UDP is more common.
I get that some of the advantages of UDP over TCP are speed, broadcasting, and multicasting. But it seems to me that guaranteed delivery is more important for network monitoring than broadcasting ability, particularly when there are serious high-security needs. One of my coworkers told me that UDP packets are the first to be dropped when traffic gets heavy, which seems like yet another reason to prefer TCP for network monitoring (IMO).
So why does SNMP use UDP? I can't figure it out and can't find a good reason on Google either.
UDP is actually expected to work better than TCP in lossy (or congested) networks. TCP is far better at transferring large quantities of data, but when the network fails, it's more likely that UDP will get through. (In fact, I recently did a study testing this, and it found that SNMP over UDP succeeded far better than SNMP over TCP in lossy networks when the UDP timeout was set properly.) Generally, TCP starts behaving poorly at about 5% packet loss and becomes completely useless at around 33%, while UDP will still succeed (eventually).
So the right thing to do, as always, is pick the right tool for the right job. If you're doing routine monitoring of lots of data, you might consider TCP. But be prepared to fall back to UDP for fixing problems. Most stacks these days can actually use both TCP and UDP.
As for sending TRAPs: yes, TRAPs are unreliable because they're not acknowledged. However, SNMP INFORMs are an acknowledged version of an SNMP TRAP. Thus, if you want to know that the notification receiver got the message, use INFORMs. Note that TCP does not solve this problem: a TCP ACK is only a transport-level acknowledgement that the message reached the remote stack, with no assurance that the notification receiver application actually got it. SNMP INFORMs do application-level acknowledgement and are much more trustworthy than assuming a TCP ACK means the receiver got it.
If systems sent SNMP traps via TCP, they could block waiting for the packets to be ACKed if there was a problem getting the traffic to the receiver. If a lot of traps were generated, this could use up the available sockets on the system and the system would lock up. With UDP that is not an issue, because UDP is stateless. A similar problem took out Bitbucket in January, although it involved the syslog protocol rather than SNMP: due to a configuration error they were inadvertently using syslog over TCP, the syslog server went down, and all of the servers locked up waiting for the syslog server to ACK their packets. If SNMP traps were sent over TCP, a similar problem could occur.
http://blog.bitbucket.org/2012/01/12/follow-up-on-our-downtime-last-week/
Check out O'Reilly's writings on SNMP: https://library.oreilly.com/book/9780596008406/essential-snmp/18.xhtml
One advantage of using UDP for SNMP traps is that you can send UDP to a broadcast address and then field the traps with multiple management stations on that subnet.
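For illustration, sending a datagram to the broadcast address is a one-liner on top of a normal UDP socket. A minimal Java sketch, with a placeholder port and payload rather than anything SNMP-specific:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Send one datagram to the broadcast address; every station listening
    // on that port on the subnet will receive it.
    public class BroadcastSend {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true); // required before sending to a broadcast address
                byte[] payload = "notification".getBytes();
                socket.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getByName("255.255.255.255"), 9999));
            }
        }
    }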
The use of traps with SNMP is considered unreliable. You really should not be relying on traps.
SNMP was designed to be used as a request/response protocol. The protocol details are simple (hence the name, "simple network management protocol"). And UDP is a very simple transport. Try implementing TCP on your basic agent - it's considerably more complex than a simple agent coded using UDP.
SNMP get/getnext operations have a retry mechanism: if a response is not received within the timeout, the same request is re-sent, up to a maximum number of tries.
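That timeout-and-retry loop is easy to express with a plain UDP socket. A rough Java sketch (the timeout, retry count, and buffer size are arbitrary, and a real SNMP stack would also match request IDs before accepting a reply):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;
    import java.util.Arrays;

    // SNMP-style retry loop: send the request, wait for a reply up to a
    // timeout, and resend up to maxRetries times before giving up.
    public class RetryingRequest {
        static byte[] request(byte[] requestBytes, InetAddress host, int port,
                              int timeoutMs, int maxRetries) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(timeoutMs);
                for (int attempt = 0; attempt <= maxRetries; attempt++) {
                    socket.send(new DatagramPacket(requestBytes,
                            requestBytes.length, host, port));
                    try {
                        byte[] buf = new byte[1500];
                        DatagramPacket reply = new DatagramPacket(buf, buf.length);
                        socket.receive(reply); // throws once timeoutMs elapses
                        return Arrays.copyOf(reply.getData(), reply.getLength());
                    } catch (SocketTimeoutException e) {
                        // No reply in time: fall through and resend.
                    }
                }
            }
            throw new Exception("no response after " + (maxRetries + 1) + " tries");
        }
    }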
Usually, when you're doing SNMP, you're on a company network, you're not doing this over the long haul. UDP can be more efficient. Let's look at (a gross oversimplification of) the conversation via TCP, then via UDP...
TCP version:
client sends SYN to server
server sends SYN/ACK to client
client sends ACK to server - socket is now established
client sends DATA to server
server sends ACK to client
server sends RESPONSE to client
client sends ACK to server
client sends FIN to server
server sends FIN/ACK to client
client sends ACK to server - socket is torn down
UDP version:
client sends request to server
server sends response to client
Generally, the UDP version succeeds, since the server is on the same subnet or not far away (i.e., on the company network).
However, if either the initial request or the response is lost, it's up to the application to decide what to do. (A) Can we get by with a missed packet? If so, who cares, just move on. (B) Do we need to make sure the message got through? Then simply redo the whole exchange: the client resends the request, and the server resends the response. The application can tag each message with a number so that if the recipient receives the same message twice, it knows it's the same message being sent again.
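A sketch of that duplicate-detection piece in Java: the server caches its last reply per client so a retransmitted request simply gets the same answer again. The names and structure here are illustrative, not from any particular protocol:

    import java.net.SocketAddress;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Duplicate suppression for the "just redo the whole thing" scheme: the
    // server caches its last reply per client and replays it when the same
    // request id shows up again (i.e. a lost-response retransmit).
    public class DedupCache {
        private record Cached(long requestId, byte[] reply) {}
        private final Map<SocketAddress, Cached> lastReply = new ConcurrentHashMap<>();

        // Returns the cached reply for a duplicate, or null for a fresh request.
        public byte[] checkDuplicate(SocketAddress client, long requestId) {
            Cached c = lastReply.get(client);
            return (c != null && c.requestId() == requestId) ? c.reply() : null;
        }

        // Record the reply we sent so a retransmitted request gets the same answer.
        public void remember(SocketAddress client, long requestId, byte[] reply) {
            lastReply.put(client, new Cached(requestId, reply));
        }
    }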
This same technique is why DNS is done over UDP. It's much lighter weight and generally it works the first time because you are supposed to be near your DNS resolver.
Suppose a client sends a number of datagrams to a server through my application. If my application on the server side stops working and cannot receive any datagrams, but the client continues to send more datagrams to the server over UDP, where do those datagrams go? Do they sit in the server OS's receive buffer (or something like that)?
I ask because I want to know: if a client sends 1000 datagrams (1 KB each) to a PC over the internet, will those 1000 datagrams travel across the internet (consuming bandwidth) even if no one is listening for them?
If the answer is yes, how can I stop this from happening? I mean, if the server stops functioning, how can I detect that over UDP and stop any further sending?
Thanks
I ask because I want to know: if a client sends 1000 datagrams (1 KB each) to a PC over the internet, will those 1000 datagrams travel across the internet (consuming bandwidth) even if no one is listening for them?
Yes
If the answer is yes, how can I stop this from happening? I mean, if the server stops functioning, how can I detect that over UDP and stop any further sending?
You need a protocol-level control loop, i.e., you need to implement a protocol on top of UDP to take care of this situation. UDP isn't connection-oriented, so it is up to the application that uses UDP to account for this failure mode.
UDP itself provides no facility for determining whether a message was successfully received by the other side. You could use TCP to establish a reliable control connection and then send the actual data over UDP.
The lowest-overhead solution would be a keep-alive mechanism like jdupont suggested. You could also switch to TCP, which provides this facility for you.
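A sketch of that keep-alive idea, with assumed message formats and timeout (a real implementation would probe periodically in the background rather than once per check):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;

    // Keep-alive probe: send a small "ping" and expect any reply within a
    // timeout. No reply means we treat the server as down and stop sending.
    public class KeepAliveProbe {
        static boolean serverAlive(InetAddress host, int port, int timeoutMs) {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(timeoutMs);
                byte[] ping = "ping".getBytes(); // message format is an assumption
                socket.send(new DatagramPacket(ping, ping.length, host, port));
                byte[] buf = new byte[16];
                socket.receive(new DatagramPacket(buf, buf.length)); // wait for "pong"
                return true;
            } catch (SocketTimeoutException e) {
                return false; // no reply: assume the server is gone
            } catch (Exception e) {
                return false;
            }
        }
    }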