Maximum Payload and Validity of a UDP Server

I have created a UDP server-client application. There is only a single thread on the server's side, which continuously executes recvfrom().
If I run 3 clients simultaneously from 3 different machines and send some data, the server is able to read the data from each of the clients.
But how can I test the reliability of this application?
How would I know the maximum number of clients this server can handle at a time?
Also, what is the maximum payload?

But how can I test the reliability of this application?
Run as many clients as you can. The more clients you can run and the more data you can send, the better. Try to run clients on several different machines, run as many clients as you can on each machine, and have them send data automatically.
Make the clients send data in a loop, without waiting for input, and put a delay between each call to send, as in the sketch below. A few seconds of delay is fine to start with; you can then lower the delay and see how your server handles it.
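A minimal sketch of such a load-generating client in C; the server address and port are placeholders for whatever your server actually uses.

```c
/* Minimal UDP load-test client: sends a datagram in a loop with a delay.
   SERVER_IP and SERVER_PORT are placeholders for your actual server. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define SERVER_IP   "192.168.1.10"   /* assumed server address */
#define SERVER_PORT 5000             /* assumed server port */

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(SERVER_PORT);
    inet_pton(AF_INET, SERVER_IP, &srv.sin_addr);

    const char *msg = "hello from load client";
    for (;;) {
        if (sendto(sock, msg, strlen(msg), 0,
                   (struct sockaddr *)&srv, sizeof(srv)) < 0)
            perror("sendto");
        sleep(2);   /* start with a few seconds, then lower the delay */
    }
}
```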
How would I know the maximum number of clients this server can handle at a time?
You can't, at least not exactly. You are using a UDP server, and UDP is connectionless: clients do not need to connect to the server to send data, they just send it. The practical limit is set by the resources available on your server (CPU, memory, socket buffer space, network bandwidth).
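For reference, this is why there is no hard client count: a single-socket UDP server never tracks connections, it just learns each sender's address from recvfrom(). A minimal sketch (the port number is an assumption):

```c
/* Minimal single-threaded UDP server: one socket, recvfrom() in a loop.
   Each datagram carries its sender's address, so no connections are kept. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5000);          /* assumed port */
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    char buf[65536];
    for (;;) {
        struct sockaddr_in peer;
        socklen_t peerlen = sizeof(peer);
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &peerlen);
        if (n < 0) { perror("recvfrom"); continue; }

        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
        printf("%zd bytes from %s:%u\n", n, ip, (unsigned)ntohs(peer.sin_port));
    }
}
```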
Also, what is the maximum payload?
The maximum payload of what? A UDP message? The UDP length field is 16 bits, so over IPv4 the theoretical maximum payload of a single datagram is 65,507 bytes (65,535 minus the 8-byte UDP header and the 20-byte IP header). In practice, datagrams larger than the path MTU get fragmented, so many applications keep payloads at or below roughly 1,472 bytes on Ethernet. You can read more about the UDP packet structure.
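As an illustration of the limit (assuming a typical Linux/BSD stack), a sendto() larger than the maximum datagram size fails outright instead of being split by UDP:

```c
/* Sending an oversized UDP datagram: sendto() fails (typically with
   EMSGSIZE) rather than splitting the data. Port and size are arbitrary. */
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {0};
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(5000);             /* assumed port */
    dst.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    size_t len = 70000;                            /* > 65,507-byte limit */
    char *buf = calloc(1, len);
    if (sendto(sock, buf, len, 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        printf("sendto failed: %s\n", strerror(errno));
    free(buf);
    return 0;
}
```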

Related

How to test a UDP server limit?

A server is listening on a UDP port and many clients can connect to it; the clients are organized into groups. Within a group, one client sends a message and the server needs to route that message to the rest of the group. Many such groups could be running simultaneously. How can we test the maximum number of connections the server can handle without inducing a visible lag in the response time?
Firstly, let me describe your network topology again. There is a server and many clients, and the clients are divided into several groups. A client sends a message to the server, and the server then sends something to the other clients in that group.
If the topology is as I describe above, is the limit you want to find how many clients the server can send to at the same time, or how many clients can send to the server at the same time?
Either case can be tested by spawning many concurrent senders, for example with multiple threads (or goroutines, if you can write the test in Go), as sketched below, but the two cases need different criteria for deciding when the limit has been reached.
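A minimal sketch of such a load generator in C with POSIX threads: each thread plays one client, sends a request, and measures the round-trip time, assuming the server replies to the sender. The address, port, thread count, and message rate are all placeholders; build with `cc -pthread`.

```c
/* Multithreaded UDP load generator: each thread acts as one client,
   sends a datagram, waits briefly for the server's reply, and prints
   the round-trip time. Server address/port and NUM_CLIENTS are assumed. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

#define NUM_CLIENTS 100
#define SERVER_IP   "192.168.1.10"
#define SERVER_PORT 5000

static void *client_thread(void *arg)
{
    long id = (long)arg;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    struct timeval timeout = {1, 0};   /* don't block forever on a lost reply */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));

    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(SERVER_PORT);
    inet_pton(AF_INET, SERVER_IP, &srv.sin_addr);

    char msg[64], reply[1500];
    snprintf(msg, sizeof(msg), "hello from client %ld", id);

    for (int i = 0; i < 100; i++) {
        struct timeval start, end;
        gettimeofday(&start, NULL);
        sendto(sock, msg, strlen(msg), 0,
               (struct sockaddr *)&srv, sizeof(srv));
        if (recv(sock, reply, sizeof(reply), 0) > 0) {
            gettimeofday(&end, NULL);
            long us = (end.tv_sec - start.tv_sec) * 1000000L
                    + (end.tv_usec - start.tv_usec);
            printf("client %ld: round trip %ld us\n", id, us);
        } else {
            printf("client %ld: reply lost or late\n", id);
        }
        usleep(100000);            /* 100 ms between requests */
    }
    close(sock);
    return NULL;
}

int main(void)
{
    pthread_t tids[NUM_CLIENTS];
    for (long i = 0; i < NUM_CLIENTS; i++)
        pthread_create(&tids[i], NULL, client_thread, (void *)i);
    for (int i = 0; i < NUM_CLIENTS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```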

Losing data with UDP over WiFi when multicasting

I'm currently working on a network protocol which includes a client-to-client system with auto-discovery of clients on the current local network.
Right now, I'm periodically broadcasting over 255.255.255.255, and if a client doesn't emit anything for 30 seconds I consider it dead (and therefore offline). The goal is to keep an up-to-date list of running clients. It works well with UDP, but UDP does not ensure that the packets are successfully delivered, so on the Wi-Fi parts of the network I sometimes get "false positives" of dead clients. For now I've reduced the time between two broadcasts to mitigate the issue (it still doesn't work well), but I don't find this clean.
Is there anything I can do to keep a list of "online" clients without this risk of false positives?
To minimize the false positives due to dropped packets, you should slightly alter the logic of your heartbeat protocol.
Rather than relying on a single broadcast packet every N seconds, send a burst of 3 or more packets immediately one after the other every N seconds; a sketch of such a sender follows below. This is the approach that the ping and traceroute tools follow, and it significantly decreases the probability of a lost announcement from a peer.
Furthermore, you can allow for a certain number of consecutive missed announcements before your application declares a peer dead. Also, to minimize the chance of packet loss on the wireless network, keep the broadcast UDP packet as small as possible.
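A minimal sketch of such a burst sender in C; the discovery port, burst size, interval, and payload are assumptions, and the receiving side would count missed announcement rounds rather than single packets.

```c
/* Heartbeat sender: broadcasts a small burst of announcements every
   PERIOD_S seconds, so one dropped packet does not mark a peer dead.
   The port, burst size, interval, and payload are all assumptions. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define HB_PORT  9999   /* assumed discovery port */
#define BURST    3      /* packets per announcement round */
#define PERIOD_S 5      /* seconds between rounds */

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

    struct sockaddr_in dst = {0};
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(HB_PORT);
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);  /* 255.255.255.255 */

    const char *hb = "HB:client-42";                /* keep the payload small */
    for (;;) {
        for (int i = 0; i < BURST; i++)
            sendto(sock, hb, strlen(hb), 0,
                   (struct sockaddr *)&dst, sizeof(dst));
        sleep(PERIOD_S);
    }
}
```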
You can also turn this around: broadcast a "ServerIsUp" message, and every client can then register with the server. When a client goes offline it unregisters; otherwise you can consider it alive.

Using MSMQ to Store Data before Processing

I am new to the forum and new to MSMQ.
I have been asked to research it, to see if it will help our business, but I'm still not sure if and how it works. Here is a short summary.
We have a service provider that receives messages via mobile phone and passes certain info (such as the cell number, the text of the message, etc.) to a URL that we have given them, which is an app we created; the app then processes the data and stores it in our database, etc.
However, since we at times receive between a few hundred and a few thousand messages at any given time (spread out or all at once), we get timeouts.
What I would like to know is: is it possible to get this info stored into a queue using MSMQ before it hits our URL (the one we provided to our service provider), so that we can avoid the timeouts?
I hope this makes sense and that someone can help!
Thank you!
MSMQ is a transport protocol. It is designed to get data from A to B as fast and as reliably as possible. As it uses store-and-forward, it can be used as a data buffer at the sender. In your case, though, that's not important. I would recommend:
1) ISP sends MSMQ message over HTTP(S) to your server.
2) Message is stored in the server's local queue.
3) Your app reads messages from the queue.
4) Your app writes to the database.
5) Go to step 3.
So you would be buffering the messages AFTER they reach your server.
The messages would also be buffered at the ISP in case of a network outage between them and you.
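For illustration, a rough sketch of step 3 using the native MSMQ C API; your consuming app may well be .NET, where the System.Messaging MessageQueue class exposes the equivalent receive operation. The queue name is a placeholder and error handling is minimal.

```c
/* Rough sketch of step 3: read messages from the server's local queue
   with the native MSMQ C API (link against mqrt.lib). The queue name
   is a placeholder; a real consumer would handle MQ_ERROR_BUFFER_OVERFLOW
   by retrying with a larger buffer. */
#include <windows.h>
#include <mq.h>
#include <stdio.h>

int main(void)
{
    QUEUEHANDLE hQueue;
    HRESULT hr = MQOpenQueue(
        L"DIRECT=OS:.\\private$\\incoming_sms",     /* assumed queue */
        MQ_RECEIVE_ACCESS, MQ_DENY_NONE, &hQueue);
    if (FAILED(hr)) return 1;

    for (;;) {
        UCHAR body[4096];
        MSGPROPID     ids[2]  = { PROPID_M_BODY, PROPID_M_BODY_SIZE };
        MQPROPVARIANT vars[2] = {0};
        vars[0].vt          = VT_VECTOR | VT_UI1;   /* body buffer we supply */
        vars[0].caub.pElems = body;
        vars[0].caub.cElems = sizeof(body);
        vars[1].vt          = VT_NULL;              /* MSMQ fills in the size */

        MQMSGPROPS props = { 2, ids, vars, NULL };

        /* Block until a message arrives and remove it from the queue. */
        hr = MQReceiveMessage(hQueue, INFINITE, MQ_ACTION_RECEIVE,
                              &props, NULL, NULL, NULL, MQ_NO_TRANSACTION);
        if (FAILED(hr)) break;

        /* Hand the body to the existing processing code, which parses the
           message and writes it to the database. */
        printf("received %lu bytes\n", vars[1].ulVal);
    }
    MQCloseQueue(hQueue);
    return 0;
}
```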

NServiceBus - Stopping a long-running process?

Here is the application I'm attempting to put together using NServiceBus:
I have 1000 files that need to be processed by a service. So far I'm thinking I'd have one endpoint, the client, find all of those files and send them out on the bus to be processed.
My other endpoint, the server that does the processing, would listen for these client messages; when one comes in, it processes the file and returns the results.
The client takes the results, marks the file as processed, and waits for the remaining 999 files to be processed. The client doesn't care about the order in which the messages come back, just as long as they all get processed at some point. (In reality the client is going to do something more with the data after it is processed that can't be done by the server, so I can't just fire and forget the request for processing.)
Since processing a single message can take over an hour, I would scale out the application to have multiple servers all working through the 1000 files that need to be processed.
Conceptually, it's like building a personal SETI@home service to run on all of my servers.
The issue I'm having is: how do I stop midway through processing the 1000 files?
I want to keep all of my servers working on my data as much as they can, so when the client starts, does it publish 1000 commands for the 1000 files to process and then sit back and wait? And if it does this and later decides to stop, how can it clear the bus of all of those file-processing commands?
If my client only pushes one or two messages onto the bus at a time, I could easily stop sending messages when I decide to stop on the client, but then I have two other problems:
The servers could be underutilized and I'd end up with idle servers.
How do I stop the servers that are already loaded up and processing data? Send them a second command in a different message format?
Thoughts, ideas? Am I approaching this problem with the right tool/right methodology?
One of the things you might want to think about is how you are going to correlate the message processing. I would use a saga for this and have the client generate some kind of batch id which is attached to all the files to be processed. This allows your client to send a CancelProcessing message to the saga; the handler for that message can then stop the processing / sending of messages to the file-processing endpoints and perform any clean-up operations, such as completing the saga and removing data from the database.
So you would have a client endpoint, a saga endpoint, and one or more file-processing endpoints (which would sit behind a distributor). Your client would be responsible for initiating / sending the files to the saga. The saga manages the file correlation and processing activities, while your processing endpoints focus on doing the work.
Remember that the processing endpoints don't necessarily have to be physical endpoints. You can have many of them on one server if you want to, and use monitoring tools to determine whether you need to add or remove nodes.

multiple UDP ports

I have a situation where I have to handle multiple live UDP streams on the server.
I have two options (as I see it):
Single socket:
1) Listen on a single port on the server, receive the data from all clients on that port, and create a thread per client to process the data until that client stops sending.
Here only one port is used to receive the data, and a number of threads are used to process it.
Multiple sockets:
2) The client requests an open port from the server, the application sends the open port back to the client and opens a new thread listening on that port to receive and process the data. Here each client has a unique port to send its data to.
I have already implemented a way to know which packet is coming from which client in UDP.
I have 1000+ clients and I am receiving 60 KB of data per second.
Are there any performance issues with the above methods,
or is there a more efficient way to handle this type of task in C?
Thanks,
Raghu
With that many clients, having one thread per client is very inefficient, since lots and lots of context switches must be performed.
Also, the number of ports you can open per IP address is limited (a port is a 16-bit number).
Therefore "Single Socket" will be far more efficient. But you can also use "Multiple Sockets" with just a single thread by using an asynchronous (multiplexed) API, as sketched below. If you can identify the client from the packet's payload, there is no need to have a port per client.