I have a TCP server (listener) written in C#. Many devices (approximately 5000) will connect to the server asynchronously and send/receive messages to/from it. Now, I have two questions.
I have to send a reply to every received message. Which approach should I use: asynchronous (replying as soon as each message is received) or synchronous (sending replies from a dedicated reply task)?
How can I stress test my server? I can communicate with 1-2 computers successfully, but I don't know whether my software will hold up with 5000 devices.
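Not part of the original post, but for the stress-test question one common approach is to drive the server with thousands of concurrent connections from one or more test machines. A rough load-generator sketch in Go (the server itself is C#, but the test client can be written in anything); the host, port, payload, and line-based framing are assumptions:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"sync"
)

func main() {
	const clients = 5000 // number of simulated devices
	var wg sync.WaitGroup
	for i := 0; i < clients; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Hypothetical address of the server under test.
			conn, err := net.Dial("tcp", "server-under-test:9000")
			if err != nil {
				log.Printf("client %d: connect failed: %v", id, err)
				return
			}
			defer conn.Close()
			// Send one message and wait for the reply (assumes newline framing).
			fmt.Fprintf(conn, "hello from %d\n", id)
			reply, err := bufio.NewReader(conn).ReadString('\n')
			if err != nil {
				log.Printf("client %d: read failed: %v", id, err)
				return
			}
			log.Printf("client %d got: %s", id, reply)
		}(i)
	}
	wg.Wait()
}
```

If a single test machine runs out of ephemeral ports or CPU before reaching 5000 connections, run several copies of the generator from different machines.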
Judging from what you're saying, your server (listener) is expected to be available to respond to multiple requests at any given time. The key question is how it has been implemented. Can it serve multiple clients at the same time, perhaps by using multiple threads? Or does it keep a queue of all incoming requests and serve them in order, or use some other method entirely?
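To make the "serve multiple clients at the same time" option concrete, here is a minimal sketch (in Go rather than the question's C#, purely for illustration): one lightweight handler per accepted connection, replying to each message as soon as it arrives. The port and the line-based protocol are assumptions:

```go
package main

import (
	"bufio"
	"log"
	"net"
)

func handle(conn net.Conn) {
	defer conn.Close()
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		// Reply immediately to each received message instead of queueing it.
		conn.Write([]byte("ack: " + scanner.Text() + "\n"))
	}
}

func main() {
	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go handle(conn) // one goroutine per client, so clients are served concurrently
	}
}
```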
Currently we plan to upgrade our product to use an MQ (RabbitMQ or ActiveMQ) for message transfer between server and client. At the moment we use a networking library (evpp) for this.
Because I haven't used an MQ before, beyond the long feature list I can't figure out the essential difference between the two, and I don't know exactly when and where we should use an MQ and when a plain network library is fine.
Our purpose in adopting an MQ is to address the unreliability of communication, such as message loss and other problems caused by an unstable network environment.
I hope someone familiar with both can clear up my confusion. Thanks in advance.
Message queuing systems (MQ, Qpid, RabbitMQ, Kafka, etc.) are higher-layer systems purpose-built for handling messages reliably and flexibly.
Network programming libraries/frameworks (ACE, asio, etc.) are helpful tools for building message queueing (and many other types of) systems.
Note that in the case of ACE, which encompasses much more than just networking, you can use a message queuing system like the above and drive it with a program that also uses ACE's classes for thread management, OS abstraction, event handling, etc.
As in any network programming, when a client sends a request to the server, the server sends back a response. But for this to happen, the following conditions must be met:
The server must be up and running
The client must be able to establish a connection to it
The connection must not break while the server is sending the response to the client, or vice versa
But with a message queue, whatever the server wants to tell the client is placed in a message queue, i.e. a separate server/instance. The client listens to the message queue and processes the message. On a positive acknowledgement from the client, the message is removed from the queue. Obviously the server still has to make a connection to the message-queue instance in order to push a message, but even if the client is down, the message stays in the queue.
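A minimal sketch of that flow, assuming a local RabbitMQ broker and the streadway/amqp Go client; the queue name and payload are illustrative only. The point is the manual acknowledgement: the broker removes the message only after the consumer acks it, so the message survives while the client is down:

```go
package main

import (
	"log"

	amqp "github.com/streadway/amqp"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	ch, _ := conn.Channel()

	// Durable queue: messages survive broker restarts and wait for a consumer.
	q, _ := ch.QueueDeclare("notifications", true, false, false, false, nil)

	// "Server" side: push a message into the queue.
	ch.Publish("", q.Name, false, false, amqp.Publishing{
		DeliveryMode: amqp.Persistent,
		Body:         []byte("hello client"),
	})

	// "Client" side: consume with manual acks; the broker deletes the message
	// only after the positive acknowledgement below.
	msgs, _ := ch.Consume(q.Name, "", false /* autoAck */, false, false, false, nil)
	for m := range msgs {
		log.Printf("received: %s", m.Body)
		m.Ack(false)
	}
}
```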
I have an existing system consisting of two nodes, a client/server model.
I want to exchange messages between them using RabbitMQ, i.e. the client would send all its requests to RabbitMQ, and the server would listen to the queue indefinitely, consume any message that arrives, and then act upon it.
I can change the server as needed, but my problem is, I cannot change the client's behavior. How can I send back the response to the client?
The client node only understands HTTP request/response; what should I do once its requests go through RabbitMQ instead of to my app directly?
You can use the RPC model, or some internal convention such as storing the result in a database (or cache) under a known ID and polling your storage for that result in a loop.
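A small sketch of the second option, polling a shared store for the result under a known request ID; the Store interface, polling interval, and timeout are all made up for illustration:

```go
package pollingexample

import (
	"errors"
	"time"
)

// Store stands in for whatever shared storage the server writes results into
// (a database table, a Redis key, a cache entry, ...).
type Store interface {
	Get(id string) (result []byte, ok bool)
}

// waitForResult polls the store until the result for id appears or we give up.
func waitForResult(s Store, id string, timeout time.Duration) ([]byte, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if result, ok := s.Get(id); ok {
			return result, nil
		}
		time.Sleep(200 * time.Millisecond) // polling interval
	}
	return nil, errors.New("timed out waiting for result " + id)
}
```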
You will have to put a proxy server in between that appears to node 1 (the client you cannot change) to be the actual server, while it merely injects requests into the queueing server. You will also have to use two queues.
For clarity, let's enumerate the system players:
The client
The proxy server, a server that offers the same API as the actual server (but doesn't do any of the work)
The actual server, the server that does the actual work
The input queue, the queue that client requests go into (the proxy server puts them there)
The output queue, the queue that server responses go into (the actual server puts them there)
A possible working scenario:
A client sends a request to the proxy server
The proxy server puts the request in input queue
The actual server (listening to the input queue) will fetch the request
The actual server processes the request
The actual server sends the response to the output queue
The proxy server (listening to the output queue) will fetch the response
The proxy server returns the response to the client
This might work, but a few problems could arise. For example, because the proxy server doesn't know when the actual server will respond, and cannot be sure of the order of responses in the output queue, it may have to re-inject messages it finds that are not relevant back into the output queue until it finds the correct one.
Alternatively, the proxy server might feed the response to the client later via a separate HTTP request to the client. That is, rather than receiving a response to its original request, the client would expect no immediate response, knowing that it will get the answer later via a request from the proxy server.
I'm not aware of the situation at your end, but this might work!
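To make the scenario concrete, and to sidestep the ordering problem mentioned above, one variant is the RPC pattern: the proxy tags each request with a correlation ID and its own reply queue, so it only ever consumes responses meant for it. A rough Go sketch using the streadway/amqp and google/uuid packages; the queue names, the HTTP route, the assumption that the actual server replies to the ReplyTo queue, and the omitted error handling are all simplifications:

```go
package main

import (
	"io"
	"net/http"
	"sync"

	"github.com/google/uuid"
	amqp "github.com/streadway/amqp"
)

func main() {
	conn, _ := amqp.Dial("amqp://guest:guest@localhost:5672/") // error handling omitted
	ch, _ := conn.Channel()

	// Exclusive reply queue that only this proxy consumes from. The actual
	// server is assumed to publish its response to the ReplyTo queue it
	// receives, echoing the CorrelationId.
	replyQ, _ := ch.QueueDeclare("", false, true, true, false, nil)
	deliveries, _ := ch.Consume(replyQ.Name, "", true, false, false, false, nil)

	// Map of correlation ID -> channel of the handler waiting for that response.
	var pending sync.Map

	// Dispatcher: route each response to the handler that is waiting for it.
	go func() {
		for d := range deliveries {
			if c, ok := pending.Load(d.CorrelationId); ok {
				c.(chan []byte) <- d.Body
			}
		}
	}()

	http.HandleFunc("/api", func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		corrID := uuid.NewString()
		result := make(chan []byte, 1)
		pending.Store(corrID, result)
		defer pending.Delete(corrID)

		// Step 2: put the client's request on the input queue.
		ch.Publish("", "input", false, false, amqp.Publishing{
			CorrelationId: corrID,
			ReplyTo:       replyQ.Name,
			Body:          body,
		})

		// Steps 6-7: wait for the matching response and return it to the client.
		w.Write(<-result)
	})
	http.ListenAndServe(":8080", nil)
}
```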
I'm trying to get some feedback on recommendations for a service "roster" in my specific application. I have a server app that maintains persistent socket connections with clients. I want to develop the server further to support distributed instances. Server "A" would need to be able to broadcast data to the other online server instances, and the same goes for all other active instances.
Options I am trying to research:
Redis / ZooKeeper / Doozer - Each server instance would register itself with the configuration server, and all connected servers would receive configuration updates as they change. What then?
Maintain end-to-end connections with each server instance and iterate over the list for each piece of outgoing data?
Some custom UDP multicast, but I would need to roll my own reliability layer on top of it.
Custom message broker - A service that maintains a registry as each server connects and registers itself; it keeps a connection to each server to accept data and re-broadcast it to the other servers.
Some reliable UDP multicast transport where each server instance just broadcasts directly and no roster is maintained.
Here are my concerns:
I would love to avoid relying on external apps like ZooKeeper or Doozer, but I would obviously use them if they are the best solution.
With a custom message broker, I wouldn't want it to become a throughput bottleneck, which might mean I would also have to run multiple message brokers and use a load balancer when scaling.
Multicast doesn't require any external processes if I manage to roll my own, but otherwise I might need to use ZMQ, which again leaves me with an external dependency.
I realize that I am also talking about message delivery, but it goes hand in hand with the solution I go with.
By the way, my server is written in Go. Any ideas on the best way to maintain scalability?
* EDIT of goal *
What I am really asking is what is the best way to implement broadcasting data between instances of a distributed server given the following:
Each server instance maintains persistent TCP socket connections with its remote clients and passes messages between them.
Messages need to be broadcast to the other running instances so they can be delivered to the relevant client connections.
Low latency is important because the messaging can be high speed.
Ordering and reliability are important.
* Updated Question Summary *
If you have multiple servers / multiple end points that need to pub/sub between each other, what is a recommended mode of communication between them? One or more message brokers to re-pub messages to a roster of the discovered servers? Reliable multicast directly from each server?
How do you connect multiple end points in a distributed system while keeping latency low, speed high, and delivery reliable?
Assuming all of your client-facing endpoints are on the same LAN (which they can be for the first reasonable step in scaling), reliable UDP multicast would allow you to send published messages directly from the publishing endpoint to any of the endpoints that have clients subscribed to the channel. This also satisfies the low-latency requirement much better than proxying data through a persistent storage layer.
Multicast groups
A central database (say, Redis) could track a map of multicast groups (IP:PORT) <--> channels.
When an endpoint receives a new client with a new channel to subscribe to, it can ask the database for the channel's multicast address and join that multicast group.
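A minimal sketch of the join step in Go, using only the standard library; the group address below stands in for whatever the database returns and is made up:

```go
package main

import (
	"log"
	"net"
)

// joinChannelGroup joins the multicast group for a channel. groupAddr would
// come from the Redis map, e.g. "239.1.2.3:9999" (hypothetical).
func joinChannelGroup(groupAddr string) (*net.UDPConn, error) {
	addr, err := net.ResolveUDPAddr("udp4", groupAddr)
	if err != nil {
		return nil, err
	}
	// ListenMulticastUDP joins the group on the default interface and returns
	// a socket that receives datagrams sent to that group.
	return net.ListenMulticastUDP("udp4", nil, addr)
}

func main() {
	conn, err := joinChannelGroup("239.1.2.3:9999")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 64*1024)
	for {
		n, src, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("got %d bytes from %s", n, src)
	}
}
```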
Reliable UDP multicast
When an endpoint receives a published message for a channel, it sends the message to that channel's multicast socket.
Message packets will contain ordered identifiers per server per multicast group. If an endpoint receives a message without receiving the previous message from a server, it will send a "not acknowledged" message for any messages it missed back to the publishing server.
The publishing server tracks a list of recent messages, and resends NAK'd messages.
To handle the edge case of a server sending only one message and having it fail to reach another server, servers can periodically send a packet count to the multicast group covering the lifetime of their NAK queue ("I've sent 24 messages"), giving other servers a chance to NAK messages they missed.
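A rough sketch (not from the original answer) of the per-sender sequence tracking and NAK bookkeeping described above; the struct names and wire format are invented, and the encoding/sending over the multicast socket is left out:

```go
package main

import "fmt"

// Packet is a published message with a per-sender sequence number.
type Packet struct {
	SenderID string
	Seq      uint64
	Payload  []byte
}

// NAK asks a sender to retransmit the sequence numbers in [From, To).
type NAK struct {
	SenderID string
	From, To uint64
}

// Receiver tracks the next expected sequence number per sender.
type Receiver struct {
	nextSeq map[string]uint64
}

// OnPacket returns a NAK if a gap is detected before this packet.
func (r *Receiver) OnPacket(p Packet) *NAK {
	next, seen := r.nextSeq[p.SenderID]
	if !seen {
		next = p.Seq // first packet from this sender; accept as-is
	}
	var nak *NAK
	if p.Seq > next {
		nak = &NAK{SenderID: p.SenderID, From: next, To: p.Seq}
	}
	if p.Seq >= next {
		r.nextSeq[p.SenderID] = p.Seq + 1
	}
	return nak
}

// Sender keeps recently sent packets so NAK'd ones can be resent.
type Sender struct {
	recent map[uint64]Packet
}

// OnNAK returns the packets that should be retransmitted.
func (s *Sender) OnNAK(n NAK) []Packet {
	var resend []Packet
	for seq := n.From; seq < n.To; seq++ {
		if p, ok := s.recent[seq]; ok {
			resend = append(resend, p)
		}
	}
	return resend
}

func main() {
	r := &Receiver{nextSeq: map[string]uint64{}}
	r.OnPacket(Packet{SenderID: "A", Seq: 1})
	if nak := r.OnPacket(Packet{SenderID: "A", Seq: 4}); nak != nil {
		fmt.Printf("missed %d..%d from %s, sending NAK\n", nak.From, nak.To-1, nak.SenderID)
	}
}
```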
You might want to just implement PGM.
Persistent storage
If you do end up storing data long-term, storage services can join the multicast groups just like endpoints... but store the messages in a database instead of sending them to clients.
Why can't the application server send messages directly to the application? Why do you need the C2DM service in the middle?
To send a message from the server side you have two possibilities:
The client polls for new messages at certain intervals. Downside: not a real-time solution, and if you poll too frequently it will drain the battery and consume your data quota (if you don't have an unlimited plan). Generally you do a lot of unnecessary work and generate traffic, as most polls return no messages.
Stay connected all the time. Downside: hard to achieve technically, as phones can close connections when going into sleep mode (at least nothing guarantees that they won't), and you are running a background application 24/7.
The current state of C2DM will give you:
The ability to get messages even when your application is not running as Android will start your application (the part of it you configured, not necessarily the whole UI) when a message arrives.
A central, shared channel to deliver such messages. If 10 applications need real-time notifications on your phone this is one single facility, not 10 applications running and polling in parallel.
The promise: As this is the sanctioned API by Google for push messaging you can expect it to be optimized in the future. One improvement can be carrier-level messaging to initiate a C2DM session. That would mean you can put 100% of the "smart" part of your phone asleep.
Because the application can't (or isn't supposed to) act as a server.
If you would like to send messages to your app directly, then your application would need to have some sort of server listening in some port. This is bad because:
connections are usually firewalled, so you can't just listen on some port,
your device can be turned off or have no connectivity (then your app server would need to retry),
the app server would need to know the address of your device,
the app would need to be running (at least the server module) all the time, which isn't battery friendly.
I have a lot of client programs and one service.
These client programs communicate with the server over an HTTP channel using WCF.
The clients have dynamic IP.
They are online 24h/day.
I need the following:
The server should notify all the clients at a 3-minute interval. If a client is new (has just started), it should be notified immediately.
But because the clients have dynamic IPs, run 24 hours a day, and sometimes have unstable connections, is it a good idea to use WCF duplex?
What happens when the connection goes down? Will it automatically recover?
Is it a good idea to use remote MSMQ for this type of notification?
Regards,
WCF duplex is very resource hungry, and as a rule of thumb you should not use more than about 10 duplex channels; there is a lot of overhead involved with them. Also, there is no auto-recovery.
If you know the interval is 3 minutes and you want a client to get the information when it starts, why not let the client poll the server for the information?
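The clients in this question are WCF/.NET, but the polling idea itself is language-agnostic. A rough sketch of its shape (shown in Go only for illustration; the endpoint URL is hypothetical, and only the 3-minute interval comes from the question):

```go
package main

import (
	"io"
	"log"
	"net/http"
	"time"
)

// pollOnce fetches the current notification payload from the server.
func pollOnce(url string) {
	resp, err := http.Get(url)
	if err != nil {
		log.Printf("poll failed (will retry next interval): %v", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	log.Printf("notification payload: %s", body)
}

func main() {
	const url = "http://server.example.com/notifications" // hypothetical endpoint
	pollOnce(url) // poll immediately on startup, so a new client is notified right away
	for range time.Tick(3 * time.Minute) {
		pollOnce(url)
	}
}
```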
When the connection goes down the callback will throw an exception and the channel will close.
I am not sure MSMQ will work for you unless each client creates an MSMQ queue and you push messages to each one of them. Again, with an unreliable connection it will not help. I don't think you can "push" the data if you lose the connection to a client, the client goes offline, or it changes IP without notifying your system.