I'm currently running microservices at my company. They are not API servers, just processes that communicate with each other, and that communication is implemented over RabbitMQ.
Now I'm trying to implement a health checker to know whether a service has restarted or crashed.
But I'm only familiar with health checks that call a specific API on the server. Our services aren't API servers, so they don't expose any ports to query, and I also don't want to add an API server just to implement a health checker.
So I'm searching for use cases of implementing health checks by sending messages (health-check signals) to the health checker through a message broker such as RabbitMQ, instead of over APIs.
Does anyone have some ideas?
Sounds like an obvious and easy mechanism for a system like yours that already relies on message queuing. Implement whatever architecture you want: publish specific messages to each service, either on a single exchange where every service (as a client) looks for itself in the topic, or on an exchange per service. Or simply have one exchange that the health-check service reads from, have every service emit messages to it periodically (dead-man style), and let that service just make sure it hears from everyone once in a while.
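As a rough sketch of the dead-man variant, each service could publish a periodic heartbeat with a plain AMQP client. Everything here (the amqplib package, the "health" exchange, the routing-key scheme, the 5-second interval, the service name) is an illustrative assumption on my part, not something from the question:

```typescript
// Minimal sketch: a service emitting periodic heartbeats to a shared
// "health" topic exchange. Names and interval are illustrative only.
import amqp from "amqplib";

const SERVICE = "billing-worker"; // hypothetical service name

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange("health", "topic", { durable: false });

  // Publish a heartbeat every 5 seconds; the health checker binds to
  // "heartbeat.#" (or "#") and tracks who it last heard from.
  setInterval(() => {
    ch.publish(
      "health",
      `heartbeat.${SERVICE}`,
      Buffer.from(JSON.stringify({ service: SERVICE, ts: Date.now() }))
    );
  }, 5000);
}

main().catch(console.error);
```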
Consider also using RabbitMQ's event exchange (the rabbitmq-event-exchange plugin) at your health-check service, so it can keep track of service connects/disconnects on the channel each service uses to talk to the exchange. Channels are supposed to stay up all the time, so a disconnect indicates trouble of some kind, especially if it wasn't preceded by a message from the service announcing a normal shutdown. In other words, as a health "protocol", instead of being polled by a health service, each microservice would be proactive about sending "coming up", "ready", "healthy" (periodically), and "going down" messages to the health service.
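On the receiving side, the health checker binds a queue to that exchange, keeps a last-seen timestamp per service, and flags anything that goes quiet without first announcing a normal shutdown. A minimal sketch under the same assumptions as above (the 15-second timeout and the "<event>.<service>" routing-key scheme are also made up):

```typescript
// Minimal sketch of the health-checker side: consume heartbeat and
// lifecycle messages, flag services that go quiet.
import amqp from "amqplib";

const TIMEOUT_MS = 15_000;
const lastSeen = new Map<string, number>();

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange("health", "topic", { durable: false });

  // Exclusive, broker-named queue bound to all health traffic.
  const { queue } = await ch.assertQueue("", { exclusive: true });
  await ch.bindQueue(queue, "health", "#");

  await ch.consume(
    queue,
    (msg) => {
      if (msg === null) return;
      const { service } = JSON.parse(msg.content.toString());
      const [event] = msg.fields.routingKey.split(".");
      if (event === "going-down") {
        lastSeen.delete(service); // clean shutdown: stop watching it
      } else {
        lastSeen.set(service, Date.now()); // coming-up / ready / heartbeat
      }
    },
    { noAck: true }
  );

  // Dead-man check: anything silent for too long is presumed crashed.
  setInterval(() => {
    const now = Date.now();
    for (const [service, ts] of lastSeen) {
      if (now - ts > TIMEOUT_MS) console.warn(`${service} is unresponsive`);
    }
  }, TIMEOUT_MS);
}

main().catch(console.error);
```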
As a general comment: in my opinion, message queues are very much underutilized. There are many use cases for which they're more appropriate than other, more popular techniques (e.g., REST over HTTP). They provide distinct benefits, built into the message-queuing/message-broker concept, that you might very well otherwise need to provide yourself for your use case (or pull in a "framework" that provides them). I'd always consider the role (all the roles!) of a message broker in a system architecture and use one where it fits.
Consider a small chat server. In this system, the actual processing of messages is done by nodes of a service called "chat". Communication with this service, along with a "user" service, is aggregated by a "gateway" service in front, which is the only service that actually communicates with users and is in charge of passing incoming requests to the other services via the RabbitMQ channel they share.
In a system designed like this, each user is connected to one of the instances of the "gateway" service and, when sending and receiving messages, indirectly communicates with the private "chat" or "user" services behind it. To load-balance this, we have an Nginx reverse proxy at the edge that distributes requests across the "gateway" instances. But since the WebSocket connection is real-time, "chat" instances must also be able to send messages to the particular "gateway" instance in charge of a specific user (for user-specific messages) and to all "gateway" instances (for site-wide messages). This is a problem, since with RabbitMQ I don't believe we can target a specific subscriber, and even if we could, we don't know which instance a given user is connected to right now.
Therefore, since we are using Socket.io for the WebSocket connections, I am thinking of adding a new Redis node to the stack to allow this communication between the different instances of the "gateway" service. This is directly supported by Socket.io, works fine, and removes all the limitations imposed by RabbitMQ. However, we are still using RabbitMQ to route a message from a "chat" instance to a "gateway" instance; the message then propagates through the Redis service and, once the right "gateway" instance holding the user's connection is found, is delivered to them.
This adds unnecessary lag to user-specific outbound messages. So here I am, asking whether anyone has a better idea of how this problem should be approached and how to reduce this lag.
Personally, I have the idea of adding Socket.io to the "chat" services (with no client access) and using its backend to send messages directly to the Redis store, so that the "gateway" instance connected to the user can route them straight to them, bypassing the whole RabbitMQ hop for this type of message.
It might be important to mention that none of these services exist just to do this one specific thing: RabbitMQ is heavily used for communication between the different services, acting as the message broker, and the "gateway" service works with multiple other services for data aggregation, authentication, and data validation and transformation. The example above is a simplified version of the problem at hand, with the minimum number of moving parts I could easily describe here.
Edit: To send messages directly to the Socket.io Redis store, the following library can apparently be used, so you don't have to load the whole Socket.io library:
https://github.com/socketio/socket.io-redis-emitter
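For what it's worth, a minimal sketch of that idea in TypeScript, assuming the "gateway" instances run the standard Socket.io Redis adapter and that each user's socket joins a room named after their user ID (both assumptions on my part):

```typescript
// Minimal sketch: a "chat" worker emitting straight to users through the
// Socket.io Redis adapter, skipping the RabbitMQ hop for outbound
// messages. Room-naming scheme and event names are illustrative.
import { createClient } from "redis";
import { Emitter } from "@socket.io/redis-emitter";

async function main() {
  const redisClient = createClient({ url: "redis://localhost:6379" });
  await redisClient.connect();

  // The emitter publishes into the same Redis channels the gateways'
  // adapter subscribes to; no full Socket.io server is needed here.
  const io = new Emitter(redisClient);

  // User-specific message: only the gateway holding this user's
  // connection actually delivers it.
  io.to("user:42").emit("chat:message", { from: "user:7", text: "hi" });

  // Site-wide message: every gateway instance broadcasts it.
  io.emit("announcement", { text: "maintenance at midnight" });
}

main().catch(console.error);
```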
Currently, we plan to upgrade our product to use an MQ (RabbitMQ or ActiveMQ) for message transfer between server and client. Right now we are using a network library (evpp) for this.
Because I haven't used an MQ before, beyond its long list of new features I can't figure out the essential difference between the two, and I don't know exactly when and where we should use an MQ versus when a plain network library is fine.
Our purpose in adopting an MQ is to address the unreliability of communication, such as message loss and other problems caused by an unstable network environment.
I hope someone familiar with both can clear up my confusion. Thanks in advance.
Message queuing systems (MQ, Qpid, RabbitMQ, Kafka, etc.) are higher-layer systems purpose-built for handling messages reliably and flexibly.
Network programming libraries/frameworks (ACE, asio, etc.) are helpful tools for building message queueing (and many other types of) systems.
Note that in the case of ACE, which encompasses much more than just networking, you can use a message queuing system like the above and drive it with a program that also uses ACE's classes for thread management, OS abstraction, event handling, etc.
As in any network programming, when a client sends a request to the server, the server sends back a response. But for this to happen, the following conditions must be met:
The server must be UP and running
The client should be able to make some sort of connection between them
The connection should not break while the server is sending the response to the client or vice-versa
But in the case of a message queue, whatever the server wants to tell the client is placed in a message queue, i.e., a separate server/instance. The client listens to the message queue and processes the message. On a positive acknowledgement from the client, the message is removed from the queue. Obviously a connection has to be made by the server to push a message to the message-queue instance, but even if the client is down, the message stays in the queue.
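To make the acknowledgement behaviour concrete, here is a minimal RabbitMQ sketch in TypeScript using the amqplib package (queue name and payload are invented for illustration). The broker removes a message only once the consumer acks it; if the consumer dies first, the message is redelivered:

```typescript
// Minimal sketch of at-least-once delivery with RabbitMQ (amqplib).
import amqp from "amqplib";

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();

  // Durable queue: survives a broker restart.
  await ch.assertQueue("work", { durable: true });

  // Producer side: a persistent message stays on disk until acked.
  ch.sendToQueue("work", Buffer.from(JSON.stringify({ job: 1 })), {
    persistent: true,
  });

  // Consumer side: manual acks. If this process crashes before ch.ack(),
  // the broker re-queues the message and delivers it again.
  await ch.consume(
    "work",
    (msg) => {
      if (msg === null) return;
      console.log("processing", msg.content.toString());
      ch.ack(msg); // only now is the message removed from the queue
    },
    { noAck: false }
  );
}

main().catch(console.error);
```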
We usually use message passing to send messages to decoupled services. This makes service discovery a non-issue, because (with AMQP in RabbitMQ, for instance) you can use the broker's routing capability to dispatch messages to the queues that feed the correct services. Load balancing is also handled by the message broker: several consumers sharing a queue receive messages round-robin, as sketched below.
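A minimal sketch of that competing-consumers pattern in TypeScript with amqplib (queue name invented). Starting this program N times gives you N replicas of the "service"; RabbitMQ spreads the messages among them:

```typescript
// Minimal sketch: competing consumers on one queue act as load-balanced
// replicas of a service. Queue name is illustrative.
import amqp from "amqplib";

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("orders", { durable: true });

  // Fair dispatch: no new message until the previous one is acked.
  await ch.prefetch(1);

  await ch.consume("orders", (msg) => {
    if (msg === null) return;
    console.log("handling", msg.content.toString());
    ch.ack(msg);
  });
}

main().catch(console.error);
```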
Enter Kubernetes.
The use case that is usually laid out when talking about service replication and re-spawning failing services is one where clients use an active protocol like HTTP to contact a service, even if that service handles requests asynchronously. In this context it is a natural fit to have replication controllers that manage a group of services behind a single entry point that load-balances between them.
I like Kubernetes' intuitive concepts, like rolling deployments, but how do you control these beasts when they don't have an HTTP interface?
UPDATE:
I am not trying to set up a cluster of message brokers. I am looking at message consumers as services. Service clients don't connect directly to the services; they send messages to the message broker. The message broker acts as a load balancer of sorts and dispatches the messages to the subscribed queue consumers. These consumers implement the service.
My question gravitates around the fact that most usage patterns in demos involve services called via HTTP, and Kubernetes does a good job there of creating a service proxy and a replication controller for them. Is it possible to create replication controllers for my kind of service, which has no HTTP interface per se, and still get all the benefits of rolling updates and minimum instance counts?
I'm not sure I entirely understand the question. Are you asking how to use RabbitMQ with Kubernetes? Or how to set up a RabbitMQ cluster: https://www.rabbitmq.com/clustering.html? Or how rolling updates interact with RabbitMQ? Or something else?
I think you should be able to create one service and one replication controller per server, and then use the service DNS names in the cluster configuration file. This is the current approach used to run ZooKeeper, too. We have a long-standing TODO to make this less verbose (https://github.com/GoogleCloudPlatform/kubernetes/issues/260), but the current approach should be straightforward. You do lose the ability to update the cluster with a single kubectl rolling-update command, but it's also straightforward to update the instances individually.
(Ha! see what I did there?)
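For the consumer side of the question (workers that only dial out to the broker and expose no HTTP interface), a replication controller alone, with no Kubernetes service in front of it, should be enough, and rolling updates still work because they operate on the controller rather than on an HTTP endpoint. A minimal sketch; the image name, labels, replica count, and broker address are all invented for illustration:

```yaml
# Minimal sketch: a replication controller for a queue-consuming worker.
# No Kubernetes service is needed; the pods connect out to RabbitMQ.
apiVersion: v1
kind: ReplicationController
metadata:
  name: chat-worker
spec:
  replicas: 3            # minimum number of consumer instances
  selector:
    app: chat-worker
  template:
    metadata:
      labels:
        app: chat-worker
    spec:
      containers:
      - name: worker
        image: example.com/chat-worker:1.0
        env:
        - name: AMQP_URL
          # broker reached via its own service's DNS name (assumption)
          value: amqp://rabbitmq:5672
```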
I have a system whereby a server pushes information from a central DB out to many client DBs (cross-domain, via the internet), and periodically they call services on the server. This has to withstand intermittent connections, i.e., queue messages.
I've created a development version using duplex MSMQ that I'm now trying to apply transport security to. From the reading I've done, it appears that:
MSMQ uses AD Windows Security, which is irrelevant cross-domain.
Due to the nature of duplex, each client is effectively a server as well. That means I need to pay $1200 every time I install the system at another client if I want to use SSL.
Are these facts correct? Am I really the only person who needs to secure services that are queued AND cross-domain AND duplex?
"MSMQ uses AD Windows Security, which is irrelevant cross-domain."
No, MSMQ uses Windows security which includes local accounts and, if available, domain accounts. MSMQ also uses certificates, if available.
"Due to the nature of duplex, each client is effectively a server as
well."
MSMQ doesn't use a client-server model. All MSMQ machines are effectively peers, sending messages between each other. For the $1,200 payment, are you referring to the certificate needed by the web service for sending MSMQ over HTTPS?
This is the first time I've seen anyone want to push secure messages over HTTPS to multiple destinations.
You may, in fact, be the only person in the world right now who wants to do this.
Let me embellish.
Not many companies are using MSMQ (in the grand scheme of things).
Of those that are, the vast majority are using only private queues, a small minority only use public queues.
Of those that are, only a handful are using it across the internet.
Of those that are, perhaps one is using it to exchange messages in both directions (that would be you).
But that aside, it seems to me your main challenge will be using MSMQ as a secure transport layer over the internet. Although I have never had to do this, here are a couple of articles:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms701477(v=vs.85).aspx
http://msdn.microsoft.com/en-us/magazine/cc164041.aspx
Sorry I couldn't be of more help.
I have a back-end system that drops events into my system. It is critical that these events don't get lost (I work for a health care company, and lost information can impact a patient's care).
I would like to have this system drop its data into NServiceBus so that it can be published to the subscribers that need it. However, the server dropping these messages is an AIX machine, so it can't run .NET code.
This system can send the messages via many standard protocols and communication types (TCP, WSDL-based services, calling a database sproc, etc.).
One option I have considered is to set up a WCF service that the AIX mainframe will call. I can then have my WCF service make the call to NServiceBus.
But the number of events sent per minute by this back-end service can at times be fairly high (about 500 messages per minute). I am worried that WCF is not up to this, while NServiceBus says it can handle 1000 messages per second. I am also worried about data loss in the event of downtime; NServiceBus claims it will not lose any data.
Am I wrong? Is WCF going to be just fine? Or am I making a weak link in the chain?
Is there a way I can use an established protocol to add items directly to an NServiceBus Queue?
Or should I just write my own .NET app that will allow NServiceBus to use a TCP connection?
Note: Because these messages are critical, the message must be acknowledged or the server will keep sending it.
I would take a look at the WCF integration that comes right out of the box. The WCF service is hosted in the same process as NSB. The integration does nothing more than push the message onto the queue, so I don't think you'll have a throughput issue. Seeing that this is critical data, I would suggest clustering the service. The other option would be to install two or more instances of the service on different machines and load-balance the HTTP calls across them. In essence you would have one logical publisher with two physical components doing the publishing.