I'm trying to figure out the best way to work with a RabbitMQ cluster via WCF.
Current setup:
2 IIS web servers (acting as message producers, posting messages to the queue via the AMQP WCF client).
2 servers running the RabbitMQ broker (clustered with mirrored queues: rabbit1 and rabbit2).
A Windows service (the worker) hosting an AMQP WCF service that listens for incoming messages.
The web role posts messages to the rabbit1 node and the worker listens to rabbit1 as well. If rabbit1 fails, the system (both web and worker) should switch to rabbit2. And that's the question: how can this be implemented more elegantly than handling connection failures in application code?
The first and only approach I see right now is to use the WCF 4 routing service's backup endpoints feature, roughly as sketched below. It solves the problem on the client side (web role) only, but doesn't solve it on the WCF service side (worker role).
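Roughly what I have in mind on the client side (rabbit1/rabbit2 are placeholder endpoint names; the client endpoints themselves and the routing behavior that references this table are omitted):

```xml
<!-- Sketch only: rabbit1/rabbit2 are names of client endpoints defined elsewhere
     in the config; the routing behavior wiring is omitted for brevity. -->
<system.serviceModel>
  <routing>
    <filters>
      <filter name="matchAll" filterType="MatchAll" />
    </filters>
    <filterTables>
      <filterTable name="routingTable">
        <add filterName="matchAll" endpointName="rabbit1" backupList="rabbitBackup" />
      </filterTable>
    </filterTables>
    <backupLists>
      <backupList name="rabbitBackup">
        <add endpointName="rabbit2" />
      </backupList>
    </backupLists>
  </routing>
</system.serviceModel>
```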
One way is to create a wrapper around your service host that stores a list of connection strings (which can come from config).
Add a handler for the host's Faulted event, where you can close and reopen the host with the next connection string in the list.
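A minimal sketch of that idea, assuming a plain ServiceHost; the FailoverServiceHost type, the service type and the soap.amqp addresses below are placeholders, and real code would also want logging and a back-off delay:

```csharp
// Sketch of a self-healing host wrapper; the class name, service type and
// addresses are made up for illustration.
using System;
using System.ServiceModel;

public class FailoverServiceHost
{
    private readonly Type serviceType;
    private readonly string[] addresses;   // e.g. from config: rabbit1 first, rabbit2 as backup
    private int current;
    private ServiceHost host;

    public FailoverServiceHost(Type serviceType, params string[] addresses)
    {
        this.serviceType = serviceType;
        this.addresses = addresses;
    }

    public void Open()
    {
        host = new ServiceHost(serviceType, new Uri(addresses[current]));
        host.Faulted += OnFaulted;   // attach before opening
        host.Open();
    }

    private void OnFaulted(object sender, EventArgs e)
    {
        host.Abort();                                  // a faulted host cannot be closed cleanly
        current = (current + 1) % addresses.Length;    // rotate to the next broker node
        Open();                                        // re-create the host against it
    }
}

// Usage (addresses are placeholders):
// var host = new FailoverServiceHost(typeof(MyWorkerService),
//     "soap.amqp://rabbit1/myQueue", "soap.amqp://rabbit2/myQueue");
// host.Open();
```

The same wrapper covers the worker role, which is the side the routing service can't help with.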
We have been migrating to .NET Core console app microservices. Currently, each microservice works in a chain and puts messages into RabbitMQ; the next service picks a message off RabbitMQ, processes it, then puts it into another RabbitMQ queue, and so on. We have around 9 services.
We are seeing issues where services fail and we have no idea why, but we often see problems with RabbitMQ connections or network issues reaching the next server (some VMs have all services hosted on the same box; others are distributed between boxes).
I've been looking at Envoy Proxy, since it handles circuit breaking and related concerns and claims good observability.
However, I can't find anything online about anyone using Envoy Proxy with RabbitMQ.
Can Envoy Proxy be used with RabbitMQ in this manner?
Or does Envoy Proxy act as the queue?
We currently handle about 4,000 messages a second, and we need to process them in as near to real time as possible.
Envoy does not act as the queue, so it can't replace your message-based communication system. It can, however, proxy traffic to/from the RabbitMQ servers to give you some bits of what you're looking for.
What you'd do is use the TCP proxy capability to set up TCP reverse proxies in front of RabbitMQ. Your servers should then all connect to the Envoy proxy rather than directly to the message queue. Envoy's built-in stats will then output metrics on the TCP connections it handles (all the RabbitMQ protocols appear to be TCP-based). It also has intrinsic support for circuit breakers, timeouts, retries, and so on, so you'll get all of those, though you'll definitely have to tune them for your particular deployment.
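To make that concrete, a stripped-down listener/cluster pair could look something like the sketch below (v3 config; the rabbit1/rabbit2 host names and ports are placeholders, and none of the timeouts are tuned for any real deployment):

```yaml
# Minimal sketch of an Envoy TCP proxy in front of RabbitMQ; host names are placeholders.
static_resources:
  listeners:
  - name: rabbitmq_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 5672 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: rabbitmq          # TCP stats show up under tcp.rabbitmq.*
          cluster: rabbitmq_cluster
  clusters:
  - name: rabbitmq_cluster
    connect_timeout: 5s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: rabbitmq_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: rabbit1, port_value: 5672 }
        - endpoint:
            address:
              socket_address: { address: rabbit2, port_value: 5672 }
```

Circuit breaker thresholds then go under the cluster's circuit_breakers section, and the connection metrics appear under the stat_prefix you choose.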
We've done this pattern multiple times at my company, just with Kafka rather than RabbitMQ. However, since they're both TCP based it should work similarly.
I am looking for the best way to implement a RabbitMQ consumer using the .NET client, running as a Windows service.
I went through the RabbitMQ documentation and found how to consume messages with the .NET client (https://www.rabbitmq.com/tutorials/tutorial-one-dotnet.html).
My current scenario: RabbitMQ is installed on an AWS VM, and the .NET consumer service has to run on our on-premise network and consume messages from there.
Which is the better approach: always listening to the queue over AMQP, as in the sketch below, or pulling messages on demand via the HTTP API (https://pulse.mozilla.org/api/)?
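The always-listening option I have in mind, based on the tutorial, is roughly this (assuming the 6.x .NET client; the host and queue name are placeholders):

```csharp
// Rough sketch of the always-listening consumer from the tutorial;
// host, credentials and queue name are placeholders.
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Worker
{
    static void Main()
    {
        var factory = new ConnectionFactory
        {
            HostName = "aws-rabbit-host",      // the RabbitMQ VM in AWS
            AutomaticRecoveryEnabled = true    // reconnect if the link drops
        };

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "work-queue", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var message = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine("Received: {0}", message);
                channel.BasicAck(ea.DeliveryTag, multiple: false);   // ack after processing
            };

            channel.BasicConsume(queue: "work-queue", autoAck: false, consumer: consumer);
            Console.ReadLine();   // the real Windows service would block until stopped
        }
    }
}
```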
Please advise.
Thanks,
Vinoth
I believe the answer is "neither." You should keep your message queue as a back-end service behind the firewall, and expose your application functionality through a set of carefully specified web services. The web services, which are exposed through the firewall but can communicate with services behind it, produce the messages that are transmitted to the broker. Any service needing to produce or consume messages would do so via the web services, which perform safety/security checks before forwarding the request on to the AMQP server.
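As a rough illustration of that shape, the kind of front-end service I mean might look like the sketch below (ASP.NET Web API here; OrdersController, OrderDto and the internal host name are all made up, and a real implementation would reuse the connection rather than open one per request):

```csharp
// Hypothetical front-end web service that validates requests and forwards them
// to the internal broker; names and the host are placeholders.
using System.Text;
using System.Web.Http;
using Newtonsoft.Json;
using RabbitMQ.Client;

public class OrderDto
{
    public string Id { get; set; }
    public string Payload { get; set; }
}

public class OrdersController : ApiController
{
    [HttpPost]
    public IHttpActionResult Post(OrderDto order)
    {
        // Safety/security checking happens here, before anything touches AMQP.
        if (order == null || string.IsNullOrEmpty(order.Id))
            return BadRequest("Invalid order");

        var factory = new ConnectionFactory { HostName = "rabbit.internal" }; // never exposed publicly
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            var body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(order));
            channel.BasicPublish(exchange: "", routingKey: "orders",
                                 basicProperties: null, body: body);
        }
        return Ok();
    }
}
```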
If you need to expose AMQP directly to clients (i.e. that is the purpose of your app), then the recommendation is to do so via STOMP. I think a valid use case for exposing AMQP directly over the internet would be a rare thing to come across. The security implications of doing so would be immense.
We usually use message passing to send messages to decoupled services. This makes service discovery a non-issue, because (with AMQP in RabbitMQ for instance) you can use the broker's routing capability to dispatch messages to the right queues that feed the correct services. Load balancing is also handled by the message broker.
Enter Kubernetes.
The use case that is usually laid out when talking about service replication and re-spawning failed services is one where clients use an active protocol like HTTP to contact a service, even if that service handles requests asynchronously. In this context, it is a natural fit to have replication controllers that manage a group of service instances, plus a single entry point to load balance between them.
I like Kubernetes' intuitive concepts, like rolling deployments, but how do you control these beasts when they don't have an HTTP interface?
UPDATE:
I am not trying to set up a cluster of message brokers. I am looking at message consumers as services. Service clients don't connect directly to the services; they send messages to the message broker. The message broker acts as a load balancer of sorts and dispatches the messages to the subscribed queue consumers. These consumers implement the service.
My question revolves around the fact that most usage patterns in demos cover services called via HTTP, and Kubernetes does a good job there, creating a service proxy and a replication controller for them. Is it possible to create replication controllers for my kind of service, which has no HTTP interface per se, and still get all the benefits of rolling updates and a minimum number of instances?
I'm not sure I entirely understand the question. Are you asking how to use RabbitMQ with Kubernetes? Or how to set up a RabbitMQ cluster: https://www.rabbitmq.com/clustering.html? Or how rolling updates interact with RabbitMQ? Or something else?
I think you should be able to create one service and one replication controller per server, and then use the service DNS names in the cluster configuration file. This is the current approach used to run Zookeeper, also. We have a long-standing TODO to make this less verbose (https://github.com/GoogleCloudPlatform/kubernetes/issues/260), but the current approach should be straightforward. You do lose the ability to use a single kubectl rolling-update command to update the cluster, but it's also straightforward to update the instances individually.
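As a rough sketch of that per-server approach (v1 API; names, image and ports are placeholders), you would repeat the pair below for rabbit-2 and so on, then list the service DNS names in the RabbitMQ cluster configuration:

```yaml
# Sketch only: one Service plus one ReplicationController per broker node.
apiVersion: v1
kind: Service
metadata:
  name: rabbit-1
spec:
  selector:
    app: rabbit-1
  ports:
  - name: amqp
    port: 5672
  - name: clustering          # inter-node distribution; epmd on 4369 is typically needed too
    port: 25672
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: rabbit-1
spec:
  replicas: 1
  selector:
    app: rabbit-1
  template:
    metadata:
      labels:
        app: rabbit-1
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3     # placeholder image tag
        ports:
        - containerPort: 5672
        - containerPort: 25672
```

And since a replication controller doesn't care what protocol the pods speak, the same pattern (without the Service, if you don't need stable names) applies to your queue consumers as well.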
I have a JMS service targeted at a migratable target (using the Auto-Migrate Exactly-Once policy) in a cluster consisting of 2 managed servers. At any point in time the service is hosted on one of them, and the consumer (which is targeted at the cluster) is supposed to receive messages seamlessly no matter where the service is hosted.
When I manually switch the host of the migratable target (clicking migrate) without turning the hosting managed server off, the consumer fails to receive messages sent to the queues, unless I shut down the previously hosting managed server, forcing the consumer over to the new host.
I can rule out sender problems; I can see the messages in the queue right after they are sent.
I'd be grateful if anyone can advise on how to configure either the consumer or the migratable service to work seamlessly when migration happens.
I think that may just be a misunderstanding of how migration works. The docs say this about Auto-Migrate Exactly-Once:
indicates that if at least one Managed Server in the candidate list is running, then the JMS service will be active somewhere in the cluster if servers should fail or are shut down (either gracefully or forcibly). For example, a migratable target hosting a path service should use this option so if its hosting server fails or is shut down, the path service will automatically migrate to another server and so will always be active in the cluster. Note that this value can lead to target grouping. For example, if you have five exactly-once migratable targets and only one server member is started, then all five migratable targets will be activated on that server member.
The docs also state:
Manual Service Migration—the manual migration of pinned JTA and JMS-related services (for example, JMS server, SAF agent, path service, and custom store) after the host server instance fails
Your server/service has neither failed nor been shut down; you are forcing it to migrate while a healthy host is still running, so it has not met the criteria for migration.
See more here as well.
I have some experience that sounds reminiscent of what you're looking at. There was some WLS-specific capability around recognizing reconfiguration in JMS destinations as part of their clustered server design.
In one case I had to call a WLS-specific method: weblogic.jms.extensions.WLSession.setExceptionListener(). This was on their implementation of the JMS Session interface. This is analogous to the standard JMS Connection.setExceptionListener().
With this WLS-specific capability, the WLSession.setExceptionListener() callback would occur at a point where the consuming client should tear down and re-establish the connection / session / consumer in reaction to a reconfiguration (migration) that had happened.
As I asked here, I have an orchestration that is started by a public port published as a web service. Every time this service is called, the orchestration starts.
I need to start the orchestration every 30 minutes too.
I ended up using the Scheduled Task Adapter to call my own port. I created a scheduled receive port that generates a message at a given interval, and a send port with a filter that picks up messages from that port and sends them to the web service port.
The orchestration starts correctly, but there is an error:
System.ServiceModel.CommunicationException: The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.
After researching, I found out that BizTalk doesn't like one-way web services (even though this web service was generated by the "BizTalk Web Service Publishing Wizard").
I found solutions like a WCF proxy, but I was wondering if I could just configure the orchestration's web service to be two-way (the wizard lets you force it) and then call it the way I'm doing now. I've tried, but I'm still getting similar errors.
Anyone had a similar issue?
Thanks
Add a Listen shape at the start of your orchestration; you can then have two (or more) parallel activating Receive shapes.
Connect the secondary Receive shape to a new one-way logical port (Specify Later).
Once deployed, hook your Scheduled Task Adapter up to the one-way port, so it receives the regularly scheduled message.
As always with BizTalk, there is more than one way to de-fur a feline, but this was the first to come to mind.