Given the following scenario:
I have two servers, each with RabbitMQ installed, and together they form a cluster. I have configured them for HA queues using mirroring.
Node A (has master queue)
Node B (has slave queue)
We use NServiceBus as our messaging framework. We have a Service A (a load-balanced WCF service) which should publish messages to a RabbitMQ exchange, and a Service B (clustered) which should dequeue messages and process them. The problem is how to configure NServiceBus on both nodes. I cannot specify multiple host names in the connection string like this:
<connectionStrings>
<add name="NServiceBus/Transport" connectionString="host=nodeA, nodeB" />
</connectionStrings>
This is because the feature has been deprecated in the current NServiceBus release, which makes sense. I cannot specify the cluster name either:
<connectionStrings>
<add name="NServiceBus/Transport" connectionString="host=clustername" />
</connectionStrings>
This option does not work.
I also tried localhost, which works for Node A but not for Node B (which has the slave queue).
What should I define as the host to make this work (for both services, A and B)? What is needed for Node B to dequeue messages from the master queue?
There might be things I do not understand, but please help me out.
The RabbitMQ docs give advice about connecting to a cluster from a client: it is not a RabbitMQ concern; you have to use other technologies, such as a load balancer.
Generally, it's not advisable to bake in node hostnames or IP addresses into client applications: this introduces inflexibility and will require client applications to be edited, recompiled and redeployed should the configuration of the cluster change or the number of nodes in the cluster change. Instead, we recommend a more abstracted approach: this could be a dynamic DNS service which has a very short TTL configuration, or a plain TCP load balancer, or some sort of mobile IP achieved with pacemaker or similar technologies.
NServiceBus follows this suggestion: v3.x of the RabbitMQ transport drops the facility to specify multiple hostnames in the connection string, as detailed here.
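To illustrate that abstracted approach: if you did front the cluster with a TCP load balancer or a DNS alias, the connection string would simply reference that name (the host name below is a hypothetical example, not something from the original setup):
<connectionStrings>
<add name="NServiceBus/Transport" connectionString="host=rabbit-lb.mydomain.com" />
</connectionStrings>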
You need to put localhost in the connection string, like this:
<connectionStrings>
<add name="NServiceBus/Transport" connectionString="host=localhost" />
</connectionStrings>
Then it works :)
The setup
I have a WCF service hosted in IIS/AppFabric running on Windows Server 2012R2.
The service is bound to a local transactional MSMQ queue via netMsmqBinding. My operations are decorated with TransactionScopeRequired = true.
The service operations receive calls from a BizTalk server, handle them, and send responses back to a remote queue (on the same BizTalk server), also via netMsmqBinding.
<endpoint name="Outbound" address="net.msmq://int01test.mydomain.com/private/queue.name" binding="netMsmqBinding" bindingConfiguration="QueueBindingConfigurationOutbound" contract="My.Outbound.Contract" />
<netMsmqBinding>
<binding name="QueueBindingConfigurationOutbound">
<security>
<transport msmqAuthenticationMode="WindowsDomain" msmqProtectionLevel="Sign" />
</security>
</binding>
</netMsmqBinding>
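For completeness, the inbound (service-side) endpoint is essentially the mirror image of this; a rough sketch, in which the queue address, binding configuration, service name and contract name are placeholders rather than the actual values:
<services>
<service name="My.Service.Implementation">
<!-- local transactional queue the service reads from (placeholder name) -->
<endpoint address="net.msmq://localhost/private/inbound.queue.name" binding="netMsmqBinding" bindingConfiguration="QueueBindingConfigurationInbound" contract="My.Inbound.Contract" />
</service>
</services>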
In the testing environment this works as intended.
Physical setup in testing environment:
Server int01test.mydomain.com hosts a BizTalk server and my inbound queue. This runs under service account mydomain\inttestuser.
Server app01test.mydomain.com hosts my application (IIS/AppFabric), my database (SQL server) and my outbound queue. This runs under service account mydomain\apptestuser.
The problem
When this solution is promoted to the acceptance testing environment, calls are still handled, but the responses are blocked with the error message:
System.ServiceModel.EndpointNotFoundException: An error occurred while
opening the queue:Unrecognized error -1072824317 (0xc00e0003). The
message cannot be sent or received from the queue. Ensure that MSMQ is
installed and running. Also ensure that the queue is available to open
with the required access mode and authorization. --->
System.ServiceModel.MsmqException: An error occurred while opening the
queue:Unrecognized error -1072824317 (0xc00e0003). The message cannot
be sent or received from the queue. Ensure that MSMQ is installed and
running. Also ensure that the queue is available to open with the
required access mode and authorization.
Differences
In the testing environment, my service and my database are running on a single server instance. (The BizTalk server and its queue, the target of my outbound messages, are on another server, though.)
In the acceptance testing environment, my solution is deployed on two load balanced servers and the database is on a separate cluster.
There are also more strict external firewall rules to mimic the production environment.
Even the BizTalk server is clustered, though we communicate machine-to-machine rather than cluster-to-cluster right now.
So the setup in the QA environment is:
Server int01qa.mydomain.com (clustered with int02qa.mydomain.com) hosts a BizTalk server and my inbound queue. This runs under service account mydomain\intqauser.
Server app01qa.mydomain.com (clustered with app02qa.mydomain.com) hosts my application (IIS/AppFabric) and my outbound queue. This runs under service account mydomain\appqauser.
Server db01qa.mydomain.com hosts my database.
What we've already tried
We have disabled authentication on the remote queue.
We have granted full control to the account my service is running under, as well as to "Everyone".
We have successfully sent MSMQ messages manually between the two servers.
I have configured my service to send responses to a local private queue; same error.
The problem turned out to be that MSMQ couldn't find a certificate for the app pool user. That is, the
0xc00e0003, MQ_ERROR_QUEUE_NOT_FOUND
was really caused by a
0xC00E002F, MQ_ERROR_NO_INTERNAL_USER_CERT
Changing security settings to
<transport msmqAuthenticationMode="None" msmqProtectionLevel="None" />
enabled messages to be sent.
The real solution, of course, is not to disable security but to ensure that the app pool user's certificate is installed in MSMQ.
We came across this issue and didn't want to disable authentication. We tried a number of different approaches, but we think it was something to do with the user certificate not existing.
We went to the app pool of the client application (which calls the WCF endpoint via MSMQ) and changed the Load User Profile property to True. The call then worked. As an aside, changing it back to False continued to work, presumably because the certificate issue had already been sorted out.
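For reference, the Load User Profile setting corresponds to the app pool's processModel.loadUserProfile flag in applicationHost.config; a minimal sketch (the pool name is a placeholder, and identity settings are omitted):
<applicationPools>
<!-- placeholder pool name; loadUserProfile makes IIS load the identity's user profile, which MSMQ needs in order to find the user certificate -->
<add name="MyClientAppPool">
<processModel loadUserProfile="true" />
</add>
</applicationPools>
The same change can be made from the command line with something like appcmd set apppool "MyClientAppPool" /processModel.loadUserProfile:true.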
I'm new to ActiveMQ.
I have a requirement to create a local ActiveMQ broker and connect it to a remote IBM MQ.
Can anyone help me with how to connect to the distributed queue manager and queues?
You can use Apache Camel to bridge between the two providers. The routes can be run from within the broker, pulling from the ActiveMQ queue and pushing to the WMQ queue (or the other way around). The concept is much like that of a channel in WMQ, pulling from a transmission queue and pushing to the appropriate destination on the remote queue manager.
Assuming you are using WMQ V7+ for all QMgrs and clients, it's simply a matter of learning how to set up the route and configure the connection factories. With older versions of WMQ, you may also have to understand how to deal with RFH2 headers if native WMQ clients are the consumers.
The simplest route, configured in Spring, would look like this:
<route id="amq-to-wmq" >
<from uri="amq:YOUR.QUEUE" />
<to uri="wmq:YOUR.QUEUE" />
</route>
The "wmq" and "amq" would point to beans where the JMS components are configured. This is where you would set up you connection factories to each provider and how the clients behave (transacted or not for example), so I'll hold off on giving an example on that.
This would go in the camel.xml (or whatever you name it) and get imported from your broker's XML. ActiveMQ comes with several examples you can use to get you started using Camel JMS components. Just take a look at the default camel.xml that comes with a normal install.
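For orientation only, a minimal sketch of what those two component beans might look like; the broker URL, host, port, channel and queue manager name are all assumptions to be replaced with your own values:
<!-- local ActiveMQ broker -->
<bean id="amq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<!-- remote WMQ queue manager, connected in client mode (transportType 1) -->
<bean id="wmq" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory">
<bean class="com.ibm.mq.jms.MQConnectionFactory">
<property name="hostName" value="wmq.example.com"/>
<property name="port" value="1414"/>
<property name="queueManager" value="QM1"/>
<property name="channel" value="SYSTEM.DEF.SVRCONN"/>
<property name="transportType" value="1"/>
</bean>
</property>
</bean>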
Current Setup
We have a UI (well, more than one UI, but that is not relevant), and we have two load-balanced app servers. The UI talks to an alias, behind which sit the two load-balanced app servers.
The app servers are also self-hosting NServiceBus endpoints. The app server (this could be either App Server 1 or App Server 2) that is dealing with the current request is capable of doing the following using the self-hosted NServiceBus endpoint:
Send a message locally (this is a calculation that can be run at any time, and it doesn’t matter who triggers it; it is just a trigger to do the calculation)
Send a command to the publisher on the Ancillary Services Box (the publisher pushes a new event to Worker 1 and Worker 2)
Send a command to Worker 1 directly on the Ancillary Services Box
Send a command to Worker 2 directly on the Ancillary Services Box
The "App Server(s)" current App.Config
As such, the App.config for each app server has something like this:
<UnicastBusConfig ForwardReceivedMessagesTo="audit">
<MessageEndpointMappings>
<add Assembly="Messages" Type="PublisherCommand" Endpoint="Publisher" />
<add Assembly="Messages" Type=" Worker1Command" Endpoint="Worker1" />
<add Assembly="Messages" Type=" Worker2Command" Endpoint="Worker2" />
<!-- This one is sent locally only -->
<add Assembly=" Messages" Type="RunCalculationCommand" Endpoint="Dealing" />
</MessageEndpointMappings>
</UnicastBusConfig>
The “Publisher” current App.Config
Currently the “Publisher” App.config looks like this:
<UnicastBusConfig ForwardReceivedMessagesTo="audit">
<MessageEndpointMappings>
</MessageEndpointMappings>
</UnicastBusConfig>
The “Worker(s)” current App.Config
Currently the worker App.configs only have to subscribe to one other endpoint, the “Publisher”; their config files look like this:
<UnicastBusConfig ForwardReceivedMessagesTo="audit">
<MessageEndpointMappings>
<add Assembly="Messages" Type="SomeEvent" Endpoint="Publisher" />
</MessageEndpointMappings>
</UnicastBusConfig>
All other messages to the workers right now come directly from one of the app servers, as shown in the App.Config above for the app servers.
This is all working correctly.
The thing is, we have a single point of failure: if the “Ancillary Services Box” dies, we are stuffed.
So we are wondering if we could make use of multiple “Ancillary Services Boxes” (each with a Publisher/Worker1/Worker2). Ideally they would work exactly as described above and as shown in the diagram above, where if “Ancillary Services Box 1” is available it is used; otherwise we use “Ancillary Services Box 2”.
I have read about the distributor (but not used it), which, if I have it right, we may be able to use in the app servers themselves, where we treat each app server as both a distributor and a worker (for the case where we need to send the RunCalculationCommand locally).
The “Ancillary Services Box” would then have to use the distributor for each of the contained endpoints:
So we may end up with something like this:
Could someone help me understand whether I am even thinking about this the right way, or whether I am way off?
Essentially what I want to know is:
Is the distributor the correct approach to use?
What would the worker / publisher configs look like? They would have to change somehow to point to the distributor, no? As I said, right now the app servers send messages directly to the workers, so the app server config has the worker endpoint addresses, and the workers are only set up to point to the publisher.
What would the app servers' config look like? Would they stop sending directly to the publisher / workers?
What would the publisher config look like? Should this point to the distributor?
The distributor is a good approach here, but it comes at the cost of increased infrastructure complexity. To avoid introducing another single point of failure, the distributor and its queues must be run on a Windows Failover Cluster, meaning both MSMQ and DTC must be configured as clustered services. This can be oh so much fun... :D
I've renamed what you call "worker" to endpoints, from Worker1 to Endpoint1 and Worker2 to Endpoint2. This is because "worker" is very clearly defined as something specific when you introduce the distributor. An actual physical endpoint on a machine that is receiving messages from a distributor is a worker. So Endpoint1#ServicesMachine01, Endpoint2#ServicesMachine02 etc. are all workers. Workers get work from the distributor.
Scenario 01
In the first scenario, the app server gets a request from the load balancer and sends it to the Endpoint1#Cluster01 or Endpoint2#Cluster01 queue on the distributor, depending on the command. The distributor then finds a ready worker for the message in that queue and sends the command along to it. So for Worker1Command, EITHER Endpoint1#ServicesBox01 OR Endpoint1#ServicesBox02 ends up getting the command from the distributor and processes it as normal.
Scenario 02
In scenario two it's pretty much the same. The PublisherCommand is sent to Endpoint3#Cluster01. The distributor picks one of the ready Endpoint3 workers, in this case Endpoint3#ServicesBox02, and gives it the command. ServicesBox02 processes the message and publishes the SomeEvent to Endpoint1#Cluster01 and Endpoint2#Cluster01. These are picked up by the distributor and in this case sent to Endpoint1#ServicesBox01 and Endpoint2#ServicesBoxN.
Notice how the messages ALWAYS flow THROUGH the distributor and the queues on Cluster01. This is the actual load balancing of MSMQ.
The config for the app server changes to make sure the commands go through the cluster:
<UnicastBusConfig ForwardReceivedMessagesTo="audit">
<MessageEndpointMappings>
<add Assembly="Messages" Type="PublisherCommand" Endpoint="Endpoint3#Cluster01" />
<add Assembly="Messages" Type="Worker1Command" Endpoint="Endpoint1#Cluster01" />
<add Assembly="Messages" Type="Worker2Command" Endpoint="Endpoint2#Cluster01" />
<!-- This one is sent locally only -->
<add Assembly=" Messages" Type="RunCalculationCommand" Endpoint="Dealing" />
</MessageEndpointMappings>
</UnicastBusConfig>
The ServicesBox config changes slightly to make sure subscriptions go through the distributor as well:
<UnicastBusConfig ForwardReceivedMessagesTo="audit">
<MessageEndpointMappings>
<add Assembly="Messages" Type="SomeEvent" Endpoint="Endpoint3#Cluster01" />
</MessageEndpointMappings>
</UnicastBusConfig>
No changes for the publisher config. It doesn't need to point to anything. The subscribers will tell it where to publish.
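For what it's worth, when a physical worker endpoint enlists with a distributor, that is typically expressed with a MasterNodeConfig section pointing at the machine (or cluster name) hosting the distributor queues. A sketch, assuming NServiceBus 3.x/4.x conventions, with the node name as a placeholder:
<configSections>
<section name="MasterNodeConfig" type="NServiceBus.Config.MasterNodeConfig, NServiceBus.Core" />
</configSections>
<!-- tells this worker which machine hosts its master/distributor queues -->
<MasterNodeConfig Node="Cluster01" />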
I am new to ActiveMQ. I have configured two ActiveMQ servers and am using them with the failover transport. They are working fine: if one ActiveMQ instance goes down, the other picks up the queues. My problem is that when the main server comes back up, it does not take the queues back. Is there any configuration or protocol that can manage this, so that when the main server is up again, consumers come back to it?
Currently my configuration is:
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true" rebalanceClusterClients="true"/>
</transportConnectors>
<networkConnectors>
<networkConnector uri="static:(tcp://192.168.0.122:61616)"
networkTTL="3"
prefetchSize="1"
decreaseNetworkConsumerPriority="true" />
</networkConnectors>
and my connection URI is:
failover:(tcp://${ipaddress.master}:61616,tcp://${ipaddress.backup}:61616)?randomize=false
I also want to send an email when a failover occurs, so that I know ActiveMQ is down.
What you have configured there is not a true HA deployment, but a network of brokers. If you have two brokers configured in a network, each has its own message store, which at any time contains a partial set of messages (see how networks of brokers work).
The behaviour that you would likely expect to see is that if one broker falls over, the other takes its place resuming from where the failed one left off (with all of the undelivered messages that the failed broker held). For that you need to use a (preferably shared-storage) master-slave configuration.
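For illustration, a shared-storage master/slave setup mostly comes down to pointing both brokers' persistence adapters at the same store; a minimal sketch of the relevant activemq.xml fragment, assuming a shared file system mounted at /shared/activemq (the path and broker name are placeholders):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="sharedBroker">
<!-- both brokers point at the same directory; whichever acquires the file lock first becomes master, the other waits as slave -->
<persistenceAdapter>
<kahaDB directory="/shared/activemq/kahadb"/>
</persistenceAdapter>
</broker>
Clients keep using the same failover URI listing both brokers; they will simply follow whichever broker currently holds the lock.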
I have done that, and am posting the solution in case anyone is having the same problem.
This feature is available in ActiveMQ 5.6. priorityBackup=true in the connection URL is the key that tells the consumer to come back to the primary node when it is available.
My new connection URI is:
failover:(tcp://master:61616,tcp://backup:61616)?randomize=false&priorityBackup=true
see here for more details.
I have configured a network of brokers with the topology below.
Producer(P1) connected to Broker(B1) and Producer(P2) connected to Broker(B2)
Broker(B1) and Broker(B2) are connected as a network of brokers and are load balancing
Consumer(C1) connected to Broker(B1) and Consumer(C2) connected to Broker(B2)
Clients are configured to use failover as follows:
Consumer-1 = failover:tcp://localhost:61616,tcp://localhost:61615?randomize=false
Consumer-2 = failover:tcp://localhost:61615,tcp://localhost:61616?randomize=false
Once Channel-2 goes down, P2 and C2 shift to Channel-1, which is the desired behaviour for failover.
I want to understand the behaviour when Channel-2 comes back. I have noticed that only Channel-1 continues to serve all the connections even after Channel-2 has recovered, thus losing load balancing between the channels.
I want to know whether, once Channel-2 is back, load balancing can start again automatically between the channels, with Producer-2 and Consumer-2 shifting back to Channel-2, thus giving full load balancing and full failover.
I have come across an article, 'Combining Fault Tolerance with Load Balancing', at http://fusesource.com/docs/broker/5.4/clustering/index.html. Is this recommended for combining fault tolerance and load balancing?
On both of your brokers, you need to set up your transportConnector to enable updateClusterClients and rebalanceClusterClients.
<transportConnectors>
<transportConnector name="tcp-connector" uri="tcp://192.168.0.23:61616" updateClusterClients="true" rebalanceClusterClients="true" />
</transportConnectors>
Specifically, you want rebalanceClusterClients. The docs at http://activemq.apache.org/failover-transport-reference.html state:
if true, connected clients will be asked to rebalance across a cluster
of brokers when a new broker joins the network of brokers
You must be using ActiveMQ 5.4 or greater to have these options available.
As an answer to your follow-up question:
"Is there a way of logging the broker URI as discussed in the article?"
In order to show which client is connected to which broker, modify the client's Log4j configuration as follows:
<log4j:configuration debug="true"
xmlns:log4j="http://jakarta.apache.org/log4j/">
...
<logger name="org.apache.activemq.transport.failover.FailoverTransport">
<level value="debug"/>
</logger>
...
</log4j:configuration>