Can't read from a remote transactional private queue using WCF in workgroup mode (it works using System.Messaging!)

I have spent days reading MSDN, forums and articles about this, and cannot find a solution to my problem.
As a PoC, I need to consume a queue from more than one machine, since I need fault tolerance on the consumer side. Performance is not an issue, since fewer than 100 messages a day should be exchanged.
I have coded two trivial console applications, one as the client, the other as the server, using .NET Framework 4.0 (also tested on 3.5). Messages use transactions.
Everything runs fine on a single machine (Windows 7), even when running multiple consumer application instances.
Now I have Windows Server 2012 and 2008 R2 virtual test servers running in the same domain (but I don't want to use AD integration anyway). I am using an IP address or "." in the endpoint address attribute to avoid DNS / AD resolution side effects.
Everything works fine IF the queue is hosted by the consumer and the producer submits messages to the remote private queue. This is also true if I swap the consumer / producer roles of the 2012 and 2008 servers.
But I have NEVER been able to make this work, using WCF, when the consumer reads from the remote queue and the producer submits messages locally. Submission never fails; my problem is on the consumer side.
My wish is to make this run using netMsmqBinding, but I also tried msmqIntegrationBinding. For each test, I adapted code and configuration, then confirmed it ran OK when the consumer was consuming from the local queue.
The last test I did used WCF (msmqIntegrationBinding) only on the producer (local queue) and System.Messaging.MessageQueue on the consumer (remote queue): it works fine! My goal is to do the same using WCF and netMsmqBinding on both sides.
From my point of view, this proves the problem is a WCF issue, not an MSMQ one. It has nothing to do with security, authentication, firewalls, transport, protocol, MSMQ version, etc.
Error info from MS Service Trace Viewer:
Using msmqIntegrationBinding, when receiving the message (opening the queue was OK): An error occurred while receiving a message from the queue: The transaction specified cannot be imported. (-1072824242, 0xc00e004e). Ensure that MSMQ is installed and running. Make sure the queue is available to receive from.
Using netMsmqBinding, on opening the queue: An error occurred when converting the '172.22.1.9\private$\Test' queue path name to the format name: The queue path name specified is invalid. (-1072824300, 0xc00e0014). All operations on the queued channel failed. Ensure that the queue address is valid. MSMQ must be installed with Active Directory integration enabled and access to it is available.
If someone can help me find out why my configuration cannot be handled by WCF, which is a much more elegant and configurable approach than System.Messaging, I would greatly appreciate it!
Thank you.

You may need to post your consumer code and config to give more of an idea, but it could be the construction of the queue name, e.g.
FormatName:DIRECT=TCP:192.168.0.2\SomeQueue
There are several different ways to connect to a queue, and the naming format changes depending on whether you are remote or local as well.
I have found this article in the past to help:
http://blogs.msdn.com/b/johnbreakwell/archive/2009/02/26/difference-between-path-name-and-format-name-when-accessing-msmq-queues.aspx
Also, MessageQueue Constructor on MSDN...
http://msdn.microsoft.com/en-us/library/ch1d814t.aspx
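For illustration, here is a minimal sketch of the two addressing schemes in play, using the IP and queue name from the question (adjust to your environment). The System.Messaging part mirrors what the asker already has working; the net.msmq URI in the trailing comment is the form WCF expects instead of a path name, which is what the 0xc00e0014 error above is complaining about.

    // Reading a remote transactional private queue with System.Messaging.
    // In workgroup mode the queue must be opened by DIRECT format name;
    // a path name like "172.22.1.9\private$\Test" triggers AD resolution.
    using System;
    using System.Messaging;

    class RemoteReceive
    {
        static void Main()
        {
            var queue = new MessageQueue(@"FormatName:DIRECT=TCP:172.22.1.9\private$\Test");
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                Message msg = queue.Receive(TimeSpan.FromSeconds(30), tx);
                Console.WriteLine(msg.Body);
                tx.Commit();
            }

            // WCF's netMsmqBinding does not take a path name at all; the
            // equivalent endpoint address is a net.msmq URI:
            //   net.msmq://172.22.1.9/private/Test
        }
    }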

Related

WebLogic migratable JMS consumer doesn't follow the service to the new managed server if the old server remains running

I have a JMS service targeted at a migratable target (using an Auto-Migrate Exactly-Once policy) in a cluster consisting of 2 managed servers. At any point in time the service is hosted on one of them, and the consumer (which is targeted at the cluster) is supposed to receive messages seamlessly no matter where the service is hosted.
When I manually switch the host of the migratable target (clicking migrate) without turning the hosting managed server off, the consumer fails to receive messages sent to the queues unless I turn off the previous hosting managed server, forcing the consumer over to the new host.
I can rule out sender problems; I can see the messages in the queue right after they are sent.
I'll be grateful if anyone can advise on how to configure either the consumer or the migratable service to work seamlessly when migration happens.
I think that may just be a misunderstanding of how migration works. The docs state that Auto-Migrate Exactly-Once:
indicates that if at least one Managed Server in the candidate list
is running, then the JMS service will be active somewhere in the
cluster if servers should fail or are shut down (either gracefully or
forcibly). For example, a migratable target hosting a path service
should use this option so if its hosting server fails or is shut down,
the path service will automatically migrate to another server and so
will always be active in the cluster. Note that this value can lead to
target grouping. For example, if you have five exactly-once migratable
targets and only one server member is started, then all five
migratable targets will be activated on that server member.
The docs also state:
Manual Service Migration—the manual migration of pinned JTA and
JMS-related services (for example, JMS server, SAF agent, path
service, and custom store) after the host server instance fails
Your server/service has neither failed nor shut down; you are forcing it to migrate with a healthy host still running, so it has not met the criteria for migration.
See more here as well.
I have some experience that sounds reminiscent of what you're looking at. There was some WLS-specific capability around recognizing reconfiguration in JMS destinations as part of their clustered server design.
In one case I had to call a WLS-specific method: weblogic.jms.extensions.WLSession.setExceptionListener(). This is on their implementation of the JMS Session interface and is analogous to the standard JMS Connection.setExceptionListener().
With this WLS-specific capability, the WLSession.setExceptionListener() callback fires at the point where the consuming client should tear down and re-establish the connection / session / consumer in reaction to a reconfiguration (migration) that has happened.

MassTransit: can it use a central MSMQ server? (Or should I start with RabbitMQ from the outset?)

I set up the MassTransit sample apps, and all was great. Local operation with MSMQ looks good.
Now I am starting to put MassTransit into my real app, where I have jobs coming from four servers and processing happening on two worker systems.
It seems that MassTransit always wants to push to:
msmq://localhost/...
But I thought I would set up a single, central MSMQ server: msmq:///...
It appears (I may be missing something! Please correct me if I am off!) that when using MSMQ, I need to set up MSMQ on multiple machines and configure MSMQ to route from machine to machine.
Am I missing something?
Should I skip MSMQ and jump to RabbitMQ right off (which appears to solve for this)?
Is there some fundamental piece of MSMQ knowledge (that is perhaps not in the MassTransit docs) that I'm missing?
thank you!
First off, I always suggest people use RabbitMQ over MSMQ unless you MUST use DTC for some reason. And even then, I'd suggest you rethink using DTC.
But given that you have some constraint you can't fight, you're welcome to use a central MSMQ server, but it doesn't provide a ton of value. Each server sending messages must have MSMQ installed locally because of how MSMQ works: messages actually end up in a local outgoing queue before they are sent over to the machine in question. Multi-machine MSMQ setups, in my past experience, have looked like this:
A core machine runs MassTransit.RuntimeServices at /mt_subscriptions, and maybe one service at /service_1.
Another processing machine runs a specific heavy-load service at /service_2, and its configuration references msmq://coremachine/mt_subscriptions for the subscription service.
Yet another processing machine has a similar setup.
So with those 3 machines, the only address that isn't msmq://localhost/ is the reference to the subscription service in the configuration.
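As a concrete illustration of that layout, here is a minimal sketch of one worker's bus configuration using the MassTransit 2.x-era MSMQ API (the exact method names vary by version; coremachine and the queue names are just the placeholders from the list above):

    using MassTransit;

    class WorkerBusConfig
    {
        public static IServiceBus Create()
        {
            return ServiceBusFactory.New(sbc =>
            {
                sbc.UseMsmq();                                  // MSMQ must still be installed locally
                sbc.ReceiveFrom("msmq://localhost/service_2");  // this machine's own input queue
                // The one non-localhost address: the central subscription service.
                sbc.UseSubscriptionService("msmq://coremachine/mt_subscriptions");
            });
        }
    }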

NserviceBus - What happens to a message if the server is offline

I went through the NServiceBus documentation, including the durable messaging section. What I understand is that when the server is offline, messages continue to go into the server's input queue and get picked up when the server comes back online.
But what if the server is completely down and the input queue is not accessible?
I'm using Bus.Send from the client.
It depends on what transport you're using.
In the case of a brokered message queue, like Azure Service Bus, as long as that service is available, the fact that the machine that will eventually retrieve the messages is offline is irrelevant, as that machine is simply asking the external queuing service for messages. The same goes for a transport like SQL Server.
In the case of a transport like MSMQ, which is a store-and-forward style queue, the messages will remain in a local outgoing queue until the remote machine becomes available.
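A minimal sketch of that store-and-forward behavior, assuming MSMQ and a placeholder server name: the send completes locally even while the remote machine is down.

    using System.Messaging;

    class StoreAndForwardDemo
    {
        static void Main()
        {
            // 'remoteserver' is a placeholder and may be offline; opening the
            // queue object and sending still succeed, because the message is
            // parked in this machine's outgoing queue until delivery.
            var queue = new MessageQueue(@"FormatName:DIRECT=OS:remoteserver\private$\input");

            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                queue.Send("hello", tx); // lands in the local outgoing queue first
                tx.Commit();
            }
        }
    }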
Can you double-check that you are looking in the correct spot? If you aren't getting an error out of NServiceBus when you Send, then MSMQ is installed. If it can't be reached or the service is stopped, you should get errors.
The outbound queues are in a different place, as illustrated here:
http://blogs.msdn.com/cfs-filesystemfile.ashx/__key/communityserver-components-postattachments/00-09-06-31-16/outgoingempty.JPG
As RMD indicated, this is an advantage of the store-and-forward MSMQ transport: the local outbound queue should just stack these up until the remote server is available.
Thx.
Joe

Connect NServiceBus with an AIX Mainframe

I have a back-end system that drops events to my system. It is critical that these events don't get lost (I work for a health care company, and lost info can impact a patient's care).
I would like to make this system drop its data into NServiceBus so that it can be published to subscribers that need it. However, the server that is dropping these messages is an AIX machine, so it can't run .NET code.
This system can send the messages via many standard protocols and communication types (TCP, WSDL-based services, calling a database sproc, etc.).
One option I have considered is to set up a WCF service that the AIX mainframe will call. I can then have my WCF service make the call to NServiceBus.
But the number of events sent per minute by this back-end service can at times be fairly high (about 500 messages per minute). I am worried that WCF is not up to this, while NServiceBus says it can handle 1000 messages per second. I am also worried about data loss in the event of downtime. NServiceBus claims it is not going to lose any data.
Am I wrong? Is WCF going to be just fine? Or am I making a weak link in the chain?
Is there a way I can use an established protocol to add items directly to an NServiceBus Queue?
Or should I just write my own .NET app that will allow NServiceBus to use a TCP connection?
Note: Because these messages are critical, the message must be acknowledged or the server will keep sending it.
I would take a look at the WCF integration that comes right out of the box. The WCF service is contained within the same host as NSB, and the integration does nothing more than push the message onto the queue, so I don't think you'll have a throughput issue. Seeing that this is critical data, I would suggest clustering the service. The other option would be to install 2 or more instances of the service on different machines and load-balance the HTTP calls across both. In essence you would have one logical publisher with two physical components doing the publishing.
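To make the shape of that concrete, here is a minimal sketch of the idea, not NServiceBus's actual built-in WCF integration (whose types vary by version): a plain WCF service whose only job is to hand each call to the bus. IEventIntake, EventIntakeService, and MainframeEvent are hypothetical names; IBus and ICommand are the NServiceBus-style abstractions, so adjust to whatever your version exposes.

    using System.ServiceModel;
    using NServiceBus;

    [ServiceContract]
    public interface IEventIntake
    {
        // The AIX mainframe calls this over basicHttpBinding; a normal return
        // is the acknowledgement that lets it stop re-sending the event.
        [OperationContract]
        void Submit(string eventPayload);
    }

    // Hypothetical message type used purely for illustration.
    public class MainframeEvent : ICommand
    {
        public string Payload { get; set; }
    }

    public class EventIntakeService : IEventIntake
    {
        private readonly IBus bus; // injected by the shared NSB/WCF host

        public EventIntakeService(IBus bus) { this.bus = bus; }

        public void Submit(string eventPayload)
        {
            // Once Send returns, the message sits in the local durable queue,
            // so a crash after this point cannot lose the event.
            bus.Send(new MainframeEvent { Payload = eventPayload });
        }
    }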

Advice on disconnected messages with WCF through firewalls

All,
I'm looking for advice over the following scenario:
I have a component running in one part of the corporate network that sends messages to an application logic component for processing. These components might reside on the same server, on different servers in the same network (LAN or WAN), or live outside in the cloud. The application server should be scalable and resilient.
The messages are related, in that the sequence in which they arrive is important. They are time-stamped with the client timestamp.
My thinking is that I'll get the clients to use WCF basicHttpBinding (some are based on .NET CF, which only has basic) to send messages to the application server (because we can guarantee port 80/443 will be open for outgoing connections). The server accepts these and writes them into a queue. This queue can be scaled out over multiple machines if needed.
I'm hesitant to use MSMQ for the queue, though, as to scale out properly we would have to install separate private queues on each application server and monitor the queues round-robin. I'm concerned that a message on a server that has gone down would be unavailable until that server is restored, and we could end up processing a later message from a different server and disrupting the sequence.
What I'd prefer is a central queue (e.g. a database table) that all application servers monitor.
With this in mind, what I'd like to do is create a custom WCF binding, similar to netMsmqBinding, but that uses the DB table instead. I'm confused as to whether I can simply create a custom transport or whether I need a full binding, and whether the binding will allow the client to send over HTTP. I've looked around the internet, but I'm a little confused as to where to start.
I could just not bother with the custom WCF binding, but it seems a good way to introduce scalability if I do need to separate the servers.
Any suggestions please would be helpful, including alternatives.
Many thanks
I would start with MSMQ, because it is designed for exactly this purpose. Use a single transactional queue on a clustered machine and let the application servers take messages for processing from this queue. Each message's processing has to be part of a distributed transaction (MSDTC).
This scenario will ensure the following (a consumer sketch follows the list):
A clustered queue host ensures that if one cluster node fails, the other will still be able to handle requests.
Sending each message as recoverable means the message is persisted to the hard drive (not only held in memory), so even after a critical failure of the whole cluster you will still have all the messages.
A transactional queue ensures that all message transport operations are atomic: moving a message from the outgoing queue to the destination queue is processed as a transaction, so the original message is kept in the outgoing queue until the ack from the destination queue arrives. Transactional processing can also ensure in-order delivery.
A distributed transaction allows the application servers to consume messages transactionally: a message is not deleted from the queue until the application server commits the transaction (or the transaction times out).
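Here is a minimal sketch of such a consumer, assuming placeholder names (clusterhost, the work queue, the handler): wrapping the receive in a TransactionScope enlists MSMQ, and any database work done inside the scope, in one MSDTC transaction.

    using System;
    using System.Messaging;
    using System.Transactions;

    class AppServerWorker
    {
        static void Main()
        {
            var queue = new MessageQueue(@"FormatName:DIRECT=OS:clusterhost\private$\work");
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            using (var scope = new TransactionScope())
            {
                // Automatic = enlist the receive in the ambient transaction.
                Message msg = queue.Receive(TimeSpan.FromSeconds(30),
                                            MessageQueueTransactionType.Automatic);

                Process((string)msg.Body); // database work enlists in the same MSDTC transaction

                scope.Complete(); // without this, the message returns to the queue
            }
        }

        static void Process(string body) { /* application logic */ }
    }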
MSMQ is also available on .NET CF, so you can send messages directly to the queue without an intermediate, non-reliable web service layer.
It should be possible to configure MSMQ over HTTP (but I have never used it, so I'm not sure how it cooperates with the previously mentioned features).
Your proposed solution will be pretty hard; you will end up building BizTalk's MessageBox. But if you really want to do it, check Omar's post about building a database queue table.
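For a flavor of what that database queue table involves (a generic sketch of the common pattern, not Omar's actual post; the table and column names are made up), the crux is dequeuing atomically so multiple application servers can poll the same table without blocking on or double-processing each other's rows:

    using System.Data.SqlClient;

    class DbQueueReader
    {
        // READPAST skips rows locked by other consumers; the CTE + DELETE ...
        // OUTPUT removes exactly one row and returns it in a single statement.
        const string DequeueSql = @"
            WITH next AS (
                SELECT TOP (1) Id, Payload
                FROM dbo.MessageQueue WITH (ROWLOCK, READPAST)
                ORDER BY EnqueuedAt)
            DELETE FROM next
            OUTPUT DELETED.Id, DELETED.Payload;";

        // Returns the next payload, or null if the queue is empty.
        public static string TryDequeue(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(DequeueSql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    return reader.Read() ? (string)reader["Payload"] : null;
                }
            }
        }
    }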