Not being an expert on MSMQ or WCF, I have read up a fair bit about them, and they sound and look great.
I am eventually going to develop something with them, but first I want to get the theory straight, because it needs to be robust and durable.
MSMQ I guess will be hosted on a separate server.
There will be 2 WCF services: one for incoming messages and the other for outgoing messages (it takes a message, does some internal processing/validation, then places it on the outgoing queue, or maybe sends an email/text message/whatever).
I understand that with the right configuration, we can have the system be transactional (no messages are ever lost) and have messages delivered exactly once, so there is no chance of duplication.
The applications/services will be multithreaded to process messages, of which there will be hundreds of thousands.
BUT during the processing of a message, or at any point in the service's lifetime, what if the server crashes? What if the server reboots? What if the service throws an exception for whatever reason? How is it possible not to lose that message, but somehow put it back on the queue to wait to be processed again?
Also, how is it possible to make sure that the service is robust enough that it will spawn itself again?
I'd appreciate any advice and details here. There is quite a lot to take in, and WCF/MSMQ exposes quite a lot of options.
Your assumption:
MSMQ I guess will be hosted on a separate server.
is incorrect. MSMQ is installed on all machines which want to participate in message queuing.
There will be 2 WCF services: one for incoming messages and the other for outgoing messages
In the most typical configuration, the destination queues are local to the listening service.
For example, your ServiceA would have a local queue from which it reads. ServiceB also has a local queue from which it reads. If ServiceA wants to call ServiceB it will put a message into ServiceB's local queue.
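For illustration, here is a minimal System.Messaging sketch of that send (the machine name, queue name, and the assumption that ServiceB's queue is transactional are all hypothetical):

```csharp
using System.Messaging;

class Sender
{
    static void Main()
    {
        // A DIRECT format name avoids an Active Directory lookup; the
        // machine and queue names here are illustrative only.
        const string serviceBQueue =
            @"FormatName:DIRECT=OS:serviceb-machine\private$\serviceb-input";

        using (var queue = new MessageQueue(serviceBQueue))
        {
            // MessageQueueTransactionType.Single wraps this one send in its
            // own transaction, which a transactional destination queue requires.
            queue.Send("hello from ServiceA", MessageQueueTransactionType.Single);
        }
    }
}
```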
I understand that with the right configuration, we can have the system be transactional (no messages are ever lost)
This is correct. This is because MSMQ uses a messaging pattern called store-and-forward. See here for an explanation.
Essentially, the reason it is safe to assume no message loss is that the transmission of a message from one machine to another actually takes place under three distinct transactions.
The first transaction: ServiceA writes to its own temporary local queue. If this fails, the transaction rolls back and ServiceA can handle the exception.
The second transaction: the queue manager on ServiceA's machine transmits the message to the queue manager on ServiceB's machine. If this fails, the message remains on the temporary queue.
The third transaction: ServiceB reads the message off its local queue. If ServiceB's message handler method throws an exception, the transaction rolls the message back onto the local queue.
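As a sketch of that third transaction in code (queue path, formatter, and the processing step are assumptions), the receive side might look like this:

```csharp
using System;
using System.Messaging;

class Receiver
{
    static void Main()
    {
        var queue = new MessageQueue(@".\private$\serviceb-input");
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            try
            {
                Message msg = queue.Receive(TimeSpan.FromSeconds(5), tx);
                Console.WriteLine("Processing: " + msg.Body); // validation etc. goes here
                tx.Commit();   // only now is the message permanently removed
            }
            catch
            {
                tx.Abort();    // the message rolls back onto the local queue
                throw;
            }
        }
    }
}
```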
The applications/services will be multithreaded to process messages
This is fine except if you require order to be preserved in the message processing chain. If you need ordered processing then you cannot have multiple threads without implementing a re-sequencer to reapply order.
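If you do need order with multiple threads, a re-sequencer is conceptually simple. Here is a minimal sketch, assuming each message carries a contiguous sequence number (everything here is illustrative):

```csharp
using System.Collections.Generic;

// Buffers out-of-order arrivals and releases messages strictly in sequence.
// Callers must synchronize access (e.g. with a lock) when using this from
// multiple threads.
public class Resequencer<T>
{
    private readonly SortedDictionary<long, T> _pending = new SortedDictionary<long, T>();
    private long _nextExpected;

    public Resequencer(long firstSequence) { _nextExpected = firstSequence; }

    // Accept one arrival; returns whatever can now be released in order.
    public IReadOnlyList<T> Accept(long sequence, T message)
    {
        _pending[sequence] = message;
        var released = new List<T>();
        while (_pending.TryGetValue(_nextExpected, out T next))
        {
            _pending.Remove(_nextExpected);
            _nextExpected++;
            released.Add(next);
        }
        return released;
    }
}
```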
I thought that MSMQ can be hosted separately and have x servers share
that queue?
All servers which want to participate in the exchange of messages have MSMQ installed. Each server can then write to any queue on any other server.
The reason for my thinking was: what if the server goes down? Then how
will the messages get sent/received into MSMQ?
If the queues are transactional then that means messages on them are persisted to disk. If the server goes down then when it comes back up the messages are still there. While a server is down it obviously cannot participate in the exchange of messages. However, messages can still be "sent" to that server - they just remain local to the sender (in a temporary queue) until the destination server comes back on-line.
so by having one central MSMQ server (and having it mirrored/failover)
then there will be a guarantee of uptime
The whole point of using message queuing is that it is a fault-tolerant transport, so you don't need to guarantee uptime. If you had 100% availability, there would be little reason to use message queuing.
how will WCF be notified of messages that are incoming?
Each service will listen on its own local queue. When a message arrives, the WCF runtime causes the handling method to be called and the message to be handled.
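As a rough sketch of how that listening side is wired up in WCF (the contract, queue name, and address are all assumptions):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // One-way operations are required for queued contracts.
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string order);
}

public class OrderService : IOrderService
{
    // Enlist the receive in a transaction so that a thrown exception
    // rolls the message back onto the queue.
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void SubmitOrder(string order)
    {
        Console.WriteLine("Processing: " + order);
    }
}

public static class Host
{
    public static void Main()
    {
        // The queue must already exist and be created as transactional
        // for ExactlyOnce to work.
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None)
        {
            ExactlyOnce = true,
            Durable = true
        };

        var host = new ServiceHost(typeof(OrderService));
        host.AddServiceEndpoint(typeof(IOrderService), binding,
            "net.msmq://localhost/private/orders");
        host.Open();
        Console.WriteLine("Listening... press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```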
how will the service be notified of failures of sending messages
If ServiceA fails to transmit a message to ServiceB then ServiceB will never be notified of that failure. Nor should it be. ServiceA will handle the failure to transmit, not ServiceB. Your expectation in this instance creates a hard coupling between the services, something which message queueing is supposed to remove.
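On the sending side, one way ServiceA can observe and handle transmit failures is a time-to-live plus a custom dead-letter queue that it monitors. A sketch (queue names and the timeout are assumptions):

```csharp
using System;
using System.ServiceModel;

class ClientBindingSetup
{
    static NetMsmqBinding CreateBinding()
    {
        return new NetMsmqBinding(NetMsmqSecurityMode.None)
        {
            ExactlyOnce = true,
            // If delivery does not happen within the TTL, the message is
            // dead-lettered instead of silently waiting forever.
            TimeToLive = TimeSpan.FromMinutes(30),
            DeadLetterQueue = DeadLetterQueue.Custom,
            // ServiceA reads this queue to detect and compensate for failures.
            CustomDeadLetterQueue = new Uri("net.msmq://localhost/private/servicea-dlq")
        };
    }
}
```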
MSMQ can store messages even if the service is temporarily shut down or the computer is rebooted.
The main goal of WCF is to transport a message from source to destination; it does not matter what the transport is. In your case, MSMQ is the transport for WCF, and there is no need for the client and the service to be online/available simultaneously. But once a message is received, it is your responsibility to process it correctly, regardless of which transport was used to send it.
Related
I'm doing a test to see how flow control behaves. I created a fast producer and slow consumers, and set my destination queue's policy high-water mark to 60 percent.
The queue did reach 60%, so messages then went to the store; now the store is full and blocking, as expected.
But now I cannot get my consumer to connect and pull from the queue. It seems that the blocking is also preventing the consumer from getting in to start pulling from the queue.
Is this the correct behavior?
The consumer should not be blocked by flow-control. Otherwise messages could not be consumed to free up space on the broker for producers to send additional messages.
This issue surfaced when I was using an on-demand JMS service. The service queues or dequeues via REST services, and the consumers are created on demand. If the broker is blocked, as in my case by being out of resources, then you cannot create a new consumer.
I've since modified the JMS service to use a consumer pool (implementing an object pool pattern). The consumer pool is initialized when the application starts, and this resolved the blocking issue.
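For reference, the object pool pattern mentioned above looks roughly like this (a generic sketch; in the real service the pooled objects would be JMS consumers):

```csharp
using System;
using System.Collections.Concurrent;

// Objects are created up front and rented/returned instead of being
// created on demand, so a blocked broker cannot prevent consumer creation.
public class ObjectPool<T>
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _factory;

    public ObjectPool(Func<T> factory, int initialSize)
    {
        _factory = factory;
        for (int i = 0; i < initialSize; i++)
            _items.Add(factory());
    }

    public T Rent() => _items.TryTake(out T item) ? item : _factory();

    public void Return(T item) => _items.Add(item);
}
```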
I am using BeginPeek() (the overload with no parameters) to subscribe to messages coming into my private queue. This is done in a service hosted in the NServiceBus host. When NServiceBus encounters a transport connection timeout exception (I'm seeing circuit-breaker-armed logs and timeout exception logs), the peek event subscription seems to get lost. When database connectivity becomes stable again and new messages come into my queue, the service is no longer notified.
Any ideas or suggestions on how to address this?
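I can't speak to NServiceBus's internals here, but one defensive pattern with raw System.Messaging is to always re-arm the subscription from inside the PeekCompleted handler, even when EndPeek throws. A sketch (the queue path is an assumption):

```csharp
using System;
using System.Messaging;

public class QueueListener
{
    private readonly MessageQueue _queue =
        new MessageQueue(@".\private$\my-input-queue");

    public void Start()
    {
        _queue.PeekCompleted += OnPeekCompleted;
        _queue.BeginPeek();
    }

    private void OnPeekCompleted(object sender, PeekCompletedEventArgs e)
    {
        try
        {
            _queue.EndPeek(e.AsyncResult);
            // ... receive and process the message here ...
        }
        catch (MessageQueueException ex)
        {
            Console.WriteLine("Peek failed: " + ex.Message);
            // Optionally back off here before retrying.
        }
        finally
        {
            _queue.BeginPeek(); // always re-arm the subscription
        }
    }
}
```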
I know that once a message has been delivered to the MSMQ by a WCF client, the netmsmqbinding provides retries out of the box in case the service faults.
But if my client fails to put the message in the MSMQ in the first place, is there an out of the box client retry available in WCF or do I have to implement a client queue and retry logic in my client code?
Thanks
It's a highly unlikely scenario that your messages will not even be placed in the client's local queue in the first place. If the MSMQ server is running on the client station and the MSMQ listener service is up and running, you should have nothing to worry about.

I don't think MSMQ offers anything to check this for you, but you could code some method on your client to periodically Peek() the local queue and send an acknowledgment receipt for every message that has reached it. This is feasible since you can easily access your local queues in code, and every message sent via MSMQ from a client to a service always goes through the local queue first.

You can also tell that a message has reached the queue if your Send() method doesn't return an error. But I don't think you really need to worry about messages on the client failing to reach the local queue.
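To make the "Send() doesn't return an error" point concrete: a transactional send either lands the message durably in the local queue or throws, so the client-side check reduces to exception handling. A sketch (the queue path is an assumption):

```csharp
using System;
using System.Messaging;

public static class ReliableSender
{
    public static bool TrySend(string body)
    {
        try
        {
            using (var queue = new MessageQueue(@".\private$\outgoing"))
            {
                queue.Send(body, MessageQueueTransactionType.Single);
            }
            return true; // the message is durably in the local queue
        }
        catch (MessageQueueException ex)
        {
            Console.WriteLine("Send failed: " + ex.Message);
            return false; // the caller can retry or fall back
        }
    }
}
```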
I'm having some trouble understanding RabbitMQ's confirms. I came across the following explanation from RabbitMQ:
Notes
The broker loses persistent messages if it crashes before said
messages are written to disk. Under certain conditions, this causes
the broker to behave in surprising ways. For instance, consider this
scenario:
a client publishes a persistent message to a durable queue
a client consumes the message from the queue (noting that the message is persistent and the queue durable), but doesn't yet ack it,
the broker dies and is restarted, and
the client reconnects and starts consuming messages.
At this point, the client could reasonably assume that the message
will be delivered again. This is not the case: the restart has caused
the broker to lose the message. In order to guarantee persistence, a
client should use confirms. If the publisher's channel had been in
confirm mode, the publisher would not have received an ack for the
lost message (since the consumer hadn't ack'd it and it hadn't been
written to disk).
Then I used this http://hg.rabbitmq.com/rabbitmq-java-client/file/default/test/src/com/rabbitmq/examples/ConfirmDontLoseMessages.java to do some basic tests and verify the confirms, but I get some odd results:
The waitForConfirmsOrDie method doesn't block the producer, which is different from what I expected; I assumed waitForConfirmsOrDie would block the producer until all the messages have been ack'd or one of them is nack'd.
I removed channel.confirmSelect() and channel.waitForConfirmsOrDie() from the publisher and changed the consumer from auto ack to manual ack. I published all the messages to the queue, consumed them one by one, and then stopped the RabbitMQ server during the consuming process. What I expected was that the remaining messages would be lost after the RabbitMQ server restarted, because the channel was not in confirm mode; but I still see all the other messages in the queue after the server restart.
Since I am new to RabbitMQ, can anyone tell me where my understanding of confirms goes wrong?
My understanding is that "channel confirmation" means the broker confirms that it successfully got the message from the producer, regardless of whether the consumer acks the message or not. It depends on the queue type and message delivery mode; see http://www.rabbitmq.com/confirms.html for details:
the messages are confirmed when:
it decides a message will not be routed to queues
(if the mandatory flag is set then the basic.return is sent first) or
a transient message has reached all its queues (and mirrors) or
a persistent message has reached all its queues (and mirrors) and been persisted to disk (and fsynced) or
a persistent message has been consumed (and if necessary acknowledged) from all its queues
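To make confirm mode concrete, here is a minimal publisher-confirms sketch. It uses the RabbitMQ .NET client rather than the Java client from the question, to keep one language on this page; the queue name and message bodies are illustrative:

```csharp
using System;
using System.Text;
using RabbitMQ.Client;

public static class ConfirmingPublisher
{
    public static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.ConfirmSelect(); // put the channel in confirm mode
            channel.QueueDeclare("confirm-test", durable: true,
                exclusive: false, autoDelete: false, arguments: null);

            var props = channel.CreateBasicProperties();
            props.Persistent = true; // delivery mode 2

            for (int i = 0; i < 1000; i++)
            {
                channel.BasicPublish(exchange: "", routingKey: "confirm-test",
                    basicProperties: props,
                    body: Encoding.UTF8.GetBytes("msg " + i));
            }

            // Blocks until every message published since ConfirmSelect() is
            // acked; throws if any is nack'd or the timeout elapses.
            channel.WaitForConfirmsOrDie(TimeSpan.FromSeconds(10));
            Console.WriteLine("All messages confirmed.");
        }
    }
}
```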
Old question, but oh well...
I published all the messages to the queue, consumed them one by one, and then stopped the RabbitMQ server during the consuming process. What I expected was that the remaining messages would be lost after the RabbitMQ server restarted, because the channel was not in confirm mode; but I still see all the other messages in the queue after the server restart.
This is actually how it should work, IF persistence is enabled. If the server crashes or something else goes wrong, the messages cannot be confirmed and thus won't be removed from the queue.
Messages will only be removed from the queue if they are confirmed to have been handled; they are lost only if the broker had not yet written them to disk before the server crashed.
Confirming and acknowledging can be turned off if desired, in which case the producer won't wait for the acks. I cannot find the exact command for it right now, but it does exist.
More on the acks and confirms: https://www.rabbitmq.com/reliability.html
All,
I'm looking for advice over the following scenario:
I have a component running in one part of the corporate network that sends messages to an application logic component for processing. These components might reside on the same server, on different servers in the same network (LAN or WAN), or live outside in the cloud. The application server should be scalable and resilient.
The messages are related in that the sequence they arrive is important. They are time-stamped with the client timestamp.
My thinking is that I'll get the clients to use WCF basicHttpBinding (some are based on .NET CF, which only has basic) to send messages to the Application Server (this is because we can guarantee that ports 80/443 will be open for outgoing connections). The server accepts these and writes them into a queue. This queue can be scaled out over multiple machines if needed.
I'm hesitant to use MSMQ for the queue, though, as to scale out properly we would have to install separate private queues on each application server and monitor the queues round-robin. I'm concerned that we could lose a message on a server that's gone down until that server is restored, and that we could end up processing a later message from a different server and disrupting the sequence.
What I'd prefer is a central queue (e.g. a database table) that all application servers monitor.
With this in mind, what I'd like to do is create a custom WCF binding, similar to netMsmqBinding, but one that uses the DB table instead. I'm confused as to whether I can simply create a custom transport or whether I need a full binding, and whether the binding will allow the client to send over HTTP. I've looked around the internet, but I'm a little confused as to where to start.
I could skip the custom WCF binding, but it seems a good way to introduce scalability if I do need to separate the servers.
Any suggestions please would be helpful, including alternatives.
Many thanks
I would start with MSMQ because it exists for exactly this purpose. Use a single transactional queue on a clustered machine and let the application servers take messages for processing from this queue. The processing of each message has to be part of a distributed transaction (MSDTC).
This scenario will ensure:
The clustered queue host ensures that if one cluster node fails, the other will still be able to handle requests.
Sending each message as recoverable means the message is persisted to the hard drive (not only held in memory), so in a critical failure of the whole cluster you will still have all the messages.
A transactional queue ensures that all message transport operations are atomic: moving a message from the outgoing queue to the destination queue is processed as a transaction, so the original message is kept in the outgoing queue until an ack arrives from the destination queue. Transactional processing can also ensure in-order delivery.
A distributed transaction allows the application servers to consume messages transactionally: a message is not deleted from the queue until the application server commits the transaction, and if the transaction aborts or times out, the message returns to the queue (see the sketch below).
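A sketch of what consuming under a distributed transaction might look like from an application server (queue path and processing are assumptions; MessageQueueTransactionType.Automatic enlists the receive in the ambient MSDTC transaction):

```csharp
using System;
using System.Messaging;
using System.Transactions;

public static class Worker
{
    public static void ProcessOne()
    {
        var queue = new MessageQueue(
            @"FormatName:DIRECT=OS:cluster-host\private$\work");
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

        using (var scope = new TransactionScope())
        {
            Message msg = queue.Receive(TimeSpan.FromSeconds(5),
                MessageQueueTransactionType.Automatic);

            Console.WriteLine("Processing: " + msg.Body);
            // ... database writes here would join the same transaction ...

            scope.Complete(); // commit: the message is removed for good
        } // no Complete() -> rollback: the message returns to the queue
    }
}
```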
MSMQ is also available on .NET CF, so you can send messages directly to the queue without an intermediate, non-reliable web service layer.
It should be possible to configure MSMQ over HTTP (but I have never used it, so I'm not sure how it interacts with the previously mentioned features).
Your proposed solution will be pretty hard; you will end up rebuilding BizTalk's MessageBox. But if you really want to do it, check Omar's post about building a database queue table.
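For flavor, the database-queue-table approach usually hinges on a dequeue statement like the one below (a sketch, not Omar's actual code; UPDLOCK/READPAST lets several application servers poll the same table without handing the same row to two of them, and the table/column names are assumptions):

```csharp
using System;
using System.Data.SqlClient;

public static class DbQueue
{
    private const string DequeueSql = @"
        DELETE FROM dbo.MessageQueue
        OUTPUT DELETED.Body
        WHERE Id = (
            SELECT TOP (1) Id FROM dbo.MessageQueue
            WITH (UPDLOCK, READPAST)
            ORDER BY Id)";

    public static string TryDequeue(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(DequeueSql, conn))
        {
            conn.Open();
            return cmd.ExecuteScalar() as string; // null when the queue is empty
        }
    }
}
```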