In WSO2 ESB, I created a proxy service that pushes incoming messages to a message store. I then defined a message processor on that message store to consume the messages and send them to a sequence. To be able to define a sequence in my message processor, I have to use the sampling processor implementation.
But I also want to manage faults in the sequence. Unfortunately, the sampling processor implementation does not provide fault management through a fault sequence. Only the forwarding processor implementation lets me manage faults, but it only supports forwarding messages to an endpoint.
How can I have both an incoming-message sequence and a fault sequence in a message processor? Do I have to implement my own message processor?
You can use the forwarding processor and configure your endpoint to send the message to a proxy inside the ESB.
Inside this proxy, you define the mediation you want to apply to the message and send back a response to commit or roll back the transaction (commit = message removed from the store; rollback = message stays in the store):
Responses with HTTP status code 200 or 500 (SOAP fault) are treated as good responses and commit the transaction (the reply sequence is executed).
Responses with any other HTTP status code are treated as fault responses and roll back the transaction (the fault sequence is executed).
Related
I am doing a POC with RabbitMQ and have a question about how to listen to queues conditionally.
We consume messages from a queue, and each consumed message kicks off an upload process that can take a long time depending on the file size. When the files are large, the external service we invoke sometimes runs out of memory if multiple messages are consumed while uploads for previous messages are still in progress.
So we would like to consume the next message from the queue only once the current message has been processed completely. I am new to JMS and am wondering how to do this.
My current thought is that the code flow would manually pull the next message from the queue when it finishes processing the previous one, since the flow knows when processing is complete. But if the listener is only invoked manually from the code flow, how would it pull the very first message?
The JMS spec says that message consumers work sequentially:
The session used to create the message consumer serializes the execution of all message listeners registered with the session
If you create a MessageListener and register it with your consumer, the JMS spec guarantees that the listener's onMessage is called serially: the next message is delivered only after the listener has finished processing the previous one. So in effect each message waits until the previous one has completed.
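A minimal sketch of that in plain JMS (the broker URL, queue name, and upload call are placeholders, not from the question; ActiveMQ is used here only as an example provider):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class SequentialConsumer {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker URL
            Connection connection = factory.createConnection();
            // One session, one consumer: the session serializes listener execution.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("upload.queue")); // placeholder queue

            consumer.setMessageListener(message -> {
                // Long-running upload; the next onMessage call will not start
                // until this one returns, so messages are handled one at a time.
                processUpload(message);
            });

            // Delivery starts here: the provider pushes the very first message to the
            // listener automatically, so nothing has to be pulled manually.
            connection.start();
        }

        private static void processUpload(Message message) {
            // placeholder for the actual upload logic
        }
    }

Because the listener runs on the session's single dispatch thread, the broker does not hand over message N+1 until onMessage has returned for message N.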
I need to implement retry logic. An inbound endpoint pushes messages to a REST (outbound) service. If the REST service is unavailable, I need to retry once and then put the message in a queue. But subsequent messages should not retry at all; they should go straight into the queue until the REST service is available again.
Once the service is available, I need to push all the messages from the queue to the REST service (in order) via a batch job.
Questions:
How do I know the service is unavailable for my second message? If I use Until Successful, every message retries and is then put in the queue. The problem is that the 2nd message shouldn't retry.
For the batch, I thought of using a poll, but how do I tell the poll that the service has become available so it can start the batch process? (Poll is more about configuring timings at which to run a batch.)
Another tricky part is that ordering has to be preserved: once the service is available, the queued messages (i.e., the batch) have to go to the REST service first, followed by the real-time messages. I'm not sure whether this is feasible.
A quick response to help implement this logic would be very helpful.
Using Mule: 3.5.1
You could try something like the following, using flow controls:
Process a message; on an exception or a bad response code, set a variable/property such as serviceAvailable=false.
Subsequent message processing first checks the serviceAvailable property. If it is false, enqueue the message into a DB table with status=new/unprocessed.
Create a flow/scheduler that processes the messages from the DB sequentially; it does not check the serviceAvailable property and calls the REST service directly.
If the service throws an exception, it does not store the message in the DB again; if the call succeeds, set serviceAvailable=true and dequeue the message or change its status. Add another property, e.g. moreDBMsg=true, when there are still messages in the DB table.
New messages should not be processed/consumed until moreDBMsg=false.
Once moreDBMsg=false and serviceAvailable=true, start processing messages from the queue again (a sketch of this gating logic follows below).
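A Java-style sketch of that gating logic, just to make the flow explicit (this is illustrative pseudologic, not Mule configuration; helpers like callRestService and enqueueToDb are hypothetical):

    import java.util.Collections;
    import java.util.List;

    public class GatedDispatcher {
        private volatile boolean serviceAvailable = true;
        private volatile boolean moreDbMsg = false;

        // Called for each new real-time message coming from the inbound endpoint.
        public void onNewMessage(String payload) {
            if (!serviceAvailable || moreDbMsg) {
                enqueueToDb(payload);            // status = new/unprocessed, no retry attempted
                return;
            }
            try {
                callRestService(payload);        // single attempt
            } catch (Exception e) {
                serviceAvailable = false;        // later messages skip the call and go straight to the DB
                enqueueToDb(payload);
            }
        }

        // Called periodically by a scheduler/poller to drain the DB-backed queue in order.
        public void drainDbQueue() {
            for (String payload : readUnprocessedInOrder()) {
                try {
                    callRestService(payload);
                    markProcessed(payload);
                    serviceAvailable = true;     // the service is back; keep draining
                } catch (Exception e) {
                    return;                      // still down; leave the remaining rows for the next poll
                }
            }
            moreDbMsg = false;                   // backlog empty, real-time processing may resume
        }

        // Hypothetical helpers, to be backed by the real REST client and DB table.
        private void callRestService(String payload) { /* HTTP call */ }
        private void enqueueToDb(String payload) { moreDbMsg = true; /* INSERT row */ }
        private List<String> readUnprocessedInOrder() { return Collections.emptyList(); }
        private void markProcessed(String payload) { /* UPDATE row */ }
    }

The key point is that only the first failing message pays the retry cost; everything after it is routed by the two flags, which also preserves the queue-first ordering.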
For the timeout, I would still look at the response code and catch time-outs to determine whether the call was successful or requires a retry. In practice you normally do multi-threading anyway, so you have multiple calls in parallel, or one call simply starts before the other ends.
That is quite normal.
But you can simply retry queued calls that time out, and after x time-outs you "skip" or defer the retry.
All of this can be done using actual Mule flow components, for example:
MEL http://www.mulesoft.org/documentation/display/current/Mule+Expression+Language+Reference
Or flow controls: http://www.mulesoft.org/documentation/display/current/Choice+Flow+Control+Reference
Or, for example, you can reference a Spring bean and do it in native Java code.
One possibility for the queue would be to persist it in a database. Mule has a database connector with a "poll" feature; see: http://www.mulesoft.org/documentation/display/current/JDBC+Transport+Reference#JDBCTransportReference-PollingTransport
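If you go the database route, the polling side can stay very simple. A minimal JDBC sketch of that kind of poll (the messages table, its columns, and the sender callback are assumptions for illustration; the Mule connector would normally do this for you):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.function.Consumer;

    public class DbQueuePoller {
        // Reads unprocessed rows oldest-first and hands each payload to a sender
        // (e.g. the REST call); a row is only marked processed after the send succeeds.
        public static void pollOnce(Connection conn, Consumer<String> sender) throws SQLException {
            String select = "SELECT id, payload FROM messages WHERE status = 'new' ORDER BY id";
            try (PreparedStatement ps = conn.prepareStatement(select);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    long id = rs.getLong("id");
                    sender.accept(rs.getString("payload")); // an unchecked exception here leaves the row as 'new'
                    try (PreparedStatement upd = conn.prepareStatement(
                            "UPDATE messages SET status = 'processed' WHERE id = ?")) {
                        upd.setLong(1, id);
                        upd.executeUpdate();
                    }
                }
            }
        }
    }

Ordering is preserved by draining rows by id before new real-time messages are allowed through.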
We have a WCF service that listens for messages on a queue (MSMQ). It sends a request to our web server (REST API), which returns an HTTP status code.
If the status code falls within the 400 range, we throw away the message. The idea is that a 400 range error can never succeed (unauthorized, bad request, not found, etc.), so we don't want to keep retrying.
For all other errors (e.g., 500 - Internal Server Error), we have WCF configured to put the message on a "retry" queue. Messages on the retry queue get retried after a certain amount of time. The idea is that the server is temporarily down, so wait and try again.
The way WCF is set up, if we throw a FaultException in the service contract, it will automatically put the message on the retry queue.
When a message causes a 400 range error, we simply swallow the error (we just log it). This prevents the retry mechanism from firing; however, it would be better to move the message to a dead-letter queue. That way we can react to the error by sending an email to the user and/or a system administrator.
Is there a way to immediately move these bad messages to a dead-letter queue?
First, I kept referring to the dead-letter queue. At the time I posted this question, I was unaware that WCF/MSMQ automatically creates what's known as a poison sub-queue. Any message that can't be delivered within the configured number of attempts is put in the poison sub-queue.
In my situation, I knew that some messages would never succeed, so I wanted to move the message out of the queue immediately.
The solution was to create a second queue that I called "poison" (not to be confused with the poison sub-queue). My catch block would create an instance of a WCF client and forward the message to this poison queue. I could reuse the same client to post to both the original queue and the poison queue; I just had to create a separate client end-point in the configuration file for each.
I had two separate ServiceHost instances running that read the queues. The ServiceHost for the original queue did the HTTP request and forwarded messages to the poison queue when unrecoverable errors occurred. The second ServiceHost would simply send out an email to record that a message was lost.
There was also the issue of temporary errors that exceeded the maximum number of tries. WCF/MSMQ automatically creates a sub-queue called <myqueuename>;poison. You cannot directly write to a sub-queue via WCF, but you can read from it using a ServiceHost. Whenever messages end up in the poison sub-queue, I simply forward the message to the poison queue, with the exact same client I use in the original handler's catch block.
I wanted the ability to include a stack trace in the error emails. Since I was reusing the same client and service contract for all of the handlers, I couldn't just pass along the stack trace as a string (unless I added it to all of my data contracts). Instead, I had the poison handler try to execute the code one more time, which would fail again and spit out the stack trace.
This is what my message queues ended up looking like:
MyQueue
- Queue messages
- Retry
- Poison
MyQueuePoison
- Queue messages
This approach is pretty convoluted. It felt strange calling a WCF client from within a WCF service handler. It also meant setting up one more queue on the server and a ton of additional configuration sections for specifying which queue a client should forward messages to.
Hopefully I have understood your question; if it is what I think you are asking, then yes, there is, but you obviously need to program it to do this. You DO need a retry count set so that MSMQ can retry until it gives up. Or you can create your own custom queue for dead letters/messages.
http://msdn.microsoft.com/en-us/library/ms789035(v=vs.110).aspx
http://msdn.microsoft.com/en-us/library/ms752268(v=vs.110).aspx
take a look here also:
http://www.michaelfcollins3.me/blog/2012/09/20/wcf-msmq-bad-message-handling.html
How do I handle message failure in MSMQ bindings for WCF
I hope these links help.
Using ActiveMQ v 5.8
I am using javax.jms.MessageProducer.send() to send messages from my producer to ActiveMQ.
I want to know whether this send is synchronous or asynchronous. And what will the behavior be if I set the "useAsyncSend" flag to true?
Thanks,
Anuj
ActiveMQ sends messages in async mode by default in several cases. It is only where the JMS specification requires the use of synchronous sending that ActiveMQ defaults to sync sending. The case where it is forced to send in sync mode is when persistent messages are being sent outside of a transaction.
If you are not using transactions and are sending persistent messages, then each send is synchronous and blocks until the broker has sent back an acknowledgement to the producer that the message has been safely persisted to disk. This ack guarantees that the message will not be lost, but it also incurs a significant latency penalty since the client is blocked.
See the documentation on this on the ActiveMQ website.
Yes, by default a send() is synchronous (for persistent messages to a queue/topic; async otherwise) and will block until an ACK has been received.
With useAsyncSend=true it will not block.
per http://activemq.apache.org/connection-configuration-uri.html
Async sends add a massive performance boost, but they mean that the send() method returns immediately whether the message has been delivered or not, which could lead to message loss.
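A small sketch of enabling it on the ActiveMQ connection factory (the broker URL and queue name are placeholders):

    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class AsyncSendExample {
        public static void main(String[] args) throws JMSException {
            // useAsyncSend can be set on the broker URL or programmatically on the factory.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616?jms.useAsyncSend=true"); // placeholder URL
            // factory.setUseAsyncSend(true); // equivalent programmatic form

            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("test.queue")); // placeholder queue
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);

            // With async send enabled, this returns immediately instead of blocking
            // until the broker acknowledges that the message was persisted.
            producer.send(session.createTextMessage("hello"));

            connection.close();
        }
    }

Without the flag (and outside a transaction), the same persistent send would block until the broker's ack came back, which is the latency cost described above.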
RPC call and cast are two different types of message-passing protocol in OpenStack. In the case of rpc.call, the invoker (caller) waits for the reply or ack message from the worker (callee).
I am trying to intercept all RPC messages (both request and reply messages) passing through the RabbitMQ system in OpenStack. In OpenStack, all request messages pass through a single exchange named "nova". By attaching a new queue to the "nova" exchange, I can capture the request messages.
Now, I want to capture the reply messages that are sent back to the caller. Reply messages can be captured by a "direct consumer", as specified by AMQP and Nova; an excerpt follows:
a Direct Consumer comes to life if (and only if) an rpc.call operation is executed; this object is instantiated and used to receive a response message from the queuing system; every consumer connects to a unique direct-based exchange via a unique exclusive queue; its life-cycle is limited to the message delivery; the exchange and queue identifiers are determined by a *UUID generator*, and are marshaled in the message sent by the Topic Publisher (only rpc.call operations).
In order to capture reply messages, I have tried to connect to a direct exchange with the corresponding msg_id or request_id. I am not sure what the correct exchange id would be for capturing the reply of a specific rpc.call.
Any idea what exchange id I could use to capture the reply of an rpc.call message? What is the UUID generator mentioned in the excerpt I attached?
I don't know the details of the OpenStack implementation, but when doing RPC over messaging systems, messages usually carry a correlation_id identifier that should be used to track requests.
See: http://www.rabbitmq.com/tutorials/tutorial-six-python.html
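A minimal sketch of that correlation_id pattern with the RabbitMQ Java client, mirroring the linked tutorial rather than OpenStack's internals (the host and queue names are placeholders):

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;
    import java.util.UUID;

    public class RpcClientSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // placeholder broker host
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                // Exclusive, auto-named queue where the reply will arrive.
                String replyQueue = channel.queueDeclare().getQueue();
                String correlationId = UUID.randomUUID().toString();

                AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                        .correlationId(correlationId)
                        .replyTo(replyQueue)
                        .build();

                // The request carries correlation_id and reply_to so the server
                // knows where to send the reply and how to label it.
                channel.basicPublish("", "rpc_queue", props,
                        "ping".getBytes(StandardCharsets.UTF_8)); // "rpc_queue" is a placeholder

                // The client matches replies back to requests via correlation_id.
                channel.basicConsume(replyQueue, true, (consumerTag, delivery) -> {
                    if (correlationId.equals(delivery.getProperties().getCorrelationId())) {
                        System.out.println("reply: " + new String(delivery.getBody(), StandardCharsets.UTF_8));
                    }
                }, consumerTag -> { });

                Thread.sleep(5000); // crude wait for the reply in this sketch; a real client would block properly
            }
        }
    }

Going by the excerpt in the question, the analogous identifier in OpenStack appears to be the msg_id marshaled into the rpc.call request, so inspecting the captured request for that field seems like the natural starting point for finding the reply exchange/queue.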